From greg@cosc.canterbury.ac.nz  Tue Aug  1 00:45:02 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 01 Aug 2000 11:45:02 +1200 (NZST)
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended
 slicing for lists)
In-Reply-To: <Pine.LNX.4.10.10007290934240.5008-100000@localhost>
Message-ID: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz>

I think there are some big conceptual problems with allowing
negative steps in a slice.

With ordinary slices, everything is very clear if you think
of the indices as labelling the points between the list
elements.

With a step, this doesn't work any more, and you have to
think in terms of including the lower index but excluding the
upper index.

But what do "upper" and "lower" mean when the step is negative?
There are several things that a[i:j:-1] could plausibly mean:

   [a[i], a[i-1], ..., a[j+1]]

   [a[i-1], a[i-2], ..., a[j]]

   [a[j], a[j-1], ..., a[i+1]]

   [a[j-1], a[j-2], ..., a[i]]

And when you consider negative starting and stopping values,
it just gets worse. These have no special meaning to range(),
but in list indexing they do. So what do they mean in a slice
with a step? Whatever is chosen, it can't be consistent with
both.

In the face of such confusion, the only Pythonic thing would
seem to be to disallow these things.
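For concreteness, the semantics that later Pythons adopted for built-in sequences follow the first of the four readings above, while the negative start/stop wrinkle remains exactly as described. A minimal sketch, assuming a modern Python rather than the 1.6 of this thread:

```python
a = list(range(10))

# a[i:j:-1] starts at a[i] and stops just short of a[j],
# i.e. the first of the four candidate meanings above:
assert a[5:2:-1] == [a[5], a[4], a[3]]  # [5, 4, 3]

# ...but a negative start is still counted from the end of the list,
assert a[-2:2:-1] == [8, 7, 6, 5, 4, 3]
# so the slice cannot also agree with range(-2, 2, -1), which is empty:
assert list(range(-2, 2, -1)) == []
```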

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Tue Aug  1 01:01:45 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 01 Aug 2000 12:01:45 +1200 (NZST)
Subject: [Python-Dev] PEP 203 Augmented Assignment
In-Reply-To: <200007281147.GAA04007@cj20424-a.reston1.va.home.com>
Message-ID: <200008010001.MAA10295@s454.cosc.canterbury.ac.nz>

> The way I understand this, mixing indices and slices is used all
> the time to reduce the dimensionality of an array.

I wasn't really suggesting that they should be disallowed.
I was trying to point out that their existence makes it
hard to draw a clear distinction between indexing and slicing.

If it were the case that

   a[i,j,...,k]

was always equivalent to

   a[i][j]...[k]

then there would be no problem -- you could consider each
subscript individually as either an index or a slice. But
that's not the way it is.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Tue Aug  1 01:07:08 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 01 Aug 2000 12:07:08 +1200 (NZST)
Subject: [Python-Dev] Should repr() of string should observe locale?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEPDGMAA.tim_one@email.msn.com>
Message-ID: <200008010007.MAA10298@s454.cosc.canterbury.ac.nz>

Tim Peters:

> The problem isn't that repr sticks in backslash escapes, the problem is that
> repr gets called when repr is inappropriate.

Seems like we need another function that does something in
between str() and repr(). It would be just like repr() except
that it wouldn't put escape sequences in strings unless
absolutely necessary, and it would apply this recursively
to sub-objects.

Not sure what to call it -- goofy() perhaps :-)
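A rough sketch of what such a function might look like; the name and behaviour here are purely hypothetical (this is not any actual Python API), escaping only quotes, backslashes, and non-printable characters, and recursing into lists:

```python
# Hypothetical str/repr hybrid: quote strings, but only escape what is
# absolutely necessary; leave other printable characters alone.
ESC = {"'": "\\'", "\\": "\\\\"}

def goofy(obj):
    if isinstance(obj, str):
        parts = []
        for ch in obj:
            if ch in ESC:
                parts.append(ESC[ch])
            elif ch.isprintable():
                parts.append(ch)              # leave accented letters etc. alone
            else:
                parts.append(repr(ch)[1:-1])  # \n, \t, \x00, ...
        return "'" + "".join(parts) + "'"
    if isinstance(obj, list):
        # apply the same treatment recursively to sub-objects
        return "[" + ", ".join(goofy(x) for x in obj) + "]"
    return repr(obj)
```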

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From bwarsaw@beopen.com  Tue Aug  1 01:25:43 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 31 Jul 2000 20:25:43 -0400 (EDT)
Subject: [Python-Dev] Should repr() of string should observe locale?
References: <LNBBLJKPBEHFEDALKOLCKEPDGMAA.tim_one@email.msn.com>
 <200008010007.MAA10298@s454.cosc.canterbury.ac.nz>
Message-ID: <14726.6407.729299.113509@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg@cosc.canterbury.ac.nz> writes:

    GE> Seems like we need another function that does something in
    GE> between str() and repr().

I'd bet most people don't even understand why there have to be two
functions that do almost the same thing.


-Barry


From guido@beopen.com  Tue Aug  1 04:32:18 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 31 Jul 2000 22:32:18 -0500
Subject: [Python-Dev] test_re fails with re==pre
In-Reply-To: Your message of "Mon, 31 Jul 2000 23:59:34 +0200."
 <20000731215940.28A11E266F@oratrix.oratrix.nl>
References: <20000731215940.28A11E266F@oratrix.oratrix.nl>
Message-ID: <200008010332.WAA25069@cj20424-a.reston1.va.home.com>

> Test_re now works fine if re is sre, but it still fails if re is pre.
> 
> Is this an artifact of the test harness or is there still some sort of
> incompatibility lurking in there?

It's because the tests are actually broken for sre: it prints a bunch
of "=== Failed incorrectly ..." messages.  We added these as "expected
output" to the test/output/test_re file.  The framework just notices
there's a difference and blames pre.

Effbot has promised a new SRE "real soon now" ...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Tue Aug  1 05:01:34 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 31 Jul 2000 23:01:34 -0500
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended slicing for lists)
In-Reply-To: Your message of "Tue, 01 Aug 2000 11:45:02 +1200."
 <200007312345.LAA10291@s454.cosc.canterbury.ac.nz>
References: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz>
Message-ID: <200008010401.XAA25180@cj20424-a.reston1.va.home.com>

> I think there are some big conceptual problems with allowing
> negative steps in a slice.
> 
> With ordinary slices, everything is very clear if you think
> of the indices as labelling the points between the list
> elements.
> 
> With a step, this doesn't work any more, and you have to
> think in terms of including the lower index but excluding the
> upper index.
> 
> But what do "upper" and "lower" mean when the step is negative?
> There are several things that a[i:j:-1] could plausibly mean:
> 
>    [a[i], a[i-1], ..., a[j+1]]
> 
>    [a[i-1], a[i-2], ..., a[j]]
> 
>    [a[j], a[j-1], ..., a[i+1]]
> 
>    [a[j-1], a[j-2], ..., a[i]]
> 
> And when you consider negative starting and stopping values,
> it just gets worse. These have no special meaning to range(),
> but in list indexing they do. So what do they mean in a slice
> with a step? Whatever is chosen, it can't be consistent with
> both.
> 
> In the face of such confusion, the only Pythonic thing would
> seem to be to disallow these things.

You have a point!  I just realized today that my example L[9:-1:-1]
does *not* access L[0:10] backwards, because of the way the first -1
is interpreted as one before the end of the list L. :(
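Modern Pythons kept exactly these semantics for built-in lists, so the trap described here is easy to demonstrate; a sketch assuming any Python with extended list slicing:

```python
L = list(range(10))

# The stop value -1 means "the last element", so the slice is empty:
assert L[9:-1:-1] == []

# To walk L[0:10] backwards you must omit the stop entirely...
assert L[9::-1] == [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
# ...while range() happily accepts -1 as an ordinary exclusive stop:
assert list(range(9, -1, -1)) == L[9::-1]
```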

But I'm not sure we can forbid this behavior (in general) because the
NumPy folks are already using this.  Since these semantics are up to
the object, and no built-in objects support extended slices (yet), I'm
not sure that this behavior has been documented anywhere except in
NumPy.

However, for built-in lists I think it's okay to forbid a negative
step until we've resolved this...

This is something to consider for patch 100998 which currently
implements (experimental) extended slices for lists...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From ping@lfw.org  Tue Aug  1 01:02:40 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Mon, 31 Jul 2000 17:02:40 -0700 (PDT)
Subject: [Python-Dev] Reordering opcodes (PEP 203 Augmented Assignment)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIEIGDCAA.MarkH@ActiveState.com>
Message-ID: <Pine.LNX.4.10.10007311701050.5008-100000@localhost>

On Mon, 31 Jul 2000, Mark Hammond wrote:
> IDLE and Pythonwin are able to debug arbitrary programs once they have
> started - and they are both written in Python.

But only if you start them *in* IDLE or Pythonwin, right?

> * You do not want to debug the IDE itself, just a tiny bit of code running
> under the IDE.  Making the IDE take the full hit simply because it wants to
> run a debugger occasionally isn't fair.

Well, running with trace hooks in place is no different from
the way things run now.

> The end result is that all IDEs will run with debugging enabled.

Right -- that's what currently happens.  I don't see anything wrong
with that.

> * Python often is embedded, for example, in a Web Server, or used for CGI.
> It should be possible to debug these programs directly.

But we don't even have a way to do this now.  Attaching to an
external running process is highly system-dependent trickery.

If printing out tracebacks and other information isn't enough
and you absolutely have to step the program under a debugger,
the customary way of doing this now is to run a non-forking
server under the debugger.  In that case, you just start a
non-forking server under IDLE which sets -g, and you're fine.


Anyway, i suppose this is all rather moot now that Vladimir has a
clever scheme for tracing even without SET_LINENO.  Go Vladimir!
Your last proposal sounded great.


-- ?!ng



From effbot@telia.com  Tue Aug  1 07:20:01 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 1 Aug 2000 08:20:01 +0200
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended slicing for lists)
References: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz>
Message-ID: <001a01bffb80$87514860$f2a6b5d4@hagrid>

greg wrote:

> I think there are some big conceptual problems with allowing
> negative steps in a slice.

wasn't "slices" supposed to work the same way as "ranges"?

from PEP-204:

    "Extended slices do show, however, that there is already a
    perfectly valid and applicable syntax to denote ranges in a way
    that solve all of the earlier stated disadvantages of the use of
    the range() function"

> In the face of such confusion, the only Pythonic thing would
> seem to be to disallow these things.

...and kill PEP-204 at the same time.

</F>



From tim_one@email.msn.com  Tue Aug  1 07:16:41 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 02:16:41 -0400
Subject: [Python-Dev] Should repr() of string should observe locale?
In-Reply-To: <14726.6407.729299.113509@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEEPGNAA.tim_one@email.msn.com>

[Barry A. Warsaw]
> I'd bet most people don't even understand why there have to be two
> functions that do almost the same thing.

Indeed they do not.  The docs are too vague about the intended differences
between str and repr; in 1.5.2 and earlier, string was just about the only
builtin type that actually had distinct str() and repr() implementations, so
it was easy to believe that strings were somehow a special case with unique
behavior; 1.6 extends that (just) to floats, where repr(float) now displays
enough digits so that the output can be faithfully converted back to the
float you started with.  This is starting to bother people in the way that
distinct __str__ and __repr__ functions have long frustrated me in my own
classes:  the default (repr) at the prompt leads to bloated output that's
almost always not what I want to see.  Picture repr() applied to a matrix
object!  If it meets the goal of producing a string sufficient to reproduce
the object when eval'ed, it may spray megabytes of string at the prompt.
Many classes implement __repr__ to do what __str__ was intended to do as a
result, just to get bearable at-the-prompt behavior.  So "learn by example"
too often teaches the wrong lesson too.  I'm not surprised that users are
confused!

Python is *unusual* in trying to cater to more than one form of to-string
conversion across the board.  It's a mondo cool idea that hasn't got the
praise it deserves, but perhaps that's just because the current
implementation doesn't really work outside the combo of the builtin types +
plain-ASCII strings.  Unescaping locale printables in repr() is the wrong
solution to a small corner of the right problem.
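The round-trip criterion Tim describes is easy to state in code; a small illustration, valid in current Pythons:

```python
x = 1 / 3
# str() aims at readability, repr() at faithful reconstruction:
assert eval(repr(x)) == x          # the round-trip guarantee for floats

s = "a\nb"
assert str(s) == s                 # str of a string is the string itself
assert repr(s) == "'a\\nb'"        # repr adds quotes and escape sequences
assert eval(repr(s)) == s          # ...and round-trips, too
```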




From effbot@telia.com  Tue Aug  1 07:27:15 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 1 Aug 2000 08:27:15 +0200
Subject: [Python-Dev] Reordering opcodes (PEP 203 Augmented Assignment)
References: <Pine.LNX.4.10.10007311701050.5008-100000@localhost>
Message-ID: <006401bffb81$89a7ed20$f2a6b5d4@hagrid>

ping wrote:

> > * Python often is embedded, for example, in a Web Server, or used for CGI.
> > It should be possible to debug these programs directly.
> 
> But we don't even have a way to do this now.  Attaching to an
> external running process is highly system-dependent trickery.

not under Python: just add an import statement to the script, tell
the server to reload it, and off you go...

works on all platforms.

</F>



From paul@prescod.net  Tue Aug  1 07:34:53 2000
From: paul@prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 02:34:53 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBAEEGDCAA.mhammond@skippinet.com.au>
Message-ID: <39866F8D.FCFA85CB@prescod.net>

Mark Hammond wrote:
> 
> >   Interesting; I'd understood from Paul that you'd given approval to
> > this module.
> 
> Actually, it was more along the lines of me promising to spend some
> time "over the next few days", and not getting to it.  However, I believe
> it was less than a week before it was just checked in.

It was checked in the day before the alpha was supposed to go out. I
thought that was what you wanted! On what schedule would you have
preferred us to do it?

> I fear this may be a general symptom of the new flurry of activity; no-one
> with a real job can keep up with this list, meaning valuable feedback on
> many proposals is getting lost.  For example, DigiCool have some obviously
> smart people, but they are clearly too busy to offer feedback on anything
> lately.  That is a real shame, and a good resource we are missing out on.

From my point of view, it was the appearance of _winreg that prompted
the "flurry of activity" that led to winreg. I would never have bothered
with winreg if I were not responding to the upcoming "event" of the
defacto standardization of _winreg. It was clearly designed (and I use
the word loosely) by various people at Microsoft over several years --
with sundry backwards and forwards compatibility hacks embedded.

I'm all for slow and steady, deliberate design. I'm sorry _winreg was
rushed but I could only work with the time I had and the interest level
of the people around. Nobody else wanted to discuss it. Nobody wanted to
review the module. Hardly anyone here even knew what was in the OLD
module.

> I am quite interested to hear from people like Gordon and Bill
> about their thoughts.

I am too. I would *also* be interested in hearing from people who have
not spent the last five years with the Microsoft API because _winreg was
a very thin wrapper over it and so will be obvious to those who already
know it.

I have the feeling that an abstraction over the APIs would never be as
"comfortable" as the Microsoft API you've been using for all of these
years.
-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"




From paul@prescod.net  Tue Aug  1 08:16:30 2000
From: paul@prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 03:16:30 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>
Message-ID: <3986794E.ADBB938C@prescod.net>

(reorganizing the important stuff to the top)

Mark Hammond wrote:
> Still-can't-see-the-added-value-ly,

I had no personal interest in an API for the windows registry but I
could not, in good conscience, let the original one become the 
standard Python registry API. 

Here are some examples:

(num_subkeys, num_values, last_modified) = winreg.QueryInfoKey( key )
for i in range( num_values ):
    (name, value, type) = winreg.EnumValue( key, i )
    if name == valuename: print "found"

Why am I enumerating but not using the Python enumeration protocol? Why
do I have to get a bogus 3-tuple before I begin enumerating? Where else
are the words "Query" and "Enum" used in Python APIs?

and

winreg.SetValueEx( key, "ProgramFilesDir", None, winreg.REG_SZ,
r"c:\programs" )

Note that the first argument is the key object (so why isn't this a
method?) and the third argument is documented as bogus. In fact, in
the OpenKey documentation you are requested to "always pass 0 please".

All of that was appropriate when winreg was documented "by reference" to
the Microsoft documentation but if it is going to be a real, documented
module in the Python library then the bogus MS junk should go.

The truth is I would prefer NOT to work on winreg and leave both 
versions out of the library. But unless someone else is willing to 
design and implement a decent API, I took that burden upon myself 
rather than see more weird stuff in the Python API.

So the value add is:

 * uses Python iteration protocol
 * uses Python mapping protocol
 * uses Python method invocation syntax
 * uses only features that will be documented
 * does not expose integers as object handles (even for HKLM etc.)
 * uses inspectable, queryable objects even as docstrings
 * has a much more organized module dictionary (do a dir( _winreg))

If you disagree with those principles then we are in trouble. If you
have quibbles about specifics then let's talk.
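To make those principles concrete without touching a real registry, here is a purely illustrative sketch (the class and its backing dicts are invented for this example; only getValueData is a name from the actual proposal) of a key object that supports Python mapping and iteration protocols over an in-memory stand-in:

```python
class FakeKey:
    """Illustrative registry-key-like object backed by plain dicts."""
    def __init__(self, subkeys=None, values=None):
        self._subkeys = subkeys or {}   # name -> FakeKey
        self._values = values or {}     # name -> (data, type)

    def __getitem__(self, name):        # mapping protocol for subkeys
        return self._subkeys[name]

    def __iter__(self):                 # iteration protocol over value names
        return iter(self._values)

    def getValueData(self, name):       # a method, not a module function
        data, _type = self._values[name]
        return data

hklm = FakeKey(subkeys={
    "Software": FakeKey(values={"ProgramFilesDir": (r"c:\programs", "REG_SZ")}),
})
key = hklm["Software"]
assert key.getValueData("ProgramFilesDir") == r"c:\programs"
assert list(key) == ["ProgramFilesDir"]
```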

> I've just updated the test suite so that test_winreg2.py actually works.
> 
> It appears that the new winreg.py module is still in a state of flux, but
> all work has ceased.  The test for this module has lots of placeholders
> that are not filled in. Worse, the test code was checked in an obviously
> broken state (presumably "to be done", but guess who the bunny who had to
> do it was :-(

The tests ran fine on my machine. Fred had to make minor changes before
he checked it in for me because of module name changes. It's possible
that he mistyped a search and replace or something...or that I had a 
system dependency. Since I changed jobs I no longer have access to 
Visual C++ and have not had luck getting GCC to compile _winreg. This
makes further testing difficult until someone cuts a Windows binary 
build of Python (which is perpetually imminent).

The test cases are not all filled in. The old winreg test tested each
method on average one time. The new winreg tries harder to test each in
a variety of situations. Rather than try to keep all cases in my head I
created empty function bodies. Now we have clear documentation of what
is done and tested and what is to be tested still. Once an alpha is cut,
(or I fix my compiler situation) I can finish that process.

> Browsing the source made it clear that the module docstrings are still
> incomplete (eg "For information on the key API, open a key and look at its
> docstring.").  

The docstrings are not complete, but they are getting better and the old
winreg documentation was certainly not complete either! I admit I got
into a little bit of recursive projects wherein I didn't want to write
the winreg, minidom, SAX, etc. documentation twice so I started working
on stuff that would extract the docstrings and generate LaTeX. That's
almost done and I'll finish up the documentation. That's what the beta
period is for, right?

> Eg, the specific example I had a problem with was:
> 
> key[value]
> 
> Returns a result that includes the key index!  This would be similar to a
> dictionary index _always_ returning the tuple, and the first element of the
> tuple is _always_ the key you just indexed.

There is a very clearly labelled (and docstring-umented) getValueData
method:

key.getValueData("FOO") 

That gets only the value. Surely that's no worse than the original:

winreg.QueryValue( key, "FOO" )

If this is your only design complaint then I don't see cause for alarm
yet.

Here's why I did it that way:

You can fetch data values by their names or by their indexes. If
you've just indexed by the name then of course you know it. If you've
just fetched by the numeric index then you don't. I thought it was more
consistent to have the same value no matter how you indexed. Also, when
you get a value, you should also get a type, because the types can be
important. In that case it still has to be a tuple, so it's just a
question of a two-tuple or a three-tuple. Again, I thought that the
three-tuple was more consistent. Also, this is the same return value
returned by the existing EnumValue function.

> Has anyone else actually looked at or played with this, and still believe
> it is an improvement over _winreg?  I personally find it unintuitive, and
> will personally continue to use _winreg.  If we can't find anyone to
> complete it, document it, and stand up and say they really like it, I
> suggest we pull it.

I agree that it needs more review. I could not get anyone interested in
a discussion of how the API should look, other than pointing at old
threads.

You are, of course, welcome to use whatever you want but I think it
would be productive to give the new API a workout in real code and then
report specific design complaints. If others concur, we can change it.

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"




From mwh21@cam.ac.uk  Tue Aug  1 07:59:11 2000
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 01 Aug 2000 07:59:11 +0100
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended slicing for lists)
In-Reply-To: "Fredrik Lundh"'s message of "Tue, 1 Aug 2000 08:20:01 +0200"
References: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz> <001a01bffb80$87514860$f2a6b5d4@hagrid>
Message-ID: <m34s55a2m8.fsf@atrus.jesus.cam.ac.uk>

"Fredrik Lundh" <effbot@telia.com> writes:

> greg wrote:
> 
> > I think there are some big conceptual problems with allowing
> > negative steps in a slice.
> 
> wasn't "slices" supposed to work the same way as "ranges"?

The problem is that for slices (& indexes in general) negative
indices have a special interpretation:

range(10,-1,-1)
range(10)[:-1]

Personally I don't think it's that bad (you just have to remember to
write :: instead of :-1: when you want to step all the way back to the
beginning).  More serious is what you do with out of range indices -
and NumPy is a bit odd with this one, it seems:

>>> l = Numeric.arrayrange(10)
>>> l[30::-2]
array([0, 8, 6, 4, 2, 0])

What's that initial "0" doing there?  Can someone who actually
understands NumPy explain this?
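For comparison, built-in lists (in Pythons that grew extended list slicing) clamp an out-of-range start to the last element. One guess at the Numeric output above, offered purely as speculation: clamp the start to len(l) instead, and reduce each index modulo the length, which would produce the stray leading 0. A sketch using plain lists only:

```python
l = list(range(10))

# Lists clamp the out-of-range start 30 down to the last index, 9:
assert l[30::-2] == [9, 7, 5, 3, 1]

# Speculative reconstruction of the Numeric result: start clamped to
# len(l) == 10 (one past the end), indices wrapped modulo the length,
# so index 10 becomes 0 and contributes the leading zero:
guess = [l[i % 10] for i in range(10, -1, -2)]
assert guess == [0, 8, 6, 4, 2, 0]
```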

Cheers,
M.

(PS: PySlice_GetIndices is in practice a bit useless because when it
fails it offers no explanation of why!  Does anyone use this
function, or should I submit a patch to make it a bit more helpful (&
support longs)?)

-- 
    -Dr. Olin Shivers,
     Ph.D., Cranberry-Melon School of Cucumber Science
                                           -- seen in comp.lang.scheme



From tim_one@email.msn.com  Tue Aug  1 08:57:06 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 03:57:06 -0400
Subject: [Python-Dev] Should repr() of string should observe locale?
In-Reply-To: <200008010007.MAA10298@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEFEGNAA.tim_one@email.msn.com>

[Greg Ewing]
> Seems like we need another function that does something in
> between str() and repr(). It would be just like repr() except
> that it wouldn't put escape sequences in strings unless
> absolutely necessary, and it would apply this recursively
> to sub-objects.
>
> Not sure what to call it -- goofy() perhaps :-)

In the previous incarnation of this debate, a related (more str-like than
repr-like) intermediate was named ssctsoos().  Meaning, of course <wink>,
"str() special casing the snot out of strings".  It was purely a hack, and I
was too busy working at Dragon at the time to give it the thought it needed.
Now I'm too busy working at PythonLabs <0.5 wink>.

not-a-priority-ly y'rs  - tim




From MarkH@ActiveState.com  Tue Aug  1 08:59:22 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Tue, 1 Aug 2000 17:59:22 +1000
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <39866F8D.FCFA85CB@prescod.net>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>

I am going to try very hard to avoid antagonistic statements - it doesn't
help anyone or anything when we are so obviously at each other's throats.

Let me start by being conciliatory:  I do appreciate the fact that you made
the effort on the winreg module, and accept it was done for all the right
reasons.  The fact that I don't happen to like it doesn't imply any personal
criticism - I believe we simply have a philosophical disagreement.  But
then again, they are the worst kinds of disagreements to have!

> > Actually, it was more along the lines of me promising to
> spend some
> > time "over the next few days", and not getting to it.
> However, I believe
> > it was less than a week before it was just checked in.
>
> It was checked in the day before the alpha was supposed to go out. I
> thought that was what you wanted! On what schedule would you have
> preferred us to do it?

I'm not sure, but one that allowed everyone with relevant input to give it.
Guido also stated he was not happy with the process.  I would have
preferred to have it miss the alpha than to go out with a design we are not
happy with.

> From my point of view, it was the appearance of _winreg that prompted
> the "flurry of activity" that led to winreg. I would never have bothered
> with winreg if I were not responding to the upcoming "event" of the
> defacto standardization of _winreg. It was clearly designed (and I use
> the word loosely) by various people at Microsoft over several years --
> with sundry backwards and forwards compatibility hacks embedded.

Agreed.  However, the main problem was that people were assuming win32api
was around to get at the registry.  The win32api module's registry
functions have been around for _ages_.  None of its users have ever
proposed a more Pythonic API.  Thus I find it a little strange that someone
without much experience in the API should find it such an abomination,
while experienced users of the API were clearly happy (ok - maybe "happy"
isn't the right word - but not unhappy enough to complain :-)

If nothing else, it allows the proliferation of documentation on the Win32
API to apply to Python.  This is clearly not true with the new module.

This is also a good case for using the .NET API.  However, it still would
not provide Python indexing, iteration, etc.  But as I state below, I'm
not convinced this is a problem.

> I'm all for slow and steady, deliberate design. I'm sorry _winreg was
> rushed but I could only work with the time I had and the interest level
> of the people around. Nobody else wanted to discuss it. Nobody wanted to
> review the module. Hardly anyone here even knew what was in the OLD
> module.

I don't believe that is fair.  As I said, plenty of people have used win32api,
and were sick of insisting their users install my extensions.  distutils
was the straw that broke the serpent's back, IIRC.

It is simply the sheer volume of people who _did_ use the win32api registry
functions that forced the new winreg module.

The fact that no one else wanted to discuss it, or review it, or generally
seemed to care should have been indication that the new winreg was not
really required, rather than taken as proof that a half-baked module that
has not had any review should be checked in.

> I am too. I would *also* be interested in hearing from people who have
> not spent the last five years with the Microsoft API because _winreg was
> a very thin wrapper over it and so will be obvious to those who already
> know it.

Agreed - but it isn't going to happen.  There are not enough people on this
list who are not experienced with Windows, but also intend to get that
experience during the beta cycle.  I hope you would agree that adding an
experimental module to Python simply as a social experiment is not the
right thing to do.  Once winreg is released, it will be too late to remove,
even if the consensus is that it should never have been released in the
first place.

> I have the feeling that an abstraction over the APIs would never be as
> "comfortable" as the Microsoft API you've been using for all of these
> years.

Again agreed - although we should replace the "you've" with "you and every
other Windows programmer" - which tends to make the case for _winreg
stronger, IMO.

Moving to the second mail:

> All of that was appropriate when winreg was documented "by reference" to
> the Microsoft documentation but if it is going to be a real, documented
> module in the Python library then the bogus MS junk should go.

I agree in principle, but IMO it is obvious this will not happen.  It hasn't
happened yet, and you yourself have moved on to more interesting PEPs.  How
do you propose this better documentation will happen?

> The truth is I would prefer NOT to work on winreg and leave both
> versions out of the library.

Me too - it has just cost me work so far, and offers me _zero_ benefit (if
anyone in the world can assume that the win32api module is around, it
surely must be me ;-).  However, this is a debate you need to take up with
the distutils people, and everyone else who has asked for registry access
in the core.  Guido also appears to have heard these calls, hence we had
his complete support for some sort of registry module for the core.

> So the value add is:
...
> If you disagree with those principles then we are in trouble. If you
> have quibbles about specifics then let's talk.

I'm afraid we are in a little trouble ;-)  These appear dubious to me.  If
I weigh in the number of calls over the years for a more Pythonic API over
the win32api functions, I become more convinced.

The registry is a tree structure similar to a file system.  There haven't
been any calls I have heard to move the os.listdir() function or the glob
module to a more "oo" style.  I don't see a "directory" object that supports
Python style indexing or iteration.  I don't see any of your other benefits
being applied to Python's view of the file system - so why is the registry
so different?

To try and get more productive:  Bill, Gordon et al appear to have the
sense to stay out of this debate.  Unless other people do chime in, Paul
and I will remain at an impasse, and no one will be happy.  I would much
prefer to move this forward than to vent at each other regarding mails
neither of us can remember in detail ;-)

So what to do?  Anyone?  If even _one_ experienced Windows developer on
this list can say they believe "winreg" is appropriate and intuitive, I am
happy to shut up (and then simply push for better winreg documentation ;-)

Mark.



From Moshe Zadka <moshez@math.huji.ac.il>  Tue Aug  1 09:36:29 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Tue, 1 Aug 2000 11:36:29 +0300 (IDT)
Subject: [Python-Dev] Access to the Bug Database
Message-ID: <Pine.GSO.4.10.10008011134540.9510-100000@sundial>

Hi!

I think I need access to the bug database -- but in the meantime,
anyone who wants to mark 110612 as closed is welcome to.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From tim_one@email.msn.com  Tue Aug  1 09:40:53 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 04:40:53 -0400
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFFGNAA.tim_one@email.msn.com>

FWIW, I ignored all the winreg modules, and all the debate about them.  Why?
Just because Mark's had been in use for years already, so was already
battle-tested.  There's no chance that any other platform will ever make use
of this module, and given that its appeal is thus solely to Windows users,
it was fine by me if it didn't abstract *anything* away from MS's Win32 API.
MS's APIs are hard enough to understand without somebody else putting their
own layers of confusion <0.9 wink> on top of them.

May as well complain that the SGI-specific cd.open() function warns that if
you pass anything at all to its optional "mode" argument, it had better be
the string "r" (maybe that makes some kind of perverse sense to SGI weenies?
fine by me if so).

So, sorry, but I haven't even looked at Paul's code.  I probably should,
but-- jeez! --there are so many other things that *need* to get done.  I did
look at Mark's (many months ago) as part of helping him reformat it to
Guido's tastes, and all I remember thinking about it then is "yup, looks a
whole lot like the Windows registry API -- when I need it I'll be able to
browse the MS docs lightly and use it straight off -- good!".

So unless Mark went and did something like clean it up <wink>, I still think
it's good.




From tim_one@email.msn.com  Tue Aug  1 10:27:59 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 05:27:59 -0400
Subject: [Python-Dev] Access to the Bug Database
In-Reply-To: <Pine.GSO.4.10.10008011134540.9510-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFGGNAA.tim_one@email.msn.com>

[Moshe Zadka]
> I think I need access to the bug database

Indeed, you had no access to the SF bug database at all.  Neither did a
bunch of others.  I have a theory about that:  I mentioned several weeks ago
that IE5 simply could not display the Member Permissions admin page
correctly, after it reached a certain size.  I downloaded a stinking
Netscape then, and that worked fine until it reached *another*, larger size,
at which point the display of some number of the bottom-most entries (like
where moshez lives) gets totally screwed up *sometimes*.  My *suspicion* is
that if an admin changes a permission while either IE5 or NS is in this
screwed-up state, it wreaks havoc with the permissions of the members whose
display lines are screwed up.  It's a weak suspicion <wink>, but a real one:
I've only seen NS screw up some contiguous number of the bottom-most lines,
I expect all the admins are using NS, and it was all and only a contiguous
block of developers at the bottom of the page who had their Bug Manager
permissions set to None (along with other damaged values) when I brought up
the page.

So, admins, look out for that!

Anyway, I just went thru and gave every developer admin privileges on the SF
Bug Manager.  Recall that it will probably take about 6 hours to take
effect, though.

> -- but in the meantime, anyone who wants to mark 110612 as
> closed is welcome to.

No, they're not:  nobody who doesn't know *why* the bug is being closed
should even think about closing it.  It's still open.

you're-welcome<wink>-ly y'rs  - tim




From Vladimir.Marangozov@inrialpes.fr  Tue Aug  1 10:53:36 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Tue, 1 Aug 2000 11:53:36 +0200 (CEST)
Subject: [Python-Dev] SET_LINENO and python options
In-Reply-To: <14725.63622.190585.197392@beluga.mojam.com> from "Skip Montanaro" at Jul 31, 2000 05:07:02 PM
Message-ID: <200008010953.LAA02082@python.inrialpes.fr>

Skip Montanaro wrote:
> 
> Isn't that what the code object's co_lnotab is for?  I thought the idea was
> to dispense with SET_LINENO altogether and just compute line numbers using
> co_lnotab on those rare occasions (debugging, tracebacks, etc) when you
> needed them.

Don't worry about it anymore. It's all in Postponed patch #101022 at SF.
It makes the current "-O" behaviour the default (and uses co_lnotab), and
reverts to the current default with "-d".

I'm giving myself a break on this. You guys need to test it now and report
some feedback and impressions. If only to help Guido make up his mind
and give him a chance to pronounce on it <wink>.

[?!ng]
> Anyway, i suppose this is all rather moot now that Vladimir has a
> clever scheme for tracing even without SET_LINENO.  Go Vladimir!
> Your last proposal sounded great.

Which one? They are all the latest <wink>.
See also the log msg of the latest tiny patch update at SF.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From nhodgson@bigpond.net.au  Tue Aug  1 11:47:12 2000
From: nhodgson@bigpond.net.au (Neil Hodgson)
Date: Tue, 1 Aug 2000 20:47:12 +1000
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>
Message-ID: <010501bffba5$db4ebf90$8119fea9@neil>

> So what to do?  Anyone?  If even _one_ experienced Windows developer on
> this list can say they believe "winreg" is appropriate and intuitive, I am
> happy to shut up (and then simply push for better winreg documentation ;-)

   Sorry but my contribution isn't going to help much with breaking the
impasse.

   Registry code tends to be little lumps of complexity you don't touch once
it is working. The Win32 Reg* API is quite ugly - RegCreateKeyEx
takes/returns 10 parameters, but you normally only want 3 and the return
status, and everyone asks for KEY_ALL_ACCESS until the installation testers
tell you it fails for non-Administrators. So it would be good if the API were
simpler and defaulted everything you don't need to set.
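
A minimal sketch of what such a defaulting wrapper could look like (all
names here are hypothetical stand-ins, not the real Win32 or winreg API;
the point is only the shape of the call):

```python
# Hypothetical sketch of the "default everything you don't need" idea.
# _raw_create_key_ex stands in for RegCreateKeyEx, which really takes
# about ten parameters; none of these names come from a shipped module.

KEY_READ = 0x20019  # access-rights value used by the real Win32 headers

def _raw_create_key_ex(key, sub_key, reserved, cls, options,
                       sam_desired, security_attributes):
    # Stand-in for the real call: just record what it was asked to do.
    return {"key": key, "sub_key": sub_key, "access": sam_desired}

def create_key(root, sub_key, access=KEY_READ):
    """Open/create a subkey, defaulting the parameters callers rarely set,
    and defaulting access to read-only rather than KEY_ALL_ACCESS."""
    return _raw_create_key_ex(root, sub_key, 0, None, 0, access, None)

handle = create_key("HKEY_CURRENT_USER", r"Software\Example")
```

The caller supplies the three things they care about; everything else is
defaulted, including the access mask that trips up non-Administrators.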

   But I only hack the registry about once a year with Python. So if it's
closer to the Win32 API, then that helps me use existing knowledge and
documentation.

   When writing an urllib patch recently, winreg seemed OK. Is it complete
enough? Are the things you can't do with it important for its role? IMO, if
winreg can handle the vast majority of cases (say, 98%) then it's a useful
tool, and people who need RegSetKeySecurity and similar can go to win32api.
Do the distutils developers know how much registry access they need?

   Enough fence sitting for now,

   Neil





From MarkH@ActiveState.com  Tue Aug  1 12:08:58 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Tue, 1 Aug 2000 21:08:58 +1000
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <010501bffba5$db4ebf90$8119fea9@neil>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCELIDCAA.MarkH@ActiveState.com>

Just to clarify (or confuse) the issue:

>    When writing an urllib patch recently, winreg seemed OK. Is
> it complete
> enough? Are the things you can't do with it important for its
> role? IMO, if
> winreg can handle the vast majority of cases (say, 98%) then its a useful
> tool and people who need RegSetKeySecurity and similar can go to
> win32api.

Note that Neil was actually using _winreg - the exposure of the raw Win32
API.  Part of my applying the patch was to rename the usage of "winreg" to
"_winreg".

Between the time of you writing the original patch and it being applied,
the old "winreg" module was renamed to "_winreg", and Paul's new
"winreg.py" was added.  The bone of contention is the new "winreg.py"
module, which urllib does _not_ use.

Mark.



From jim@interet.com  Tue Aug  1 14:28:40 2000
From: jim@interet.com (James C. Ahlstrom)
Date: Tue, 01 Aug 2000 09:28:40 -0400
Subject: [Python-Dev] InfoWorld July 17 looks at Zope and Python
References: <397DB146.C68F9CD0@interet.com> <398654A8.37EB17BA@prescod.net>
Message-ID: <3986D088.E82E2162@interet.com>

Paul Prescod wrote:
> 
> Would you mind giving me the jist of the review? 20-word summary, if you
> don't mind.

Please note that I don't necessarily agree with the
reviews.  Also, there is no such thing as bad publicity.

Page 50: "Zope is a powerful application server.  Version
2.2 beta scales well, but enterprise capability, Python
language raise costs beyond the competition's."

Author claims he must purchase ZEO for $25-50K which is
too expensive.  Zope is dedicated to OOP, but shops not
doing OOP will have problems understanding it.  Python
expertise is necessary, but shops already know VB, C++ and
JavaScript.

Page 58:  "After many tutorials, I'm still waiting to
become a Zope addict."

Zope is based on Python, but that is no problem because
you do most programming in DTML which is like HTML.  It is
hard to get started in Zope because of lack of documentation,
it is hard to write code in browser text box, OOP-to-the-max
philosophy is unlike a familiar relational data base.
Zope has an unnecessarily high nerd factor.  It fails to
automate simple tasks.


My point in all this is that we design features to
appeal to computer scientists instead of "normal users".

JimA


From billtut@microsoft.com  Tue Aug  1 14:57:37 2000
From: billtut@microsoft.com (Bill Tutt)
Date: Tue, 1 Aug 2000 06:57:37 -0700
Subject: [Python-Dev] New winreg module really an improvement?
Message-ID: <58C671173DB6174A93E9ED88DCB0883D0A610A@red-msg-07.redmond.corp.microsoft.com>

Mark wrote: 
> To try and get more productive:  Bill, Gordon et al appear to have the
> sense to stay out of this debate.  Unless other people do chime in, Paul
> and I will remain at an impasse, and no one will be happy.  I would much
> prefer to move this forward than to vent at each other regarding mails
> neither of us can remember in detail ;-)

I'm actually in the process of checking it out, and am hoping to compose
some comments on it later today.
I do know this about abstracting the registry APIs. If it doesn't allow you
to do everything you can do with the normal APIs, then you've failed in your
abstraction. (Which is probably why I've never yet seen a successful
abstraction of the API. :) )
The registry is indeed a bizarre critter. Keys have values, and values have
values. Ugh.... It's enough to drive a sane man bonkers, and here I was
being annoyed by the person who originally designed the NT console APIs,
silly me....

Bill




From gmcm@hypernet.com  Tue Aug  1 17:16:54 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Tue, 1 Aug 2000 12:16:54 -0400
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>
References: <39866F8D.FCFA85CB@prescod.net>
Message-ID: <1246975873-72274187@hypernet.com>

[Mark]
> To try and get more productive:  Bill, Gordon et al appear to
> have the sense to stay out of this debate.  

Wish I had that much sense...

I'm only +0 on the philosophy of a friendly Pythonic wrapper: 
the registry is only rarely the "right" solution. You need it 
when you have small amounts of persistent data that needs to 
be available to multiple apps and / or Windows. I actively 
discourage use of the registry for any other purposes. So 
making it easy to use is of very low priority for me. 

In addition, I doubt that a sane wrapper can be put over it. At 
first blush, it looks like a nested dict. But the keys are 
ordered. And a leaf is more like a list of tuples [(value, data), ]. 
But if you pull up regedit and look at how it's used, the (user-
speak) "value" may be a (MS-speak) "key", "value" or "data". 
Just note the number of entries where a keyname has one 
(value, data) pair that consists of ("(Default)", "(value not 
set)"). Or the number where keyname must be opened, but 
the (value, data) pair is ("(Default)", something). (It doesn't 
help that "key" may mean "keyname" or "keyhandle", and 
"name" may mean "keyname" or "valuename" and "value" 
may mean "valuename" or "datavalue".)
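
To make the mismatch concrete, here is a rough pure-Python model of the
structure being described - each key is simultaneously a mapping of named
subkeys *and* an ordered list of (value-name, data) pairs, which is exactly
why no single builtin container fits (illustrative only, not proposed API):

```python
# A rough model of a registry key: part mapping, part ordered pair-list.
class RegNode:
    def __init__(self):
        self.subkeys = {}   # name -> RegNode (the "nested dict" half)
        self.values = []    # ordered (value_name, data) pairs (the leaf half)

    def subkey(self, name):
        # Create-on-access, like RegCreateKey.
        return self.subkeys.setdefault(name, RegNode())

root = RegNode()
k = root.subkey("Software").subkey("Example")
k.values.append(("(Default)", "(value not set)"))
k.values.append(("Version", "1.0"))
```

Any wrapper has to decide whether indexing a node means "subkey" or
"value", which is the ambiguity regedit itself papers over.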

IOW, this isn't like passing lists (instead of FD_SETs) to  
select. No known abstract container matches the registry. My 
suspicion is that any attempt to map it just means the user 
will have to understand both the underlying API and the 
mapping.

As a practical matter, it looks to me like winreg (under any but 
the most well-informed usage) may well leak handles. If so, 
that would be a disaster. But I don't have time to check it out.

In sum:
 - I doubt the registry can be made to look elegant
 - I use it so little I don't really care

- Gordon


From paul@prescod.net  Tue Aug  1 17:52:45 2000
From: paul@prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 12:52:45 -0400
Subject: [Python-Dev] Winreg recap
Message-ID: <3987005C.9C45D7B6@prescod.net>

I specifically asked everyone here if an abstraction was a good idea. I
got three + votes and no - votes. One of the + votes requested that we
still ship the underlying module. Fine. I was actually pointed (on
python-dev) to specs for an abstraction layer that AFAIK had been
designed *on Python-dev*.

Back then, I said:

> > I've just had a chance to look at the winreg module. I think that it is
> > too low-level.

Mark Hammond said:
> I agree. There was a proposal (from Thomas Heller, IIRC) to do just this.
> I successfully argued there should be _2_ modules for Python - the raw
> low-level API, which guarantees you can do (almost) anything.  A
> higher-level API could cover the 80% of cases.
> ...
> I have no real problem with your proposed design, as long as it is written
> in Python, _using_ the low-level API.  It could be called "registry" or I
> would even be happy for "winreg.pyd" -> "_winreg.pyd" and your new module
> to be called "winreg.py"

Gordon pointed me to the spec. I took it and expanded on it to cover a
wider range of cases.

So now I go off and code it up and in addition to complaining about one
detail, I'm also told that there is no real point to having a high level
API. Windows users are accustomed to hacks and pain so crank it up!

> FWIW, I ignored all the winreg modules, and all the debate about them.  Why?
> Just because Mark's had been in use for years already, so was already
> battle-tested.  There's no chance that any other platform will ever make use
> of this module, and given that its appeal is thus solely to Windows users,
> it was fine by me if it didn't abstract *anything* away from MS's Win32 API.

It is precisely because it is for Windows users -- often coming from VB,
JavaScript or now C# -- that it needs to be abstracted.

I have the philosophy that I come to Python (both the language and the
library) because I want things to be easy and powerful at the same time.
Wherever feasible, our libraries *should* be cleaner and better than
the hacks that they cover up. Shouldn't they? I mean even *Microsoft*
abstracted over the registry API for VB, JavaScript, C# (and perhaps
Java). Are we really going to do less for our users?

To me, Python (language and library) is a refuge from the hackiness of
the rest of the world.

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"


From paul@prescod.net  Tue Aug  1 17:53:31 2000
From: paul@prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 12:53:31 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBAEEGDCAA.mhammond@skippinet.com.au> <200007281206.HAA04102@cj20424-a.reston1.va.home.com>
Message-ID: <3987008B.35D5C2A2@prescod.net>

Guido van Rossum wrote:
> 
> I vaguely remember that I wasn't happy with the way this was handled
> either, but was too busy at the time to look into it.  (I can't say
> whether I'm happy with the module or not, since I've never tried to
> use it.  But I do feel unhappy about the process.)

I was also unhappy with the process, but from a different perspective.

A new module appeared in the Python library. It was based on tried and
true code: _winreg. But its API had many placeholder arguments (where
Microsoft had placeholder arguments), used function call syntax for
things that were clearly methods (as Microsoft did for C), had an
enumeration mechanism that seems, to me, to be very unPythonic,
had many undocumented features and constants, and the documented
methods and properties often follow weird Microsoft conventions
(e.g. SetValueEx).

The LaTeX documentation for the old winreg says of one method: "This is
Lame Lame Lame, DO NOT USE THIS!!!"

Now I am still working on new winreg. I got involved in a recursive 
project to avoid writing the docs twice in two different formats. We 
are still in the beta period so there is no need to panic about 
documentation yet.

I would love nothing more than to hear that Windows registry handling is
hereby delayed until Python 7 or until someone more interested wants to
work on it for the love of programming. But if that is not going to
happen then I will strongly advise against falling back to _winreg which
is severely non-Pythonic.

> I vaguely remember that Paul Prescod's main gripe with the _winreg API
> was that it's not object-oriented enough -- but that seems his main
> gripe about most code these days. :-)

In this case it wasn't a mild preference, it was a strong allergic
reaction!

> Paul, how much experience with using the Windows registry did you have
> when you designed the new API?

I use it off and on. There are still corners of _winreg that I don't
understand. That's part of why I thought it needed to be covered up with
something that could be fully documented. To get even the level of
understanding I have, of the *original* _winreg, I had to scour the Web.
The perl docs were the most helpful. :)

Anyhow, Mark isn't complaining about me misunderstanding it, he's
complaining about my mapping into the Python object model. That's fair.
That's what python-dev is for.

As far as Greg using _winreg, my impression was that that code predates
new winreg. I think that anyone who reads even just the docstrings for
the new one and the documentation for the other is going to feel that 
the new one is at the very least more organized and thought out. Whether
it is properly designed is up to users to decide.

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"


From guido@beopen.com  Tue Aug  1 19:20:23 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 13:20:23 -0500
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: Your message of "Tue, 01 Aug 2000 03:16:30 -0400."
 <3986794E.ADBB938C@prescod.net>
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>
 <3986794E.ADBB938C@prescod.net>
Message-ID: <200008011820.NAA30284@cj20424-a.reston1.va.home.com>

Paul wrote:
> I had no personal interest in an API for the windows registry but I
> could not, in good conscience, let the original one become the 
> standard Python registry API.

and later:
> I use it off and on. There are still corners of _winreg that I don't
> understand. That's part of why I thought it needed to be covered up with
> something that could be fully documented. To get even the level of
> understanding I have, of the *original* _winreg, I had to scour the Web.
> The perl docs were the most helpful. :)

I believe this is the crux of the problem.  Your only mistake was that
you criticized and then tried to redesign a (poorly designed) API that
you weren't intimately familiar with.

My boss tries to do this occasionally; he has a tendency to complain
that my code doesn't contain enough classes.  I tell him to go away --
he only just started learning Python from a book that I've never seen,
so he wouldn't understand...

Paul, I think that the best thing to do now is to withdraw winreg.py,
and to keep (and document!) the _winreg extension with the
understanding that it's a wrapper around a poorly designed API, but at
least it's very close to the C API.  The leading underscore should be
a hint that this is not a module for every day use.

Hopefully someone will eventually create a set of higher-level
bindings modeled after the Java, VB or C# version of the API.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From fdrake@beopen.com  Tue Aug  1 18:43:16 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 1 Aug 2000 13:43:16 -0400 (EDT)
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <200008011820.NAA30284@cj20424-a.reston1.va.home.com>
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>
 <3986794E.ADBB938C@prescod.net>
 <200008011820.NAA30284@cj20424-a.reston1.va.home.com>
Message-ID: <14727.3124.622333.980689@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > and to keep (and document!) the _winreg extension with the
 > understanding that it's a wrapper around poorly designed API but at
 > least it's very close to the C API.  The leading underscore should be
 > a hint that this is not a module for every day use.

  It is documented (as _winreg), but I've not reviewed the section in
great detail (yet!).  It will need to be revised to not refer to the
winreg module as the preferred API.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From Moshe Zadka <moshez@math.huji.ac.il>  Tue Aug  1 19:30:48 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Tue, 1 Aug 2000 21:30:48 +0300 (IDT)
Subject: [Python-Dev] Bug Database
Message-ID: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>

I've just had a quick view over the database, and saw what we can prune at
no cost:

110647 -- Segmentation fault in "%.1200d" % 1. Fixed for me...
110649 -- Core dumps on compiling big expressions ('['+'1,'*100000+'1]'). 
          Fixed for me -- now throws a SyntaxError
110653 -- Complain about how
          class foo:
              def __init__(self):
                  self.bar1 = bar
              def bar(self):
                  pass
          creates cycles. A notabug if I ever saw one.
110654 -- 1+0j tested false. The bug was fixed.
110679 -- math.log(0) dumps core. Gives OverflowError for me...(I'm using
          a different OS, but the same CPU family (intel))
110710 -- range(10**n) gave segfault. Works for me -- either works, or throws
          MemoryError
110711 -- apply(foo, bar, {}) throws MemoryError. Works for me. (But might
          be an SGI problem)
110712 -- seems to be a duplicate of 110711
110715 -- urllib.urlretrieve() segfaults under kernel 2.2.12. Works for
          me with 2.2.15. 
110740, 110741, 110743, 110745, 110746, 110747, 110749, 110750 -- dups of 110715
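
A few of these can be smoke-tested directly (the behavior shown is that of
a current interpreter, which may differ in detail from the 1.6-era builds
the reports were filed against):

```python
import math

# 110647: "%.1200d" % 1 formerly segfaulted; now it just pads to 1200 digits.
assert len("%.1200d" % 1) == 1200

# 110654: a nonzero complex number must test true.
assert (1+0j)

# 110679: math.log(0) raises an exception instead of dumping core.
try:
    math.log(0)
except (OverflowError, ValueError):
    pass
else:
    raise AssertionError("math.log(0) should raise")
```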

I've got to go to sleep now....

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From jeremy@beopen.com  Tue Aug  1 19:47:47 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 1 Aug 2000 14:47:47 -0400 (EDT)
Subject: [Python-Dev] Bug Database
In-Reply-To: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>
References: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>
Message-ID: <14727.6995.164586.983795@bitdiddle.concentric.net>

Thanks for doing some triage, Moshe!

I am in the process of moving the bug database from jitterbug to
SourceForge.  There are still a few kinks in the process, which I am
trying to finish today.  There are two problems you will see with the
current database:

    * Many bugs do not contain all the followup messages that
    Jitterbug has.  I am working to add them.

    * There are many duplicates of some bugs.  The duplicates are the
    result of the debugging process for my Jitterbug -> SF conversion
    script.  I will remove these before I am done.  Any bug numbered
    higher than 110740 is probably a duplicate at this point.

The conversion process has placed most of the Jitterbug entries in the
SF bug tracker.  The PR number is in the SF summary and most of the
relevant Jitterbug headers (submitter, date, os, version) are part of
the body of the SF bug.  Any followups to the Jitterbug report are
stored as followup comments in SF.

The SF bug tracker has several fields that we can use to manage bug
reports.

* Category: Describes what part of Python the bug applies to.  Current
values are parser/compiler, core, modules, library, build, windows,
documentation.  We can add more categories, e.g. library/xml, if that
is helpful.

* Priority: We can assign a value from 1 to 9, where 9 is the highest
priority.  We will have to develop some guidelines for what those
priorities mean.  Right now everything is priority 5 (medium).  I would
hazard a guess that bugs causing core dumps should have much higher
priority.

* Group: These reproduce some of the Jitterbug groups, like trash,
platform-specific, and irreproducible.  These are rough categories
that we can use, but I'm not sure how valuable they are.

* Resolution: What we plan to do about the bug.

* Assigned To: We can now assign bugs to specific people for
resolution.

* Status: Open or Closed.  When a bug has been fixed in the CVS
repository and a test case added to cover the bug, change its status
to Closed.

New bug reports should use the sourceforge interface.

Jeremy


From guido@beopen.com  Tue Aug  1 21:14:39 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 15:14:39 -0500
Subject: [Python-Dev] Bug Database
In-Reply-To: Your message of "Tue, 01 Aug 2000 14:47:47 -0400."
 <14727.6995.164586.983795@bitdiddle.concentric.net>
References: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>
 <14727.6995.164586.983795@bitdiddle.concentric.net>
Message-ID: <200008012014.PAA31076@cj20424-a.reston1.va.home.com>

> * Category: Describes what part of Python the bug applies to.  Current
> values are parser/compiler, core, modules, library, build, windows,
> documentation.  We can add more categories, e.g. library/xml, if that
> is helpful.

Before it's too late, would it make sense to try and get the
categories to be the same in the Bug and Patch managers?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From m.favas@per.dem.csiro.au  Tue Aug  1 21:30:42 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Wed, 02 Aug 2000 04:30:42 +0800
Subject: [Python-Dev] regression test failure in test_tokenize?
Message-ID: <39873372.1C6F8CE1@per.dem.csiro.au>

Current CVS (Wed Aug  2 04:22:16 WST 2000) fails on Tru64 Unix:

./python Lib/test/regrtest.py test_tokenize.py 
test_tokenize
test test_tokenize failed -- Writing: "57,4-57,5:\011NUMBER\011'3'",
expected: "57,4-57,8:\011NUMBER\011'3."
1 test failed: test_tokenize

Test produces (snipped):
57,4-57,5:      NUMBER  '3'

Test should produce (if supplied output correct):
57,4-57,8:      NUMBER  '3.14'
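
For reference, '3.14' should come through the tokenizer as a single NUMBER
token rather than being split at the decimal point; with today's tokenize
module the expectation can be checked like this (an illustration, not the
regression test itself):

```python
import io
import tokenize

src = "x = 3.14\n"
toks = [(tokenize.tok_name[tok.type], tok.string)
        for tok in tokenize.generate_tokens(io.StringIO(src).readline)]

# The float literal must arrive as one NUMBER token, '3.14' --
# a bare '3' would mean the scan stopped at the decimal point.
assert ("NUMBER", "3.14") in toks
assert ("NUMBER", "3") not in toks
```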

Is this just me, or an un-checked checkin? (I noticed some new sre bits
in my current CVS version.)

Mark

-- 
Email  - m.favas@per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913


From akuchlin@mems-exchange.org  Tue Aug  1 21:47:57 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Tue, 1 Aug 2000 16:47:57 -0400
Subject: [Python-Dev] regression test failure in test_tokenize?
In-Reply-To: <39873372.1C6F8CE1@per.dem.csiro.au>; from m.favas@per.dem.csiro.au on Wed, Aug 02, 2000 at 04:30:42AM +0800
References: <39873372.1C6F8CE1@per.dem.csiro.au>
Message-ID: <20000801164757.B27333@kronos.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 04:30:42AM +0800, Mark Favas wrote:
>Current CVS (Wed Aug  2 04:22:16 WST 2000) fails on Tru64 Unix:
>Is this just me, or an un-checked checkin? (I noticed some new sre bits
>in my current CVS version.)

test_tokenize works fine using the current CVS on Linux; perhaps this
is a 64-bit problem in sre manifesting itself?

--amk


From effbot@telia.com  Tue Aug  1 22:16:15 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 1 Aug 2000 23:16:15 +0200
Subject: [Python-Dev] regression test failure in test_tokenize?
References: <39873372.1C6F8CE1@per.dem.csiro.au> <20000801164757.B27333@kronos.cnri.reston.va.us>
Message-ID: <02ac01bffbfd$bc6b27a0$f2a6b5d4@hagrid>

andrew wrote:
> On Wed, Aug 02, 2000 at 04:30:42AM +0800, Mark Favas wrote:
> >Current CVS (Wed Aug  2 04:22:16 WST 2000) fails on Tru64 Unix:
> >Is this just me, or an un-checked checkin? (I noticed some new sre bits
> >in my current CVS version.)
> 
> test_tokenize works fine using the current CVS on Linux; perhaps this
> is a 64-bit problem in sre manifesting itself?

I've confirmed (and fixed) the bug reported by Mark.  It was a nasty
little off-by-one error in the "branch predictor" code...

But I think I know why you didn't see anything: Guido just checked
in the following change to re.py:

*** 21,26 ****
  #
  
! engine = "sre"
! # engine = "pre"
  
  if engine == "sre":
--- 21,26 ----
  #
  
! # engine = "sre"
! engine = "pre"
  
  if engine == "sre":

</F>



From guido@beopen.com  Tue Aug  1 23:21:51 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 17:21:51 -0500
Subject: [Python-Dev] regression test failure in test_tokenize?
In-Reply-To: Your message of "Tue, 01 Aug 2000 23:16:15 +0200."
 <02ac01bffbfd$bc6b27a0$f2a6b5d4@hagrid>
References: <39873372.1C6F8CE1@per.dem.csiro.au> <20000801164757.B27333@kronos.cnri.reston.va.us>
 <02ac01bffbfd$bc6b27a0$f2a6b5d4@hagrid>
Message-ID: <200008012221.RAA05722@cj20424-a.reston1.va.home.com>

> But I think I know why you didn't see anything: Guido just checked
> in the following change to re.py:
> 
> *** 21,26 ****
>   #
>   
> ! engine = "sre"
> ! # engine = "pre"
>   
>   if engine == "sre":
> --- 21,26 ----
>   #
>   
> ! # engine = "sre"
> ! engine = "pre"
>   
>   if engine == "sre":

Ouch.  did I really?  I didn't intend to!  I'll back out right away...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From barry@scottb.demon.co.uk  Wed Aug  2 00:01:29 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Wed, 2 Aug 2000 00:01:29 +0100
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <000701bff108$950ec9f0$060210ac@private>
Message-ID: <000801bffc0c$6d985490$060210ac@private>

If someone in the core of Python thinks a patch implementing
what I've outlined is useful please let me know and I will
generate the patch.

	Barry



From MarkH@ActiveState.com  Wed Aug  2 00:13:31 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Wed, 2 Aug 2000 09:13:31 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <000801bffc0c$6d985490$060210ac@private>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIENBDCAA.MarkH@ActiveState.com>

> If someone in the core of Python thinks a patch implementing
> what I've outlined is useful please let me know and I will
> generate the patch.

Umm - I'm afraid that I don't keep my python-dev emails for that long, and
right now I'm too lazy/busy to dig around the archives.

Exactly what did you outline?  I know it went around a few times, and I
can't remember who said what.  For my money, I liked Fredrik's solution
best (check Py_IsInitialized() in Py_InitModule4()), but as mentioned that
only solves it for the next version of Python; it doesn't solve the fact
that 1.5 modules will crash under 1.6/2.0.

It would definitely be excellent to get _something_ into the CNRI 1.6
release, so the BeOpen 2.0 release can see the results.

But-I-doubt-anyone-will-release-extension-modules-for-1.6-anyway ly,

Mark.




From jeremy@beopen.com  Wed Aug  2 00:56:27 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 1 Aug 2000 19:56:27 -0400 (EDT)
Subject: [Python-Dev] Bug Database
In-Reply-To: <200008012014.PAA31076@cj20424-a.reston1.va.home.com>
References: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>
 <14727.6995.164586.983795@bitdiddle.concentric.net>
 <200008012014.PAA31076@cj20424-a.reston1.va.home.com>
Message-ID: <14727.25515.570860.775496@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

  >> * Category: Describes what part of Python the bug applies to.
  >> Current values are parser/compiler, core, modules, library,
  >> build, windows, documentation.  We can add more categories,
  >> e.g. library/xml, if that is helpful.

  GvR> Before it's too late, would it make sense to try and get the
  GvR> categories to be the same in the Bug and Patch managers?

Yes, as best we can.  We've got all the same names, though the
capitalization varies sometimes.

Jeremy


From gstein@lyra.org  Wed Aug  2 02:26:51 2000
From: gstein@lyra.org (Greg Stein)
Date: Tue, 1 Aug 2000 18:26:51 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PC _winreg.c,1.7,1.8
In-Reply-To: <200007280344.UAA12335@slayer.i.sourceforge.net>; from mhammond@users.sourceforge.net on Thu, Jul 27, 2000 at 08:44:43PM -0700
References: <200007280344.UAA12335@slayer.i.sourceforge.net>
Message-ID: <20000801182651.S19525@lyra.org>

This could be simplified quite a bit by using PyObject_AsReadBuffer() from
abstract.h ...

Cheers,
-g

On Thu, Jul 27, 2000 at 08:44:43PM -0700, Mark Hammond wrote:
> Update of /cvsroot/python/python/dist/src/PC
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv12325
> 
> Modified Files:
> 	_winreg.c 
> Log Message:
> Allow any object supporting the buffer protocol to be written as a binary object.
> 
> Index: _winreg.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/PC/_winreg.c,v
> retrieving revision 1.7
> retrieving revision 1.8
> diff -C2 -r1.7 -r1.8
> *** _winreg.c	2000/07/16 12:04:32	1.7
> --- _winreg.c	2000/07/28 03:44:41	1.8
> ***************
> *** 831,837 ****
>   				*retDataSize = 0;
>   			else {
> ! 				if (!PyString_Check(value))
> ! 					return 0;
> ! 				*retDataSize = PyString_Size(value);
>   				*retDataBuf = (BYTE *)PyMem_NEW(char,
>   								*retDataSize);
> --- 831,844 ----
>   				*retDataSize = 0;
>   			else {
> ! 				void *src_buf;
> ! 				PyBufferProcs *pb = value->ob_type->tp_as_buffer;
> ! 				if (pb==NULL) {
> ! 					PyErr_Format(PyExc_TypeError, 
> ! 						"Objects of type '%s' can not "
> ! 						"be used as binary registry values", 
> ! 						value->ob_type->tp_name);
> ! 					return FALSE;
> ! 				}
> ! 				*retDataSize = (*pb->bf_getreadbuffer)(value, 0, &src_buf);
>   				*retDataBuf = (BYTE *)PyMem_NEW(char,
>   								*retDataSize);
> ***************
> *** 840,847 ****
>   					return FALSE;
>   				}
> ! 				memcpy(*retDataBuf,
> ! 				       PyString_AS_STRING(
> ! 				       		(PyStringObject *)value),
> ! 				       *retDataSize);
>   			}
>   			break;
> --- 847,851 ----
>   					return FALSE;
>   				}
> ! 				memcpy(*retDataBuf, src_buf, *retDataSize);
>   			}
>   			break;
> 
> 
> _______________________________________________
> Python-checkins mailing list
> Python-checkins@python.org
> http://www.python.org/mailman/listinfo/python-checkins

-- 
Greg Stein, http://www.lyra.org/


From guido@beopen.com  Wed Aug  2 05:09:38 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 23:09:38 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
Message-ID: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>

We still don't have a new license for Python 1.6; Bob Kahn and Richard
Stallman need to talk before a decision can be made about how to deal
with the one remaining GPL incompatibility.  While we're all waiting,
we're preparing the CNRI 1.6 release at SourceForge (part of the deal
is that the PythonLabs group finishes the 1.6 release for CNRI).  The
last thing I committed today was the text (dictated by Bob Kahn) for
the new LICENSE file that will be part of the 1.6 beta 1 release.
(Modulo any changes that will be made to the license text to ensure
GPL compatibility.)

Since anyone with an anonymous CVS setup can now read the license
anyway, I might as well post a copy here so that you can all get used
to it...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

======== LICENSE =======================================================

A. HISTORY OF THE SOFTWARE

Python originated in 1991 at Stichting Mathematisch Centrum (CWI) in
the Netherlands as an outgrowth of a language called ABC.  Its
principal author was Guido van Rossum, although it included smaller
contributions from others at CWI and elsewhere.  The last version of
Python issued by CWI was Python 1.2.  In 1995, Mr. van Rossum
continued his work on Python at the Corporation for National Research
Initiatives (CNRI) in Reston, Virginia where several versions of the
software were generated.  Python 1.6 is the last of the versions
developed at CNRI.



B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING Python 1.6, beta 1


1. CNRI LICENSE AGREEMENT 

        PYTHON 1.6, beta 1

        CNRI OPEN SOURCE LICENSE AGREEMENT


IMPORTANT: PLEASE READ THE FOLLOWING AGREEMENT CAREFULLY.

BY CLICKING ON "ACCEPT" WHERE INDICATED BELOW, OR BY COPYING,
INSTALLING OR OTHERWISE USING PYTHON 1.6, beta 1 SOFTWARE, YOU ARE
DEEMED TO HAVE AGREED TO THE TERMS AND CONDITIONS OF THIS LICENSE
AGREEMENT.

1. This LICENSE AGREEMENT is between the Corporation for National
Research Initiatives, having an office at 1895 Preston White Drive,
Reston, VA 20191 ("CNRI"), and the Individual or Organization
("Licensee") accessing and otherwise using Python 1.6, beta 1 software
in source or binary form and its associated documentation, as released
at the www.python.org Internet site on August 5, 2000 ("Python
1.6b1").

2. Subject to the terms and conditions of this License Agreement, CNRI
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python 1.6b1
alone or in any derivative version, provided, however, that CNRI's
License Agreement is retained in Python 1.6b1, alone or in any
derivative version prepared by Licensee.

Alternately, in lieu of CNRI's License Agreement, Licensee may
substitute the following text (omitting the quotes): "Python 1.6, beta
1, is made available subject to the terms and conditions in CNRI's
License Agreement.  This Agreement may be located on the Internet
using the following unique, persistent identifier (known as a handle):
1895.22/1011.  This Agreement may also be obtained from a proxy server
on the Internet using the URL:http://hdl.handle.net/1895.22/1011".

3. In the event Licensee prepares a derivative work that is based on
or incorporates Python 1.6b1or any part thereof, and wants to make the
derivative work available to the public as provided herein, then
Licensee hereby agrees to indicate in any such work the nature of the
modifications made to Python 1.6b1.

4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. This License Agreement shall be governed by and interpreted in all
respects by the law of the State of Virginia, excluding conflict of
law provisions.  Nothing in this License Agreement shall be deemed to
create any relationship of agency, partnership, or joint venture
between CNRI and Licensee.  This License Agreement does not grant
permission to use CNRI trademarks or trade name in a trademark sense
to endorse or promote products or services of Licensee, or any third
party.

8. By clicking on the "ACCEPT" button where indicated, or by copying
installing or otherwise using Python 1.6b1, Licensee agrees to be
bound by the terms and conditions of this License Agreement.

        ACCEPT



2. CWI PERMISSIONS STATEMENT AND DISCLAIMER

Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
The Netherlands.  All rights reserved.

Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Stichting Mathematisch
Centrum or CWI not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.

STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

========================================================================


From guido@beopen.com  Wed Aug  2 05:42:30 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 23:42:30 -0500
Subject: [Python-Dev] BeOpen statement about Python license
Message-ID: <200008020442.XAA01587@cj20424-a.reston1.va.home.com>

Bob Weiner, BeOpen's CTO, has this to say about the Python license:

  Here's the official word from BeOpen.com regarding any potential
  license change on Python 1.6 (the last CNRI Python release) and
  subsequent versions:

    The Python license is fully open source compliant, as certified by
    the Open Source Initiative.  That means that if you look at
    www.opensource.org/osd.html, then this license complies with those
    9 precepts, allowing broad freedom of use, distribution and
    modification.

    The Python license will continue to allow fully proprietary
    software development.

    The license issues are down to one point which we are working to
    resolve together with CNRI and involving potential
    GPL-compatibility.  It is a small point regarding a requirement
    that the license be interpreted under the terms of Virginia law.
    One lawyer has said that this doesn't affect GPL-compatibility but
    Richard Stallman of the FSF has felt differently; he views it as a
    potential additional restriction of rights beyond those listed in
    the GPL.  So work continues to resolve on this point before the
    license is published or attached to any code.  We are presently
    waiting for follow-up from Stallman on this point.

  In summary, BeOpen.com is actively working to keep Python the
  extremely open platform it has traditionally been and to resolve
  legal issues such as this in ways that benefit Python users
  worldwide.  CNRI is working along the same lines as well.

  Please assure yourselves and your management that Python continues
  to allow for both open and closed software development.

  Regards,

  Bob Weiner

I (Guido) hope that this, together with the draft license text that I
just posted, clarifies matters for now!  I'll post more news as it
happens,

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From thomas@xs4all.net  Wed Aug  2 07:12:54 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 08:12:54 +0200
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: <200008012122.OAA22327@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Tue, Aug 01, 2000 at 02:22:20PM -0700
References: <200008012122.OAA22327@slayer.i.sourceforge.net>
Message-ID: <20000802081254.V266@xs4all.nl>

On Tue, Aug 01, 2000 at 02:22:20PM -0700, Guido van Rossum wrote:
> Update of /cvsroot/python/python/dist/src/Lib
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv22316

> Modified Files:
> 	re.py 
> Log Message:
> My fix to the URL accidentally also switched back to the "pre" module.
> Undo that!

This kind of thing is one of the reasons I wish 'cvs commit' would give you
the entire patch you're about to commit in the log-message-edit screen, as
CVS: comments, rather than just the modified files. It would also help with
remembering what the patch was supposed to do ;) Is this possible with CVS,
other than an 'EDITOR' that does this for you ?
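Thomas's wish can be approximated with a small wrapper script used as EDITOR.  A minimal Python sketch, under assumptions: the `cvs diff -u` invocation, the `REAL_EDITOR` variable for the hand-off, and the template-path handling are all illustrative, not a tested CVS integration.

```python
"""Hypothetical EDITOR wrapper: prepend the pending 'cvs diff' output
to the commit log template as CVS: comments, which cvs strips before
recording the log message."""

import os
import subprocess


def as_cvs_comments(diff_text):
    """Prefix every diff line with 'CVS: ' so cvs discards it."""
    return "".join("CVS: %s\n" % line for line in diff_text.splitlines())


def edit_log_template(template_path):
    # Ask cvs for the patch about to be committed (assumes the current
    # directory is the working copy being committed).
    diff = subprocess.run(["cvs", "diff", "-u"],
                          capture_output=True, text=True).stdout
    with open(template_path) as f:
        template = f.read()
    with open(template_path, "w") as f:
        f.write(as_cvs_comments(diff) + template)
    # Hand off to the user's real editor.
    editor = os.environ.get("REAL_EDITOR", "vi")
    os.execvp(editor, [editor, template_path])


print(as_cvs_comments("Index: re.py\n+engine = 'sre'"), end="")
```

Setting `EDITOR` to this script would then show the full patch, as CVS: comments, in the log-message-edit screen.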

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From paul@prescod.net  Wed Aug  2 08:30:30 2000
From: paul@prescod.net (Paul Prescod)
Date: Wed, 02 Aug 2000 03:30:30 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>
 <3986794E.ADBB938C@prescod.net> <200008011820.NAA30284@cj20424-a.reston1.va.home.com>
Message-ID: <3987CE16.DB3E72B8@prescod.net>

Guido van Rossum wrote:
> 
> ...
> 
> I believe this is the crux of the problem.  Your only mistake was that
> you criticized and then tried to redesign a (poorly designed) API that
> you weren't intimately familiar with.

I don't think that this has been demonstrated. We have one complaint
about one method from Mark and silence from everyone else (and about
everything else). The Windows registry is weird in its terminology, but
it isn't brain surgery.

Yes, I had to do some research on what various things do but I expect
that almost anyone would have had to do that. Some of the constants in
the module are meant to be used with functions that are not even exposed
in the module. This indicates to me that nobody has clearly thought out
all of the details (and also that _winreg is not a complete binding to
the API). I probably understand the original API as well as anyone and
more than most, by now.

Anyhow, the list at the bottom should demonstrate that I understand the
API at least as well as the Microsoftie that invented the .NET API for
Java, VB and everything else.

> Hopefully someday someone will eventually create a set of higher level
> bindings modeled after the Java, VB or C# version of the API.

Mark sent me those specs and I believe that the module I sent out *is*
very similar to that higher level API.

Specifically (>>> is Python version)

Equals (inherited from Object) 
>>> __cmp__

key.Name
>>> key.name

key.SubKeyCount
>>> len( key.getSubkeys() )

key.ValueCount
>>> len( key.getValues() )

Close
>>> key.close()

CreateSubKey
>>> key.createSubkey()

DeleteSubKey
>>> key.deleteSubkey()

DeleteSubKeyTree
>>> (didn't get around to implementing/testing something like this)

DeleteValue
>>> key.deleteValue()

GetSubKeyNames
>>> key.getSubkeyNames()

GetValue
>>> key.getValueData()

GetValueNames
>>> key.getValueNames()

OpenRemoteBaseKey
>>> key=RemoteKey( ... )

OpenSubKey
>>> key.openSubkey

SetValue
>>> key.setValue()

 ToString
>>> str( key )

My API also has some features for enumerating that this does not have.
Mark has a problem with one of those. I don't see how that makes the
entire API "unintuitive", considering it is more or less a renaming of
the .NET API.
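The mapping above can be sketched as a key object with exactly those method names.  This is an in-memory stand-in only: a real winreg.py would delegate to _winreg, and the constructor arguments here are illustrative assumptions.

```python
class RegKey:
    """In-memory stand-in sketching the method surface listed above
    (the .NET RegistryKey API, renamed).  Real code would call down
    into _winreg; everything here is illustrative."""

    def __init__(self, name, values=None, subkeys=None):
        self.name = name
        self._values = dict(values or {})    # value name -> data
        self._subkeys = dict(subkeys or {})  # subkey name -> RegKey

    def getSubkeyNames(self):
        return sorted(self._subkeys)

    def getValueNames(self):
        return sorted(self._values)

    def getValueData(self, name):
        return self._values[name]

    def setValue(self, name, data):
        self._values[name] = data

    def deleteValue(self, name):
        del self._values[name]

    def openSubkey(self, name):
        return self._subkeys[name]

    def createSubkey(self, name):
        return self._subkeys.setdefault(name, RegKey(name))

    def close(self):
        pass

    def __str__(self):          # the ToString analogue
        return self.name


k = RegKey("HKEY_CURRENT_USER")
sub = k.createSubkey("Software")
sub.setValue("Editor", "vi")
print(sub.getValueData("Editor"))   # vi
print(k.getSubkeyNames())           # ['Software']
```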

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"


From effbot@telia.com  Wed Aug  2 08:07:27 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Wed, 2 Aug 2000 09:07:27 +0200
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>              <3986794E.ADBB938C@prescod.net>  <200008011820.NAA30284@cj20424-a.reston1.va.home.com>
Message-ID: <004d01bffc50$522fa2a0$f2a6b5d4@hagrid>

guido wrote:
> Paul, I think that the best thing to do now is to withdraw winreg.py,
> and to keep (and document!) the _winreg extension with the
> understanding that it's a wrapper around poorly designed API but at
> least it's very close to the C API.  The leading underscore should be
> a hint that this is not a module for every day use.

how about letting _winreg export all functions with their
win32 names, and adding a winreg.py which looks something
like this:

    from _winreg import *

    class Key:
        ....

    HKEY_CLASSES_ROOT = Key(...)
    ...

where the Key class addresses the 80% level: open
keys and read NONE/SZ/EXPAND_SZ/DWORD values
(through a slightly extended dictionary API).

in 2.0, add support to create keys and write values of
the same types, and you end up supporting the needs
of 99% of all applications.
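The dictionary-flavored Key that Fredrik describes might look something like the sketch below.  It is backed by a plain in-memory dict so the idea can be shown portably; a real winreg.py would delegate to _winreg (OpenKey, QueryValueEx, EnumKey, ...), and the constructor shape is an assumption.

```python
class Key:
    """Sketch of a registry key with a slightly extended dictionary
    API: values read like mapping items, subkeys open by name.
    In-memory stand-in; _winreg calls are left out on purpose."""

    def __init__(self, name, values=None, subkeys=None):
        self.name = name
        self._values = dict(values or {})    # value name -> data
        self._subkeys = dict(subkeys or {})  # subkey name -> Key

    # -- the "slightly extended dictionary API" --
    def __getitem__(self, name):
        return self._values[name]

    def get(self, name, default=None):
        return self._values.get(name, default)

    def keys(self):
        return list(self._subkeys)

    def open(self, name):
        return self._subkeys[name]


# Hypothetical usage, shaped after the real registry layout:
HKEY_CLASSES_ROOT = Key("HKEY_CLASSES_ROOT", subkeys={
    ".py": Key(".py", values={"": "Python.File"}),
})
print(HKEY_CLASSES_ROOT.open(".py")[""])   # Python.File
```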

> Hopefully someday someone will eventually create a set of higher level
> bindings modeled after the Java, VB or C# version of the API.

how about Tcl?  I'm pretty sure their API (which is very
simple, iirc) addresses the 99% level...

</F>



From moshez@math.huji.ac.il  Wed Aug  2 08:00:40 2000
From: moshez@math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 10:00:40 +0300 (IDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python
 (fwd))
Message-ID: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>

Do we have a procedure for putting more batteries in the core? I'm
not talking about stuff like PEP-206, I'm talking about small, useful
modules like Cookies.py.


--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez

---------- Forwarded message ----------
Date: Tue, 01 Aug 2000 12:03:12 PDT
From: Brian Wisti <bwisti@hotmail.com>
To: tutor@python.org
Subject: Tangent to Re: [Tutor] CGI and Python




>In contrast, i've been motivated with questions like yours which pop up
>every now and then to create a separate chapter entirely devoted to CGI
>programming and in it, to provide an example that starts out simple and builds
>to something a little more complex.  there will be lots of screen captures
>too so that you can see what's going on.  finally, there will be a more
>"advanced" section towards the end which does the complicated stuff that
>everyone wants to do, like cookies, multivalued fields, and file uploads
>with multipart data.  sorry that the book isn't out yet... trying to get
>the weeds out of it right NOW!	;-)
>

I'm looking forward to seeing the book!

Got a question that is almost relevant to the thread.  Does anybody know 
why cookie support isn't built in to the cgi module?  I had to dig around to 
find Cookie.py, which (excellent module that it is) should be in the cgi 
package somewhere.

Just a random thought from the middle of my workday...

Later,
Brian Wisti
________________________________________________________________________
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com


_______________________________________________
Tutor maillist  -  Tutor@python.org
http://www.python.org/mailman/listinfo/tutor



From mal@lemburg.com  Wed Aug  2 10:12:01 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 02 Aug 2000 11:12:01 +0200
Subject: [Python-Dev] Still no new license -- but draft text available
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>
Message-ID: <3987E5E1.A2B20241@lemburg.com>

Guido van Rossum wrote:
> 
> We still don't have a new license for Python 1.6; Bob Kahn and Richard
> Stallman need to talk before a decision can be made about how to deal
> with the one remaining GPL incompatibility.  While we're all waiting,
> we're preparing the CNRI 1.6 release at SourceForge (part of the deal
> is that the PythonLabs group finishes the 1.6 release for CNRI).  The
> last thing I committed today was the text (dictated by Bob Kahn) for
> the new LICENSE file that will be part of the 1.6 beta 1 release.
> (Modulo any changes that will be made to the license text to ensure
> GPL compatibility.)
> 
> Since anyone with an anonymous CVS setup can now read the license
> anyway, I might as well post a copy here so that you can all get used
> to it...

Is the license on 2.0 going to look the same ?  I mean, we now
already have two separate licenses, and if BeOpen adds another
two or three paragraphs we will end up with a license two
pages long.

Oh, how I loved the old CWI license...

Some comments on the new version:
 
> A. HISTORY OF THE SOFTWARE
> 
> Python originated in 1991 at Stichting Mathematisch Centrum (CWI) in
> the Netherlands as an outgrowth of a language called ABC.  Its
> principal author was Guido van Rossum, although it included smaller
> contributions from others at CWI and elsewhere.  The last version of
> Python issued by CWI was Python 1.2.  In 1995, Mr. van Rossum
> continued his work on Python at the Corporation for National Research
> Initiatives (CNRI) in Reston, Virginia where several versions of the
> software were generated.  Python 1.6 is the last of the versions
> developed at CNRI.
> 
> B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING Python 1.6, beta 1
> 
> 1. CNRI LICENSE AGREEMENT
> 
>         PYTHON 1.6, beta 1
> 
>         CNRI OPEN SOURCE LICENSE AGREEMENT
> 
> IMPORTANT: PLEASE READ THE FOLLOWING AGREEMENT CAREFULLY.
> 
> BY CLICKING ON "ACCEPT" WHERE INDICATED BELOW, OR BY COPYING,
> INSTALLING OR OTHERWISE USING PYTHON 1.6, beta 1 SOFTWARE, YOU ARE
> DEEMED TO HAVE AGREED TO THE TERMS AND CONDITIONS OF THIS LICENSE
> AGREEMENT.
> 
> 1. This LICENSE AGREEMENT is between the Corporation for National
> Research Initiatives, having an office at 1895 Preston White Drive,
> Reston, VA 20191 ("CNRI"), and the Individual or Organization
> ("Licensee") accessing and otherwise using Python 1.6, beta 1 software
> in source or binary form and its associated documentation, as released
> at the www.python.org Internet site on August 5, 2000 ("Python
> 1.6b1").
> 
> 2. Subject to the terms and conditions of this License Agreement, CNRI
> hereby grants Licensee a nonexclusive, royalty-free, world-wide
> license to reproduce, analyze, test, perform and/or display publicly,
> prepare derivative works, distribute, and otherwise use Python 1.6b1
> alone or in any derivative version, provided, however, that CNRI's
> License Agreement is retained in Python 1.6b1, alone or in any
> derivative version prepared by Licensee.

I don't think the latter (retaining the CNRI License Agreement alone) is
possible: you always have to include the CWI license.
 
> Alternately, in lieu of CNRI's License Agreement, Licensee may
> substitute the following text (omitting the quotes): "Python 1.6, beta
> 1, is made available subject to the terms and conditions in CNRI's
> License Agreement.  This Agreement may be located on the Internet
> using the following unique, persistent identifier (known as a handle):
> 1895.22/1011.  This Agreement may also be obtained from a proxy server
> on the Internet using the URL:http://hdl.handle.net/1895.22/1011".

Do we really need this in the license text ? It's nice to have
the text available on the Internet, but why add long descriptions
about where to get it from to the license text itself ?
 
> 3. In the event Licensee prepares a derivative work that is based on
> or incorporates Python 1.6b1or any part thereof, and wants to make the
> derivative work available to the public as provided herein, then
> Licensee hereby agrees to indicate in any such work the nature of the
> modifications made to Python 1.6b1.

In what way would those indications have to be made ? A patch
or just text describing the new features ?
 
What does "make available to the public" mean ? If I embed
Python in an application and make this application available
on the Internet for download would this fit the meaning ?

What about derived work that only uses the Python language
reference as basis for its task, e.g. new interpreters
or compilers which can read and execute Python programs ?

> 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> INFRINGE ANY THIRD PARTY RIGHTS.
> 
> 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.

I would make this "...SOME STATES AND COUNTRIES...". E.g. in
Germany the above text would only be valid after an initial
6 month period after installation, AFAIK (this period is
called "Gewährleistung"). Licenses from other vendors usually
add some extra license text to limit the liability in this period
to the carrier on which the software was received by the licensee,
e.g. the diskettes or CDs.
 
> 6. This License Agreement will automatically terminate upon a material
> breach of its terms and conditions.

Immediately ? Other licenses usually include a 30-60 day period
which allows the licensee to take actions. With the above text,
the license will put the Python copy in question into an illegal
state *prior* to having even been identified as conflicting with the
license.
 
> 7. This License Agreement shall be governed by and interpreted in all
> respects by the law of the State of Virginia, excluding conflict of
> law provisions.  Nothing in this License Agreement shall be deemed to
> create any relationship of agency, partnership, or joint venture
> between CNRI and Licensee.  This License Agreement does not grant
> permission to use CNRI trademarks or trade name in a trademark sense
> to endorse or promote products or services of Licensee, or any third
> party.

Would the name "Python" be considered a trademark in the above
sense ?
 
> 8. By clicking on the "ACCEPT" button where indicated, or by copying
> installing or otherwise using Python 1.6b1, Licensee agrees to be
> bound by the terms and conditions of this License Agreement.
> 
>         ACCEPT
> 
> 2. CWI PERMISSIONS STATEMENT AND DISCLAIMER
> 
> Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
> The Netherlands.  All rights reserved.
> 
> Permission to use, copy, modify, and distribute this software and its
> documentation for any purpose and without fee is hereby granted,
> provided that the above copyright notice appear in all copies and that
> both that copyright notice and this permission notice appear in
> supporting documentation, and that the name of Stichting Mathematisch
> Centrum or CWI not be used in advertising or publicity pertaining to
> distribution of the software without specific, written prior
> permission.
> 
> STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
> THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
> FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
> WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
> ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
> OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

...oh how I loved this one ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jack@oratrix.nl  Wed Aug  2 10:43:05 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 02 Aug 2000 11:43:05 +0200
Subject: [Python-Dev] Winreg recap
In-Reply-To: Message by Paul Prescod <paul@prescod.net> ,
 Tue, 01 Aug 2000 12:52:45 -0400 , <3987005C.9C45D7B6@prescod.net>
Message-ID: <20000802094305.C3006303181@snelboot.oratrix.nl>

> I specifically asked everyone here if an abstraction was a good idea. I
> got three + votes and no - votes. One of the + votes requested that we
> still ship the underlying module. Fine. I was actually pointed (on
> python-dev) to specs for an abstraction layer that AFAIK had been
> designed *on Python-dev*.

This point I very much agree with: if we can abstract 90% of the use cases of 
the registry (while still giving access to the other 10%) in a clean interface, 
we can implement the same interface for Mac preference files, unix dot-files, 
X resources, etc.

A general mechanism whereby a Python program can get at a persistent setting 
that may have factory defaults, installation overrides and user overrides, and 
that is implemented in the logical way on each platform would be very powerful.

The initial call to open the preference database(s) and give identity 
information as to which app you are, etc is probably going to be machine 
dependent, but from that point on there should be a single API.
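Jack's layering can be sketched with a chained mapping in which user overrides shadow installation overrides, which shadow factory defaults.  The open_prefs helper and the layer contents are illustrative assumptions; a real implementation would load each layer from the platform's native store (registry keys, dot-files, Mac preference files, X resources).

```python
from collections import ChainMap


def open_prefs(app, user=None, site=None, factory=None):
    """Return one mapping over the app's setting layers, with
    user > site (installation) > factory precedence.  Hypothetical
    API; each layer would really come from the platform's store."""
    return ChainMap(user or {}, site or {}, factory or {})


prefs = open_prefs(
    "spam",
    factory={"color": "blue", "width": 80},
    site={"width": 132},
    user={"color": "green"},
)
print(prefs["color"])  # green  (user override wins)
print(prefs["width"])  # 132    (site override; no user setting)
```

From the program's point of view there is a single lookup API, which is the platform-independent part Jack asks for; only the initial open call needs machine-dependent identity information.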
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From moshez@math.huji.ac.il  Wed Aug  2 11:16:40 2000
From: moshez@math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 13:16:40 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
Message-ID: <Pine.GSO.4.10.10008021157040.20425-100000@sundial>

Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me


--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas@xs4all.net  Wed Aug  2 11:41:12 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 12:41:12 +0200
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021157040.20425-100000@sundial>; from moshez@math.huji.ac.il on Wed, Aug 02, 2000 at 01:16:40PM +0300
References: <Pine.GSO.4.10.10008021157040.20425-100000@sundial>
Message-ID: <20000802124112.W266@xs4all.nl>

On Wed, Aug 02, 2000 at 01:16:40PM +0300, Moshe Zadka wrote:

> Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me

You can close bugs now, right, Moshe ? If not, you should be able to :P Just
do what I do: close them, assign them to yourself, set the status to 'Works
For Me', explain in the log message what you did to test it, and forward a
copy of the mail you get from SF to the original complainee.

A lot of the bugs are relatively old, so a fair lot of them are likely to be
fixed already. If they aren't fixed for the complainee (or someone else),
the bug can be re-opened and possibly updated at the same time.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From moshez@math.huji.ac.il  Wed Aug  2 12:05:06 2000
From: moshez@math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 14:05:06 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <20000802124112.W266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008021402041.20425-100000@sundial>

On Wed, 2 Aug 2000, Thomas Wouters wrote:

> On Wed, Aug 02, 2000 at 01:16:40PM +0300, Moshe Zadka wrote:
> 
> > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me
> 
> You can close bugs now, right, Moshe?

I can, but to tell the truth, after what Tim posted here about closing
bugs, I'd appreciate a few more eyeballs before I close them.

> A lot of the bugs are relatively old, so a fair number of them are likely to
> be fixed already. If they aren't fixed for the submitter (or someone else),
> the bug can be re-opened and possibly updated at the same time.

Hmmmmm.....OK.
But I guess I'll still wait for a go-ahead from the PythonLabs team.
BTW: Does anyone know if SF has an e-mail notification of bugs, similar
to that of patches? If so, enabling it to send mail to a mailing list
similar to patches@python.org would be cool -- it would enable much more
peer review.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From thomas@xs4all.net  Wed Aug  2 12:21:47 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 13:21:47 +0200
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021402041.20425-100000@sundial>; from moshez@math.huji.ac.il on Wed, Aug 02, 2000 at 02:05:06PM +0300
References: <20000802124112.W266@xs4all.nl> <Pine.GSO.4.10.10008021402041.20425-100000@sundial>
Message-ID: <20000802132147.L13365@xs4all.nl>

On Wed, Aug 02, 2000 at 02:05:06PM +0300, Moshe Zadka wrote:
> On Wed, 2 Aug 2000, Thomas Wouters wrote:
> > On Wed, Aug 02, 2000 at 01:16:40PM +0300, Moshe Zadka wrote:

> > > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me

> > You can close bugs now, right, Moshe?

> I can, but to tell the truth, after what Tim posted here about closing
> bugs, I'd appreciate a few more eyeballs before I close them.

That's why I forward the message to the original submitter. The list of bugs
is now so insanely large that it's pretty unlikely a large number of
eyeballs will caress them. Marking them closed (or at least marking them
*something*, like moving them to the right category) and forwarding the
summary to the submitter is likely to have them re-check the bug.

Tim was talking about 'closing it without reason', without knowing why it
should be closed. 'Works for me' is a valid reason to close the bug, if you
have the same (kind of) platform, can't reproduce the bug and have a strong
suspicion it's already been fixed. (Which is pretty likely, if the bug report
is old.)

> BTW: Does anyone know if SF has an e-mail notification of bugs, similar
> to that of patches? If so, enabling it to send mail to a mailing list
> similar to patches@python.org would be cool -- it would enable much more
> peer review.

I think not, but I'm not sure. It's probably up to the project admins to set
that, but I think if they did, they'd have set it before. (Then again, I'm
not sure if it's a good idea to set it, yet... I bet the current list is
going to be quickly cut down in size, and I'm not sure if I want to see all
the notifications! :) But once it's running, it would be swell.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Vladimir.Marangozov@inrialpes.fr  Wed Aug  2 13:13:41 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Wed, 2 Aug 2000 14:13:41 +0200 (CEST)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021157040.20425-100000@sundial> from "Moshe Zadka" at Aug 02, 2000 01:16:40 PM
Message-ID: <200008021213.OAA06073@python.inrialpes.fr>

Moshe Zadka wrote:
> 
> Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me

You get a compiled SRE object, right? But SRE is the new 're' and the old
're' is 'pre'. Try the example with pre: import pre; pre.compile('[\\200-\\400]')
and I suspect you'll get the segfault (I did).
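[A sketch for later readers, assuming a present-day Python 3 where the sre engine *is* the re module: the pattern in bug 110651 uses \400, an octal escape beyond \377, which the old "pre" engine mishandled. A modern re rejects it with re.error instead of crashing.]

```python
import re

# Bug 110651's pattern: \400 is an octal escape outside the byte range
# (\0-\377), which the old "pre" engine mishandled.  A modern re
# refuses to compile it instead of segfaulting.
try:
    re.compile('[\\200-\\400]')
except re.error as exc:
    print("rejected:", exc)

# The same class with a legal upper bound compiles and matches normally.
pat = re.compile('[\\200-\\377]')
print(pat.match('\xff') is not None)
```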

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Moshe Zadka <moshez@math.huji.ac.il>  Wed Aug  2 13:17:31 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Wed, 2 Aug 2000 15:17:31 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <200008021213.OAA06073@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008021512180.8980-100000@sundial>

On Wed, 2 Aug 2000, Vladimir Marangozov wrote:

> Moshe Zadka wrote:
> > 
> > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me
> 
> You get a compiled SRE object, right?

Nope -- I tested it with pre. 

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From jeremy@beopen.com  Wed Aug  2 13:31:55 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Wed, 2 Aug 2000 08:31:55 -0400 (EDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021402041.20425-100000@sundial>
References: <20000802124112.W266@xs4all.nl>
 <Pine.GSO.4.10.10008021402041.20425-100000@sundial>
Message-ID: <14728.5307.820982.137908@bitdiddle.concentric.net>

>>>>> "MZ" == Moshe Zadka <moshez@math.huji.ac.il> writes:

  MZ> Hmmmmm.....OK.  But I guess I'll still wait for a goahead from
  MZ> the PythonLabs team.  BTW: Does anyone know if SF has an e-mail
  MZ> notification of bugs, similar to that of patches? If so,
  MZ> enabling it to send mail to a mailing list similar to
  MZ> patches@python.org would be cool -- it would enable much more
  MZ> peer review.

Go ahead and mark as closed bugs that are currently fixed.  If you can
figure out when they were fixed (e.g. what checkin), that would be
best.  If not, just be sure that it really is fixed -- and write a
test case that would have caught the bug.

SF will send out an email, but sending it to patches@python.org would
be a bad idea, I think.  Isn't that list attached to Jitterbug?

Jeremy


From Moshe Zadka <moshez@math.huji.ac.il>  Wed Aug  2 13:30:16 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Wed, 2 Aug 2000 15:30:16 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <14728.5307.820982.137908@bitdiddle.concentric.net>
Message-ID: <Pine.GSO.4.10.10008021528410.8980-100000@sundial>

On Wed, 2 Aug 2000, Jeremy Hylton wrote:

> SF will send out an email, but sending it to patches@python.org would
> be a bad idea, I think.

I've no problem with having a separate mailing list I can subscribe to.
Perhaps it should be a mailing list along the lines of Python-Checkins....

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From guido@beopen.com  Wed Aug  2 15:02:00 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 09:02:00 -0500
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: Your message of "Wed, 02 Aug 2000 08:12:54 +0200."
 <20000802081254.V266@xs4all.nl>
References: <200008012122.OAA22327@slayer.i.sourceforge.net>
 <20000802081254.V266@xs4all.nl>
Message-ID: <200008021402.JAA02711@cj20424-a.reston1.va.home.com>

> > My fix to the URL accidentally also switched back to the "pre" module.
> > Undo that!
> 
> This kind of thing is one of the reasons I wish 'cvs commit' would give you
> the entire patch you're about to commit in the log-message-edit screen, as
> CVS: comments, rather than just the modified files. It would also help with
> remembering what the patch was supposed to do ;) Is this possible with CVS,
> other than an 'EDITOR' that does this for you ?

Actually, I have made it a habit to *always* do a cvs diff before I
commit, for exactly this reason.  That's why this doesn't happen more
often.  In this case I specifically remember reviewing the diff and
thinking that it was alright, but not scrolling towards the second
half of the diff. :(

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Wed Aug  2 15:06:00 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 09:06:00 -0500
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: Your message of "Wed, 02 Aug 2000 10:00:40 +0300."
 <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
Message-ID: <200008021406.JAA02743@cj20424-a.reston1.va.home.com>

> Do we have a procedure for putting more batteries in the core? I'm
> not talking about stuff like PEP-206, I'm talking about small, useful
> modules like Cookies.py.

Cookie support in the core would be a good thing.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From fdrake@beopen.com  Wed Aug  2 14:20:52 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 09:20:52 -0400 (EDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <14728.5307.820982.137908@bitdiddle.concentric.net>
References: <20000802124112.W266@xs4all.nl>
 <Pine.GSO.4.10.10008021402041.20425-100000@sundial>
 <14728.5307.820982.137908@bitdiddle.concentric.net>
Message-ID: <14728.8244.745008.301891@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > SF will send out an email, but sending it to patches@python.org would
 > be a bad idea, I think.  Isn't that list attached to Jitterbug?

  No, but Barry is working on getting a new list set up for
SourceForge to send bug messages to.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From gvwilson@nevex.com  Wed Aug  2 14:22:01 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Wed, 2 Aug 2000 09:22:01 -0400 (EDT)
Subject: [Python-Dev] CVS headaches / Subversion reminder
Message-ID: <Pine.LNX.4.10.10008020913180.7103-100000@akbar.nevex.com>

Those of you who are having troubles with (or have complaints about) CVS
on SourceForge might want to check out Subversion, a "better CVS" being
developed as part of Tigris:

    subversion.tigris.org

Jason Robbins (project manager, jrobbins@collab.net) told me in Monterey
that they are still interested in feature requests, alternatives, etc.
There may still be room to add features like showing the full patch during
checkin (as per Thomas Wouters' earlier mail).

Greg

p.s. I'd be interested in hearing from anyone who's ever re-written a
medium-sized (40,000 lines) C app in Python --- how did you decide how
much of the structure to keep, and how much to re-think, etc.  Please mail
me directly to conserve bandwidth; I'll post a summary if there's enough
interest.




From fdrake@beopen.com  Wed Aug  2 14:26:28 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 09:26:28 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
 <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
Message-ID: <14728.8580.460583.760620@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > > Do we have a procedure for putting more batteries in the core? I'm
 > > not talking about stuff like PEP-206, I'm talking about small, useful
 > > modules like Cookies.py.
 > 
 > Cookie support in the core would be a good thing.

  There's also some cookie support in Grail (limited); that uses a
Netscape-style client-side database.
  Note that the Netscape format is insufficient for the most recent
cookie specifications (don't know the RFC #), but I understood from
AMK that browser writers are expecting to actually implement that
(unlike RFC 2109).  If we stick to an in-process database, that
wouldn't matter, but I'm not sure if that solves the problem for
everyone.
  Regardless of the format, there's a little bit of work to do here.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From guido@beopen.com  Wed Aug  2 15:32:02 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 09:32:02 -0500
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: Your message of "Wed, 02 Aug 2000 09:26:28 -0400."
 <14728.8580.460583.760620@cj42289-a.reston1.va.home.com>
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
 <14728.8580.460583.760620@cj42289-a.reston1.va.home.com>
Message-ID: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>

> Guido van Rossum writes:
>  > > Do we have a procedure for putting more batteries in the core? I'm
>  > > not talking about stuff like PEP-206, I'm talking about small, useful
>  > > modules like Cookies.py.
>  > 
>  > Cookie support in the core would be a good thing.
> 
>   There's also some cookie support in Grail (limited); that uses a
> Netscape-style client-side database.
>   Note that the Netscape format is insufficient for the most recent
> cookie specifications (don't know the RFC #), but I understood from
> AMK that browser writers are expecting to actually implement that
> (unlike RFC 2109).  If we stick to an in-process database, that
> wouldn't matter, but I'm not sure if that solves the problem for
> everyone.
>   Regardless of the format, there's a little bit of work to do here.

I think Cookie.py is for server-side management of cookies, not for
client-side.  Do we need client-side cookies too????
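[A hedged aside for later readers: server-side cookie handling of exactly this kind did land in the standard library, as Cookie.py and later http.cookies in Python 3. A minimal sketch of the server-side round trip, assuming those modern modules:]

```python
from http.cookies import SimpleCookie

# Parse the Cookie header a browser sends to the server...
incoming = SimpleCookie()
incoming.load("session=abc123; theme=dark")
print(incoming["session"].value)   # the bare value, without attributes

# ...and build a Set-Cookie header for the response.
outgoing = SimpleCookie()
outgoing["session"] = "abc123"
outgoing["session"]["path"] = "/"
print(outgoing.output())
```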

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Moshe Zadka <moshez@math.huji.ac.il>  Wed Aug  2 14:34:29 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Wed, 2 Aug 2000 16:34:29 +0300 (IDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor]
 CGI and Python (fwd))
In-Reply-To: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008021632340.13078-100000@sundial>

On Wed, 2 Aug 2000, Guido van Rossum wrote:

> I think Cookie.py is for server-side management of cookies, not for
> client-side.  Do we need client-side cookies too????

Not until we write a high-level interface to urllib which is similar
to the Perlish UserAgent module -- which is something that should
be done if Python wants to be a viable client-side language.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From fdrake@beopen.com  Wed Aug  2 14:37:50 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 09:37:50 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>
References: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>
 <Pine.GSO.4.10.10008021632340.13078-100000@sundial>
 <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
 <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
 <14728.8580.460583.760620@cj42289-a.reston1.va.home.com>
Message-ID: <14728.9262.635980.220234@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > I think Cookie.py is for server-side management of cookies, not for
 > client-side.  Do we need client-side cookies too????

  I think this would be highly desirable; we've seen enough requests
for it on c.l.py.

Moshe Zadka writes:
 > Not until we write a high-level interface to urllib which is similar
 > to the Perlish UserAgent module -- which is something that should
 > be done if Python wants to be a viable client-side language.

  Exactly!  It has become very difficult to get anything done on the
Web without enabling cookies, and simple "screen scraping" tools need
to have this support as well.
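[The client-side counterpart Moshe and Fred are asking for arrived eventually too, as http.cookiejar plus urllib.request in Python 3. Purely as an illustrative sketch, assuming those modern modules and no network access:]

```python
import urllib.request
from http.cookiejar import CookieJar

# A jar that captures Set-Cookie headers from responses and replays
# them on later requests made through the same opener.
jar = CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# opener.open("http://example.com/") would now store whatever cookies
# the server sets and send them back automatically on the next request.
print(len(jar))   # no requests made yet, so the jar is empty
```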


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From fdrake@beopen.com  Wed Aug  2 15:05:41 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 10:05:41 -0400 (EDT)
Subject: [Python-Dev] test_parser.py
Message-ID: <14728.10933.534904.378463@cj42289-a.reston1.va.home.com>

  At some point I received a message/bug report referring to
test_parser.py, which doesn't exist in the CVS repository (and never
has as far as I know).  If someone has a regression test for the
parser module hidden away, I'd love to add it to the CVS repository!
It's time to update the parser module, and a good time to cover it in
the regression test!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From guido@beopen.com  Wed Aug  2 16:11:20 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 10:11:20 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
In-Reply-To: Your message of "Wed, 02 Aug 2000 11:12:01 +0200."
 <3987E5E1.A2B20241@lemburg.com>
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>
 <3987E5E1.A2B20241@lemburg.com>
Message-ID: <200008021511.KAA03049@cj20424-a.reston1.va.home.com>

> Is the license on 2.0 going to look the same ? I mean we now
> already have two separate licenses, and if BeOpen adds another
> two or three paragraphs we'll end up with a license two pages
> long.

Good question.  We can't really keep the license the same because the
old license is very specific to CNRI.  I would personally be in favor
of using the BSD license for 2.0.

> Oh, how I loved the old CWI license...

Ditto!

> Some comments on the new version:

> > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > license to reproduce, analyze, test, perform and/or display publicly,
> > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > alone or in any derivative version, provided, however, that CNRI's
> > License Agreement is retained in Python 1.6b1, alone or in any
> > derivative version prepared by Licensee.
> 
> I don't think the latter (retaining the CNRI license alone) is
> possible: you always have to include the CWI license.

Wow.  I hadn't even noticed this!  It seems you can prepare a
derivative version of the license.  Well, maybe.

> > Alternately, in lieu of CNRI's License Agreement, Licensee may
> > substitute the following text (omitting the quotes): "Python 1.6, beta
> > 1, is made available subject to the terms and conditions in CNRI's
> > License Agreement.  This Agreement may be located on the Internet
> > using the following unique, persistent identifier (known as a handle):
> > 1895.22/1011.  This Agreement may also be obtained from a proxy server
> > on the Internet using the URL:http://hdl.handle.net/1895.22/1011".
> 
> Do we really need this in the license text ? It's nice to have
> the text available on the Internet, but why add long descriptions
> about where to get it from to the license text itself ?

I'm not happy with this either, but CNRI can put anything they like in
their license, and they seem very fond of this particular bit of
advertising for their handle system.  I've never managed to
convince them that it was unnecessary.

> > 3. In the event Licensee prepares a derivative work that is based on
> > or incorporates Python 1.6b1 or any part thereof, and wants to make the
> > derivative work available to the public as provided herein, then
> > Licensee hereby agrees to indicate in any such work the nature of the
> > modifications made to Python 1.6b1.
> 
> In what way would those indications have to be made ? A patch
> or just text describing the new features ?

Just text.  Bob Kahn told me that the list of "what's new" that I
always add to a release would be fine.

> What does "make available to the public" mean ? If I embed
> Python in an application and make this application available
> on the Internet for download would this fit the meaning ?

Yes, that's why he doesn't use the word "publish" -- such an action
would not be considered publication in the sense of the copyright law
(at least not in the US, and probably not according to the Berne
Convention) but it is clearly making it available to the public.

> What about derived work that only uses the Python language
> reference as basis for its task, e.g. new interpreters
> or compilers which can read and execute Python programs ?

The language definition is not covered by the license at all.  Only
this particular code base.

> > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > INFRINGE ANY THIRD PARTY RIGHTS.
> > 
> > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> 
> I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> Germany the above text would only be valid after an initial
> 6 month period after installation, AFAIK (this period is
> called "Gewährleistung"). Licenses from other vendors usually
> add some extra license text to limit the liability in this period
> to the carrier on which the software was received by the licensee,
> e.g. the diskettes or CDs.

I'll mention this to Kahn.

> > 6. This License Agreement will automatically terminate upon a material
> > breach of its terms and conditions.
> 
> Immediately ? Other licenses usually include a 30-60 day period
> which allows the licensee to take actions. With the above text,
> the license will put the Python copy in question into an illegal
> state *prior* to having even been identified as conflicting with the
> license.

Believe it or not, this is necessary to ensure GPL compatibility!  An
earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
incompatible.  There's an easy workaround though: you fix your
compliance and download a new copy, which gives you all the same
rights again.

> > 7. This License Agreement shall be governed by and interpreted in all
> > respects by the law of the State of Virginia, excluding conflict of
> > law provisions.  Nothing in this License Agreement shall be deemed to
> > create any relationship of agency, partnership, or joint venture
> > between CNRI and Licensee.  This License Agreement does not grant
> > permission to use CNRI trademarks or trade name in a trademark sense
> > to endorse or promote products or services of Licensee, or any third
> > party.
> 
> Would the name "Python" be considered a trademark in the above
> sense ?

No, Python is not a CNRI trademark.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From trentm@ActiveState.com  Wed Aug  2 16:04:17 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Wed, 2 Aug 2000 08:04:17 -0700
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: <20000802081254.V266@xs4all.nl>; from thomas@xs4all.net on Wed, Aug 02, 2000 at 08:12:54AM +0200
References: <200008012122.OAA22327@slayer.i.sourceforge.net> <20000802081254.V266@xs4all.nl>
Message-ID: <20000802080417.A16446@ActiveState.com>

On Wed, Aug 02, 2000 at 08:12:54AM +0200, Thomas Wouters wrote:
> On Tue, Aug 01, 2000 at 02:22:20PM -0700, Guido van Rossum wrote:
> > Update of /cvsroot/python/python/dist/src/Lib
> > In directory slayer.i.sourceforge.net:/tmp/cvs-serv22316
> 
> > Modified Files:
> > 	re.py 
> > Log Message:
> > My fix to the URL accidentally also switched back to the "pre" module.
> > Undo that!
> 
> This kind of thing is one of the reasons I wish 'cvs commit' would give you
> the entire patch you're about to commit in the log-message-edit screen, as
> CVS: comments, rather than just the modified files. It would also help with
> remembering what the patch was supposed to do ;) Is this possible with CVS,
> other than an 'EDITOR' that does this for you ?
> 
As Guido said, it is probably preferred that one does a cvs diff prior to
checking in. But to answer your question *unauthoritatively*: I know that CVS
allows you to change the checkin template, and I *think* it offers a
script hook to be able to generate it (not sure). If so, one could use
that script hook to put in the (commented) patch.
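[Purely as a sketch of the idea, not a real CVS feature: an EDITOR wrapper that appends the pending diff as "CVS:" comments, which CVS strips from the final log message. The REAL_EDITOR variable and the wrapper itself are invented names here.]

```python
import os
import subprocess

def as_cvs_comments(diff):
    # Prefix every diff line with "CVS: " -- CVS strips such lines from
    # the final log message, so the patch is visible only while editing.
    return "".join("CVS: %s\n" % line for line in diff.splitlines())

def wrap_editor(logfile):
    # CVS invokes $EDITOR with the log-message file as its argument;
    # pointing EDITOR at this wrapper appends the pending diff first,
    # then hands off to the real editor (hypothetical REAL_EDITOR var).
    diff = subprocess.run(["cvs", "-q", "diff", "-u"],
                          capture_output=True, text=True).stdout
    with open(logfile, "a") as f:
        f.write(as_cvs_comments(diff))
    editor = os.environ.get("REAL_EDITOR", "vi")
    os.execvp(editor, [editor, logfile])

# What the prefixing looks like on a fake two-hunk diff:
sample = "--- re.py\n+++ re.py\n-import pre\n+import sre"
print(as_cvs_comments(sample))
```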

Trent



-- 
Trent Mick
TrentM@ActiveState.com


From trentm@ActiveState.com  Wed Aug  2 16:14:16 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Wed, 2 Aug 2000 08:14:16 -0700
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <14728.5307.820982.137908@bitdiddle.concentric.net>; from jeremy@beopen.com on Wed, Aug 02, 2000 at 08:31:55AM -0400
References: <20000802124112.W266@xs4all.nl> <Pine.GSO.4.10.10008021402041.20425-100000@sundial> <14728.5307.820982.137908@bitdiddle.concentric.net>
Message-ID: <20000802081416.B16446@ActiveState.com>

On Wed, Aug 02, 2000 at 08:31:55AM -0400, Jeremy Hylton wrote:
> >>>>> "MZ" == Moshe Zadka <moshez@math.huji.ac.il> writes:
> 
>   MZ> Hmmmmm.....OK.  But I guess I'll still wait for a goahead from
>   MZ> the PythonLabs team.  BTW: Does anyone know if SF has an e-mail
>   MZ> notification of bugs, similar to that of patches? If so,
>   MZ> enabling it to send mail to a mailing list similar to
>   MZ> patches@python.org would be cool -- it would enable much more
>   MZ> peer review.
> 
> Go ahead and mark as closed bugs that are currently fixed.  If you can
> figure out when they were fixed (e.g. what checkin), that would be
> best.  If not, just be sure that it really is fixed -- and write a
> test case that would have caught the bug.

I think that unless

(1) you submitted the bug or can be sure that "works for me"
    is with the exact same configuration as the person who did; or
(2) you can identify where in the code the bug was and what checkin (or where
    in the code) fixed it

then you cannot close the bug.

That is the ideal case; with incomplete bug reports and extremely stale ones,
these strict requirements are probably not always practical.

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From jack@oratrix.nl  Wed Aug  2 16:16:06 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 02 Aug 2000 17:16:06 +0200
Subject: [Python-Dev] Still no new license -- but draft text available
In-Reply-To: Message by Guido van Rossum <guido@beopen.com> ,
 Wed, 02 Aug 2000 10:11:20 -0500 , <200008021511.KAA03049@cj20424-a.reston1.va.home.com>
Message-ID: <20000802151606.753EF303181@snelboot.oratrix.nl>

I'm not sure I'm entirely happy with point 3. Depending on how you define 
"derivative work" and "make available" it could cause serious problems.

I assume that this clause is meant so that it is clear that MacPython and 
PythonWin and other such versions may be based on CNRI Python but are not the 
same. However, if you're building a commercial application that uses Python as 
its implementation language this "indication of modifications" becomes rather 
a long list. Just imagine that a C library came with such a license ("Anyone 
incorporating this C library or part thereof in their application should 
indicate the differences between their application and this C library":-).

Point 2 has the same problem to a lesser extent: the sentence starting with 
"Python ... is made available subject to the terms and conditions..." is fine 
for a product that is still clearly recognizable as Python, but would look 
silly if Python is just used as the implementation language.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From thomas@xs4all.net  Wed Aug  2 16:39:40 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 17:39:40 +0200
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: <200008021402.JAA02711@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 02, 2000 at 09:02:00AM -0500
References: <200008012122.OAA22327@slayer.i.sourceforge.net> <20000802081254.V266@xs4all.nl> <200008021402.JAA02711@cj20424-a.reston1.va.home.com>
Message-ID: <20000802173940.X266@xs4all.nl>

On Wed, Aug 02, 2000 at 09:02:00AM -0500, Guido van Rossum wrote:
> > > My fix to the URL accidentally also switched back to the "pre" module.
> > > Undo that!

> > This kind of thing is one of the reasons I wish 'cvs commit' would give you
> > the entire patch you're about to commit in the log-message-edit screen, as
> > CVS: comments, rather than just the modified files. It would also help with
> > remembering what the patch was supposed to do ;) Is this possible with CVS,
> > other than an 'EDITOR' that does this for you ?

> Actually, I have made it a habit to *always* do a cvs diff before I
> commit, for exactly this reason.

Well, so do I, but nonetheless I'd like it if the patch was included in
the comment :-) I occasionally forget what I was doing (17 xterms, two of
which are running 20-session screens (6 of which are dedicated to Python,
and 3 to Mailman :), two irc channels with people asking for work-related
help or assistance, one telephone with a 'group' number of same, and enough
room around me for 5 or 6 people to stand around and ask questions... :)
Also, I sometimes wonder about the patch while I'm writing the comment. (Did
I do that right ? Didn't I forget about this ? etc.) Having it included as a
comment would be perfect, for me.

I guess I'll look at the hook thing Trent mailed about, and Subversion, if I
find the time for it :P

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal@lemburg.com  Wed Aug  2 18:22:06 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 02 Aug 2000 19:22:06 +0200
Subject: [Python-Dev] Still no new license -- but draft text available
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>
 <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com>
Message-ID: <398858BE.15928F47@lemburg.com>

Guido van Rossum wrote:
> 
> > Is the license on 2.0 going to look the same ? I mean we now
> > already have two separate licenses, and if BeOpen adds another
> > two or three paragraphs we'll end up with a license two pages
> > long.
> 
> Good question.  We can't really keep the license the same because the
> old license is very specific to CNRI.  I would personally be in favor
> of using the BSD license for 2.0.

If that's possible, I don't think we have to argue about the
1.6 license text at all ;-) ... but then: I seriously doubt that
CNRI is going to let you put 2.0 under a different license text :-( ...

> > Some comments on the new version:
> 
> > > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > > license to reproduce, analyze, test, perform and/or display publicly,
> > > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > > alone or in any derivative version, provided, however, that CNRI's
> > > License Agreement is retained in Python 1.6b1, alone or in any
> > > derivative version prepared by Licensee.
> >
> > I don't think the latter (retaining the CNRI license alone) is 
> > possible: you always have to include the CWI license.
> 
> Wow.  I hadn't even noticed this!  It seems you can prepare a
> derivative version of the license.  Well, maybe.

I think they mean "derivative version of Python 1.6b1", but in
court, the above wording could cause serious trouble for CNRI
... it seems 2.0 can reuse the CWI license after all ;-)
 
> > > Alternately, in lieu of CNRI's License Agreement, Licensee may
> > > substitute the following text (omitting the quotes): "Python 1.6, beta
> > > 1, is made available subject to the terms and conditions in CNRI's
> > > License Agreement.  This Agreement may be located on the Internet
> > > using the following unique, persistent identifier (known as a handle):
> > > 1895.22/1011.  This Agreement may also be obtained from a proxy server
> > > on the Internet using the URL:http://hdl.handle.net/1895.22/1011".
> >
> > Do we really need this in the license text ? It's nice to have
> > the text available on the Internet, but why add long descriptions
> > about where to get it from to the license text itself ?
> 
> I'm not happy with this either, but CNRI can put anything they like in
> their license, and they seem very fond of this particular bit of
> advertising for their handle system.  I've never managed to
> convince them that it was unnecessary.

Oh well... the above paragraph sure looks scary to a casual
license reader.

Also, I'm not sure about the usefulness of this paragraph, since
the mapping of a URL to content cannot be considered legally
binding. They would at least have to add a cryptographic
signature of the license text to make verification of its
origin possible.
 
> > > 3. In the event Licensee prepares a derivative work that is based on
> > > or incorporates Python 1.6b1or any part thereof, and wants to make the
> > > derivative work available to the public as provided herein, then
> > > Licensee hereby agrees to indicate in any such work the nature of the
> > > modifications made to Python 1.6b1.
> >
> > In what way would those indications have to be made ? A patch
> > or just text describing the new features ?
> 
> Just text.  Bob Kahn told me that the list of "what's new" that I
> always add to a release would be fine.

Ok, should be made explicit in the license though...
 
> > What does "make available to the public" mean ? If I embed
> > Python in an application and make this application available
> > on the Internet for download would this fit the meaning ?
> 
> Yes, that's why he doesn't use the word "publish" -- such an action
> would not be considered publication in the sense of the copyright law
> (at least not in the US, and probably not according to the Bern
> convention) but it is clearly making it available to the public.

Ouch. That would mean I'd have to describe all additions,
i.e. the embedding application, in considerable detail in order
not to breach the terms of the CNRI license.
 
> > What about derived work that only uses the Python language
> > reference as basis for its task, e.g. new interpreters
> > or compilers which can read and execute Python programs ?
> 
> The language definition is not covered by the license at all.  Only
> this particular code base.

Ok.
 
> > > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > > INFRINGE ANY THIRD PARTY RIGHTS.
> > >
> > > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> >
> > I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> > Germany the above text would only be valid after an initial
> > 6 month period after installation, AFAIK (this period is
> > called "Gewährleistung"). Licenses from other vendors usually
> > add some extra license text to limit the liability in this period
> > to the carrier on which the software was received by the licensee,
> > e.g. the diskettes or CDs.
> 
> I'll mention this to Kahn.
> 
> > > 6. This License Agreement will automatically terminate upon a material
> > > breach of its terms and conditions.
> >
> > Immediately ? Other licenses usually include a 30-60 day period
> > which allows the licensee to take actions. With the above text,
> > the license will put the Python copy in question into an illegal
> > state *prior* to having even been identified as conflicting with the
> > license.
> 
> Believe it or not, this is necessary to ensure GPL compatibility!  An
> earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
> incompatible.  There's an easy workaround though: you fix your
> compliance and download a new copy, which gives you all the same
> rights again.

Hmm, but what about the 100,000 copies of the embedding application
that have already been downloaded? I would have to force those users
to redownload the application (or even just a demo of it) in
order to reestablish the lawfulness of the copies.

Not that I want to violate the license in any way, but there
seem to be quite a few pitfalls in the present text, some of
which are not clear at all (e.g. the paragraph 3).

> > > 7. This License Agreement shall be governed by and interpreted in all
> > > respects by the law of the State of Virginia, excluding conflict of
> > > law provisions.  Nothing in this License Agreement shall be deemed to
> > > create any relationship of agency, partnership, or joint venture
> > > between CNRI and Licensee.  This License Agreement does not grant
> > > permission to use CNRI trademarks or trade name in a trademark sense
> > > to endorse or promote products or services of Licensee, or any third
> > > party.
> >
> > Would the name "Python" be considered a trademark in the above
> > sense ?
> 
> No, Python is not a CNRI trademark.

I think you, or BeOpen on your behalf, should consider
registering the mark before someone else does. There are
quite a few "PYTHON" marks registered, yet they all refer
to non-computer businesses.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From akuchlin@mems-exchange.org  Wed Aug  2 20:57:09 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Wed, 2 Aug 2000 15:57:09 -0400
Subject: [Python-Dev] Python HOWTO project created
Message-ID: <20000802155709.D28691@kronos.cnri.reston.va.us>

[CC'ed to python-dev and doc-sig -- followups set to doc-sig]

I've created a py-howto project on SourceForge to hold the Python
HOWTO documents.  

http://sourceforge.net/projects/py-howto/

Currently Fred, Moshe, ESR, and I are listed as developers and have
write access to CVS; if you want write access, drop me a note.  Web
pages and a py-howto-checkins mailing list will be coming soon, after
a bit more administrative fiddling around on my part.

Should I also create a py-howto-discuss list for discussing revisions,
or is the doc-sig OK?  Fred, what's your ruling about this?

--amk


From guido@beopen.com  Wed Aug  2 22:54:47 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 16:54:47 -0500
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: Your message of "Wed, 02 Aug 2000 03:30:30 -0400."
 <3987CE16.DB3E72B8@prescod.net>
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au> <3986794E.ADBB938C@prescod.net> <200008011820.NAA30284@cj20424-a.reston1.va.home.com>
 <3987CE16.DB3E72B8@prescod.net>
Message-ID: <200008022154.QAA04109@cj20424-a.reston1.va.home.com>

OK.  Fine.  You say your module is great.  The Windows weenies here
don't want to touch it with a ten-foot pole.  I'm not going to be able
to dig all the way to the truth here -- I don't understand the
Registry API at all.

I propose that you and Mark Hammond go off-line and deal with Mark's
criticism one-on-one, and come back with a compromise that you are
both happy with.  I don't care what the compromise is, but both of you
must accept it.

If you *can't* agree, or if I haven't heard from you by the time I'm
ready to release 2.0b1 (say, end of August), winreg.py bites the dust.

I realize that this gives Mark Hammond veto power over the module, but
he's a pretty reasonable guy, *and* he knows the Registry API better
than anyone.  It should be possible for one of you to convince the
other.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From fdrake@beopen.com  Wed Aug  2 22:05:20 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 17:05:20 -0400 (EDT)
Subject: [Python-Dev] Re: [Doc-SIG] Python HOWTO project created
In-Reply-To: <20000802155709.D28691@kronos.cnri.reston.va.us>
References: <20000802155709.D28691@kronos.cnri.reston.va.us>
Message-ID: <14728.36112.584563.516268@cj42289-a.reston1.va.home.com>

Andrew Kuchling writes:
 > Should I also create a py-howto-discuss list for discussing revisions,
 > or is the doc-sig OK?  Fred, what's your ruling about this?

  It's your project, your choice.  ;)  I've no problem with using the
Doc-SIG for this if you like, but a separate list may be a good thing
since it would have fewer distractions!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From guido@beopen.com  Wed Aug  2 23:18:26 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 17:18:26 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
In-Reply-To: Your message of "Wed, 02 Aug 2000 19:22:06 +0200."
 <398858BE.15928F47@lemburg.com>
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com> <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com>
 <398858BE.15928F47@lemburg.com>
Message-ID: <200008022218.RAA04178@cj20424-a.reston1.va.home.com>

[MAL]
> > > Is the license on 2.0 going to look the same ? I mean we now
> > > already have two separate licenses and if BeOpen adds another
> > > two or three paragraphs we'll end up with a license two pages
> > > long.

[GvR]
> > Good question.  We can't really keep the license the same because the
> > old license is very specific to CNRI.  I would personally be in favor
> > of using the BSD license for 2.0.

[MAL]
> If that's possible, I don't think we have to argue about the
> 1.6 license text at all ;-) ... but then: I seriously doubt that
> CNRI is going to let you put 2.0 under a different license text :-( ...

What will happen is that the licenses in effect all get concatenated
in the LICENSE file.  It's a drag.

> > > Some comments on the new version:
> > 
> > > > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > > > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > > > license to reproduce, analyze, test, perform and/or display publicly,
> > > > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > > > alone or in any derivative version, provided, however, that CNRI's
> > > > License Agreement is retained in Python 1.6b1, alone or in any
> > > > derivative version prepared by Licensee.
> > >
> > > I don't think the latter (retaining the CNRI license alone) is 
> > > possible: you always have to include the CWI license.
> > 
> > Wow.  I hadn't even noticed this!  It seems you can prepare a
> > derivative version of the license.  Well, maybe.
> 
> I think they mean "derivative version of Python 1.6b1", but in
> court, the above wording could cause serious trouble for CNRI

You're right of course, I misunderstood you *and* the license.  Kahn
explains it this way:

[Kahn]
| Ok. I take the point being made. The way english works with ellipsis or 
| anaphoric references is to link back to the last anchor point. In the above 
| case, the last referent is Python 1.6b1.
| 
| Thus, the last phrase refers to a derivative version of Python1.6b1 
| prepared by Licensee. There is no permission given to make a derivative 
| version of the License.

> ... it seems 2.0 can reuse the CWI license after all ;-)

I'm not sure why you think that: 2.0 is a derivative version and is
thus bound by the CNRI license as well as by the license that BeOpen
adds.

> > > > Alternately, in lieu of CNRI's License Agreement, Licensee may
> > > > substitute the following text (omitting the quotes): "Python 1.6, beta
> > > > 1, is made available subject to the terms and conditions in CNRI's
> > > > License Agreement.  This Agreement may be located on the Internet
> > > > using the following unique, persistent identifier (known as a handle):
> > > > 1895.22/1011.  This Agreement may also be obtained from a proxy server
> > > > on the Internet using the URL:http://hdl.handle.net/1895.22/1011".
> > >
> > > Do we really need this in the license text ? It's nice to have
> > > the text available on the Internet, but why add long descriptions
> > > about where to get it from to the license text itself ?
> > 
> > I'm not happy with this either, but CNRI can put anything they like in
> > their license, and they seem very fond of this particular bit of
> > advertising for their handle system.  I've never managed to
> > convince them that it was unnecessary.
> 
> Oh well... the above paragraph sure looks scary to a casual
> license reader.

But it's really harmless.

> Also, I'm not sure about the usefulness of this paragraph, since
> the mapping of a URL to content cannot be considered legally
> binding. They would at least have to add a cryptographic
> signature of the license text to make verification of its
> origin possible.

Sure.  Just don't worry about it.  Kahn again:

| They always have the option of using the full text in that case.

So clearly he isn't interested in taking it out.  I'd let it go.

> > > > 3. In the event Licensee prepares a derivative work that is based on
> > > > or incorporates Python 1.6b1or any part thereof, and wants to make the
> > > > derivative work available to the public as provided herein, then
> > > > Licensee hereby agrees to indicate in any such work the nature of the
> > > > modifications made to Python 1.6b1.
> > >
> > > In what way would those indications have to be made ? A patch
> > > or just text describing the new features ?
> > 
> > Just text.  Bob Kahn told me that the list of "what's new" that I
> > always add to a release would be fine.
> 
> Ok, should be made explicit in the license though...

It's hard to specify this precisely -- in fact, the more precisely you
specify it, the scarier it looks and the more likely they are to
find fault with the details of how you do it.  In this case, I
believe (and so do lawyers) that vague is good!  If you write "ported
to the Macintosh" and that's what you did, they can hardly argue with
you, can they?

> > > What does "make available to the public" mean ? If I embed
> > > Python in an application and make this application available
> > > on the Internet for download would this fit the meaning ?
> > 
> > Yes, that's why he doesn't use the word "publish" -- such an action
> > would not be considered publication in the sense of the copyright law
> > (at least not in the US, and probably not according to the Bern
> > convention) but it is clearly making it available to the public.
> 
> Ouch. That would mean I'd have to describe all additions,
> i.e. the embedding application, in considerable detail in order
> not to breach the terms of the CNRI license.

No, additional modules aren't modifications to CNRI's work.  A change
to the syntax to support curly braces is.

> > > > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > > > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > > > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > > > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > > > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > > > INFRINGE ANY THIRD PARTY RIGHTS.
> > > >
> > > > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > > > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > > > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > > > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > > > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > > > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> > >
> > > I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> > > Germany the above text would only be valid after an initial
> > > 6 month period after installation, AFAIK (this period is
> > > called "Gewährleistung"). Licenses from other vendors usually
> > > add some extra license text to limit the liability in this period
> > > to the carrier on which the software was received by the licensee,
> > > e.g. the diskettes or CDs.
> > 
> > I'll mention this to Kahn.

His response:

| Guido, Im not willing to do a study of international law here. If you
| can have the person identify one country other than the US that does
| not allow the above limitation or exclusion of liability and provide a
| copy of the section of their law, ill be happy to change this to read
| ".... SOME STATES OR COUNTRIES MAY NOT ALLOW ...." Otherwise, id just
| leave it alone (i.e. as is) for now.

Please mail this info directly to Kahn@CNRI.Reston.Va.US if you
believe you have the right information.  (You may CC me.)  Personally,
I wouldn't worry.  If the German law says that part of a license is
illegal, it doesn't make it any more or less illegal whether the
license warns you about this fact.

I believe that in the US, as a form of consumer protection, some
states not only disallow general disclaimers, but also require that
licenses containing such disclaimers notify the reader that the
disclaimer is not valid in their state, so that's where the language
comes from.  I don't know about German law.

> > > > 6. This License Agreement will automatically terminate upon a material
> > > > breach of its terms and conditions.
> > >
> > > Immediately ? Other licenses usually include a 30-60 day period
> > > which allows the licensee to take actions. With the above text,
> > > the license will put the Python copy in question into an illegal
> > > state *prior* to having even been identified as conflicting with the
> > > license.
> > 
> > Believe it or not, this is necessary to ensure GPL compatibility!  An
> > earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
> > incompatible.  There's an easy workaround though: you fix your
> > compliance and download a new copy, which gives you all the same
> > rights again.
> 
> Hmm, but what about the 100,000 copies of the embedding application
> that have already been downloaded? I would have to force those users
> to redownload the application (or even just a demo of it) in
> order to reestablish the lawfulness of the copies.

It's better not to violate the license.  But do you really think that
they would go after you immediately if you show good intentions to
rectify?

> Not that I want to violate the license in any way, but there
> seem to be quite a few pitfalls in the present text, some of
> which are not clear at all (e.g. the paragraph 3).

I've warned Kahn about this effect of making the license bigger, but
he simply disagrees (and we agree to disagree).  I don't know what
else I could do about it, apart from putting a FAQ about the license
on python.org -- which I intend to do.

> > > > 7. This License Agreement shall be governed by and interpreted in all
> > > > respects by the law of the State of Virginia, excluding conflict of
> > > > law provisions.  Nothing in this License Agreement shall be deemed to
> > > > create any relationship of agency, partnership, or joint venture
> > > > between CNRI and Licensee.  This License Agreement does not grant
> > > > permission to use CNRI trademarks or trade name in a trademark sense
> > > > to endorse or promote products or services of Licensee, or any third
> > > > party.
> > >
> > > Would the name "Python" be considered a trademark in the above
> > > sense ?
> > 
> > No, Python is not a CNRI trademark.
> 
> I think you, or BeOpen on your behalf, should consider
> registering the mark before someone else does. There are
> quite a few "PYTHON" marks registered, yet they all refer
> to non-computer businesses.

Yes, I do intend to do this.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From effbot@telia.com  Wed Aug  2 22:37:52 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Wed, 2 Aug 2000 23:37:52 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
Message-ID: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>

Guido asked me to update my old SRE benchmarks, and
post them to python-dev.

Summary:

-- SRE is usually faster than the old RE module (PRE).

-- SRE is faster than REGEX on anything but very trivial
   patterns and short target strings.  And in some cases,
   it's even faster than a corresponding string.find...

-- on real-life benchmarks like XML parsing and Python
   tokenizing, SRE is 2-3 times faster than PRE.

-- using Unicode strings instead of 8-bit strings doesn't hurt
   performance (for some tests, the Unicode version is 30-40%
   faster on my machine.  Go figure...)

-- PRE is still faster for some patterns, especially when using
   long target strings.  I know why, and I plan to fix that before
   2.0 final.

enjoy /F

--------------------------------------------------------------------
These tests were made on a P3/233 MHz running Windows 95,
using a local build of the 0.9.8 release (this will go into 1.6b1,
I suppose).

--------------------------------------------------------------------
parsing xml:

running xmllib.py on hamlet.xml (280k):

sre8             7.14 seconds
sre16            7.82 seconds
pre             17.17 seconds

(for the sre16 test, the xml file was converted to unicode before
it was fed to the unmodified parser).

for comparison, here are the results for a couple of fast pure-Python
parsers:

rex/pre          2.44 seconds
rex/sre          0.59 seconds
srex/sre         0.16 seconds

(rex is a shallow XML parser, based on code by Robert Cameron.  srex
is an even simpler shallow parser, using sre's template mode).
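The shallow approach is easy to picture with a toy sketch in the same
spirit (the pattern and function below are my own illustration, not
rex's actual code):

```python
import re

# One alternation splits the document into markup and text runs in
# document order, without ever building a tree -- the essence of a
# "shallow" XML scan.
SHALLOW = re.compile(r"<[^>]*>|[^<]+")

def scan(xml):
    """Return the raw markup/text tokens of an XML string, in order."""
    return SHALLOW.findall(xml)

print(scan("<b>to be</b> or not"))
```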

--------------------------------------------------------------------
parsing python:

running tokenize.py on Tkinter.py (156k):

sre8             3.23 seconds
pre              7.57 seconds
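For context, the module being timed here is the stdlib's regex-driven
tokenizer; a minimal usage sketch against today's `tokenize` API (the
source snippet is an arbitrary example):

```python
import io
import tokenize

# tokenize.py turns source text into a stream of typed tokens; here we
# pull out just the identifiers.
src = "x = spam + 1  # comment\n"
names = [tok.string
         for tok in tokenize.generate_tokens(io.StringIO(src).readline)
         if tok.type == tokenize.NAME]
print(names)
```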

--------------------------------------------------------------------
searching for literal text:

searching for "spam" in a string padded with "spaz" (1000 bytes on
each side of the target):

string.find     0.112 ms
sre8.search     0.059
pre.search      0.122

unicode.find    0.130
sre16.search    0.065

(yes, regular expressions can run faster than optimized C code -- as
long as we don't take compilation time into account ;-)
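That comparison is easy to re-create today (the repeat count and
variable names are my own choices, and absolute numbers will differ
wildly from the 1.6-era figures above):

```python
import re
import timeit

# Rebuild the benchmark's haystack: "spam" buried between 1000 bytes of
# near-miss padding ("spaz") on each side.
haystack = "spaz" * 250 + "spam" + "spaz" * 250

pattern = re.compile("spam")  # compile once, outside the timed loop

t_find = timeit.timeit(lambda: haystack.find("spam"), number=10_000)
t_re = timeit.timeit(lambda: pattern.search(haystack), number=10_000)

print(f"str.find:  {t_find:.4f}s")
print(f"re.search: {t_re:.4f}s")
```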

same test, without any false matches:

string.find     0.035 ms
sre8.search     0.050
pre.search      0.116

unicode.find    0.031
sre16.search    0.055

--------------------------------------------------------------------
compiling regular expressions

compiling the 480 tests in the standard test suite:

sre             1.22 seconds
pre             0.05 seconds

or in other words, pre (using a compiler written in C) can
compile just under 10,000 patterns per second.  sre can only
compile about 400 patterns per second.  do we care? ;-)

(footnote: sre's pattern cache stores 100 patterns.  pre's
cache holds 20 patterns, IIRC).
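A bounded cache is also why the usual advice is to compile once up
front; a minimal sketch (the pattern is an arbitrary example):

```python
import re

# Compilation is the expensive step, so pay for it once: bind the
# compiled object at module level instead of passing the pattern string
# to re.search() inside a hot loop.
TOKEN = re.compile(r"[a-z][a-z0-9]*")

def first_token(line):
    """Return the first identifier-like run in `line`, or None."""
    m = TOKEN.search(line)
    return m.group(0) if m else None

print(first_token("--- a5,b7,c9 ---"))
```

The module-level helpers (`re.search` and friends) consult exactly this
kind of pattern cache, so short scripts get the amortization for free;
an explicit `re.compile` just makes it deterministic.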

--------------------------------------------------------------------
benchmark suite

to round off this report, here's a couple of "micro benchmarks".
all times are in milliseconds.

n=          0     5    50   250  1000  5000
----- ----- ----- ----- ----- ----- -----

pattern 'Python|Perl', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.013 0.013 0.016 0.027 0.079
sre16 0.014 0.014 0.015 0.018 0.025 0.076
pre   0.107 0.109 0.114 0.116 0.135 0.259
regex 0.011 0.011 0.012 0.016 0.033 0.122

pattern 'Python|Perl', string 'P'*n+'Perl'+'P'*n
sre8  0.013 0.016 0.030 0.100 0.358 1.716
sre16 0.014 0.015 0.030 0.094 0.347 1.649
pre   0.115 0.112 0.158 0.351 1.085 5.002
regex 0.010 0.016 0.060 0.271 1.022 5.162

(false matches cause problems for pre and regex)

pattern '(Python|Perl)', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.016 0.030 0.099 0.362 1.684
sre16 0.015 0.016 0.030 0.094 0.340 1.623
pre   0.110 0.111 0.112 0.119 0.143 0.267
regex 0.012 0.012 0.013 0.017 0.034 0.124

(in 0.9.8, sre's optimizer doesn't grok capturing groups, and
it doesn't realize that this pattern has to start with a "P")
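The gap between the `(Python|Perl)` and `(?:Python|Perl)` rows is the
cost of capturing; a small illustration of the distinction (the
semantics are unchanged in modern Python, even if the timings are not):

```python
import re

s = "-" * 1000 + "Perl" + "-" * 1000

capturing = re.compile(r"(Python|Perl)")        # group 1 records the match
non_capturing = re.compile(r"(?:Python|Perl)")  # grouping only, no capture

m1 = capturing.search(s)
m2 = non_capturing.search(s)

print(m1.group(1))  # the capture costs per-match bookkeeping
print(m2.group(0))  # same text, but no group state to maintain
```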

pattern '(?:Python|Perl)', string '-'*n+'Perl'+'-'*n
sre8  0.013 0.013 0.014 0.016 0.027 0.079
sre16 0.015 0.014 0.016 0.018 0.026 0.075
pre   0.108 0.135 0.113 0.137 0.140 0.275
regex skip

(non-capturing groups work better)

pattern 'Python', string '-'*n+'Python'+'-'*n
sre8  0.013 0.013 0.014 0.019 0.039 0.148
sre16 0.013 0.013 0.014 0.020 0.043 0.187
pre   0.129 0.105 0.109 0.117 0.191 0.277
regex 0.011 0.025 0.018 0.016 0.037 0.127

pattern 'Python', string 'P'*n+'Python'+'P'*n
sre8  0.040 0.012 0.021 0.026 0.080 0.248
sre16 0.012 0.013 0.015 0.025 0.061 0.283
pre   0.110 0.148 0.153 0.338 0.925 4.355
regex 0.013 0.013 0.041 0.155 0.535 2.628

(as we saw in the string.find test, sre is very fast when
there are lots of false matches)

pattern '.*Python', string '-'*n+'Python'+'-'*n
sre8  0.016 0.017 0.026 0.067 0.217 1.039
sre16 0.016 0.017 0.026 0.067 0.218 1.076
pre   0.111 0.112 0.124 0.180 0.386 1.494
regex 0.015 0.022 0.073 0.408 1.669 8.489

pattern '.*Python.*', string '-'*n+'Python'+'-'*n
sre8  0.016 0.017 0.030 0.089 0.315 1.499
sre16 0.016 0.018 0.032 0.090 0.314 1.537
pre   0.112 0.113 0.129 0.186 0.413 1.605
regex 0.016 0.023 0.076 0.387 1.674 8.519

pattern '.*(Python)', string '-'*n+'Python'+'-'*n
sre8  0.020 0.021 0.044 0.147 0.542 2.630
sre16 0.019 0.021 0.044 0.154 0.541 2.681
pre   0.115 0.117 0.141 0.245 0.636 2.690
regex 0.019 0.026 0.097 0.467 2.007 10.264

pattern '.*(?:Python)', string '-'*n+'Python'+'-'*n
sre8  0.016 0.017 0.027 0.065 0.220 1.037
sre16 0.016 0.017 0.026 0.070 0.221 1.066
pre   0.112 0.119 0.136 0.223 0.566 2.377
regex skip

pattern 'Python|Perl|Tcl', string '-'*n+'Perl'+'-'*n
sre8  0.013 0.015 0.034 0.114 0.407 1.985
sre16 0.014 0.016 0.034 0.109 0.392 1.915
pre   0.107 0.108 0.117 0.124 0.167 0.393
regex 0.012 0.012 0.013 0.017 0.033 0.123

(here's another sre compiler problem: it fails to realize
that this pattern starts with characters from a given set
[PT].  pre and regex both use bitmaps...)
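The first-character bitmap ("fastmap") idea can be sketched in a few
lines; this is an illustration of the concept only, not either engine's
actual implementation:

```python
import re

def search_with_fastmap(compiled, first_chars, text):
    # Only invoke the full engine at positions where a match could
    # possibly start -- everything else is skipped with a cheap test.
    for i, ch in enumerate(text):
        if ch in first_chars:
            m = compiled.match(text, i)
            if m:
                return m
    return None

pat = re.compile("Python|Perl|Tcl")
m = search_with_fastmap(pat, {"P", "T"}, "-" * 50 + "Perl" + "-" * 50)
print(m.group(0) if m else None)
```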

pattern 'Python|Perl|Tcl', string 'P'*n+'Perl'+'P'*n
sre8  0.013 0.018 0.055 0.228 0.847 4.165
sre16 0.015 0.027 0.055 0.218 0.821 4.061
pre   0.111 0.116 0.172 0.415 1.354 6.302
regex 0.011 0.019 0.085 0.374 1.467 7.261

(but when there are lots of false matches, sre is faster
anyway.  interesting...)

pattern '(Python|Perl|Tcl)', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.018 0.042 0.152 0.575 2.798
sre16 0.015 0.019 0.042 0.148 0.556 2.715
pre   0.112 0.111 0.116 0.129 0.172 0.408
regex 0.012 0.013 0.014 0.018 0.035 0.124

pattern '(?:Python|Perl|Tcl)', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.016 0.034 0.113 0.405 1.987
sre16 0.016 0.016 0.033 0.112 0.393 1.918
pre   0.109 0.109 0.112 0.128 0.177 0.397
regex skip

pattern '(Python)\\1', string '-'*n+'PythonPython'+'-'*n
sre8  0.014 0.018 0.030 0.096 0.342 1.673
sre16 0.015 0.016 0.031 0.094 0.330 1.625
pre   0.112 0.111 0.112 0.119 0.141 0.268
regex 0.011 0.012 0.013 0.017 0.033 0.123

pattern '(Python)\\1', string 'P'*n+'PythonPython'+'P'*n
sre8  0.013 0.016 0.035 0.111 0.411 1.976
sre16 0.015 0.016 0.034 0.112 0.416 1.992
pre   0.110 0.116 0.160 0.355 1.051 4.797
regex 0.011 0.017 0.047 0.200 0.737 3.680

pattern '([0a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.084 0.091 0.143 0.371 1.160 6.165
sre16 0.086 0.090 0.142 0.470 1.258 7.827
pre   0.155 0.140 0.185 0.200 0.280 0.523
regex 0.018 0.018 0.020 0.024 0.137 0.240

(again, sre's lack of "fastmap" is rather costly)

pattern '(?:[0a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.028 0.033 0.077 0.303 1.433 7.140
sre16 0.021 0.027 0.073 0.277 1.031 5.053
pre   0.131 0.131 0.174 0.183 0.227 0.461
regex skip

pattern '([a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.032 0.038 0.083 0.288 1.109 5.404
sre16 0.033 0.038 0.083 0.292 1.035 5.802
pre   0.195 0.135 0.176 0.187 0.233 0.468
regex 0.018 0.018 0.019 0.023 0.041 0.131

pattern '(?:[a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.022 0.025 0.067 0.302 1.011 8.245
sre16 0.021 0.026 0.066 0.302 1.103 5.372
pre   0.262 0.397 0.178 0.193 0.250 0.817
regex skip

pattern '.*P.*y.*t.*h.*o.*n.*', string '-'*n+'Python'+'-'*n
sre8  0.021 0.084 0.118 0.251 0.965 5.414
sre16 0.021 0.025 0.063 0.366 1.192 4.639
pre   0.123 0.147 0.225 0.568 1.899 9.336
regex 0.028 0.060 0.258 1.269 5.497 28.334

--------------------------------------------------------------------



From bwarsaw@beopen.com  Wed Aug  2 22:40:59 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Wed, 2 Aug 2000 17:40:59 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
 <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
Message-ID: <14728.38251.289986.857417@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

    >> Do we have a procedure for putting more batteries in the core?
    >> I'm not talking about stuff like PEP-206, I'm talking about
    >> small, useful modules like Cookies.py.

    GvR> Cookie support in the core would be a good thing.

I use Tim O'Malley's LGPL'd version (not as contagious as GPL'd) in
Mailman with one important patch.  I've uploaded it to SF as patch
#101055.  If you like it, I'm happy to check it in.

-Barry


From bwarsaw@beopen.com  Wed Aug  2 22:42:26 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Wed, 2 Aug 2000 17:42:26 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
 <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
 <14728.8580.460583.760620@cj42289-a.reston1.va.home.com>
 <200008021432.JAA02937@cj20424-a.reston1.va.home.com>
Message-ID: <14728.38338.92481.102493@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

    GvR> I think Cookie.py is for server-side management of cookies,
    GvR> not for client-side.  Do we need client-side cookies too????

Ah.  AFAIK, Tim's Cookie.py is server side only.  Still very useful --
and already written!

-Barry


From guido@beopen.com  Wed Aug  2 23:44:03 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 17:44:03 -0500
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: Your message of "Wed, 02 Aug 2000 23:37:52 +0200."
 <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>
Message-ID: <200008022244.RAA04388@cj20424-a.reston1.va.home.com>

> Guido asked me to update my old SRE benchmarks, and
> post them to python-dev.

Thanks, Fredrik!  This (plus the fact that SRE now passes all PRE
tests) makes me very happy with using SRE as the regular expression
engine of choice for 1.6 and 2.0.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Wed Aug  2 23:46:35 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 17:46:35 -0500
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: Your message of "Wed, 02 Aug 2000 17:40:59 -0400."
 <14728.38251.289986.857417@anthem.concentric.net>
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
 <14728.38251.289986.857417@anthem.concentric.net>
Message-ID: <200008022246.RAA04405@cj20424-a.reston1.va.home.com>

>     GvR> Cookie support in the core would be a good thing.

[Barry]
> I use Tim O'Malley's LGPL'd version (not as contagious as GPL'd) in
> Mailman with one important patch.  I've uploaded it to SF as patch
> #101055.  If you like it, I'm happy to check it in.

I don't have the time to judge this code myself, but hope that others
in this group do.

Are you sure it's a good thing to add LGPL'ed code to the Python
standard library though?  AFAIK it is still more restrictive than the
old CWI license and probably also more restrictive than the new CNRI
license; so it could come under scrutiny and prevent closed,
proprietary software development using Python...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From barry@scottb.demon.co.uk  Wed Aug  2 22:50:43 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Wed, 2 Aug 2000 22:50:43 +0100
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIENBDCAA.MarkH@ActiveState.com>
Message-ID: <020901bffccb$b4bf4da0$060210ac@private>

This is a multi-part message in MIME format.

------=_NextPart_000_020A_01BFFCD4.1683B5A0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit



> -----Original Message-----
> From: Mark Hammond [mailto:MarkH@activestate.com]
> Sent: 02 August 2000 00:14
> To: Barry Scott; python-dev@python.org
> Subject: RE: [Python-Dev] Preventing 1.5 extensions crashing under
> 1.6/2.0 Python
> 
> 
> > If someone in the core of Python thinks a patch implementing
> > what I've outlined is useful please let me know and I will
> > generate the patch.
> 
> Umm - I'm afraid that I don't keep my python-dev emails for that long, and
> right now I'm too lazy/busy to dig around the archives.
> 
> Exactly what did you outline?  I know it went around a few times, and I
> can't remember who said what.  For my money, I liked Fredrik's solution
> best (check Py_IsInitialized() in Py_InitModule4()), but as mentioned that
> only solves for the next version of Python; it doesn't solve the fact that
> 1.5 modules will crash under 1.6/2.0

	This is not a good way to solve the problem as it only works in a
	limited number of cases. 

	Attached is my proposal which works for all new and old python
	and all old and new extensions.

> 
> It would definitely be excellent to get _something_ in the CNRI 1.6
> release, so the BeOpen 2.0 release can see the results.

> But-I-doubt-anyone-will-release-extension-modules-for-1.6-anyway ly,

	Yes indeed once the story of 1.6 and 2.0 is out I expect folks
	will skip 1.6. For example, if your win32 stuff is not ported
	then Python 1.6 is not usable on Windows/NT.
	
> 
> Mark.

		Barry
------=_NextPart_000_020A_01BFFCD4.1683B5A0
Content-Type: message/rfc822
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment

From: "Barry Scott" <barry@scottb.demon.co.uk>
Sender: <python-dev-admin@python.org>
To: <python-dev@python.org>
Subject: RE: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
Date: Tue, 18 Jul 2000 23:36:15 +0100
Message-ID: <000701bff108$950ec9f0$060210ac@private>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook 8.5, Build 4.71.2173.0
Importance: Normal
X-MimeOLE: Produced By Microsoft MimeOLE V5.00.2919.6700
X-UIDL: 5c981bb22b1a76163266dc06120e339a
In-Reply-To: 
X-BeenThere: python-dev@python.org
X-Mailman-Version: 2.0beta4

Mark's comment about what I'd come up with being too complex
inspired this simpler solution.

Change the init function name to a new name, say PythonExtensionInit_.
Pass in the API version for the extension writer to check. If the
version is bad for this extension, it returns without calling any
Python functions. Add a return code that is true if compatible, false
if not. If compatible, the extension can use Python functions and
report any problems it wishes.

int PythonExtensionInit_XXX( int invoking_python_api_version )
	{
	if( invoking_python_api_version != PYTHON_API_VERSION )
		{
		/* python will report that the module is incompatible */
		return 0;
		}

	/* setup module for XXX ... */

	/* say this extension is compatible with the invoking python */
	return 1;
	}

All 1.5 extensions fail to load on python 2.0 and later.
All 2.0 extensions fail to load on python 1.5.

All new extensions work only with python of the same API version.

Document that failure to set up a module could mean the extension is
incompatible with this version of python.

Small code change in python core. But need to tell extension writers
what the new interface is and update all extensions within the python
CVS tree.

		Barry


_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://www.python.org/mailman/listinfo/python-dev

------=_NextPart_000_020A_01BFFCD4.1683B5A0--



From akuchlin@mems-exchange.org  Wed Aug  2 22:55:53 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Wed, 2 Aug 2000 17:55:53 -0400
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <200008022246.RAA04405@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 02, 2000 at 05:46:35PM -0500
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com>
Message-ID: <20000802175553.A30340@kronos.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 05:46:35PM -0500, Guido van Rossum wrote:
>Are you sure it's a good thing to add LGPL'ed code to the Python
>standard library though?  AFAIK ... it could come under scrutiny and
>prevent closed, proprietary software development using Python...

Licence discussions are a conversational black hole...  Why not just
ask Tim O'Malley to change the licence in return for getting it added
to the core?

--amk


From akuchlin@mems-exchange.org  Wed Aug  2 23:00:59 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Wed, 2 Aug 2000 18:00:59 -0400
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>; from effbot@telia.com on Wed, Aug 02, 2000 at 11:37:52PM +0200
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>
Message-ID: <20000802180059.B30340@kronos.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 11:37:52PM +0200, Fredrik Lundh wrote:
>-- SRE is usually faster than the old RE module (PRE).

Once the compiler is translated to C, it might be worth considering
making SRE available as a standalone library for use outside of
Python.  Most other regex libraries either don't do Perl's extensions,
or they don't do Unicode.  Bonus points if you can get the Perl6 team
interested in it.

Hmm... here's an old problem that's returned (recursion on repeated
group matches, I expect):

>>> p=re.compile('(x)*')
>>> p
<SRE_Pattern object at 0x8127048>
>>> p.match(500000*'x')
Segmentation fault (core dumped)

--amk


From guido@beopen.com  Thu Aug  3 00:10:33 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 18:10:33 -0500
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: Your message of "Wed, 02 Aug 2000 17:55:53 -0400."
 <20000802175553.A30340@kronos.cnri.reston.va.us>
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com>
 <20000802175553.A30340@kronos.cnri.reston.va.us>
Message-ID: <200008022310.SAA04518@cj20424-a.reston1.va.home.com>

> Licence discussions are a conversational black hole...  Why not just
> ask Tim O'Malley to change the licence in return for getting it added
> to the core?

Excellent idea.  Go for it!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Thu Aug  3 00:11:39 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 18:11:39 -0500
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: Your message of "Wed, 02 Aug 2000 18:00:59 -0400."
 <20000802180059.B30340@kronos.cnri.reston.va.us>
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>
 <20000802180059.B30340@kronos.cnri.reston.va.us>
Message-ID: <200008022311.SAA04529@cj20424-a.reston1.va.home.com>

> Hmm... here's an old problem that's returned (recursion on repeated
> group matches, I expect):
> 
> >>> p=re.compile('(x)*')
> >>> p
> <SRE_Pattern object at 0x8127048>
> >>> p.match(500000*'x')
> Segmentation fault (core dumped)

Ouch.

Andrew, would you mind adding a test case for that to the re test
suite?  It's important that this doesn't come back!
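Something along these lines, perhaps (pattern and length taken from
Andrew's session; the test-suite integration details are left out):

```python
import re

def test_repeated_group_no_crash():
    # A pattern with a repeated group, matched against a very long
    # string; the old recursive engine overflowed the C stack on this.
    p = re.compile('(x)*')
    m = p.match(500000 * 'x')
    assert m is not None
    assert m.end() == 500000

test_repeated_group_no_crash()
```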

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Thu Aug  3 00:18:04 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 18:18:04 -0500
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: Your message of "Wed, 02 Aug 2000 22:50:43 +0100."
 <020901bffccb$b4bf4da0$060210ac@private>
References: <020901bffccb$b4bf4da0$060210ac@private>
Message-ID: <200008022318.SAA04558@cj20424-a.reston1.va.home.com>

> > But-I-doubt-anyone-will-release-extension-modules-for-1.6-anyway ly,
> 
> 	Yes indeed once the story of 1.6 and 2.0 is out I expect folks
> 	will skip 1.6. For example, if your win32 stuff is not ported
> 	then Python 1.6 is not usable on Windows/NT.

I expect to be releasing a 1.6 Windows installer -- but I can't
control Mark Hammond.  Yet, it shouldn't be hard for him to create a
1.6 version of win32all, should it?

> Change the init function name to a new name, say PythonExtensionInit_.
> Pass in the API version for the extension writer to check. If the
> version is bad for this extension, it returns without calling any
> Python functions. Add a return code that is true if compatible, false
> if not. If compatible, the extension can use Python functions and
> report any problems it wishes.
> 
> int PythonExtensionInit_XXX( int invoking_python_api_version )
> 	{
> 	if( invoking_python_api_version != PYTHON_API_VERSION )
> 		{
> 		/* python will report that the module is incompatible */
> 		return 0;
> 		}
> 
> 	/* setup module for XXX ... */
> 
> 	/* say this extension is compatible with the invoking python */
> 	return 1;
> 	}
> 
> All 1.5 extensions fail to load on python 2.0 and later.
> All 2.0 extensions fail to load on python 1.5.
> 
> All new extensions work only with python of the same API version.
> 
> Document that failure to set up a module could mean the extension is
> incompatible with this version of python.
> 
> Small code change in python core. But need to tell extension writers
> what the new interface is and update all extensions within the python
> CVS tree.

I sort-of like this idea -- at least at the +0 level.

I would choose a shorter name: PyExtInit_XXX().

Could you (or someone else) prepare a patch that changes this?  It
would be great if the patch were relative to the 1.6 branch of the
source tree; unfortunately this is different because of the
ANSIfication.

Unfortunately we only have two days to get this done for 1.6 -- I plan
to release 1.6b1 this Friday!  If you don't get to it, preparing a
patch for 2.0 would be the next best thing.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From effbot@telia.com  Thu Aug  3 00:13:30 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 01:13:30 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <020901bffccb$b4bf4da0$060210ac@private>  <200008022318.SAA04558@cj20424-a.reston1.va.home.com>
Message-ID: <01d701bffcd7$46a74a00$f2a6b5d4@hagrid>

> Yes indeed once the story of 1.6 and 2.0 is out I expect folks
> will skip 1.6.   For example, if your win32 stuff is not ported then
> Python 1.6 is not usable on Windows/NT.

"not usable"?

guess you haven't done much cross-platform development lately...

> Change the init function name to a new name PythonExtensionInit_ say.
> Pass in the API version for the extension writer to check. If the
> version is bad for this extension returns without calling any python

huh?  are you seriously proposing to break every single C extension
ever written -- on each and every platform -- just to trap an error
message caused by extensions linked against 1.5.2 on your favourite
platform?

> Small code change in python core. But need to tell extension writers
> what the new interface is and update all extensions within the python
> CVS tree.

you mean "update the source code for all extensions ever written."

-1



From DavidA@ActiveState.com  Thu Aug  3 01:33:02 2000
From: DavidA@ActiveState.com (David Ascher)
Date: Wed, 2 Aug 2000 17:33:02 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Fork on Win32 - was (test_fork1 failing...)
In-Reply-To: <013901bff821$55dd02e0$8119fea9@neil>
Message-ID: <Pine.WNT.4.21.0008021732140.980-100000@loom>

>    IIRC ActiveState contributed to Perl a version of fork that works on
> Win32. Has anyone looked at this? Could it be grabbed for Python? This would
> help heal one of the more difficult platform rifts. Emulating fork for Win32
> looks quite difficult to me but if it's already done...

I've talked to Sarathy about it, and it's messy, as Perl manages PIDs
above and beyond what Windows does, among other things.  If anyone is
interested in doing that work, I can make the introduction.

--david



From DavidA@ActiveState.com  Thu Aug  3 01:35:01 2000
From: DavidA@ActiveState.com (David Ascher)
Date: Wed, 2 Aug 2000 17:35:01 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Fork on Win32 - was (test_fork1 failing...)
In-Reply-To: <013901bff821$55dd02e0$8119fea9@neil>
Message-ID: <Pine.WNT.4.21.0008021734040.980-100000@loom>

>    IIRC ActiveState contributed to Perl a version of fork that works on
> Win32. Has anyone looked at this? Could it be grabbed for Python? This would
> help heal one of the more difficult platform rifts. Emulating fork for Win32
> looks quite difficult to me but if it's already done...

Sigh. Me tired.

The message I posted a few minutes ago was actually referring to the
system() work, not the fork() work.  I agree that the fork() emulation
isn't Pythonic.

--david



From skip@mojam.com  Thu Aug  3 03:32:29 2000
From: skip@mojam.com (Skip Montanaro)
Date: Wed, 2 Aug 2000 21:32:29 -0500 (CDT)
Subject: [Python-Dev] METH_VARARGS
Message-ID: <14728.55741.477399.196240@beluga.mojam.com>

I noticed Andrew Kuchling's METH_VARARGS submission:

    Use METH_VARARGS instead of numeric constant 1 in method def. tables

While METH_VARARGS is obviously a lot better than a hardcoded 1, shouldn't
METH_VARARGS be something like Py_METH_VARARGS or PY_METH_VARARGS to avoid
potential conflicts with other packages?

Skip


From akuchlin@mems-exchange.org  Thu Aug  3 03:41:02 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Wed, 2 Aug 2000 22:41:02 -0400
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <200008022310.SAA04518@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 02, 2000 at 06:10:33PM -0500
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com>
Message-ID: <20000802224102.A25837@newcnri.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 06:10:33PM -0500, Guido van Rossum wrote:
>> Why not just
>> ask Tim O'Malley to change the licence in return for getting it added
>> to the core?
>Excellent idea.  Go for it!

Mail to timo@bbn.com bounces; does anyone have a more recent e-mail
address?  What do we do if he can't be located?  Add the module anyway,
abandon the idea, or write a new version?

--amk


From fdrake@beopen.com  Thu Aug  3 03:51:23 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 22:51:23 -0400 (EDT)
Subject: [Python-Dev] METH_VARARGS
In-Reply-To: <14728.55741.477399.196240@beluga.mojam.com>
References: <14728.55741.477399.196240@beluga.mojam.com>
Message-ID: <14728.56875.996310.790872@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:
 > While METH_VARARGS is obviously a lot better than a hardcoded 1, shouldn't
 > METH_VARARGS be something like Py_METH_VARARGS or PY_METH_VARARGS to avoid
 > potential conflicts with other packages?

  I think so, but there are too many third-party extension modules
that would break unless we also keep offering the old symbols.  I see
two options: leave things as they are, or provide both versions of the
symbols through at least Python 2.1.  For the latter, all examples in
the code and documentation would need to be changed and the non-PY_
versions strongly labelled as deprecated and going away in Python
version 2.2 (or whatever version it would be).
  It would *not* hurt to provide both symbols and change all the
examples, at any rate.  Aside from deleting all the checkin email,
that is!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From gstein@lyra.org  Thu Aug  3 04:06:20 2000
From: gstein@lyra.org (Greg Stein)
Date: Wed, 2 Aug 2000 20:06:20 -0700
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <20000802224102.A25837@newcnri.cnri.reston.va.us>; from akuchlin@cnri.reston.va.us on Wed, Aug 02, 2000 at 10:41:02PM -0400
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com> <20000802224102.A25837@newcnri.cnri.reston.va.us>
Message-ID: <20000802200620.G19525@lyra.org>

On Wed, Aug 02, 2000 at 10:41:02PM -0400, Andrew Kuchling wrote:
> On Wed, Aug 02, 2000 at 06:10:33PM -0500, Guido van Rossum wrote:
> >> Why not just
> >> ask Tim O'Malley to change the licence in return for getting it added
> >> to the core?
> >Excellent idea.  Go for it!
> 
> Mail to timo@bbn.com bounces; does anyone have a more recent e-mail
> address?  What do we do if he can't be located?  Add the module anyway,
> abandon the idea, or write a new version?

If we can't contact him, then I'd be quite happy to assist in designing and
writing a new one under a BSD-ish or Public Domain license. I was
considering doing exactly that just last week :-)

[ I want to start using cookies in ViewCVS; while the LGPL is "fine" for me,
  it would be nice if the whole ViewCVS package was BSD-ish ]


Of course, I'd much rather get a hold of Tim.

Cheers,
-g


-- 
Greg Stein, http://www.lyra.org/


From bwarsaw@beopen.com  Thu Aug  3 05:11:59 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:11:59 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
 <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
 <14728.38251.289986.857417@anthem.concentric.net>
 <200008022246.RAA04405@cj20424-a.reston1.va.home.com>
Message-ID: <14728.61711.859894.972939@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

    GvR> Are you sure it's a good thing to add LGPL'ed code to the
    GvR> Python standard library though?  AFAIK it is still more
    GvR> restrictive than the old CWI license and probably also more
    GvR> restrictive than the new CNRI license; so it could come under
    GvR> scrutiny and prevent closed, proprietary software development
    GvR> using Python...

I don't know; however, I have a version of the file with essentially
no license on it:

# Id: Cookie.py,v 2.4 1998/02/13 16:42:30 timo Exp
#  by  Timothy O'Malley <timo@bbn.com> Date: 1998/02/13 16:42:30
#
#  Cookie.py is an update for the old nscookie.py module.
#    Under the old module, it was not possible to set attributes,
#    such as "secure" or "Max-Age" on key,value granularity.  This
#    shortcoming has been addressed in Cookie.py but has come at
#    the cost of a slightly changed interface.  Cookie.py also
#    requires Python-1.5, for the re and cPickle modules.
#
#  The original idea to treat Cookies as a dictionary came from
#  Dave Mitchel (davem@magnet.com) in 1995, when he released the
#  first version of nscookie.py.

Is that better or worse? <wink>.  Back in '98, I actually asked him to
send me an LGPL'd copy because that worked better for Mailman.  We
could start with Tim's pre-LGPL'd version and backport the minor mods
I've made.

BTW, I've recently tried to contact Tim, but the address in the file
bounces.

-Barry


From guido@beopen.com  Thu Aug  3 05:39:47 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 23:39:47 -0500
Subject: [Python-Dev] METH_VARARGS
In-Reply-To: Your message of "Wed, 02 Aug 2000 21:32:29 EST."
 <14728.55741.477399.196240@beluga.mojam.com>
References: <14728.55741.477399.196240@beluga.mojam.com>
Message-ID: <200008030439.XAA05445@cj20424-a.reston1.va.home.com>

> While METH_VARARGS is obviously a lot better than a hardcoded 1, shouldn't
> METH_VARARGS be something like Py_METH_VARARGS or PY_METH_VARARGS to avoid
> potential conflicts with other packages?

Unless someone knows of a *real* conflict, I'd leave this one alone.
Yes, it should be Py_*, but no, it's not worth the effort of changing
all that.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From gstein@lyra.org  Thu Aug  3 05:26:48 2000
From: gstein@lyra.org (Greg Stein)
Date: Wed, 2 Aug 2000 21:26:48 -0700
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: <14728.61711.859894.972939@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 03, 2000 at 12:11:59AM -0400
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <14728.61711.859894.972939@anthem.concentric.net>
Message-ID: <20000802212648.J19525@lyra.org>

On Thu, Aug 03, 2000 at 12:11:59AM -0400, Barry A. Warsaw wrote:
>...
> I don't know, however I have a version of the file with essentially no
> license on it:

That implies "no license" which means "no rights to redistribute, use, or
whatever." Very incompatible :-)

>...
> Is that better or worse? <wink>.  Back in '98, I actually asked him to
> send me an LGPL'd copy because that worked better for Mailman.  We
> could start with Tim's pre-LGPL'd version and backport the minor mods
> I've made.

Wouldn't help. We need a relicensed version, to use the LGPL'd version, or
to rebuild it from scratch.

> BTW, I've recently tried to contact Tim, but the address in the file
> bounces.

I just sent mail to Andrew Smith who has been working with Tim for several
years on various projects (RSVP, RAP, etc). Hopefully, he has a current
email address for Tim. I'll report back when I hear something from Andrew.
Of course, if somebody else can track him down faster...

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From bwarsaw@beopen.com  Thu Aug  3 05:41:14 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:41:14 -0400 (EDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
Message-ID: <14728.63466.263123.434708@anthem.concentric.net>

>>>>> "GM" == Gareth McCaughan <Gareth.McCaughan@pobox.com> writes:

    GM> Consider the following piece of code, which takes a file
    GM> and prepares a concordance saying on which lines each word
    GM> in the file appears. (For real use it would need to be
    GM> made more sophisticated.)

    |     line_number = 0
    |     for line in open(filename).readlines():
    |       line_number = line_number+1
    |       for word in map(string.lower, string.split(line)):
    |         existing_lines = word2lines.get(word, [])   |
    |         existing_lines.append(line_number)          | ugh!
    |         word2lines[word] = existing_lines           |

I've run into this same situation many times myself.  I agree it's
annoying.  Annoying enough to warrant a change?  Maybe -- I'm not
sure.

    GM> I suggest a minor change: another optional argument to
    GM> "get" so that

    GM>     dict.get(item,default,flag)

Good idea, not so good solution.  Let's make it more explicit by
adding a new method instead of a flag.  I'll use `put' here since it
seems (in a sense) the opposite of get() and my sleep-addled brain
can't think of anything more clever.  Let's not argue about the name
of this method though -- if Guido likes the extension, he'll pick a
good name and I go on record as agreeing with his name choice, just
to avoid a protracted war.

A trivial patch to UserDict (see below) will let you play with this.

>>> d = UserDict()
>>> word = 'hello'
>>> d.get(word, [])
[]
>>> d.put(word, []).append('world')
>>> d.get(word)
['world']
>>> d.put(word, []).append('gareth')
>>> d.get(word)
['world', 'gareth']

Shouldn't be too hard to add equivalent C code to the dictionary
object.

-Barry

-------------------- snip snip --------------------
Index: UserDict.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/UserDict.py,v
retrieving revision 1.7
diff -u -r1.7 UserDict.py
--- UserDict.py	2000/02/02 15:10:14	1.7
+++ UserDict.py	2000/08/03 04:35:11
@@ -34,3 +34,7 @@
                 self.data[k] = v
     def get(self, key, failobj=None):
         return self.data.get(key, failobj)
+    def put(self, key, failobj=None):
+        if not self.data.has_key(key):
+            self.data[key] = failobj
+        return self.data[key]
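For comparison, here is how the proposal plays out on Gareth's
concordance example, sketched with a plain dict subclass (the class
name and details are illustrative; this is essentially the operation
that a builtin version would provide):

```python
class PutDict(dict):
    """Plain-dict sketch of the proposed put() method."""

    def put(self, key, failobj=None):
        # Store failobj only if key is absent, then return whatever is
        # stored under key -- so the caller can mutate it in place.
        if key not in self:
            self[key] = failobj
        return self[key]

def concordance(lines):
    # Gareth's loop, with the get/append/store dance ("ugh!") collapsed
    # into a single put() call.
    word2lines = PutDict()
    for line_number, line in enumerate(lines, 1):
        for word in line.lower().split():
            word2lines.put(word, []).append(line_number)
    return word2lines
```

For example, concordance(["Hello world", "hello again"]) maps 'hello'
to [1, 2] and 'world' to [1].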


From bwarsaw@beopen.com  Thu Aug  3 05:45:33 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:45:33 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
 <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
 <14728.38251.289986.857417@anthem.concentric.net>
 <200008022246.RAA04405@cj20424-a.reston1.va.home.com>
 <20000802175553.A30340@kronos.cnri.reston.va.us>
 <200008022310.SAA04518@cj20424-a.reston1.va.home.com>
 <20000802224102.A25837@newcnri.cnri.reston.va.us>
 <20000802200620.G19525@lyra.org>
Message-ID: <14728.63725.390053.65213@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein@lyra.org> writes:

    GS> If we can't contact him, then I'd be quite happy to assist in
    GS> designing and writing a new one under a BSD-ish or Public
    GS> Domain license. I was considering doing exactly that just last
    GS> week :-)

I don't think that's necessary; see my other post.  We should still
try to contact him if possible though.

My request for an LGPL'd copy was necessary because Mailman is GPL'd
(and Stallman suggested this as an acceptable solution).  It would be
just as fine for Mailman if an un-LGPL'd Cookie.py were part of the
standard Python distribution.

-Barry


From bwarsaw@beopen.com  Thu Aug  3 05:50:51 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:50:51 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
 <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
 <14728.38251.289986.857417@anthem.concentric.net>
 <200008022246.RAA04405@cj20424-a.reston1.va.home.com>
 <14728.61711.859894.972939@anthem.concentric.net>
 <20000802212648.J19525@lyra.org>
Message-ID: <14728.64043.155392.32408@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein@lyra.org> writes:

    GS> Wouldn't help. We need a relicensed version, to use the LGPL'd
    GS> version, or to rebuild it from scratch.

    >> BTW, I've recently tried to contact Tim, but the address in the
    >> file bounces.

    GS> I just sent mail to Andrew Smith who has been working with Tim
    GS> for several years on various projects (RSVP, RAP,
    GS> etc). Hopefully, he has a current email address for Tim. I'll
    GS> report back when I hear something from Andrew.  Of course, if
    GS> somebody else can track him down faster...

Cool.  Tim was exceedingly helpful in giving me a version of the file
I could use.  I have no doubt that if we can contact him, he'll
relicense it in a way that makes sense for the standard distro.  That
would be the best outcome.

-Barry


From tim_one@email.msn.com  Thu Aug  3 05:52:07 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 00:52:07 -0400
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <14728.63725.390053.65213@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELIGNAA.tim_one@email.msn.com>

Guys, these are cookies, not brain surgery!  If people like this API,
couldn't someone have done a clean-room reimplementation of it in less time
than we've spent jockeying over the freaking license?

tolerance-for-license-discussions-at-an-all-time-low-ly y'rs  - tim




From effbot@telia.com  Thu Aug  3 09:03:53 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 10:03:53 +0200
Subject: [Python-Dev] Cookies.py in the core
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com> <20000802224102.A25837@newcnri.cnri.reston.va.us>
Message-ID: <002601bffd21$5f23e800$f2a6b5d4@hagrid>

andrew wrote:

> Mail to timo@bbn.com bounces; does anyone have a more recent e-mail
> address?  What do we do if he can't be located?  Add the module anyway,
> abandon the idea, or write a new version?

readers of the daily URL might have noticed that he posted
a socket timeout wrapper a few days ago:

    Timothy O'Malley <timo@alum.mit.edu>

it's the same name and the same signature, so I assume it's
the same guy ;-)

</F>



From wesc@alpha.ece.ucsb.edu  Thu Aug  3 08:56:59 2000
From: wesc@alpha.ece.ucsb.edu (Wesley J. Chun)
Date: Thu, 3 Aug 2000 00:56:59 -0700 (PDT)
Subject: [Python-Dev] Re: Bookstand at LA Python conference
Message-ID: <200008030756.AAA23434@alpha.ece.ucsb.edu>

    > From: Guido van Rossum <guido@python.org>
    > Date: Sat, 29 Jul 2000 12:39:01 -0500
    > 
    > The next Python conference will be in Long Beach (Los Angeles).  We're
    > looking for a bookstore to set up a bookstand like we had at the last
    > conference.  Does anybody have a suggestion?


the most well-known big independent technical bookstore that also
does mail order and has been around for about 20 years is OpAmp:

OpAmp Technical Books
1033 N. Sycamore Ave
Los Angeles, CA  90038
800-468-4322
http://www.opamp.com

there really isn't a "2nd place" since OpAmp owns the market,
but if there was a #3, it would be Technical Book Company:

Technical Book Company
2056 Westwood Blvd
Los Angeles, CA  90025
800-233-5150


the above 2 stores are listed in the misc.books.technical FAQ:

http://www.faqs.org/faqs/books/technical/

there's a smaller bookstore that's also known to have a good
technical book selection:

Scholar's Bookstore 
El Segundo, CA  90245
310-322-3161

(and of course, the standbys are always the university bookstores
for UCLA, CalTech, UC Irvine, Cal State Long Beach, etc.)

as to be expected, someone has collated a list of bookstores
in the LA area:

http://www.geocities.com/Athens/4824/na-la.htm

hope this helps!!

-wesley

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

"Core Python Programming", Prentice Hall PTR, TBP Summer/Fall 2000
    http://www.phptr.com/ptrbooks/ptr_0130260363.html

Python Books:   http://www.softpro.com/languages-python.html

wesley.j.chun :: wesc@alpha.ece.ucsb.edu
cyberweb.consulting :: silicon.valley, ca
http://www.roadkill.com/~wesc/cyberweb/


From tim_one@email.msn.com  Thu Aug  3 09:05:31 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 04:05:31 -0400
Subject: [Python-Dev] Go \x yourself
Message-ID: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>

Offline, Guido and /F and I had a mighty battle about the meaning of \x
escapes in Python.  In the end we agreed to change the meaning of \x in a
backward-*in*compatible way.  Here's the scoop:

In 1.5.2 and before, the Reference Manual implies that an \x escape takes
two or more hex digits following, and has the value of the last byte.  In
reality it also accepted just one hex digit, or even none:

>>> "\x123465"  # same as "\x65"
'e'
>>> "\x65"
'e'
>>> "\x1"
'\001'
>>> "\x\x"
'\\x\\x'
>>>

I found no instances of the 0- or 1-digit forms in the CVS tree or in any of
the Python packages on my laptop.  Do you have any in your code?

And, apart from some deliberate abuse in the test suite, I found no
instances of more-than-two-hex-digits \x escapes either.  Similarly, do you
have any?  As Guido said and all agreed, it's probably a bug if you do.

The new rule is the same as Perl uses for \x escapes in -w mode, except that
Python will raise ValueError at compile-time for an invalid \x escape:  an
\x escape is of the form

    \xhh

where h is a hex digit.  That's it.  Guido reports that the O'Reilly books
(probably due to their Perl editing heritage!) already say Python works this
way.  It's the same rule for 8-bit and Unicode strings (in Perl too, at
least wrt the syntax).  In a Unicode string \xij has the same meaning as
\u00ij, i.e. it's the obvious Latin-1 character.  Playing back the above
pretending the new rule is in place:

>>> "\x123465" # \x12 -> \022, "3465" left alone
'\0223456'
>>> "\x65"
'e'
>>> "\x1"
ValueError
>>> "\x\x"
ValueError
>>>

We all support this:  the open-ended gobbling \x used to do lost information
without warning, and had no benefit whatsoever.  While there was some
attraction to generalizing \x in Unicode strings, \u1234 is already
perfectly adequate for specifying Unicode characters in hex form, and the
new rule for \x at least makes consistent Unicode sense now (and in a way
JPython should be able to adopt easily too).  The new rule gets rid of the
unPythonic TMTOWTDI introduced by generalizing Unicode \x to "the last 4
bytes".  That generalization also didn't make sense in light of the desire
to add \U12345678 escapes too (i.e., so then how many trailing hex digits
should a generalized \x suck up?  2?  4?  8?).  The only actual use for \x
in 8-bit strings (i.e., a way to specify a byte in hex) is still supported
with the same meaning as in 1.5.2, and \x in a Unicode string means
something as close to that as is possible.
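On a current interpreter the new rule can be checked directly. A minimal sketch, with one caveat: modern Pythons reject an invalid \x escape with SyntaxError rather than the ValueError named above, but either way it is a compile-time rejection:

```python
# Sketch: checking the \xhh rule by compiling string literals at runtime.
# Valid escapes consume exactly two hex digits; anything shorter fails
# to compile (ValueError historically, SyntaxError on modern Pythons).
assert "\x65" == "e"
assert "\x123465" == "\x12" + "3465"   # exactly two hex digits consumed

for bad in (r'"\x1"', r'"\x\x"'):
    try:
        compile(bad, "<test>", "eval")
    except (ValueError, SyntaxError):
        pass
    else:
        raise AssertionError("%s should not have compiled" % bad)
```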

Sure feels right to me.  Gripe quick if it doesn't to you.

as-simple-as-possible-is-a-nice-place-to-rest-ly y'rs  - tim




From gstein@lyra.org  Thu Aug  3 09:16:37 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 01:16:37 -0700
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELIGNAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 03, 2000 at 12:52:07AM -0400
References: <14728.63725.390053.65213@anthem.concentric.net> <LNBBLJKPBEHFEDALKOLCCELIGNAA.tim_one@email.msn.com>
Message-ID: <20000803011637.K19525@lyra.org>

On Thu, Aug 03, 2000 at 12:52:07AM -0400, Tim Peters wrote:
> Guys, these are cookies, not brain surgery!  If people like this API,
> couldn't someone have done a clean-room reimplementation of it in less time
> than we've spent jockeying over the freaking license?

No.


-- 
Greg Stein, http://www.lyra.org/


From gstein@lyra.org  Thu Aug  3 09:18:38 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 01:18:38 -0700
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 03, 2000 at 04:05:31AM -0400
References: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <20000803011838.L19525@lyra.org>

On Thu, Aug 03, 2000 at 04:05:31AM -0400, Tim Peters wrote:
>...
> Sure feels right to me.  Gripe quick if it doesn't to you.

+1

-- 
Greg Stein, http://www.lyra.org/


From effbot@telia.com  Thu Aug  3 09:27:39 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 10:27:39 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us>
Message-ID: <006601bffd24$e25a9360$f2a6b5d4@hagrid>

andrew wrote:
>
> >-- SRE is usually faster than the old RE module (PRE).
>
> Once the compiler is translated to C, it might be worth considering
> making SRE available as a standalone library for use outside of
> Python.

if it will ever be translated, that is...

> Hmm... here's an old problem that's returned (recursion on repeated
> group matches, I expect):
>
> >>> p=re.compile('(x)*')
> >>> p
> <SRE_Pattern object at 0x8127048>
> >>> p.match(500000*'x')
> Segmentation fault (core dumped)

fwiw, that pattern isn't portable:

$ jpython test.py
File "test.py", line 3, in ?
java.lang.StackOverflowError

and neither is:

def nest(level):
    if level:
        nest(level-1)
nest(500000)

...but sure, I will fix that in 0.9.9 (SRE, not Python -- Christian
has already taken care of the other one ;-).  but 0.9.9 won't be
out before the 1.6b1 release...

(and to avoid scaring the hell out of the beta testers, it's probably
better to leave the test out of the regression suite until the bug is
fixed...)

</F>



From Vladimir.Marangozov@inrialpes.fr  Thu Aug  3 09:44:58 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Thu, 3 Aug 2000 10:44:58 +0200 (CEST)
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <20000803011637.K19525@lyra.org> from "Greg Stein" at Aug 03, 2000 01:16:37 AM
Message-ID: <200008030844.KAA12666@python.inrialpes.fr>

Greg Stein wrote:
> 
> On Thu, Aug 03, 2000 at 12:52:07AM -0400, Tim Peters wrote:
> > Guys, these are cookies, not brain surgery!  If people like this API,
> > couldn't someone have done a clean-room reimplementation of it in less time
> > than we've spent jockeying over the freaking license?
> 
> No.


Sorry for asking this, but what "cookies in the core" means to you in
the first place?  A library module.py, C code or both?


PS: I can hardly accept the idea that cookies are necessary for normal
Web usage. I'm not against them, though. IMO, it is important to keep
control on whether they're enabled or disabled.
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug  3 09:43:04 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 3 Aug 2000 11:43:04 +0300 (IDT)
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <200008030844.KAA12666@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008031140300.7196-100000@sundial>

On Thu, 3 Aug 2000, Vladimir Marangozov wrote:

> Sorry for asking this, but what "cookies in the core" means to you in
> the first place?  A library module.py, C code or both?

I think Python is good enough for that. (Python is a great language!)
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                        Moshe preaching to the choir

> PS: I can hardly accept the idea that cookies are necessary for normal
> Web usage. I'm not against them, though. IMO, it is important to keep
> control on whether they're enabled or disabled.

Yes, but that all happens client-side -- we were talking server-side
cookies. Cookies are a state-management mechanism for loosely-coupled
protocols, and are almost essential in today's web. Not giving support
means that Python is not as good a server-side language as it can be.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From Vladimir.Marangozov@inrialpes.fr  Thu Aug  3 10:11:36 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Thu, 3 Aug 2000 11:11:36 +0200 (CEST)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021512180.8980-100000@sundial> from "Moshe Zadka" at Aug 02, 2000 03:17:31 PM
Message-ID: <200008030911.LAA12747@python.inrialpes.fr>

Moshe Zadka wrote:
> 
> On Wed, 2 Aug 2000, Vladimir Marangozov wrote:
> 
> > Moshe Zadka wrote:
> > > 
> > > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me
> > 
> > You get a compiled SRE object, right?
> 
> Nope -- I tested it with pre. 

As of yesterday's CVS (I saw AMK checking in an escape patch since then):

~/python/dev>python
Python 2.0b1 (#1, Aug  3 2000, 09:01:35)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> import pre
>>> pre.compile('[\\200-\\400]')
Segmentation fault (core dumped)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug  3 10:06:23 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 3 Aug 2000 12:06:23 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <200008030911.LAA12747@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008031206000.7196-100000@sundial>

On Thu, 3 Aug 2000, Vladimir Marangozov wrote:

> Moshe Zadka wrote:
> > 
> > On Wed, 2 Aug 2000, Vladimir Marangozov wrote:
> > 
> > > Moshe Zadka wrote:
> > > > 
> > > > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me
> > > 
> > > You get a compiled SRE object, right?
> > 
> > Nope -- I tested it with pre. 
> 
> As of yesterday's CVS (I saw AMK checking in an escape patch since then):

Hmmmmm....I ought to be more careful then.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From thomas@xs4all.net  Thu Aug  3 10:14:24 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 11:14:24 +0200
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 03, 2000 at 04:05:31AM -0400
References: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <20000803111424.Z266@xs4all.nl>

On Thu, Aug 03, 2000 at 04:05:31AM -0400, Tim Peters wrote:

> Sure feels right to me.  Gripe quick if it doesn't to you.

+1 if it's a compile-time error, +0 if it isn't and won't be made one. The
compile-time error makes it a lot easier to track down the issues, if any.
(Okay, so everyone should have proper unit testing -- not everyone actually
has it ;)

I suspect it would be a compile-time error, but I haven't looked at
compiling string literals yet ;P

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal@lemburg.com  Thu Aug  3 10:17:46 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 11:17:46 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>
Message-ID: <398938BA.CA54A98E@lemburg.com>

> 
> searching for literal text:
> 
> searching for "spam" in a string padded with "spaz" (1000 bytes on
> each side of the target):
> 
> string.find     0.112 ms
> sre8.search     0.059
> pre.search      0.122
> 
> unicode.find    0.130
> sre16.search    0.065
> 
> (yes, regular expressions can run faster than optimized C code -- as
> long as we don't take compilation time into account ;-)
> 
> same test, without any false matches:
> 
> string.find     0.035 ms
> sre8.search     0.050
> pre.search      0.116
> 
> unicode.find    0.031
> sre16.search    0.055

Those results are probably due to the fact that string.find
does a brute force search. If it did a last-match-char-first
search or even Boyer-Moore (which only pays off for long
search targets), it should be a lot faster than [s|p]re.

Just for compares: would you mind running the search 
routines in mxTextTools on the same machine ?

import TextTools
TextTools.find(text, what)
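To make the "last match char first" idea above concrete, here is a minimal sketch of the Boyer-Moore-Horspool variant: instead of brute-force sliding by one, the search skips ahead based on the character under the window's last position. The function name is illustrative, not any library's API:

```python
# Boyer-Moore-Horspool sketch: precompute, for each pattern character
# except the last, how far the window may shift when that character
# appears under the window's final position.
def horspool_find(text, pat):
    m = len(pat)
    if m == 0:
        return 0
    shift = {c: m - i - 1 for i, c in enumerate(pat[:-1])}
    i = m - 1                       # index of window's last character
    while i < len(text):
        if text[i - m + 1:i + 1] == pat:
            return i - m + 1        # match found: return start offset
        i += shift.get(text[i], m)  # unseen char: skip the whole window
    return -1
```

On the "spaz"-padded benchmark string this skips four characters at a time past every 'z', which is roughly why such searches can beat a naive scan despite doing the same worst-case work.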

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Thu Aug  3 10:55:57 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 11:55:57 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <020901bffccb$b4bf4da0$060210ac@private> <200008022318.SAA04558@cj20424-a.reston1.va.home.com>
Message-ID: <398941AD.F47CA1C1@lemburg.com>

Guido van Rossum wrote:
> 
> > Change the init function name to a new name PythonExtensionInit_ say.
> > Pass in the API version for the extension writer to check. If the
> > version is bad for this extension returns without calling any python
> > functions. Add a return code that is true if compatible, false if not.
> > If compatible the extension can use python functions and report and
> > problems it wishes.
> >
> > int PythonExtensionInit_XXX( int invoking_python_api_version )
> >       {
> >       if( invoking_python_api_version != PYTHON_API_VERSION )
> >               {
> >               /* python will report that the module is incompatible */
> >               return 0;
> >               }
> >
> >       /* setup module for XXX ... */
> >
> >       /* say this extension is compatible with the invoking python */
> >       return 1;
> >       }
> >
> > All 1.5 extensions fail to load on python 2.0 and later.
> > All 2.0 extensions fail to load on python 1.5.
> >
> > All new extensions work only with python of the same API version.
> >
> > Document that failure to setup a module could mean the extension is
> > incompatible with this version of python.
> >
> > Small code change in python core. But need to tell extension writers
> > what the new interface is and update all extensions within the python
> > CVS tree.
> 
> I sort-of like this idea -- at least at the +0 level.

I sort of dislike the idea ;-)

It introduces needless work for hundreds of extension writers
and effectively prevents binary compatibility for future
versions of Python: not all platforms have the problems of the
Windows platform and extensions which were compiled against a
different API version may very well still work with the
new Python version -- e.g. the dynamic loader on Linux is
very well capable of linking the new Python version against
an extension compiled for the previous Python version.

If all this is really necessary, I'd at least suggest adding macros
emulating the old Py_InitModule() APIs, so that extension writers
don't have to edit their code just to get it recompiled.
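A sketch of the kind of compatibility macro being suggested; the PythonExtensionInit_ entry point and version constant come from the proposal quoted above and are illustrative, not an actual Python API:

```c
/* Illustrative only: PythonExtensionInit_ and this version constant
   follow the proposal in this thread, not any real Python header. */
#define PYTHON_API_VERSION 1009

/* Wrap an old-style init body so extension writers can recompile
   without editing their code: the macro supplies the version check. */
#define Py_COMPAT_INIT(name, body)                                    \
    int PythonExtensionInit_##name(int invoking_python_api_version)   \
    {                                                                 \
        if (invoking_python_api_version != PYTHON_API_VERSION)        \
            return 0;   /* incompatible: touch no Python functions */ \
        body;           /* set up the module as before */             \
        return 1;                                                     \
    }

/* Example "extension" using the macro. */
static int spam_ready = 0;
Py_COMPAT_INIT(spam, spam_ready = 1)
```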

BTW, the subject line doesn't have anything to do with the
proposed solutions in this thread... they all crash Python
or the extensions in some way, some nicer, some not so nice ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From tim_one@email.msn.com  Thu Aug  3 10:57:22 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 05:57:22 -0400
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <20000803111424.Z266@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMBGNAA.tim_one@email.msn.com>

[Thomas Wouters]
> +1 if it's a compile-time error, +0 if it isn't and won't be
> made one. ...

Quoting back from the original msg:

> ... will raise ValueError at compile-time for an invalid \x escape
                            ^^^^^^^^^^^^^^^

The pseudo-example was taken from a pseudo interactive prompt, and just as
in a real example at a real interactive prompt, each (pseudo)input line was
(pseudo)compiled one at a time <wink>.




From mal@lemburg.com  Thu Aug  3 11:04:53 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 12:04:53 +0200
Subject: [Python-Dev] Fork on Win32 - was (test_fork1 failing...)
References: <Pine.WNT.4.21.0008021734040.980-100000@loom>
Message-ID: <398943C4.AFECEE36@lemburg.com>

David Ascher wrote:
> 
> >    IIRC ActiveState contributed to Perl a version of fork that works on
> > Win32. Has anyone looked at this? Could it be grabbed for Python? This would
> > help heal one of the more difficult platform rifts. Emulating fork for Win32
> > looks quite difficult to me but if its already done...
> 
> Sigh. Me tired.
> 
> The message I posted a few minutes ago was actually referring to the
> system() work, not the fork() work.  I agree that the fork() emulation
> isn't Pythonic.

What about porting os.kill() to Windows (see my other post
with changed subject line in this thread) ? Wouldn't that
make sense ? (the os.spawn() APIs do return PIDs of spawned
processes, so calling os.kill() to send signals to these
seems like a feasible way to control them)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Thu Aug  3 11:11:24 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 12:11:24 +0200
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get"
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net>
Message-ID: <3989454C.5C9EF39B@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "GM" == Gareth McCaughan <Gareth.McCaughan@pobox.com> writes:
> 
>     GM> Consider the following piece of code, which takes a file
>     GM> and prepares a concordance saying on which lines each word
>     GM> in the file appears. (For real use it would need to be
>     GM> made more sophisticated.)
> 
>     |     line_number = 0
>     |     for line in open(filename).readlines():
>     |       line_number = line_number+1
>     |       for word in map(string.lower, string.split(line)):
>     |         existing_lines = word2lines.get(word, [])   |
>     |         existing_lines.append(line_number)          | ugh!
>     |         word2lines[word] = existing_lines           |
> 
> I've run into this same situation many times myself.  I agree it's
> annoying.  Annoying enough to warrant a change?  Maybe -- I'm not
> sure.
> 
>     GM> I suggest a minor change: another optional argument to
>     GM> "get" so that
> 
>     GM>     dict.get(item,default,flag)
> 
> Good idea, not so good solution.  Let's make it more explicit by
> adding a new method instead of a flag.  I'll use `put' here since this
> seems (in a sense) opposite of get() and my sleep addled brain can't
> think of anything more clever.  Let's not argue about the name of this
> method though -- if Guido likes the extension, he'll pick a good name
> and I go on record as agreeing with his name choice, just to avoid a
> protracted war.
> 
> A trivial patch to UserDict (see below) will let you play with this.
> 
> >>> d = UserDict()
> >>> word = 'hello'
> >>> d.get(word, [])
> []
> >>> d.put(word, []).append('world')
> >>> d.get(word)
> ['world']
> >>> d.put(word, []).append('gareth')
> >>> d.get(word)
> ['world', 'gareth']
> 
> Shouldn't be too hard to add equivalent C code to the dictionary
> object.

The following one-liner already does what you want:

	d[word] = d.get(word, []).append('world')

... and it's in no way more readable than your proposed
.put() line ;-)
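For the record, the workable one-line form is the method this thread eventually produced: dict.setdefault(), added (under the name Guido chose) in Python 2.0. A minimal sketch of Gareth's concordance loop with it:

```python
# Concordance loop using dict.setdefault(): returns the existing list
# for the key, or stores and returns the default -- so the append
# mutates the dict's own list, unlike the broken get()+assign form.
word2lines = {}
sample = ["Spam spam", "eggs and Spam"]
for line_number, line in enumerate(sample, 1):
    for word in line.lower().split():
        word2lines.setdefault(word, []).append(line_number)

assert word2lines["spam"] == [1, 1, 2]
assert word2lines["eggs"] == [2]
```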

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From m.favas@per.dem.csiro.au  Thu Aug  3 11:54:05 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Thu, 03 Aug 2000 18:54:05 +0800
Subject: [Python-Dev] (s)re crashing in regrtest (was SRE 0.9.8 benchmarks)
Message-ID: <39894F4D.FB11F098@per.dem.csiro.au>

[Guido]
>> Hmm... here's an old problem that's returned (recursion on repeated
>> group matches, I expect):
>> 
>> >>> p=re.compile('(x)*')
>> >>> p
>> <SRE_Pattern object at 0x8127048>
>> >>> p.match(500000*'x')
>> Segmentation fault (core dumped)
>
>Ouch.
>
>Andrew, would you mind adding a test case for that to the re test
>suite?  It's important that this doesn't come back!

In fact, on my machine with the default stacksize of 2048kb, test_re.py
already exercises this bug. (Goes away if I do an "unlimit", of course.)
So testing for this deterministically is always going to be dependent on
the platform. How large do you want to go (reasonably)? - although I
guess core dumps should be avoided...

Mark

-- 
Email  - m.favas@per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913


From effbot@telia.com  Thu Aug  3 12:10:24 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 13:10:24 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <398938BA.CA54A98E@lemburg.com>
Message-ID: <00eb01bffd3b$8324fb80$f2a6b5d4@hagrid>

mal wrote:

> Just for compares: would you mind running the search
> routines in mxTextTools on the same machine ?

> > searching for "spam" in a string padded with "spaz" (1000 bytes on
> > each side of the target):
> >
> > string.find     0.112 ms

texttools.find    0.080 ms

> > sre8.search     0.059
> > pre.search      0.122
> >
> > unicode.find    0.130
> > sre16.search    0.065
> >
> > same test, without any false matches (padded with "-"):
> >
> > string.find     0.035 ms

texttools.find    0.083 ms

> > sre8.search     0.050
> > pre.search      0.116
> >
> > unicode.find    0.031
> > sre16.search    0.055
>
> Those results are probably due to the fact that string.find
> does a brute force search. If it did a last-match-char-first
> search or even Boyer-Moore (which only pays off for long
> search targets), it should be a lot faster than [s|p]re.

does the TextTools algorithm work with arbitrary character
set sizes, btw?

</F>



From effbot@telia.com  Thu Aug  3 12:25:45 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 13:25:45 +0200
Subject: [Python-Dev] (s)re crashing in regrtest (was SRE 0.9.8 benchmarks)
References: <39894F4D.FB11F098@per.dem.csiro.au>
Message-ID: <00fc01bffd3d$91a36460$f2a6b5d4@hagrid>

mark favas wrote:
> >> >>> p.match(500000*'x')
> >> Segmentation fault (core dumped)
> >
> >Andrew, would you mind adding a test case for that to the re test
> >suite?  It's important that this doesn't come back!
>
> In fact, on my machine with the default stacksize of 2048kb, test_re.py
> already exercises this bug. (Goes away if I do an "unlimit", of course.)
> So testing for this deterministically is always going to be dependent on
> the platform. How large do you want to go (reasonably)? - although I
> guess core dumps should be avoided...

afaik, there was no test in the standard test suite that
included run-away recursion...

what test is causing this error?

(adding a print statement to sre._compile should help you
figure that out...)

</F>



From MarkH@ActiveState.com  Thu Aug  3 12:19:50 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Thu, 3 Aug 2000 21:19:50 +1000
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1 failing...)
In-Reply-To: <398943C4.AFECEE36@lemburg.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBAEBGDDAA.MarkH@ActiveState.com>

> What about porting os.kill() to Windows (see my other post
> with changed subject line in this thread) ? Wouldn't that
> make sense ? (the os.spawn() APIs do return PIDs of spawned
> processes, so calling os.kill() to send signals to these
> seems like a feasible way to control them)

Signals are a bit of a problem on Windows.  We can terminate the thread
mid-execution, but a clean way of terminating a thread isn't obvious.

I admit I didn't really read the long manpage when you posted it, but is a
terminate-without-prejudice option any good?

Mark.



From MarkH@ActiveState.com  Thu Aug  3 12:34:09 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Thu, 3 Aug 2000 21:34:09 +1000
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1 failing...)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBAEBGDDAA.MarkH@ActiveState.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEBGDDAA.MarkH@ActiveState.com>

eek - a bit quick off the mark here ;-]

> Signals are a bit of a problem on Windows.  We can terminate the thread
> mid-execution, but a clean way of terminating a thread isn't obvious.

thread = process - you get the idea!

> terminate-without-prejudice option any good?

really should say

> terminate-without-prejudice only version any good?

Mark.



From m.favas@per.dem.csiro.au  Thu Aug  3 12:35:48 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Thu, 03 Aug 2000 19:35:48 +0800
Subject: [Python-Dev] (s)re crashing in regrtest (was SRE 0.9.8 benchmarks)
References: <39894F4D.FB11F098@per.dem.csiro.au> <00fc01bffd3d$91a36460$f2a6b5d4@hagrid>
Message-ID: <39895914.133D52A4@per.dem.csiro.au>

Fredrik Lundh wrote:
> 
> mark favas wrote:
> > In fact, on my machine with the default stacksize of 2048kb, test_re.py
> > already exercises this bug.> 
> afaik, there was no test in the standard test suite that
> included run-away recursion...
> 
> what test is causing this error?
> 
> (adding a print statement to sre._compile should help you
> figure that out...)
> 
> </F>

The stack overflow is caused by the test (in test_re.py):

# Try nasty case that overflows the straightforward recursive
# implementation of repeated groups.
assert re.match('(x)*', 50000*'x').span() == (0, 50000)

(changing 50000 to 18000 works, 19000 overflows...)
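For reference, the nasty case itself; on engines that match repeated groups iteratively (as SRE later did) this passes in constant stack, while a recursive implementation needs a frame per 'x' and overflows around the depths reported above:

```python
import re

# The test_re.py case: a repeated group applied to a very long string.
# Each repetition re-binds group 1, so group(1) is the last 'x' matched.
m = re.match('(x)*', 50000 * 'x')
assert m.span() == (0, 50000)
assert m.group(1) == 'x'
```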

-- 
Email  - m.favas@per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913


From guido@beopen.com  Thu Aug  3 13:56:38 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 07:56:38 -0500
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: Your message of "Thu, 03 Aug 2000 12:11:24 +0200."
 <3989454C.5C9EF39B@lemburg.com>
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net>
 <3989454C.5C9EF39B@lemburg.com>
Message-ID: <200008031256.HAA06107@cj20424-a.reston1.va.home.com>

> "Barry A. Warsaw" wrote:
> > Good idea, not so good solution.  Let's make it more explicit by
> > adding a new method instead of a flag.

You're learning to channel me. :-)

> > I'll use `put' here since this
> > seems (in a sense) opposite of get() and my sleep addled brain can't
> > think of anything more clever.  Let's not argue about the name of this
> > method though -- if Guido likes the extension, he'll pick a good name
> > and I go on record as agreeing with his name choice, just to avoid a
> > protracted war.

But I'll need input.  My own idea was dict.getput(), but that's ugly
as hell; dict.put() doesn't suggest that it also returns the value.

Protocol: if you have a suggestion for a name for this function, mail
it to me.  DON'T MAIL THE LIST.  (If you mail it to the list, that
name is disqualified.)  Don't explain to me why the name is good -- if
it's good, I'll know; if it needs an explanation, it's not good.  From
the suggestions I'll pick one if I can, and the first person to
suggest it gets a special mention in the implementation.  If I can't
decide, I'll ask the PythonLabs folks to help.

Marc-Andre writes:
> The following one-liner already does what you want:
> 
> 	d[word] = d.get(word, []).append('world')

Are you using a patch to the list object so that append() returns the
list itself?  Or was it just late?  For me, this makes d[word] = None.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From akuchlin@mems-exchange.org  Thu Aug  3 13:06:49 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 3 Aug 2000 08:06:49 -0400
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <002601bffd21$5f23e800$f2a6b5d4@hagrid>; from effbot@telia.com on Thu, Aug 03, 2000 at 10:03:53AM +0200
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com> <20000802224102.A25837@newcnri.cnri.reston.va.us> <002601bffd21$5f23e800$f2a6b5d4@hagrid>
Message-ID: <20000803080649.A27333@newcnri.cnri.reston.va.us>

On Thu, Aug 03, 2000 at 10:03:53AM +0200, Fredrik Lundh wrote:
>readers of the daily URL might have noticed that he posted
>a socket timeout wrapper a few days ago:

Noted; thanks!  I've sent him an e-mail...

--amk


From akuchlin@mems-exchange.org  Thu Aug  3 13:14:56 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 3 Aug 2000 08:14:56 -0400
Subject: [Python-Dev] (s)re crashing in regrtest
In-Reply-To: <39895914.133D52A4@per.dem.csiro.au>; from m.favas@per.dem.csiro.au on Thu, Aug 03, 2000 at 07:35:48PM +0800
References: <39894F4D.FB11F098@per.dem.csiro.au> <00fc01bffd3d$91a36460$f2a6b5d4@hagrid> <39895914.133D52A4@per.dem.csiro.au>
Message-ID: <20000803081456.B27333@newcnri.cnri.reston.va.us>

On Thu, Aug 03, 2000 at 07:35:48PM +0800, Mark Favas wrote:
>The stack overflow is caused by the test (in test_re.py):
># Try nasty case that overflows the straightforward recursive
># implementation of repeated groups.

That would be the test I added last night to trip this problem, per
GvR's instructions.  I'll comment out the test for now, so that it can
be restored once the bug is fixed.

--amk


From mal@lemburg.com  Thu Aug  3 13:14:55 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 14:14:55 +0200
Subject: [Python-Dev] Still no new license -- but draft text available
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com> <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com>
 <398858BE.15928F47@lemburg.com> <200008022218.RAA04178@cj20424-a.reston1.va.home.com>
Message-ID: <3989623F.2AB4C00C@lemburg.com>

Guido van Rossum wrote:
>
> [...]
>
> > > > Some comments on the new version:
> > >
> > > > > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > > > > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > > > > license to reproduce, analyze, test, perform and/or display publicly,
> > > > > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > > > > alone or in any derivative version, provided, however, that CNRI's
> > > > > License Agreement is retained in Python 1.6b1, alone or in any
> > > > > derivative version prepared by Licensee.
> > > >
> > > > I don't think the latter (retaining the CNRI license alone) is
> > > > possible: you always have to include the CWI license.
> > >
> > > Wow.  I hadn't even noticed this!  It seems you can prepare a
> > > derivative version of the license.  Well, maybe.
> >
> > I think they mean "derivative version of Python 1.6b1", but in
> > court, the above wording could cause serious trouble for CNRI
> 
> You're right of course, I misunderstood you *and* the license.  Kahn
> explains it this way:
> 
> [Kahn]
> | Ok. I take the point being made. The way english works with ellipsis or
> | anaphoric references is to link back to the last anchor point. In the above
> | case, the last referent is Python 1.6b1.
> |
> | Thus, the last phrase refers to a derivative version of Python1.6b1
> | prepared by Licensee. There is no permission given to make a derivative
> | version of the License.
>
> > ... it seems 2.0 can reuse the CWI license after all ;-)
> 
> I'm not sure why you think that: 2.0 is a derivative version and is
> thus bound by the CNRI license as well as by the license that BeOpen
> adds.

If you interpret the above wording in the sense of "preparing
a derivative version of the License Agreement", BeOpen (or
anyone else) could just remove the CNRI License text. I
understand that this is not intended (that's why I put the smiley
there ;-).

> [...] 
>
> > > > > 3. In the event Licensee prepares a derivative work that is based on
> > > > > or incorporates Python 1.6b1or any part thereof, and wants to make the
> > > > > derivative work available to the public as provided herein, then
> > > > > Licensee hereby agrees to indicate in any such work the nature of the
> > > > > modifications made to Python 1.6b1.
> > > >
> > > > In what way would those indications have to be made ? A patch
> > > > or just text describing the new features ?
> > >
> > > Just text.  Bob Kahn told me that the list of "what's new" that I
> > > always add to a release would be fine.
> >
> > Ok, should be made explicit in the license though...
> 
> It's hard to specify this precisely -- in fact, the more precise you
> specify it the more scary it looks and the more likely they are to be
> able to find fault with the details of how you do it.  In this case, I
> believe (and so do lawyers) that vague is good!  If you write "ported
> to the Macintosh" and that's what you did, they can hardly argue with
> you, can they?

True.
 
> > > > What does "make available to the public" mean ? If I embed
> > > > Python in an application and make this application available
> > > > on the Internet for download would this fit the meaning ?
> > >
> > > Yes, that's why he doesn't use the word "publish" -- such an action
> > > would not be considered publication in the sense of the copyright law
> > > (at least not in the US, and probably not according to the Bern
> > > convention) but it is clearly making it available to the public.
> >
> > Ouch. That would mean I'd have to describe all additions,
> > i.e. the embedding application, in most details in order not to
> > breach the terms of the CNRI license.
> 
> No, additional modules aren't modifications to CNRI's work.  A change
> to the syntax to support curly braces is.

Ok, thanks for clarifying this.

(I guess the "vague is good" argument fits here as well.)
 
> > > > > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > > > > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > > > > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > > > > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > > > > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > > > > INFRINGE ANY THIRD PARTY RIGHTS.
> > > > >
> > > > > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > > > > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > > > > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > > > > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > > > > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > > > > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> > > >
> > > > I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> > > > Germany the above text would only be valid after an initial
> > > > 6 month period after installation, AFAIK (this period is
> > > > called "Gewährleistung"). Licenses from other vendors usually
> > > > add some extra license text to limit the liability in this period
> > > > to the carrier on which the software was received by the licensee,
> > > > e.g. the diskettes or CDs.
> > >
> > > I'll mention this to Kahn.
> 
> His response:
> 
> | Guido, Im not willing to do a study of international law here. If you
> | can have the person identify one country other than the US that does
> | not allow the above limitation or exclusion of liability and provide a
> | copy of the section of their law, ill be happy to change this to read
> | ".... SOME STATES OR COUNTRIES MAY NOT ALLOW ...." Otherwise, id just
> | leave it alone (i.e. as is) for now.
> 
> Please mail this info directly to Kahn@CNRI.Reston.Va.US if you
> believe you have the right information.  (You may CC me.)  Personally,
> I wouldn't worry.  If the German law says that part of a license is
> illegal, it doesn't make it any more or less illegal whether the
> license warns you about this fact.
> 
> I believe that in the US, as a form of consumer protection, some
> states not only disallow general disclaimers, but also require that
> licenses containing such disclaimers notify the reader that the
> disclaimer is not valid in their state, so that's where the language
> comes from.  I don't know about German law.

I haven't found an English version of the German law text,
but this is the title of the law which handles German
business conditions:

"Gesetz zur Regelung des Rechts der Allgemeinen Geschäftsbedingungen
(AGBG) - Act Governing Standard Business Conditions"
 
The relevant paragraph is no. 11 (10).

I'm not a lawyer, but from what I know:
terms generally excluding liability are invalid; liability
may be limited during the first 6 months after license
agreement and excluded after this initial period.

Anyway, you're right in that the notice about the paragraph
not necessarily applying to the licensee only has informational
character and that it doesn't do any harm otherwise.

> > > > > 6. This License Agreement will automatically terminate upon a material
> > > > > breach of its terms and conditions.
> > > >
> > > > Immediately ? Other licenses usually include a 30-60 day period
> > > > which allows the licensee to take actions. With the above text,
> > > > the license will put the Python copy in question into an illegal
> > > > state *prior* to having even been identified as conflicting with the
> > > > license.
> > >
> > > Believe it or not, this is necessary to ensure GPL compatibility!  An
> > > earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
> > > incompatible.  There's an easy workaround though: you fix your
> > > compliance and download a new copy, which gives you all the same
> > > rights again.
> >
> > Hmm, but what about the 100,000 copies of the embedding application
> > that have already been downloaded -- I would have to force them
> > to redownload the application (or even just a demo of it) in
> > order to reestablish the lawfulness of the copy action.
> 
> It's better not to violate the license.  But do you really think that
> they would go after you immediately if you show good intentions to
> rectify?

I don't intend to violate the license, but customers of 
an application embedding Python will have to agree to the
Python license to be able to legally use the Python engine
embedded in the application -- that is: if the application
unintentionally fails to meet the CNRI license terms
then the application as a whole would immediately become
unusable by the customer.

Now just think of an eCommerce application which produces
some $100k USD revenue each day... such a customer wouldn't
like these license terms at all :-(

BTW, I think that section 6. can be removed altogether, if
it doesn't include any reference to such a 30-60 day period:
the permissions set forth in a license are only valid in case
the license terms are adhered to whether it includes such
a section or not.

> > Not that I want to violate the license in any way, but there
> > seem to be quite a few pitfalls in the present text, some of
> > which are not clear at all (e.g. the paragraph 3).
> 
> I've warned Kahn about this effect of making the license bigger, but
> he simply disagrees (and we agree to disagree).  I don't know what
> else I could do about it, apart from putting a FAQ about the license
> on python.org -- which I intend to do.

Good (or bad ? :-()
 
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From akuchlin@mems-exchange.org  Thu Aug  3 13:22:44 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 3 Aug 2000 08:22:44 -0400
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: <006601bffd24$e25a9360$f2a6b5d4@hagrid>; from effbot@telia.com on Thu, Aug 03, 2000 at 10:27:39AM +0200
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us> <006601bffd24$e25a9360$f2a6b5d4@hagrid>
Message-ID: <20000803082244.C27333@newcnri.cnri.reston.va.us>

On Thu, Aug 03, 2000 at 10:27:39AM +0200, Fredrik Lundh wrote:
>if it will ever be translated, that is...

I'll agree to take a shot at it (which carries no implication of
actually finishing :) ) post-2.0.  It's silly for all of Tcl, Python,
Perl to grow their own implementations, when a common implementation
could benefit from having 3x the number of eyes looking at it and
optimizing it.

>fwiw, that pattern isn't portable:

No, it isn't; the straightforward implementation of repeated groups is
recursive, and fixing this requires jumping through hoops to make it
nonrecursive (or adopting Python's solution and only recursing up to
some upper limit).  re had to get this right because regex didn't
crash on this pattern, and neither do recent Perls.  The vast bulk of
my patches to PCRE were to fix this problem.

--amk


From guido@beopen.com  Thu Aug  3 14:31:16 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 08:31:16 -0500
Subject: [Python-Dev] New SRE core dump (was: SRE 0.9.8 benchmarks)
In-Reply-To: Your message of "Thu, 03 Aug 2000 10:27:39 +0200."
 <006601bffd24$e25a9360$f2a6b5d4@hagrid>
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us>
 <006601bffd24$e25a9360$f2a6b5d4@hagrid>
Message-ID: <200008031331.IAA06319@cj20424-a.reston1.va.home.com>

> andrew wrote:
> 
> > Hmm... here's an old problem that's returned (recursion on repeated
> > group matches, I expect):
> > 
> > >>> p=re.compile('(x)*')
> > >>> p
> > <SRE_Pattern object at 0x8127048>
> > >>> p.match(500000*'x')
> > Segmentation fault (core dumped)

Effbot:
> fwiw, that pattern isn't portable:

Who cares -- it shouldn't dump core!

> ...but sure, I will fix that in 0.9.9 (SRE, not Python -- Christian
> has already taken care of the other one ;-).  but 0.9.9 won't be
> out before the 1.6b1 release...

I assume you are planning to put the backtracking stack back in, as
you mentioned in the checkin message?

> (and to avoid scaring the hell out of the beta testers, it's probably
> better to leave the test out of the regression suite until the bug is
> fixed...)

Even better, is it possible to put a limit on the recursion level
before 1.6b1 is released (tomorrow if we get final agreement on the
license) so at least it won't dump core?  Otherwise you'll get reports
of this from people who write this by accident...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From thomas@xs4all.net  Thu Aug  3 13:57:14 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 14:57:14 +0200
Subject: [Python-Dev] Buglist
Message-ID: <20000803145714.B266@xs4all.nl>

Just a little FYI and 'is this okay' message; I've been browsing the buglist
the last few days, doing a quick mark & message sweep over the bugs that I
can understand. I've mostly been closing bugs that look closed, and
assigning them when it's very obvious who it should be assigned to.

Should I be doing this already ? Is the bug-importing 'done', or is Jeremy
still busy with importing and fixing bug status (stati ?) and such ? Is
there something better to use as a guideline than my 'best judgement' ? I
think it's a good idea to 'resolve' most of the bugs on the list, because a
lot of them are really non-issues or no-longer-issues, and the sheer size of
the list prohibits a proper overview of the real issues :P However, it's
entirely possible we're going to miss out on a few bugs this way. I'm trying
my best to be careful, but I think overlooking a few bugs is better than
overlooking all of them because of the size of the list :P

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gmcm@hypernet.com  Thu Aug  3 14:05:07 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Thu, 3 Aug 2000 09:05:07 -0400
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <1246814587-81974994@hypernet.com>

[Tim sez]
> The new rule is ...
>...  an \x escape is of the form
> 
>     \xhh
> 
> where h is a hex digit.  That's it.  

> >>> "\x123465" # \x12 -> \022, "3465" left alone
> '\0223465'

Hooray! I got bit often enough by that one ('e') that I forced 
myself to always use the wholly unnatural octal.
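
The new rule as Tim describes it can be checked directly in a later
interpreter (a hypothetical session for illustration; note that repr
has since switched from octal to hex, so '\0223465' shows up as
'\x123465'):

```python
# Under the \xhh rule, exactly two hex digits are consumed and the
# rest of the literal is left alone.
s = "\x123465"
assert s == chr(0x12) + "3465"
assert len(s) == 5

# An \x escape not followed by two hex digits is rejected at compile
# time (the exact exception type has varied across versions).
try:
    eval(r'"\xZ"')
except (ValueError, SyntaxError):
    print("invalid \\x escape rejected")
```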

god-gave-us-sixteen-fingers-for-a-reason-ly y'rs


- Gordon


From fdrake@beopen.com  Thu Aug  3 14:06:51 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 09:06:51 -0400 (EDT)
Subject: [Python-Dev] printing xrange objects
Message-ID: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>

  At various points, there have been comments that xrange objects
should not print as lists but as xrange objects.  Taking a look at the
implementation, I noticed that if you call repr() (by name or by
backtick syntax), you get "the right thing"; the list representation
comes up when you print the object on a real file object.  The
tp_print slot of the xrange type produces the list syntax.  There is
no tp_str handler, so str(xrange(...)) is the same as
repr(xrange(...)).
  I propose ripping out the tp_print handler completely.  (And I've
already tested my patch. ;)
  Comments?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug  3 14:09:40 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 3 Aug 2000 16:09:40 +0300 (IDT)
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008031609130.26290-100000@sundial>

On Thu, 3 Aug 2000, Fred L. Drake, Jr. wrote:

>   I propose ripping out the tp_print handler completely.  (And I've
> already tested my patch. ;)
>   Comments?

+1. Like I always say: less code, less bugs.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From mal@lemburg.com  Thu Aug  3 14:31:34 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 15:31:34 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <398938BA.CA54A98E@lemburg.com> <00eb01bffd3b$8324fb80$f2a6b5d4@hagrid>
Message-ID: <39897436.E42F1C3C@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> 
> > Just for compares: would you mind running the search
> > routines in mxTextTools on the same machine ?
> 
> > > searching for "spam" in a string padded with "spaz" (1000 bytes on
> > > each side of the target):
> > >
> > > string.find     0.112 ms
> 
> texttools.find    0.080 ms
> 
> > > sre8.search     0.059
> > > pre.search      0.122
> > >
> > > unicode.find    0.130
> > > sre16.search    0.065
> > >
> > > same test, without any false matches (padded with "-"):
> > >
> > > string.find     0.035 ms
> 
> texttools.find    0.083 ms
> 
> > > sre8.search     0.050
> > > pre.search      0.116
> > >
> > > unicode.find    0.031
> > > sre16.search    0.055
> >
> > Those results are probably due to the fact that string.find
> > does a brute force search. If it would do a last match char
> > first search or even Boyer-Moore (this only pays off for long
> > search targets) then it should be a lot faster than [s|p]re.
> 
> does the TextTools algorithm work with arbitrary character
> set sizes, btw?

The find function creates a Boyer-Moore search object
for the search string (on every call). It compares 1-1
or using a translation table which is applied
to the searched text prior to comparing it to the search
string (this enables things like case insensitive
search and character sets, but is about 45% slower). Real-life
usage would be to create the search objects once per process
and then reuse them. The Boyer-Moore table calculation takes
some time...
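
For readers unfamiliar with the technique: a minimal sketch of the
Boyer-Moore-Horspool variant (not mxTextTools' actual code; the names
are made up) looks like this:

```python
def bmh_find(text, pattern):
    """Horspool simplification of Boyer-Moore: precompute, for each
    character, how far the pattern may shift forward on a mismatch."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # Distance from each character's last occurrence (except the final
    # one) to the pattern's end; absent characters allow a full shift.
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = m - 1
    while i < n:
        j = 0
        while j < m and text[i - j] == pattern[m - 1 - j]:
            j += 1
        if j == m:
            return i - m + 1          # match starts here
        i += shift.get(text[i], m)    # skip ahead
    return -1

print(bmh_find("spaz" * 3 + "spam" + "spaz" * 3, "spam"))  # 12
```

The precomputed shift table is exactly why creating the search object
once per process and reusing it, as described above, pays off.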

But to answer your question: mxTextTools is 8-bit throughout.
A Unicode aware version will follow by the end of this year.

Thanks for checking,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Thu Aug  3 14:40:05 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 15:40:05 +0200
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1
 failing...)
References: <ECEPKNMJLHAPFFJHDOJBMEBGDDAA.MarkH@ActiveState.com>
Message-ID: <39897635.6C9FB82D@lemburg.com>

Mark Hammond wrote:
> 
> eek - a bit quick off the mark here ;-]
> 
> > Signals are a bit of a problem on Windows.  We can terminate the thread
> > mid-execution, but a clean way of terminating a thread isn't obvious.
> 
> thread = process - you get the idea!
> 
> > terminate-without-prejudice option any good?
> 
> really should say
> 
> > terminate-without-prejudice only version any good?

Well for one you can use signals for many other things than
just terminating a process (e.g. to have it reload its configuration
files). That's why os.kill() allows you to specify a signal.

The usual way of terminating a process on Unix from the outside
is to send it a SIGTERM (and if that doesn't work a SIGKILL).
I use this strategy a lot to control runaway client processes
and safely shut them down:

On Unix you can install a signal
handler in the Python program which then translates the SIGTERM
signal into a normal Python exception. Sending the signal then
has the same effect as e.g. hitting Ctrl-C in a program: an
exception is raised asynchronously, but it can be handled
properly by the Python exception clauses to enable safe
shutdown of the process.
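
The SIGTERM-to-exception translation described above can be sketched
like this (Unix-only; the names are invented for illustration, and the
os.kill call merely simulates the watchdog):

```python
import os
import signal

class Terminated(Exception):
    """Raised in place of an abrupt kill, so cleanup code runs."""

def _on_sigterm(signum, frame):
    # Translate the asynchronous signal into a normal Python exception.
    raise Terminated

signal.signal(signal.SIGTERM, _on_sigterm)

def run_client():
    try:
        # ... run the (possibly runaway) user script here ...
        os.kill(os.getpid(), signal.SIGTERM)   # simulate the watchdog
    except Terminated:
        # Ordinary exception handling: flush buffers, close files, etc.
        return "clean shutdown"

print(run_client())
```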

For background: the client processes in my application server
can execute arbitrary Python scripts written by users, i.e.
potentially buggy code which could effectively hose the server.
To control this, I use client processes which do the actual
code execution, and watch them using a watchdog process. If the processes
don't return anything useful within a certain timeout limit,
the watchdog process sends them a SIGTERM and restarts a new
client.

Threads would not support this type of strategy, so I'm looking
for something similar on Windows, Win2k to be more specific.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From guido@beopen.com  Thu Aug  3 15:50:26 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 09:50:26 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
In-Reply-To: Your message of "Thu, 03 Aug 2000 14:14:55 +0200."
 <3989623F.2AB4C00C@lemburg.com>
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com> <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com> <398858BE.15928F47@lemburg.com> <200008022218.RAA04178@cj20424-a.reston1.va.home.com>
 <3989623F.2AB4C00C@lemburg.com>
Message-ID: <200008031450.JAA06505@cj20424-a.reston1.va.home.com>

> > > ... it seems 2.0 can reuse the CWI license after all ;-)
> > 
> > I'm not sure why you think that: 2.0 is a derivative version and is
> > thus bound by the CNRI license as well as by the license that BeOpen
> > adds.
> 
> If you interpret the above wording in the sense of "preparing
> a derivative version of the License Agreement", BeOpen (or
> anyone else) could just remove the CNRI License text. I
> understand that this is not intended (that's why I put the smiley
> there ;-).

Please forget this interpretation! :-)

> I haven't found an English version of the German law text,
> but this is the title of the law which handles German
> business conditions:
> 
> "Gesetz zur Regelung des Rechts der Allgemeinen Geschäftsbedingungen
> (AGBG) - Act Governing Standard Business Conditions"
>  
> The relevant paragraph is no. 11 (10).
> 
> I'm not a lawyer, but from what I know:
> terms generally excluding liability are invalid; liability
> may be limited during the first 6 months after license
> agreement and excluded after this initial period.
> 
> Anyway, you're right in that the notice about the paragraph
> not necessarily applying to the licensee only has informational
> character and that it doesn't do any harm otherwise.

OK, we'll just let this go.

> > It's better not to violate the license.  But do you really think that
> > they would go after you immediately if you show good intentions to
> > rectify?
> 
> I don't intend to violate the license, but customers of 
> an application embedding Python will have to agree to the
> Python license to be able to legally use the Python engine
> embedded in the application -- that is: if the application
> unintentionally fails to meet the CNRI license terms
> then the application as a whole would immediately become
> unusable by the customer.
> 
> Now just think of an eCommerce application which produces
> some $100k USD revenue each day... such a customer wouldn't
> like these license terms at all :-(

That depends.  Unintentional failure to meet the license terms seems
unlikely to me considering that the license doesn't impose a lot of
requirements.  It's vague in its definitions, but I think that works to
your advantage.

> BTW, I think that section 6. can be removed altogether, if
> it doesn't include any reference to such a 30-60 day period:
> the permissions set forth in a license are only valid in case
> the license terms are adhered to whether it includes such
> a section or not.

Try to explain that to a lawyer. :)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Thu Aug  3 14:55:28 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 15:55:28 +0200
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get"
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net>
 <3989454C.5C9EF39B@lemburg.com> <200008031256.HAA06107@cj20424-a.reston1.va.home.com>
Message-ID: <398979D0.5AF80126@lemburg.com>

Guido van Rossum wrote:
> 
> Marc-Andre writes:
> > The following one-liner already does what you want:
> >
> >       d[word] = d.get(word, []).append('world')
> 
> Are you using a patch to the list object so that append() returns the
> list itself?  Or was it just late?  For me, this makes d[word] = None.

Ouch... looks like I haven't had enough coffee today. I'll
fix that immediately ;-)
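
For the record, the mistake is easy to reproduce: list.append mutates
the list in place and returns None, so the assignment stores None:

```python
d = {}
# append() returns None, so this throws away the freshly built list:
d['hello'] = d.get('hello', []).append('world')
print(d)   # {'hello': None}
```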

How about making this a method:

def inplace(dict, key, default):
    value = dict.get(key, default)
    dict[key] = value
    return value

>>> d = {}
>>> inplace(d, 'hello', []).append('world')
>>> d
{'hello': ['world']}
>>> inplace(d, 'hello', []).append('world')
>>> d
{'hello': ['world', 'world']}
>>> inplace(d, 'hello', []).append('world')
>>> d
{'hello': ['world', 'world', 'world']}

(Hope I got it right this time ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jeremy@beopen.com  Thu Aug  3 15:14:13 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 10:14:13 -0400 (EDT)
Subject: [Python-Dev] Buglist
In-Reply-To: <20000803145714.B266@xs4all.nl>
References: <20000803145714.B266@xs4all.nl>
Message-ID: <14729.32309.807363.345594@bitdiddle.concentric.net>

I am done moving old bugs from Jitterbug to SF.  There are still some
new bugs being submitted to Jitterbug, which I'll need to move one at
a time.

In principle, it's okay to mark bugs as closed, as long as you are
*sure* that the bug has been fixed.  If you try to reproduce a bug on
your system and can't, it's not clear that it has been fixed.  It
might be a platform-specific bug, for example.  I would prefer it if
you only closed bugs where you can point to the CVS checkin that fixed
it.

Whenever you fix a bug, you should add a test case to the regression
test that would have caught the bug.  Have you done that for any of
the bugs you've marked as closed?

You should also add a comment at any bug you're closing explaining why
it is closed.

It is good to assign bugs to people -- probably even if we end up
playing hot potato for a while.  If a bug is assigned to you, you
should either try to fix it, diagnose it, or assign it to someone
else.

> I think overlooking a few bugs is better than overlooking all of
> them because of the size of the list :P 

You seem to be arguing that the sheer number of bug reports bothers
you and that it's better to have a shorter list of bugs regardless of
whether they're actually fixed.  Come on! I don't want to overlook any
bugs.

Jeremy


From bwarsaw@beopen.com  Thu Aug  3 15:25:20 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 10:25:20 -0400 (EDT)
Subject: [Python-Dev] Go \x yourself
References: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <14729.32976.819777.292096@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one@email.msn.com> writes:

    TP> The new rule is the same as Perl uses for \x escapes in -w
    TP> mode, except that Python will raise ValueError at compile-time
    TP> for an invalid \x escape: an \x escape is of the form

    TP>     \xhh

    TP> where h is a hex digit.  That's it.

+1


From bwarsaw@beopen.com  Thu Aug  3 15:41:10 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 10:41:10 -0400 (EDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get"
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
 <14728.63466.263123.434708@anthem.concentric.net>
 <3989454C.5C9EF39B@lemburg.com>
Message-ID: <14729.33926.145263.296629@anthem.concentric.net>

>>>>> "M" == M  <mal@lemburg.com> writes:

    M> The following one-liner already does what you want:

    M> 	d[word] = d.get(word, []).append('world')

    M> ... and it's in no way more readable than your proposed
    M> .put() line ;-)

Does that mean it's less readable?  :)

-Barry


From mal@lemburg.com  Thu Aug  3 15:49:01 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 16:49:01 +0200
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get"
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
 <14728.63466.263123.434708@anthem.concentric.net>
 <3989454C.5C9EF39B@lemburg.com> <14729.33926.145263.296629@anthem.concentric.net>
Message-ID: <3989865D.A52964D6@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal@lemburg.com> writes:
> 
>     M> The following one-liner already does what you want:
> 
>     M>  d[word] = d.get(word, []).append('world')
> 
>     M> ... and it's in no way more readable than your proposed
>     M> .put() line ;-)
> 
> Does that mean it's less readable?  :)

I find these .go_home().get_some_cheese().and_eat()...
constructions rather obscure.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Thu Aug  3 15:49:49 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 16:49:49 +0200
Subject: [Python-Dev] Buglist
In-Reply-To: <14729.32309.807363.345594@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 03, 2000 at 10:14:13AM -0400
References: <20000803145714.B266@xs4all.nl> <14729.32309.807363.345594@bitdiddle.concentric.net>
Message-ID: <20000803164949.D13365@xs4all.nl>

On Thu, Aug 03, 2000 at 10:14:13AM -0400, Jeremy Hylton wrote:

> In principle, it's okay to mark bugs as closed, as long as you are
> *sure* that the bug has been fixed.  If you try to reproduce a bug on
> your system and can't, it's not clear that it has been fixed.  It
> might be a platform-specific bug, for example.  I would prefer it if
> you only closed bugs where you can point to the CVS checkin that fixed
> it.

This is tricky for some bug reports, as they don't say *anything* about the
platform in question. However, I have been conservative, and haven't done
anything if I didn't either have the same platform as mentioned and could
reproduce the bug with 1.6a2 and/or Python 1.5.2 (very handy to have them
lying around) but not with current CVS, OR could find the CVS checkin that
fixed them. For instance, the incorrect usage of PyMem_Del() in some modules
(bug #110638) *seems* to be fixed, but I can't really test it and the CVS
checkin(s) that seem to fix it don't even mention the bug or the reason for
the change.

> Whenever you fix a bug, you should add a test case to the regression
> test that would have caught the bug.  Have you done that for any of
> the bugs you've marked as closed?

No, because all the bugs I've closed so far are 'obviously fixed', by
someone other than me. I would write one if I fixed the bug myself, I guess.
Also, most of these are more 'issues' rather than 'bugs', like someone
complaining about installing Python without Tcl/Tk and Tkinter not working,
threads misbehaving on some systems (didn't close that one, just added a
remark), etc.

> You should also add a comment at any bug you're closing explaining why
> it is closed.

Of course. I also forward the SF excerpt to the original submitter, since
they are not likely to browse the SF buglist and spot their own bug.

> It is good to assign bugs to people -- probably even if we end up
> playing hot potato for a while.  If a bug is assigned to you, you
> should either try to fix it, diagnose it, or assign it to someone
> else.

Hm, I did that for a few, but in some cases it's not very easy to find the
right person. Bugs in the 're' module: should they go to amk or to /F? XML
stuff: should it go to Paul Prescod or one of the other people who seem to
be doing something with XML? A 'capabilities' list would be pretty neat!

> > I think overlooking a few bugs is better than overlooking all of
> > them because of the size of the list :P 

> You seem to be arguing that the sheer number of bug reports bothers
> you and that it's better to have a shorter list of bugs regardless of
> whether they're actually fixed.  Come on! I don't want to overlook any
> bugs.

No, that wasn't what I meant :P It's just that some bugs are vague, and
*seem* fixed, but are still an issue on some combination of compiler,
libraries, OS, etc. Also, there is the question of whether something is a
bug or a feature, or an artifact of compiler, library or design. A quick
pass over the bugs will either have to draw a firm line somewhere, or keep
most of the bugs and hope someone will look at them.

Having 9 out of 10 bugs sit in the buglist with no one looking at them,
because they're too vague and everyone considers them outside 'their' field
of expertise and expects someone else to look at them, defeats the purpose
of the buglist. But closing those bug reports, explaining the problem and
even forwarding the excerpt to the submitter *might* cause the original
submitter, who still has the bug, to give up on explaining it further,
whereas a couple of hours spent trying to duplicate the bug might locate
it. I personally just wouldn't want to be the one doing all that effort ;)

Just-trying-to-help-you-do-your-job---not-taking-it-over-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Thu Aug  3 16:00:03 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 17:00:03 +0200
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Aug 03, 2000 at 09:06:51AM -0400
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
Message-ID: <20000803170002.C266@xs4all.nl>

On Thu, Aug 03, 2000 at 09:06:51AM -0400, Fred L. Drake, Jr. wrote:

> There is no tp_str handler, so str(xrange(...)) is the same as
> repr(xrange(...)).
>   I propose ripping out the tp_print handler completely.  (And I've
> already tested my patch. ;)
>   Comments?

+0... I would say 'swap str and repr', because str(xrange) does what
repr(xrange) should do, and the other way 'round:

>>> x = xrange(1000)
>>> repr(x)
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
... ... ... 
... 998, 999)

>>> str(x)
'(xrange(0, 1000, 1) * 1)'

But I don't really care either way.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Thu Aug  3 16:14:57 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 11:14:57 -0400 (EDT)
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <20000803170002.C266@xs4all.nl>
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
 <20000803170002.C266@xs4all.nl>
Message-ID: <14729.35953.19610.61905@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > >>> x = xrange(1000)
 > >>> repr(x)
 > (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
 > ... ... ... 
 > ... 998, 999)
 > 
 > >>> str(x)
 > '(xrange(0, 1000, 1) * 1)'

  What version is this with?  1.5.2 gives me:

Python 1.5.2 (#1, May  9 2000, 15:05:56)  [GCC 2.95.3 19991030 (prerelease)] on linux-i386
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> x = xrange(2)
>>> str(x)
'(xrange(0, 2, 1) * 1)'
>>> repr(x)
'(xrange(0, 2, 1) * 1)'
>>> x
(0, 1)

  The 1.6b1 that's getting itself ready says this:

Python 1.6b1 (#19, Aug  2 2000, 01:11:29)  [GCC 2.95.3 19991030 (prerelease)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
Module readline not available.
>>> x = xrange(2)
>>> str(x)
'(xrange(0, 2, 1) * 1)'
>>> repr(x)
'(xrange(0, 2, 1) * 1)'
>>> x
(0, 1)

  What I'm proposing is:

Python 2.0b1 (#116, Aug  2 2000, 15:35:35)  [GCC 2.95.3 19991030 (prerelease)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> x = xrange(2)
>>> str(x)
'xrange(0, 2, 1)'
>>> repr(x)
'xrange(0, 2, 1)'
>>> x
xrange(0, 2, 1)

  (Where the outer (... * n) is added only when n != 1, 'cause I think
that's just ugly.)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From thomas@xs4all.net  Thu Aug  3 16:30:23 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 17:30:23 +0200
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <14729.35953.19610.61905@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Aug 03, 2000 at 11:14:57AM -0400
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com> <20000803170002.C266@xs4all.nl> <14729.35953.19610.61905@cj42289-a.reston1.va.home.com>
Message-ID: <20000803173023.D266@xs4all.nl>

On Thu, Aug 03, 2000 at 11:14:57AM -0400, Fred L. Drake, Jr. wrote:

> Thomas Wouters writes:
>  > >>> x = xrange(1000)
>  > >>> repr(x)
>  > (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
>  > ... ... ... 
>  > ... 998, 999)
>  > 
>  > >>> str(x)
>  > '(xrange(0, 1000, 1) * 1)'

>   What version is this with?  1.5.2 gives me:
> 
> Python 1.5.2 (#1, May  9 2000, 15:05:56)  [GCC 2.95.3 19991030 (prerelease)] on linux-i386
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> >>> x = xrange(2)
> >>> str(x)
> '(xrange(0, 2, 1) * 1)'
> >>> repr(x)
> '(xrange(0, 2, 1) * 1)'
> >>> x
> (0, 1)

Sorry, my bad. I just did 'x', and assumed it called repr(). I guess my
newbiehood shows in that I thought 'print x' always called 'str(x)'. Like I
replied to Tim this morning, after he caught me in the same kind of
embarrassing thinko:

Sigh, that's what I get for getting up when my GF had to and being at the
office at 8am. Don't mind my postings today, they're likely 99% brainfart.

Seeing as how 'print "range: %s" % x' already used the 'str'/'repr'
output, I see no reason not to make 'print x' do the same. So +1.

> >>> x
> xrange(0, 2, 1)
> 
>   (Where the outer (... * n) is added only when n != 1, 'cause I think
> that's just ugly.)

Why not remove the first and last argument, if they are respectively 0 and 1?

>>> xrange(100)
xrange(100)
>>> xrange(10,100)
xrange(10, 100)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Thu Aug  3 16:48:28 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 11:48:28 -0400 (EDT)
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <20000803173023.D266@xs4all.nl>
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
 <20000803170002.C266@xs4all.nl>
 <14729.35953.19610.61905@cj42289-a.reston1.va.home.com>
 <20000803173023.D266@xs4all.nl>
Message-ID: <14729.37964.46818.653202@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Sorry, my bad. I just did 'x', and assumed it called repr(). I guess my
 > newbiehood shows in that I thought 'print x' always called 'str(x)'. Like I

  That's the evil beauty of tp_print -- nobody really expects it
because most types don't implement it (and don't need to); I seem to
recall Guido saying it was a performance issue for certain types, but
don't recall the specifics.

 > Why not remove the first and last argument, if they are respectively 0 and 1?

  I agree!  In fact, always removing the last argument if it == 1 is a
good idea as well.  Here's the output from the current patch:

>>> xrange(2)
xrange(2)
>>> xrange(2, 4)
xrange(2, 4)
>>> x = xrange(10, 4, -1)
>>> x
xrange(10, 4, -1)
>>> x.tolist()
[10, 9, 8, 7, 6, 5]
>>> x*3
(xrange(10, 4, -1) * 3)
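The proposed formatting rules amount to something like this sketch (a
hypothetical Python helper for illustration, not the actual C code in
rangeobject.c):

```python
def format_xrange(start, stop, step=1):
    # Hypothetical helper mirroring the proposed repr rules:
    #   - drop the step when it is 1
    #   - drop the start as well when it is 0
    if step != 1:
        return "xrange(%d, %d, %d)" % (start, stop, step)
    if start != 0:
        return "xrange(%d, %d)" % (start, stop)
    return "xrange(%d)" % stop

print(format_xrange(0, 2))        # xrange(2)
print(format_xrange(2, 4))        # xrange(2, 4)
print(format_xrange(10, 4, -1))   # xrange(10, 4, -1)
```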



  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From jeremy@beopen.com  Thu Aug  3 17:26:51 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 12:26:51 -0400 (EDT)
Subject: [Python-Dev] Buglist
In-Reply-To: <20000803164949.D13365@xs4all.nl>
References: <20000803145714.B266@xs4all.nl>
 <14729.32309.807363.345594@bitdiddle.concentric.net>
 <20000803164949.D13365@xs4all.nl>
Message-ID: <14729.40267.557470.612144@bitdiddle.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

  >> It is good to assign bugs to people -- probably even if we end up
  >> playing hot potato for a while.  If a bug is assigned to you, you
  >> should either try to fix it, diagnose it, or assign it to someone
  >> else.

  TW> Hm, I did that for a few, but it's not very easy to find the
  TW> right person, in some cases. Bugs in the 're' module, should
  TW> they go to amk or to /F ? XML stuff, should it go to Paul
  TW> Prescod or some of the other people who seem to be doing
  TW> something with XML ? A 'capabilities' list would be pretty neat!

I had the same problem when I was trying to assign bugs.  It is seldom
clear who should be assigned a bug.  I have used two rules when
processing open, uncategorized bugs:

    * If you have a reasonable guess about who to assign a bug to,
    it's better to assign to the wrong person than not to assign at
    all.  If the wrong person gets it, she can assign it to someone
    else. 

    * If you don't know who to assign it to, at least give it a
    category.  That allows someone who feels expert in a category
    (e.g. a Tkinter guru), to easily scan all the unassigned bugs in
    that category.

  >> You seem to be arguing that the sheer number of bug reports
  >> bothers you and that it's better to have a shorter list of bugs
  >> regardless of whether they're actually fixed.  Come on! I don't
  >> want to overlook any bugs.

  TW> No, that wasn't what I meant :P 

Sorry.  I didn't believe you really meant that, but you came off
sounding like you did :-).

  TW> Having 9 out of 10 bugs waiting in the buglist without anyone
  TW> looking at them because it's too vague and everyone thinks not
  TW> 'their' field of expertise and expect someone else to look at
  TW> them, defeats the purpose of the buglist. 

I still don't agree here.  If you're not fairly certain about the bug,
keep it on the list.  I don't see too much harm in having vague, open
bugs on the list.  

  TW>                                           But closing those
  TW> bugreports, explaining the problem and even forwarding the
  TW> excerpt to the submittor *might* result in the original
  TW> submittor, who still has the bug, to forget about explaining it
  TW> further, whereas a couple of hours trying to duplicate the bug
  TW> might locate it. I personally just wouldn't want to be the one
  TW> doing all that effort ;)

You can send mail to the person who reported the bug and ask her for
more details without closing it.

  TW> Just-trying-to-help-you-do-your-job---not-taking-it-over-ly

And I appreciate the help!! The more bugs we have categorized or
assigned, the better.

of-course-actually-fixing-real-bugs-is-good-too-ly y'rs,
Jeremy




From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug  3 17:44:28 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 3 Aug 2000 19:44:28 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
Message-ID: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>

Suppose I'm fixing a bug in the library. I want peer review for my fix,
but I need none for my new "would have caught" test cases. Is it
considered alright to check in the test case right away, breaking the test
suite, and to upload a patch to SF to fix it? Or should the patch include
the new test cases? 

The XP answer would be "hey, you have to checkin the breaking test case
right away", and I'm inclined to agree.

I really want to break the standard library, just because I'm a sadist --
but seriously, we need tests that break more often, so bugs will be easier
to fix.

waiting-for-fellow-sadists-ly y'rs, Z.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From guido@beopen.com  Thu Aug  3 18:54:55 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 12:54:55 -0500
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Your message of "Thu, 03 Aug 2000 19:44:28 +0300."
 <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
Message-ID: <200008031754.MAA08812@cj20424-a.reston1.va.home.com>

> Suppose I'm fixing a bug in the library. I want peer review for my fix,
> but I need none for my new "would have caught" test cases. Is it
> considered alright to check in the test case right away, breaking the test
> suite, and to upload a patch to SF to fix it? Or should the patch include
> the new test cases? 
> 
> The XP answer would be "hey, you have to checkin the breaking test case
> right away", and I'm inclined to agree.
> 
> I really want to break the standard library, just because I'm a sadist --
> but seriously, we need tests that break more often, so bugs will be easier
> to fix.

In theory I'm with you.  In practice, each time the test suite breaks,
we get worried mail from people who aren't following the list closely,
did a checkout, and suddenly find that the test suite breaks.  That
just adds noise to the list.  So I'm against it.

-1

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug  3 17:55:41 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 3 Aug 2000 19:55:41 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008031754.MAA08812@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008031954110.2575-100000@sundial>

On Thu, 3 Aug 2000, Guido van Rossum wrote:

> In theory I'm with you.  In practice, each time the test suite breaks,
> we get worried mail from people who aren't following the list closely,
> did a checkout, and suddenly find that the test suite breaks.  That
> just adds noise to the list.  So I'm against it.
> 
> -1

In theory, theory and practice shouldn't differ. In practice, they do.
Guido, you're way too much of a realist <1.6 wink>
Oh, well.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From gstein@lyra.org  Thu Aug  3 18:04:01 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 10:04:01 -0700
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 03, 2000 at 07:44:28PM +0300
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
Message-ID: <20000803100401.T19525@lyra.org>

On Thu, Aug 03, 2000 at 07:44:28PM +0300, Moshe Zadka wrote:
> Suppose I'm fixing a bug in the library. I want peer review for my fix,
> but I need none for my new "would have caught" test cases. Is it
> considered alright to check in the test case right away, breaking the test
> suite, and to upload a patch to SF to fix it? Or should the patch include
> the new test cases?

If you're fixing a bug, then check in *both* pieces and call explicitly for
a peer reviewer (plus the people watching -checkins). If you don't quite fix
the bug, then a second checkin can smooth things out.

Let's not get too caught up in "process", to the exclusion of being
productive about bug fixing.

> The XP answer would be "hey, you have to checkin the breaking test case
> right away", and I'm inclined to agree.
> 
> I really want to break the standard library, just because I'm a sadist --
> but seriously, we need tests that break more often, so bugs will be easier
> to fix.

I really want to see less process and discussion, and more code.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From effbot@telia.com  Thu Aug  3 18:19:03 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 19:19:03 +0200
Subject: [Python-Dev] New SRE core dump (was: SRE 0.9.8 benchmarks)
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us>              <006601bffd24$e25a9360$f2a6b5d4@hagrid>  <200008031331.IAA06319@cj20424-a.reston1.va.home.com>
Message-ID: <007401bffd6e$ed9bbde0$f2a6b5d4@hagrid>

guido wrote:
> > ...but sure, I will fix that in 0.9.9 (SRE, not Python -- Christian
> > has already taken care of the other one ;-).  but 0.9.9 won't be
> > out before the 1.6b1 release...
>
> I assume you are planning to put the backtracking stack back in, as
> you mentioned in the checkin message?

yup -- but that'll have to wait a few more days...

> > (and to avoid scaring the hell out of the beta testers, it's probably
> > better to leave the test out of the regression suite until the bug is
> > fixed...)
>
> Even better, is it possible to put a limit on the recursion level
> before 1.6b1 is released (tomorrow if we get final agreement on the
> license) so at least it won't dump core?

shouldn't be too hard, given that I added a "recursion level
counter" in _sre.c revision 2.30.  I just added the necessary
if-statement.
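The guard idea, sketched in Python rather than the actual C in _sre.c (the
cap value and the toy matcher here are made up for illustration):

```python
MAX_LEVEL = 10000  # assumed cap; the real limit lived in _sre.c

def match(pattern, s, level=0):
    # Toy backtracking matcher: '*' matches any tail, other characters
    # match literally.  The point is the guard, not the matcher.
    if level > MAX_LEVEL:
        # Fail cleanly instead of overflowing the C stack and dumping core.
        raise RuntimeError("maximum recursion limit exceeded")
    if not pattern:
        return not s
    if pattern[0] == "*":
        return any(match(pattern[1:], s[i:], level + 1)
                   for i in range(len(s) + 1))
    return bool(s) and pattern[0] == s[0] and match(pattern[1:], s[1:], level + 1)
```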

</F>



From gstein@lyra.org  Thu Aug  3 19:39:08 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 11:39:08 -0700
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008031754.MAA08812@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 03, 2000 at 12:54:55PM -0500
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial> <200008031754.MAA08812@cj20424-a.reston1.va.home.com>
Message-ID: <20000803113908.X19525@lyra.org>

On Thu, Aug 03, 2000 at 12:54:55PM -0500, Guido van Rossum wrote:
> > Suppose I'm fixing a bug in the library. I want peer review for my fix,
> > but I need none for my new "would have caught" test cases. Is it
> > considered alright to check in the test case right away, breaking the test
> > suite, and to upload a patch to SF to fix it? Or should the patch include
> > the new test cases? 
> > 
> > The XP answer would be "hey, you have to checkin the breaking test case
> > right away", and I'm inclined to agree.
> > 
> > I really want to break the standard library, just because I'm a sadist --
> > but seriously, we need tests that break more often, so bugs will be easier
> > to fix.
> 
> In theory I'm with you.  In practice, each time the test suite breaks,
> we get worried mail from people who aren't following the list closely,
> did a checkout, and suddenly find that the test suite breaks.  That
> just adds noise to the list.  So I'm against it.

Tell those people to chill out for a few days and not be so jumpy. You're
talking about behavior that can easily be remedied.

It is a simple statement about the CVS repository: "CVS builds but may not
pass the test suite in certain cases" rather than "CVS is perfect".

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From tim_one@email.msn.com  Thu Aug  3 19:49:02 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 14:49:02 -0400
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>

[Moshe Zadka]
> Suppose I'm fixing a bug in the library. I want peer review
> for my fix, but I need none for my new "would have caught"
> test cases. Is it considered alright to check in the test case
> right away, breaking the test suite, and to upload a patch
> to SF to fix it? Or should the patch include the new test cases?
>
> The XP answer would be "hey, you have to checkin the breaking
> test case right away", and I'm inclined to agree.

It's abhorrent to me to ever leave the tree in a state where a test is
"expected to fail".  If it's left in a failing state for a brief period, at
best other developers will waste time wondering whether it's due to
something they did.  If it's left in a failing state longer than that,
people quickly stop paying attention to failures at all (the difference
between "all passed" and "something failed" is huge, the differences among 1
or 2 or 3 or ... failures get overlooked, and we've seen over and over that
when 1 failure is allowed to persist, others soon join it).

You can check in an anti-test right away, though:  a test that passes so
long as the code remains broken <wink>.
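Such an anti-test might look like this sketch (hypothetical function and
test names, not an actual test from Lib/test):

```python
import unittest

def buggy_parse(s):
    # Stand-in for a known-broken routine: suppose the (hypothetical)
    # spec says bad input should yield None, but the code raises instead.
    return int(s)

class AntiTest(unittest.TestCase):
    def test_known_breakage(self):
        # Passes exactly as long as the bug persists, keeping the suite
        # green; the day someone fixes the bug, this fails and gets
        # replaced by the real "would have caught" test.
        self.assertRaises(ValueError, buggy_parse, "not a number")
```

The inverted assertion keeps the distinction between "all passed" and
"something failed" meaningful while the fix is pending.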




From jeremy@beopen.com  Thu Aug  3 19:58:15 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 14:58:15 -0400 (EDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
 <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
Message-ID: <14729.49351.574550.48521@bitdiddle.concentric.net>

I'm Tim on this issue.  As officially appointed release manager for
2.0, I set some guidelines for checking in code.  One is that no
checkin should cause the regression test to fail.  If it does, I'll
back it out.

If you didn't review the contribution guidelines when they were posted
on this list, please look at PEP 200 now.

Jeremy


From jeremy@beopen.com  Thu Aug  3 20:00:23 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 15:00:23 -0400 (EDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <14729.49351.574550.48521@bitdiddle.concentric.net>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
 <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
 <14729.49351.574550.48521@bitdiddle.concentric.net>
Message-ID: <14729.49479.677157.957162@bitdiddle.concentric.net>

>>>>> "JH" == Jeremy Hylton <jeremy@beopen.com> writes:

  JH> I'm Tim on this issue.

Make that "I'm with Tim on this issue."  I'm sure it would be fun to
channel Tim, but I don't have the skills for that.

Jeremy


From guido@beopen.com  Thu Aug  3 21:02:07 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 15:02:07 -0500
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Your message of "Thu, 03 Aug 2000 11:39:08 MST."
 <20000803113908.X19525@lyra.org>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial> <200008031754.MAA08812@cj20424-a.reston1.va.home.com>
 <20000803113908.X19525@lyra.org>
Message-ID: <200008032002.PAA17349@cj20424-a.reston1.va.home.com>

> Tell those people to chill out for a few days and not be so jumpy. You're
> talking about behavior that can easily be remedied.
> 
> It is a simple statement about the CVS repository: "CVS builds but may not
> pass the test suite in certain cases" rather than "CVS is perfect"

I would agree if it was only the python-dev crowd -- they are easily
trained.  But there are lots of others who check out the tree, so it
would be a continuing education process.  I don't see what good it does.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From effbot@telia.com  Thu Aug  3 20:13:08 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 21:13:08 +0200
Subject: [Python-Dev] Breaking Test Cases on Purpose
References: <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
Message-ID: <00d501bffd7e$deb6ece0$f2a6b5d4@hagrid>

moshe:
> > The XP answer would be "hey, you have to checkin the breaking
> > test case right away", and I'm inclined to agree.

tim:
> It's abhorrent to me to ever leave the tree in a state where a test is
> "expected to fail".  If it's left in a failing state for a brief period, at
> best other developers will waste time wondering whether it's due to
> something they did

note that we've just seen this in action, in the SRE crash thread.

Andrew checked in a test that caused the test suite to bomb, and
sent me and Mark F. looking for a non-existent portability bug...

> You can check in an anti-test right away, though:  a test that passes so
> long as the code remains broken <wink>.

which is what the new SRE test script does -- the day SRE supports
unlimited recursion (soon), the test script will complain...

</F>



From tim_one@email.msn.com  Thu Aug  3 20:06:49 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 15:06:49 -0400
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <14729.49351.574550.48521@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCGENNGNAA.tim_one@email.msn.com>

[Jeremy Hylton]
> I'm Tim on this issue.

Then I'm Jeremy too.  Wow!  I needed a vacation <wink>.




From guido@beopen.com  Thu Aug  3 21:15:26 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 15:15:26 -0500
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Your message of "Thu, 03 Aug 2000 15:00:23 -0400."
 <14729.49479.677157.957162@bitdiddle.concentric.net>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial> <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com> <14729.49351.574550.48521@bitdiddle.concentric.net>
 <14729.49479.677157.957162@bitdiddle.concentric.net>
Message-ID: <200008032015.PAA17571@cj20424-a.reston1.va.home.com>

>   JH> I'm Tim on this issue.
> 
> Make that "I'm with Tim on this issue."  I'm sure it would be fun to
> channel Tim, but I don't have the skills for that.

Actually, in my attic there's a small door that leads to a portal into
Tim's brain.  Maybe we could get Tim to enter the portal -- it would
be fun to see him lying on a piano in a dress reciting a famous aria.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jeremy@beopen.com  Thu Aug  3 20:19:18 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 15:19:18 -0400 (EDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008032015.PAA17571@cj20424-a.reston1.va.home.com>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
 <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
 <14729.49351.574550.48521@bitdiddle.concentric.net>
 <14729.49479.677157.957162@bitdiddle.concentric.net>
 <200008032015.PAA17571@cj20424-a.reston1.va.home.com>
Message-ID: <14729.50614.806442.190962@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

  JH> I'm Tim on this issue.
  >>  Make that "I'm with Tim on this issue."  I'm sure it would be
  >> fun to channel Tim, but I don't have the skills for that.

  GvR> Actually, in my attic there's a small door that leads to a
  GvR> portal into Tim's brain.  Maybe we could get Tim to enter the
  GvR> portal -- it would be fun to see him lying on a piano in a
  GvR> dress reciting a famous aria.

You should have been on the ride from Monterey to the San Jose airport
a couple of weeks ago.  There was no piano, but it was pretty close.

Jeremy


From jeremy@beopen.com  Thu Aug  3 20:31:50 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 15:31:50 -0400 (EDT)
Subject: [Python-Dev] tests for standard library modules
Message-ID: <14729.51366.391122.131492@bitdiddle.concentric.net>

Most of the standard library is untested.

There are 148 top-level Python modules in the standard library, plus a
few packages that contain 50 or 60 more modules.  When we run the
regression test, we only touch 48 of those modules.  Only 18 of the
modules have their own test suite.  The other 30 modules at least get
imported, though sometimes none of the code gets executed.  (The
traceback module is an example.)

I would like to see much better test coverage in Python 2.0.  I would
welcome any test case submissions that improve the coverage of the
standard library.

Skip's trace.py code coverage tool is now available in Tools/scripts.
You can use it to examine how much of a particular module is covered
by existing tests.
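A minimal sketch of that kind of line-coverage measurement, using the
programmatic interface of the trace module as it exists in later stdlib
versions (the toy function is made up):

```python
import trace

def classify(n):
    # Toy function to measure; only one branch runs per call.
    if n % 2 == 0:
        return "even"
    return "odd"

# Count executed lines without printing a trace as it runs.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 4)
counts = tracer.results().counts   # {(filename, lineno): hit count}
```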

Jeremy


From guido@beopen.com  Thu Aug  3 21:39:44 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 15:39:44 -0500
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: Your message of "Thu, 03 Aug 2000 15:31:50 -0400."
 <14729.51366.391122.131492@bitdiddle.concentric.net>
References: <14729.51366.391122.131492@bitdiddle.concentric.net>
Message-ID: <200008032039.PAA17852@cj20424-a.reston1.va.home.com>

> Most of the standard library is untested.

Indeed.  I would suggest looking at the Tcl test suite.  It's very
thorough!  When I look at many of the test modules we *do* have, I
cringe at how little of the module the test actually covers.  Many
tests (not the good ones!) seem to be content with checking that all
functions in a module *exist*.  Much of this dates back to one
particular period in 1996-1997 when we (at CNRI) tried to write test
suites for all modules -- clearly we were in a hurry! :-(

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Vladimir.Marangozov@inrialpes.fr  Thu Aug  3 23:25:38 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 4 Aug 2000 00:25:38 +0200 (CEST)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <14729.51366.391122.131492@bitdiddle.concentric.net> from "Jeremy Hylton" at Aug 03, 2000 03:31:50 PM
Message-ID: <200008032225.AAA27154@python.inrialpes.fr>

Jeremy Hylton wrote:
> 
> Skip's trace.py code coverage tool is now available in Tools/script.
> You can use it to examine how much of a particular module is covered
> by existing tests.

Hmm. Glancing quickly at trace.py, I see that half of it is guessing
line numbers. The same SET_LINENO problem again. This is unfortunate.
But fortunately <wink>, here's another piece of code, modeled after
its C counterpart, that comes to Skip's rescue and that works with -O.

Example:

>>> import codeutil
>>> co = codeutil.PyCode_Line2Addr.func_code   # some code object
>>> codeutil.PyCode_GetExecLines(co)
[20, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]
>>> codeutil.PyCode_Line2Addr(co, 29)
173
>>> codeutil.PyCode_Addr2Line(co, 173)
29
>>> codeutil.PyCode_Line2Addr(co, 10)
Traceback (innermost last):
  File "<stdin>", line 1, in ?
  File "codeutil.py", line 26, in PyCode_Line2Addr
    raise IndexError, "line must be in range [%d,%d]" % (line, lastlineno)
IndexError: line must be in range [20,36]

etc...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252

------------------------------[ codeutil.py ]-------------------------
import types

def PyCode_Addr2Line(co, addrq):
    assert type(co) == types.CodeType, \
           "1st arg must be a code object, %s given" % type(co).__name__
    if addrq < 0 or addrq > len(co.co_code):
        raise IndexError, "address must be in range [0,%d]" % len(co.co_code)
    addr = 0
    line = co.co_firstlineno
    lnotab = co.co_lnotab
    for i in range(0, len(lnotab), 2):
        addr_incr = ord(lnotab[i])
        line_incr = ord(lnotab[i+1])
        addr = addr + addr_incr
        if (addr > addrq):
            break
        line = line + line_incr
    return line

def PyCode_Line2Addr(co, lineq):
    assert type(co) == types.CodeType, \
           "1st arg must be a code object, %s given" % type(co).__name__
    line = co.co_firstlineno
    lastlineno = PyCode_Addr2Line(co, len(co.co_code))
    if lineq < line or lineq > lastlineno:
        raise IndexError, "line must be in range [%d,%d]" % (line, lastlineno)
    addr = 0
    lnotab = co.co_lnotab
    for i in range(0, len(lnotab), 2):
        if line >= lineq:
            break
        addr_incr = ord(lnotab[i])
        line_incr = ord(lnotab[i+1])
        addr = addr + addr_incr
        line = line + line_incr
    return addr

def PyCode_GetExecLines(co):
    assert type(co) == types.CodeType, \
           "arg must be a code object, %s given" % type(co).__name__
    lastlineno = PyCode_Addr2Line(co, len(co.co_code))
    lines = range(co.co_firstlineno, lastlineno + 1)
    # remove void lines (w/o opcodes): comments, blank/escaped lines
    i = len(lines) - 1
    while i >= 0:
        if lines[i] != PyCode_Addr2Line(co, PyCode_Line2Addr(co, lines[i])):
            lines.pop(i)
        i = i - 1
    return lines
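[Editorial note: in later Pythons the same three mappings can be recovered without decoding co_lnotab by hand, via dis.findlinestarts(), which yields (offset, lineno) pairs. A rough equivalent sketch — the helper names mirror the ones above and are otherwise made up:]

```python
import dis

def addr2line(co, addrq):
    # Last line whose first bytecode offset is <= addrq.
    line = co.co_firstlineno
    for offset, lineno in dis.findlinestarts(co):
        if offset > addrq:
            break
        if lineno is not None:   # 3.11+ may report offsets with no line
            line = lineno
    return line

def line2addr(co, lineq):
    # First bytecode offset generated for source line lineq.
    for offset, lineno in dis.findlinestarts(co):
        if lineno == lineq:
            return offset
    raise IndexError("no code generated for line %d" % lineq)

def exec_lines(co):
    # Lines that actually carry bytecode; comments and blank lines drop out.
    return sorted({lineno for _, lineno in dis.findlinestarts(co)
                   if lineno is not None})
```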


From mwh21@cam.ac.uk  Thu Aug  3 23:19:51 2000
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 03 Aug 2000 23:19:51 +0100
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Moshe Zadka's message of "Thu, 3 Aug 2000 19:44:28 +0300 (IDT)"
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
Message-ID: <m31z063s3c.fsf@atrus.jesus.cam.ac.uk>

Moshe Zadka <moshez@math.huji.ac.il> writes:

> Suppose I'm fixing a bug in the library. I want peer review for my fix,
> but I need none for my new "would have caught" test cases. Is it
> considered alright to check-in right away the test case, breaking the test
> suite, and to upload a patch to SF to fix it? Or should the patch include
> the new test cases? 
> 
> The XP answer would be "hey, you have to checkin the breaking test case
> right away", and I'm inclined to agree.

I'm not so sure.  I can't find the bit I'm looking for in Beck's
book[1], but ISTR that you have two sorts of test, unit tests and
functional tests.  Unit tests always work, functional tests are more
what you want to work in the future, but may not now.  What goes in
Lib/test is definitely more of the unit test variety, and so if
something in there breaks it's a cause for alarm.  Checking in a test
you know will break just raises blood pressure for no good reason.

Also what if you're hacking on some bit of Python, run the test suite
and it fails?  You worry that you've broken something, when in fact
it's nothing to do with you.

-1. (like everyone else...)

Cheers,
M.

[1] Found it; p. 118 of "Extreme Programming Explained"

-- 
  I'm okay with intelligent buildings, I'm okay with non-sentient
  buildings. I have serious reservations about stupid buildings.
     -- Dan Sheppard, ucam.chat (from Owen Dunn's summary of the year)



From skip@mojam.com (Skip Montanaro)  Thu Aug  3 23:21:04 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Thu, 3 Aug 2000 17:21:04 -0500 (CDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get"
 method
In-Reply-To: <398979D0.5AF80126@lemburg.com>
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
 <14728.63466.263123.434708@anthem.concentric.net>
 <3989454C.5C9EF39B@lemburg.com>
 <200008031256.HAA06107@cj20424-a.reston1.va.home.com>
 <398979D0.5AF80126@lemburg.com>
Message-ID: <14729.61520.11958.530601@beluga.mojam.com>

    >> How about making this a method:

    >> def inplace(dict, key, default):

    >>     value = dict.get(key, default)
    >>     dict[key] = value
    >>     return value

eh... I don't like these do-two-things-at-once kinds of methods.  I see
nothing wrong with

    >>> dict = {}
    >>> dict['hello'] = dict.get('hello', [])
    >>> dict['hello'].append('world')
    >>> print dict
    {'hello': ['world']}

or

    >>> d = dict['hello'] = dict.get('hello', [])
    >>> d.insert(0, 'cruel')
    >>> print dict
    {'hello': ['cruel', 'world']}

for the obsessively efficiency-minded folks.
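[For reference, the `inplace` method quoted at the top, written out as a runnable plain function — `inplace` is just the name floated upthread, not an existing dict method:]

```python
def inplace(d, key, default):
    # Fetch d[key], first storing default if the key is absent,
    # so the returned value can be mutated in place.
    value = d.get(key, default)
    d[key] = value
    return value

d = {}
inplace(d, 'hello', []).append('world')
inplace(d, 'hello', []).insert(0, 'cruel')
print(d)   # {'hello': ['cruel', 'world']}
```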

Also, we're talking about a method that would generally only be useful when
dictionaries have values which are mutable objects.  Regardless of how
useful instances and lists are, I still find that my predominant day-to-day
use of dictionaries is with strings as keys and values.  Perhaps that's just
the nature of my work.

In short, I don't think anything needs to be changed.

-1 (don't like the concept, so I don't care about the implementation)

Skip


From mal@lemburg.com  Thu Aug  3 23:36:33 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 04 Aug 2000 00:36:33 +0200
Subject: [Python-Dev] Line number tools (tests for standard library modules)
References: <200008032225.AAA27154@python.inrialpes.fr>
Message-ID: <3989F3F1.162A9766@lemburg.com>

Vladimir Marangozov wrote:
> 
> Jeremy Hylton wrote:
> >
> > Skip's trace.py code coverage tool is now available in Tools/scripts.
> > You can use it to examine how much of a particular module is covered
> > by existing tests.
> 
> Hmm. Glancing quickly at trace.py, I see that half of it is guessing
> line numbers. The same SET_LINENO problem again. This is unfortunate.
> But fortunately <wink>, here's another piece of code, modeled after
> its C counterpart, that comes to Skip's rescue and that works with -O.
> 
> Example:
> 
> >>> import codeutil
> >>> co = codeutil.PyCode_Line2Addr.func_code   # some code object
> >>> codeutil.PyCode_GetExecLines(co)
> [20, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]
> >>> codeutil.PyCode_Line2Addr(co, 29)
> 173
> >>> codeutil.PyCode_Addr2Line(co, 173)
> 29
> >>> codeutil.PyCode_Line2Addr(co, 10)
> Traceback (innermost last):
>   File "<stdin>", line 1, in ?
>   File "codeutil.py", line 26, in PyCode_Line2Addr
>     raise IndexError, "line must be in range [%d,%d]" % (line, lastlineno)
> IndexError: line must be in range [20,36]
> 
> etc...

Cool. 

With proper Python style names these utilities
would be nice additions for e.g. codeop.py or code.py.

BTW, I wonder why code.py includes Python console emulations:
there seems to be a naming bug there... I would have
named the module PythonConsole.py and left code.py what
it was previously: a collection of tools dealing with Python
code objects.

--
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From skip@mojam.com (Skip Montanaro)  Thu Aug  3 23:53:58 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Thu, 3 Aug 2000 17:53:58 -0500 (CDT)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <14729.51366.391122.131492@bitdiddle.concentric.net>
References: <14729.51366.391122.131492@bitdiddle.concentric.net>
Message-ID: <14729.63494.544079.516429@beluga.mojam.com>

    Jeremy> Skip's trace.py code coverage tool is now available in
    Jeremy> Tools/scripts.  You can use it to examine how much of a
    Jeremy> particular module is covered by existing tests.

Yes, though note that in the summary stuff on my web site there are obvious
bugs that I haven't had time to look at.  Sometimes modules are counted
twice.  Other times a module is listed as untested when right above it there
is a test coverage line...  

Skip


From skip@mojam.com (Skip Montanaro)  Thu Aug  3 23:59:34 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Thu, 3 Aug 2000 17:59:34 -0500 (CDT)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <200008032225.AAA27154@python.inrialpes.fr>
References: <14729.51366.391122.131492@bitdiddle.concentric.net>
 <200008032225.AAA27154@python.inrialpes.fr>
Message-ID: <14729.63830.894657.930184@beluga.mojam.com>

    Vlad> Hmm. Glancing quickly at trace.py, I see that half of it is
    Vlad> guessing line numbers. The same SET_LINENO problem again. This is
    Vlad> unfortunate.  But fortunately <wink>, here's another piece of
    Vlad> code, modeled after its C counterpart, that comes to Skip's rescue
    Vlad> and that works with -O.

Go ahead and check in any changes you see that need doing.  I haven't
fiddled with trace.py much in the past couple of years, so there are some
places that clearly do things differently than currently accepted practice.

(I am going to be up to my ass in alligators pretty much from now through
Labor Day (early September for the furriners among us), so things I thought
I would get to probably will remain undone.  The most important thing is to
fix the list comprehensions patch to force expression tuples to be
parenthesized.  Guido says it's an easy fix, and the grammar changes seem
trivial, but fixing compile.c is beyond my rusty knowledge at the moment.
Someone want to pick this up?)

Skip


From MarkH@ActiveState.com  Fri Aug  4 00:13:06 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 09:13:06 +1000
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1 failing...)
In-Reply-To: <39897635.6C9FB82D@lemburg.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCEDODDAA.MarkH@ActiveState.com>

[Marc writes]
> On Unix you can install a signal
> handler in the Python program which then translates the SIGTERM
> signal into a normal Python exception. Sending the signal then
> causes the same as e.g. hitting Ctrl-C in a program: an
> exception is raised asynchronously, but it can be handled
> properly by the Python exception clauses to enable safe
> shutdown of the process.

I understand this.  This is why I was skeptical that a
"terminate-without-prejudice" only version would be useful.

I _think_ this fairly large email is agreeing that it isn't of much use.
If so, then I am afraid you are on your own :-(

Mark.



From Vladimir.Marangozov@inrialpes.fr  Fri Aug  4 00:27:39 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 4 Aug 2000 01:27:39 +0200 (CEST)
Subject: [Python-Dev] Removing the 16 bit arg limit
Message-ID: <200008032327.BAA27362@python.inrialpes.fr>

I've looked at this and the best compromise solution I ended up with
(before Py3K) is sketched here:

opcode.h:
#define EXTENDED_ARG	135	/* 16 higher bits of the next opcode arg */

ceval.c:
		case EXTENDED_ARG:
			do {
				oparg <<= 16;
				op = NEXTOP();
				oparg += NEXTARG();
			} while (op == EXTENDED_ARG);
			goto dispatch_opcode;

compile.c:
static void
com_addoparg(struct compiling *c, int op, int arg)
{
	if (arg < 0) {
		com_error(c, PyExc_SystemError,
			  "com_addoparg: argument out of range");
	}
	if (op == SET_LINENO) {
		com_set_lineno(c, arg);
		if (Py_OptimizeFlag)
			return;
	}
	do {
		int arg2 = arg & 0xffff;
		arg -= arg2;
		if (arg > 0)
			com_addbyte(c, EXTENDED_ARG);
		else
			com_addbyte(c, op);
		com_addint(c, arg2);
	} while (arg > 0);
}


But this is only difficulty level 0.

Difficulty level 1 is the jumps and their forward refs & backpatching in
compile.c.

There's no tricky solution to this (due to the absolute jumps). The only
reasonable, long-term useful solution I can think of is to build a table
of all anchors (delimiting the basic blocks of the code), then make a final
pass over the serialized basic blocks and update the anchors (with or
without EXTENDED_ARG jumps depending on the need).

However, I won't even think about it anymore without BDFL & Tim's
approval and strong encouragement <wink>.
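[Editorial note: to make the intent concrete, here is a small Python model of the scheme — not part of the original mail; note that the C sketch above would also need the remaining bits shifted out of `arg` on each pass, and that the ceval loop expects the high-order chunks first:]

```python
EXTENDED_ARG = 135   # from the opcode.h fragment above

def emit(op, arg):
    # Split arg into 16-bit chunks; all but the lowest chunk travel in
    # EXTENDED_ARG prefixes, most significant first.
    chunks = []
    while True:
        chunks.append(arg & 0xFFFF)
        arg >>= 16
        if not arg:
            break
    code = [(EXTENDED_ARG, c) for c in reversed(chunks[1:])]
    code.append((op, chunks[0]))
    return code

def decode(code):
    # The ceval.c loop: fold EXTENDED_ARG prefixes into one big oparg.
    it = iter(code)
    op, oparg = next(it)
    while op == EXTENDED_ARG:
        op, arg = next(it)
        oparg = (oparg << 16) + arg
    return op, oparg
```

For example, emit(100, 0x12345) gives [(135, 0x1), (100, 0x2345)], and decode inverts it.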

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From gward@python.net  Fri Aug  4 02:24:44 2000
From: gward@python.net (Greg Ward)
Date: Thu, 3 Aug 2000 21:24:44 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
Message-ID: <20000803212444.A1237@beelzebub>

Hi all --

for building extensions with non-MS compilers, it sounds like a small
change to PC/config.h is needed.  Rene Liebscher suggests changing

  #ifndef USE_DL_EXPORT
  /* So nobody needs to specify the .lib in their Makefile any more */
  #ifdef _DEBUG
  #pragma comment(lib,"python20_d.lib")
  #else
  #pragma comment(lib,"python20.lib")
  #endif
  #endif /* USE_DL_EXPORT */

to

  #if !defined(USE_DL_EXPORT) && defined(_MSC_VER)
  ...

That way, the convenience pragma will still be there for MSVC users, but
it won't break building extensions with Borland C++.  (As I understand
it, Borland C++ understands the pragma, but then tries to use Python's
standard python20.lib, which of course is only for MSVC.)  Non-MSVC
users will have to explicitly supply the library, but that's OK: the
Distutils does it for them.  (Even with MSVC, because it's too much
bother *not* to specify python20.lib explicitly.)

Does this look like the right change to everyone?  I can check it in
(and on the 1.6 branch too) if it looks OK.

While I have your attention, Rene also suggests the convention of
"bcpp_python20.lib" for the Borland-format lib file, with other
compilers (library formats) supported in future similarly.  Works for me 
-- anyone have any problems with that?

        Greg
-- 
Greg Ward - programmer-at-big                           gward@python.net
http://starship.python.net/~gward/
Know thyself.  If you need help, call the CIA.


From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug  4 02:38:32 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 4 Aug 2000 04:38:32 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <14729.49351.574550.48521@bitdiddle.concentric.net>
Message-ID: <Pine.GSO.4.10.10008040437320.9544-100000@sundial>

On Thu, 3 Aug 2000, Jeremy Hylton wrote:

> I'm Tim on this issue.  As officially appointed release manager for
> 2.0, I set some guidelines for checking in code.  One is that no
> checkin should cause the regression test to fail.  If it does, I'll
> back it out.
> 
> If you didn't review the contribution guidelines when they were posted
> on this list, please look at PEP 200 now.

Actually, I did. The thing is, it seems to me there's a huge difference
between breaking code, and manifesting that the code is wrong.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug  4 02:41:12 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 4 Aug 2000 04:41:12 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008032015.PAA17571@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008040440110.9544-100000@sundial>

On Thu, 3 Aug 2000, Guido van Rossum wrote:

> >   JH> I'm Tim on this issue.
> > 
> > Make that "I'm with Tim on this issue."  I'm sure it would be fun to
> > channel Tim, but I don't have the skills for that.
> 
> Actually, in my attic there's a small door that leads to a portal into
> Tim's brain.  Maybe we could get Tim to enter the portal -- it would
> be fun to see him lying on a piano in a dress reciting a famous aria.

I think I need to get out more often. I just realized I think it would
be fun to. Anybody there has a video camera?
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug  4 02:45:52 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 4 Aug 2000 04:45:52 +0300 (IDT)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <200008032039.PAA17852@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008040444020.9544-100000@sundial>

On Thu, 3 Aug 2000, Guido van Rossum wrote:

> > Most of the standard library is untested.
> 
> Indeed.  I would suggest looking at the Tcl test suite.  It's very
> thorough!  When I look at many of the test modules we *do* have, I
> cringe at how little of the module the test actually covers.  Many
> tests (not the good ones!) seem to be content with checking that all
> functions in a module *exist*.  Much of this dates back to one
> particular period in 1996-1997 when we (at CNRI) tried to write test
> suites for all modules -- clearly we were in a hurry! :-(

Here's a suggestion for easily getting hints about what test suites to
write: go through the list of open bugs, and write a "would have caught"
test. At worst, we will actually have to fix some bugs <wink>.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From tim_one@email.msn.com  Fri Aug  4 03:23:59 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 22:23:59 -0400
Subject: [Python-Dev] snprintf breaks build
Message-ID: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>

Fred checked in a new rangeobject.c with 3 calls to snprintf.  That isn't a
std C function, and the lack of it breaks the build at least under Windows.
In the absence of a checkin supplying snprintf on all platforms within the
next hour, I'll just replace the snprintf calls with something that's
portable.




From MarkH@ActiveState.com  Fri Aug  4 03:27:32 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 12:27:32 +1000
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000803212444.A1237@beelzebub>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>

> Does this look like the right change to everyone?  I can check it in
> (and on the 1.6 branch too) if it looks OK.

I have no problems with this (but am a little confused - see below)

> While I have your attention, Rene also suggests the convention of
> "bcpp_python20.lib" for the Borland-format lib file, with other
> compilers (library formats) supported in future similarly.  Works for me
> -- anyone have any problems with that?

I would prefer python20_bcpp.lib, but that is not an issue.

I am a little confused by the intention, though.  Wouldn't it make sense to
have Borland builds of the core create a Python20.lib, then we could keep
the pragma in too?

If people want to use Borland for extensions, can't we ask them to use that
same compiler to build the core too?  That would seem to make lots of the
problems go away?

But assuming there are good reasons, I am happy.  It won't bother me for
some time yet ;-) <just deleted a rant about the fact that anyone on
Windows who values their time in more than cents-per-hour would use MSVC,
but deleted it ;->

Sometimes-the-best-things-in-life-arent-free ly,

Mark.



From MarkH@ActiveState.com  Fri Aug  4 03:30:22 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 12:30:22 +1000
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <Pine.GSO.4.10.10008040440110.9544-100000@sundial>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEEKDDAA.MarkH@ActiveState.com>

> Anybody there has a video camera?

Eeeuuugghhh - the concept of Tim's last threatened photo-essay turning into
a video-essay has made me physically ill ;-)

Just-dont-go-there ly,

Mark.



From fdrake@beopen.com  Fri Aug  4 03:34:34 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 22:34:34 -0400 (EDT)
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>
Message-ID: <14730.11194.599976.438416@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Fred checked in a new rangeobject.c with 3 calls to snprintf.  That isn't a
 > std C function, and the lack of it breaks the build at least under Windows.
 > In the absence of a checkin supplying snprintf on all platforms within the
 > next hour, I'll just replace the snprintf calls with something that's
 > portable.

  Hmm.  I think the issue with known existing snprintf()
implementations with Open Source licenses was that they were at least
somewhat contaminating.  I'll switch back to sprintf() until we have a
portable snprintf() implementation.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From tim_one@email.msn.com  Fri Aug  4 03:49:32 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 22:49:32 -0400
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <14730.11194.599976.438416@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEPEGNAA.tim_one@email.msn.com>

[Fred]
>   Hmm.  I think the issue with known existing snprintf()
> implementations with Open Source licenses was that they were at
> least somewhat contaminating.  I'll switch back to sprintf()
> until we have a portable snprintf() implementation.

Please don't bother!  Clearly, I've already fixed it on my machine so I
could make progress.  I'll simply check it in.  I didn't like the excessive
cleverness with the fmt vrbl anyway (your compiler may not complain that you
can end up passing more s[n]printf args than the format has specifiers to
convert, but it's a no-no anyway) ....




From tim_one@email.msn.com  Fri Aug  4 03:55:47 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 22:55:47 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000803212444.A1237@beelzebub>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEPEGNAA.tim_one@email.msn.com>

[Greg Ward]
> for building extensions with non-MS compilers, it sounds like a small
> change to PC/config.h is needed.  Rene Liebscher suggests changing
>
>   #ifndef USE_DL_EXPORT
>   /* So nobody needs to specify the .lib in their Makefile any more */
>   #ifdef _DEBUG
>   #pragma comment(lib,"python20_d.lib")
>   #else
>   #pragma comment(lib,"python20.lib")
>   #endif
>   #endif /* USE_DL_EXPORT */
>
> to
>
>   #if !defined(USE_DL_EXPORT) && defined(_MSC_VER)
>   ...
>
> That way, the convenience pragma will still be there for MSVC users, but
> it won't break building extensions with Borland C++.

OK by me.

> ...
> While I have your attention,

You're pushing your luck, Greg <wink>.

> Rene also suggests the convention of "bcpp_python20.lib" for
> the Borland-format lib file, with other compilers (library
> formats) supported in future similarly.  Works for me -- anyone
> have any problems with that?

Nope, but I don't understand anything about how Borland differs from the
real <0.5 wink> Windows compiler, so don't know squat about the issues or
the goals.  If it works for Rene, I give up without a whimper.




From nhodgson@bigpond.net.au  Fri Aug  4 04:36:12 2000
From: nhodgson@bigpond.net.au (Neil Hodgson)
Date: Fri, 4 Aug 2000 13:36:12 +1000
Subject: [Python-Dev] Library pragma in PC/config.h
References: <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>
Message-ID: <00cf01bffdc5$246867f0$8119fea9@neil>

> But assuming there are good reasons, I am happy.  It won't bother me for
> some time yet ;-) <just deleted a rant about the fact that anyone on
> Windows who values their time in more than cents-per-hour would use MSVC,
> but deleted it ;->

   OK. Better cut my rates. Some people will be pleased ;)

   Borland C++ isn't that bad. With an optimiser and a decent debugger it'd
even be usable as my main compiler. What is good about Borland is that it
produces lots of meaningful warnings.

   I've never regretted ensuring that Scintilla/SciTE build on Windows with
each of MSVC, Borland and GCC. It wasn't much work and real problems have
been found by the extra checks done by Borland.

   You-should-try-it-sometime-ly y'rs,

   Neil



From bwarsaw@beopen.com  Fri Aug  4 04:46:02 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 23:46:02 -0400 (EDT)
Subject: [Python-Dev] snprintf breaks build
References: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>
 <14730.11194.599976.438416@cj42289-a.reston1.va.home.com>
Message-ID: <14730.15482.216054.249627@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake@beopen.com> writes:

    Fred>   Hmm.  I think the issue with known existing snprintf()
    Fred> implementations with Open Source licenses was that they were
    Fred> at least somewhat contaminating.  I'll switch back to
    Fred> sprintf() until we have a portable snprintf()
    Fred> implementation.

In Mailman, I used the one from GNU screen, which is obviously GPL'd.
But Apache also comes with an snprintf implementation which doesn't
have the infectious license.  I don't feel like searching the
archives, but I'd be surprised if Greg Stein /didn't/ suggest this a
while back.

-Barry


From tim_one@email.msn.com  Fri Aug  4 04:54:47 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 23:54:47 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <00cf01bffdc5$246867f0$8119fea9@neil>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEPHGNAA.tim_one@email.msn.com>

[Neil Hodgson]
> ...
>    I've never regretted ensuring that Scintilla/SciTE build on
> Windows with each of MSVC, Borland and GCC. It wasn't much work
> and real problems have been found by the extra checks done by
> Borland.
>
>    You-should-try-it-sometime-ly y'rs,

Indeed, the more compilers the better.  I've long wished that Guido would
leave CNRI, and find some situation in which he could hire people to work on
Python full-time.  If that ever happens, and he hires me, I'd like to do
serious work to free the Windows build config from such total dependence on
MSVC.




From MarkH@ActiveState.com  Fri Aug  4 04:52:58 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 13:52:58 +1000
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <00cf01bffdc5$246867f0$8119fea9@neil>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEEMDDAA.MarkH@ActiveState.com>

>    Borland C++ isn't that bad. With an optimiser and a decent
> debugger it'd even be usable as my main compiler.

>    You-should-try-it-sometime-ly y'rs,

OK - let me know when it has an optimiser and a decent debugger, and is
usable as a main compiler, and I will be happy to ;-)

Only-need-one-main-anything ly,

Mark.



From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug  4 05:30:59 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 4 Aug 2000 07:30:59 +0300 (IDT)
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <14730.11194.599976.438416@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008040728150.10236-100000@sundial>

On Thu, 3 Aug 2000, Fred L. Drake, Jr. wrote:

>   Hmm.  I think the issue with known existing snprintf()
> implementations with Open Source licenses was that they were at least
> somewhat contanimating.  I'll switch back to sprintf() until we have a
> portable snprintf() implementation.

Fred -- in your case, there is no need for sprintf -- a few sizeof(long)s
along the way would make sure that your buffers are large enough.  (For
extreme future-proofing, you might also sizeof() the messages you print)

(Tidbit: since sizeof(long) measures in bytes, and %d prints in decimals,
then a buffer of length sizeof(long) is enough to hold a decimal
representation of a long).

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From greg@cosc.canterbury.ac.nz  Fri Aug  4 05:38:16 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 04 Aug 2000 16:38:16 +1200 (NZST)
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <Pine.GSO.4.10.10008040728150.10236-100000@sundial>
Message-ID: <200008040438.QAA11982@s454.cosc.canterbury.ac.nz>

Moshe Zadka:

> (Tidbit: since sizeof(long) measures in bytes, and %d prints in decimals,
> then a buffer of length sizeof(long) is enough to hold a decimal
> representation of a long).

Pardon? I think you're a bit out in your calculation there!

3*sizeof(long) should be enough, though (unless some weird C
implementation measures sizes in units of more than 8 bits).

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim_one@email.msn.com  Fri Aug  4 06:22:23 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 4 Aug 2000 01:22:23 -0400
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <200008040438.QAA11982@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEPJGNAA.tim_one@email.msn.com>

[Moshe Zadka]
> (Tidbit: since sizeof(long) measures in bytes, and %d prints in
> decimals, then a buffer of length sizeof(long) is enough to hold
> a decimal represntation of a long).

[Greg Ewing]
> Pardon? I think you're a bit out in your calculation there!
>
> 3*sizeof(long) should be enough, though (unless some weird C
> implementation measures sizes in units of more than 8 bits).

Getting closer, but the sign bit can consume a character all by itself, so
3*sizeof(long) still isn't enough.  To do this correctly and minimally
requires that we implement an arbitrary-precision log10 function, use the
platform MIN/MAX #define's for longs and chars, and malloc the buffers at
runtime.

Note that instead I boosted the buffer sizes in the module from 80 to 250.
That's obviously way more than enough for 64-bit platforms, and "obviously
way more" is the correct thing to do for programmers <wink>.  If one of the
principled alternatives is ever checked in (be it an snprintf or /F's custom
solution (which I like better)), we can go back and use those instead.
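[Editorial note: for what it's worth, the worst case is easy to tabulate — a quick side calculation, not from the thread. Counting the sign and a trailing NUL, 3*sizeof(long) is one character short for 2-byte longs, exactly tight for 4-byte longs, and comfortable from 8 bytes up:]

```python
def decimal_chars(nbytes):
    # Characters needed to print the most negative signed integer of
    # nbytes (8-bit) bytes in decimal: magnitude digits + sign + NUL.
    digits = len(str(2 ** (8 * nbytes - 1)))
    return digits + 2

for n in (2, 4, 8, 16):
    print(n, decimal_chars(n), 3 * n)
```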




From MarkH@ActiveState.com  Fri Aug  4 06:58:52 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 15:58:52 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <200008022318.SAA04558@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com>

[Re forcing all extensions to use PythonExtensionInit_XXX]

> I sort-of like this idea -- at least at the +0 level.

Since this email there have been some strong objections to this.  I too
would weigh in at -1 for this, simply for the amount of work it would cost
me personally!


> Unfortunately we only have two days to get this done for 1.6 -- I plan
> to release 1.6b1 this Friday!  If you don't get to it, prepare a patch
> for 2.0 would be the next best thing.

It is now Friday afternoon for me.  Regardless of the outcome of this, the
patch Fredrik posted recently would still seem reasonable, and not have too
much impact on performance (ie, after locating and loading a .dll/.so, one
function call isnt too bad!):

I've even left his trailing comment, which I agree with too?

Shall this be checked in to the 1.6 and 2.0 trees?

Mark.

Index: Python/modsupport.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/modsupport.c,v
retrieving revision 2.48
diff -u -r2.48 modsupport.c
--- Python/modsupport.c 2000/07/09 03:09:56     2.48
+++ Python/modsupport.c 2000/07/18 07:55:03
@@ -51,6 +51,8 @@
 {
        PyObject *m, *d, *v;
        PyMethodDef *ml;
+       if (!Py_IsInitialized())
+               Py_FatalError("Interpreter not initialized (version
mismatch?)");
        if (module_api_version != PYTHON_API_VERSION)
                fprintf(stderr, api_version_warning,
                        name, PYTHON_API_VERSION, name,
module_api_version);

"Fatal Python error: Interpreter not initialized" might not be too helpful,
but it's surely better than "PyThreadState_Get: no current thread"...




From tim_one@email.msn.com  Fri Aug  4 08:06:21 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 4 Aug 2000 03:06:21 -0400
Subject: [Python-Dev] FW: submitting patches against 1.6a2
Message-ID: <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com>

Anyone competent with urllib care to check out this fellow's complaint?
Thanks!

-----Original Message-----
From: python-list-admin@python.org
[mailto:python-list-admin@python.org]On Behalf Of Paul Schreiber
Sent: Friday, August 04, 2000 2:20 AM
To: python-list@python.org
Subject: submitting patches against 1.6a2


I patched a number of bugs in urllib.py way back when -- in June, I
think. That was before the BeOpen announcement.

I emailed the patch to patches@python.org. I included the disclaimer. I
made the patch into a context diff.

I didn't hear back from anyone.

Should I resubmit? Where should I send the patch to?



Paul
--
http://www.python.org/mailman/listinfo/python-list




From esr@snark.thyrsus.com  Fri Aug  4 08:47:34 2000
From: esr@snark.thyrsus.com (Eric S. Raymond)
Date: Fri, 4 Aug 2000 03:47:34 -0400
Subject: [Python-Dev] curses progress
Message-ID: <200008040747.DAA02323@snark.thyrsus.com>

OK, I've added docs for curses.textpad and curses.wrapper.  Did we
ever settle on a final location in the distribution tree for the
curses HOWTO?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

According to the National Crime Survey administered by the Bureau of
the Census and the National Institute of Justice, it was found that
only 12 percent of those who use a gun to resist assault are injured,
as are 17 percent of those who use a gun to resist robbery. These
percentages are 27 and 25 percent, respectively, if they passively
comply with the felon's demands. Three times as many were injured if
they used other means of resistance.
        -- G. Kleck, "Policy Lessons from Recent Gun Control Research,"


From pf@artcom-gmbh.de  Fri Aug  4 08:47:17 2000
From: pf@artcom-gmbh.de (Peter Funk)
Date: Fri, 4 Aug 2000 09:47:17 +0200 (MEST)
Subject: Vladimir's codeutil.py (was Re: [Python-Dev] tests for standard library modules)
In-Reply-To: <200008032225.AAA27154@python.inrialpes.fr> from Vladimir Marangozov at "Aug 4, 2000  0:25:38 am"
Message-ID: <m13KcCL-000DieC@artcom0.artcom-gmbh.de>

Hi,

Vladimir Marangozov:
> But fortunately <wink>, here's another piece of code, modeled after
> its C counterpart, that comes to Skip's rescue and that works with -O.
[...]
> ------------------------------[ codeutil.py ]-------------------------
[...]

Neat!  This seems to be very useful.
I think this could be added to the standard library if it were documented.

Regards, Peter


From thomas@xs4all.net  Fri Aug  4 09:14:56 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 10:14:56 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 04, 2000 at 03:58:52PM +1000
References: <200008022318.SAA04558@cj20424-a.reston1.va.home.com> <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com>
Message-ID: <20000804101456.H266@xs4all.nl>

On Fri, Aug 04, 2000 at 03:58:52PM +1000, Mark Hammond wrote:

> It is now Friday afternoon for me.  Regardless of the outcome of this, the
> patch Fredrik posted recently would still seem reasonable, and not have too
> much impact on performance (ie, after locating and loading a .dll/.so, one
> function call isn't too bad!):

> +       if (!Py_IsInitialized())
> +               Py_FatalError("Interpreter not initialized (version

Wasn't there a problem with this, because the 'Py_FatalError()' would be the
one in the uninitialized library and thus result in the same tstate error ?
Perhaps it needs a separate error message, that avoids the usual Python
cleanup and trickery and just prints the error message and exits ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From MarkH@ActiveState.com  Fri Aug  4 09:20:04 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 18:20:04 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <20000804101456.H266@xs4all.nl>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEFHDDAA.MarkH@ActiveState.com>

> Wasn't there a problem with this, because the 'Py_FatalError()'
> would be the
> one in the uninitialized library and thus result in the same
> tstate error ?
> Perhaps it needs a separate error message, that avoids the usual Python
> cleanup and trickery and just prints the error message and exits ?

I would obviously need to test this, but a cursory look at Py_FatalError()
implies it does not touch the thread lock - simply an fprintf, and an
abort() (and for debug builds on Windows, an offer to break into the
debugger).

Regardless, I'm looking for a comment on the concept, and I will make sure
that whatever I do actually works ;-)

Mark.



From effbot@telia.com  Fri Aug  4 09:30:25 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 4 Aug 2000 10:30:25 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <200008022318.SAA04558@cj20424-a.reston1.va.home.com> <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com> <20000804101456.H266@xs4all.nl>
Message-ID: <012b01bffdee$3dadb020$f2a6b5d4@hagrid>

thomas wrote:
> > +       if (!Py_IsInitialized())
> > +               Py_FatalError("Interpreter not initialized (version
>
> Wasn't there a problem with this, because the 'Py_FatalError()' would be the
> one in the uninitialized library and thus result in the same tstate error ?

you mean this one:

  Py_FatalError("PyThreadState_Get: no current thread");

> Perhaps it needs a separate error message, that avoids the usual Python
> cleanup and trickery and just prints the error message and exits ?

void
Py_FatalError(char *msg)
{
 fprintf(stderr, "Fatal Python error: %s\n", msg);
#ifdef macintosh
 for (;;);
#endif
#ifdef MS_WIN32
 OutputDebugString("Fatal Python error: ");
 OutputDebugString(msg);
 OutputDebugString("\n");
#ifdef _DEBUG
 DebugBreak();
#endif
#endif /* MS_WIN32 */
 abort();
}

</F>



From ping@lfw.org  Fri Aug  4 09:38:12 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Fri, 4 Aug 2000 01:38:12 -0700 (PDT)
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <Pine.LNX.4.10.10008040136490.5008-100000@localhost>

On Thu, 3 Aug 2000, Tim Peters wrote:
> 
> >>> "\x123465" # \x12 -> \022, "3456" left alone
> '\0223456'
> >>> "\x65"
> 'e'
> >>> "\x1"
> ValueError
> >>> "\x\x"
> ValueError
> >>>

I'm quite certain that this should be a SyntaxError, not a ValueError:

    >>> "\x1"
    SyntaxError: two hex digits are required after \x
    >>> "\x\x"
    SyntaxError: two hex digits are required after \x

Otherwise, +1.  Sounds great.


-- ?!ng



From tim_one@email.msn.com  Fri Aug  4 10:26:29 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 4 Aug 2000 05:26:29 -0400
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <Pine.LNX.4.10.10008040136490.5008-100000@localhost>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEPPGNAA.tim_one@email.msn.com>

[Tim Peters]
> >>> "\x123465" # \x12 -> \022, "3456" left alone
> '\0223456'
> >>> "\x65"
> 'e'
> >>> "\x1"
> ValueError
> >>> "\x\x"
> ValueError
> >>>

[?!ng]
> I'm quite certain that this should be a SyntaxError, not a
> ValueError:
>
>     >>> "\x1"
>     SyntaxError: two hex digits are required after \x
>     >>> "\x\x"
>     SyntaxError: two hex digits are required after \x
>
> Otherwise, +1.  Sounds great.

SyntaxError was my original pick too.  Guido picked ValueError instead
because the corresponding "not enough hex digits" error in Unicode strings
for damaged \u1234 escapes raises UnicodeError today, which is a subclass of
ValueError.

I couldn't care less, and remain +1 either way.  On the chance that the BDFL
may have changed his mind, I've copied him on this msg.  This is your one &
only chance to prevail <wink>.

just-so-long-as-it's-not-XEscapeError-ly y'rs  - tim
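How this eventually shook out can be checked against a present-day Python 3 (observed behavior of a later interpreter, not the 1.6 code under discussion): \x consumes exactly two hex digits, and a truncated escape is rejected when the literal is compiled.

```python
# Observed in Python 3: \x12 becomes one character, the rest stays literal.
assert "\x65" == "e"
assert "\x123465" == "\x12" + "3465"
assert len("\x123465") == 5

# A truncated \x escape never reaches runtime: compiling the literal fails.
try:
    compile(r'"\x1"', "<example>", "eval")
except SyntaxError:
    truncated_raises = "SyntaxError"
else:
    truncated_raises = None
assert truncated_raises == "SyntaxError"
```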




From mal@lemburg.com  Fri Aug  4 11:03:49 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 04 Aug 2000 12:03:49 +0200
Subject: [Python-Dev] Go \x yourself
References: <LNBBLJKPBEHFEDALKOLCIEPPGNAA.tim_one@email.msn.com>
Message-ID: <398A9505.A88D8F93@lemburg.com>

[Wow, 5:26 in the morning and still (or already) up and running...]

Tim Peters wrote:
> 
> [Tim Peters]
> > >>> "\x123465" # \x12 -> \022, "3456" left alone
> > '\0223456'
> > >>> "\x65"
> > 'e'
> > >>> "\x1"
> > ValueError
> > >>> "\x\x"
> > ValueError
> > >>>
> 
> [?!ng]
> > I'm quite certain that this should be a SyntaxError, not a
> > ValueError:
> >
> >     >>> "\x1"
> >     SyntaxError: two hex digits are required after \x
> >     >>> "\x\x"
> >     SyntaxError: two hex digits are required after \x
> >
> > Otherwise, +1.  Sounds great.
> 
> SyntaxError was my original pick too.  Guido picked ValueError instead
> because the corresponding "not enough hex digits" error in Unicode strings
> for damaged \u1234 escapes raises UnicodeError today, which is a subclass of
> ValueError.
> 
> I couldn't care less, and remain +1 either way.  On the chance that the BDFL
> may have changed his mind, I've copied him on this msg.  This is your one &
> only chance to prevail <wink>.

The reason for Unicode raising a UnicodeError is that the
string is passed through a codec in order to be converted to
Unicode. Codecs raise ValueErrors for encoding errors.

The "\x..." errors should probably be handled in the same
way to assure forward compatibility (they might be passed through
codecs as well in some future Python version in order to
implement source code encodings).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/
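MAL's codec argument can be checked directly in any recent Python: a damaged \u escape going through the unicode_escape codec raises a UnicodeError, and UnicodeError is a subclass of ValueError. A minimal sketch of the observed behavior:

```python
# UnicodeError sits under ValueError, so code catching ValueError
# also catches codec failures such as a truncated \uXXXX escape.
assert issubclass(UnicodeError, ValueError)

caught = None
try:
    b"\\u123".decode("unicode_escape")  # truncated \uXXXX escape
except UnicodeError as exc:
    caught = exc
assert isinstance(caught, ValueError)
```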


From akuchlin@mems-exchange.org  Fri Aug  4 13:45:06 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 4 Aug 2000 08:45:06 -0400
Subject: [Python-Dev] curses progress
In-Reply-To: <200008040747.DAA02323@snark.thyrsus.com>; from esr@snark.thyrsus.com on Fri, Aug 04, 2000 at 03:47:34AM -0400
References: <200008040747.DAA02323@snark.thyrsus.com>
Message-ID: <20000804084506.B5870@newcnri.cnri.reston.va.us>

On Fri, Aug 04, 2000 at 03:47:34AM -0400, Eric S. Raymond wrote:
>OK, I've added docs for curses.textpad and curses.wrapper.  Did we
>ever settle on a final location in the distribution tree for the
>curses HOWTO?

Fred and GvR thought a separate SF project would be better, so I
created http://sourceforge.net/projects/py-howto .  You've already
been added as a developer, as have Moshe and Fred.  Just check out the
CVS tree (directory, really) and put it in the Doc/ subdirectory of
the Python CVS tree.  Preparations for a checkin mailing list are
progressing, but still not complete.

--amk


From thomas@xs4all.net  Fri Aug  4 14:01:35 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 15:01:35 +0200
Subject: [Python-Dev] PEP 203 Augmented Assignment
In-Reply-To: <200007270559.AAA04753@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Jul 27, 2000 at 12:59:15AM -0500
References: <20000725230322.N266@xs4all.nl> <200007270559.AAA04753@cj20424-a.reston1.va.home.com>
Message-ID: <20000804150134.J266@xs4all.nl>

[Don't be scared, I'm revisiting this thread for a purpose -- this isn't a
time jump ;-)]

On Thu, Jul 27, 2000 at 12:59:15AM -0500, Guido van Rossum wrote:

> I'm making up opcodes -- the different variants of LOAD and STORE
> don't matter.  On the right I'm displaying the stack contents after
> execution of the opcode (push appends to the end).  I'm writing
> 'result' to indicate the result of the += operator.

>   a[i] += b
> 
>       LOAD a			[a]
>       DUP			[a, a]
>       LOAD i			[a, a, i]
>       DUP			[a, a, i, i]
>       ROT3			[a, i, a, i]
>       GETITEM			[a, i, a[i]]
>       LOAD b			[a, i, a[i], b]
>       AUGADD			[a, i, result]
>       SETITEM			[]
> 
> I'm leaving the slice variant out; I'll get to that in a minute.

[ And later you gave an example of slicing using slice objects, rather than
the *SLICE+x opcodes ]

I have two tiny problems with making augmented assignment use the current
LOAD/STORE opcodes in the way Guido pointed out, above. One has to do with
the order of the arguments, and the other with ROT_FOUR. And they're closely
related, too :P

The question is in what order the expression

x += y

is evaluated. 

x = y

evaluates 'y' first, then 'x', but 

x + y

evaluates 'x' first, and then 'y'. 

x = x + y

Would thus evaluate 'x', then 'y', and then 'x' (for storing the result.)
(The problem isn't with single-variable expressions like these examples, of
course, but with expressions with side effects.)

I think it makes sense to make '+=' like '+', in that it evaluates the lhs
first. However, '+=' is as much '=' as it is '+', so it also makes sense to
evaluate the rhs first. There are plenty of arguments both ways, and both
sides of my brain have been beating each other with spiked clubs for the
better part of a day now ;) On the other hand, how important is this issue ?
Does Python say anything about the order of argument evaluation ? Does it
need to ?

After making up your mind about the above issue, there's another problem,
and that's the generated bytecode.

If '+=' should be as '=' and evaluate the rhs first, here's what the
bytecode would have to look like for the most complicated case (two-argument
slicing.)

a[b:c] += i

LOAD i			[i]
LOAD a			[i, a]
DUP_TOP			[i, a, a]
LOAD b			[i, a, a, b]
DUP_TOP			[i, a, a, b, b]
ROT_THREE		[i, a, b, a, b]
LOAD c			[i, a, b, a, b, c]
DUP_TOP			[i, a, b, a, b, c, c]
ROT_FOUR		[i, a, b, c, a, b, c]
SLICE+3			[i, a, b, c, a[b:c]]
ROT_FIVE		[a[b:c], i, a, b, c]
ROT_FIVE		[c, a[b:c], i, a, b]
ROT_FIVE		[b, c, a[b:c], i, a]
ROT_FIVE		[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
STORE_SLICE+3		[]

So, *two* new bytecodes, 'ROT_FOUR' and 'ROT_FIVE', just to get the right
operands in the right place.

On the other hand, if the *left* hand side is evaluated first, it would look
like this:

a[b:c] += i

LOAD a			[a]
DUP_TOP			[a, a]
LOAD b			[a, a, b]
DUP_TOP			[a, a, b, b]
ROT_THREE		[a, b, a, b]
LOAD c			[a, b, a, b, c]
DUP_TOP			[a, b, a, b, c, c]
ROT_FOUR		[a, b, c, a, b, c]
SLICE+3			[a, b, c, a[b:c]]
LOAD i			[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
STORE_SLICE+3		[]

A lot shorter, and it only needs ROT_FOUR, not ROT_FIVE. An alternative
solution is to drop ROT_FOUR too, and instead use a DUP_TOPX argument-opcode
that duplicates the top 'x' items:

LOAD a			[a]
LOAD b			[a, b]
LOAD c			[a, b, c]
DUP_TOPX 3		[a, b, c, a, b, c]
SLICE+3			[a, b, c, a[b:c]]
LOAD i			[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
STORE_SLICE+3		[]

I think 'DUP_TOPX' makes more sense than ROT_FOUR, as DUP_TOPX could be used
in the bytecode for 'a[b] += i' and 'a.b += i' as well. (Guido's example
would become something like this:

a[b] += i

LOAD a			[a]
LOAD b			[a, b]
DUP_TOPX 2		[a, b, a, b]
BINARY_SUBSCR		[a, b, a[b]]
LOAD i			[a, b, a[b], i]
INPLACE_ADD		[a, b, result]
STORE_SUBSCR		[]

So, *bytecode* wise, evaluating the lhs of '+=' first is easiest. It
requires a lot more hacking of compile.c, but I think I can manage that.
However, one part of me is still yelling that '+=' should evaluate its
arguments like '=', not '+'. Which part should I lobotomize ? :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
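The evaluation-order question raised above can be made visible with a small instrumented container (a hypothetical `Tracker` class, purely for illustration). In the semantics Python ended up with, `t[0] += rhs()` evaluates the target first, reads it once, then evaluates the right-hand side, then does a single store back:

```python
calls = []

class Tracker:
    """Records every __getitem__/__setitem__ so the order is visible."""
    def __init__(self):
        self.data = {0: 10}

    def __getitem__(self, key):
        calls.append(("get", key))
        return self.data[key]

    def __setitem__(self, key, value):
        calls.append(("set", key))
        self.data[key] = value

def rhs():
    calls.append("rhs")
    return 5

t = Tracker()
t[0] += rhs()

# Target read first, then the rhs, then a single store back:
assert calls == [("get", 0), "rhs", ("set", 0)]
assert t.data[0] == 15
```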


From thomas@xs4all.net  Fri Aug  4 14:08:58 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 15:08:58 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test test_linuxaudiodev.py,1.1,1.2
In-Reply-To: <200008041259.FAA24651@slayer.i.sourceforge.net>; from moshez@users.sourceforge.net on Fri, Aug 04, 2000 at 05:59:43AM -0700
References: <200008041259.FAA24651@slayer.i.sourceforge.net>
Message-ID: <20000804150858.K266@xs4all.nl>

On Fri, Aug 04, 2000 at 05:59:43AM -0700, Moshe Zadka wrote:

> Log Message:
> The only error the test suite skips is currently ImportError -- so that's
> what we raise. If you see a problem with this patch, say so and I'll
> retract.

test_support creates a class 'TestSkipped', which has a docstring that
suggests it can be used in the same way as ImportError. However, it doesn't
work ! Is that intentional ? The easiest fix to make it work is probably
making TestSkipped a subclass of ImportError, rather than Error (which it
is, now.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug  4 14:11:38 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 4 Aug 2000 16:11:38 +0300 (IDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test
 test_linuxaudiodev.py,1.1,1.2
In-Reply-To: <20000804150858.K266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008041610180.16446-100000@sundial>

On Fri, 4 Aug 2000, Thomas Wouters wrote:

> On Fri, Aug 04, 2000 at 05:59:43AM -0700, Moshe Zadka wrote:
> 
> > Log Message:
> > The only error the test suite skips is currently ImportError -- so that's
> > what we raise. If you see a problem with this patch, say so and I'll
> > retract.
> 
> test_support creates a class 'TestSkipped', which has a docstring that
> suggests it can be used in the same way as ImportError. However, it doesn't
> work ! Is that intentional ? The easiest fix to make it work is probably
> making TestSkipped a subclass of ImportError, rather than Error (which it
> is, now.)

Thanks for the tip, Thomas! I didn't know about it -- but I just read
the regrtest.py code, and it seemed to be the only exception it catches.
Why not just add test_support.TestSkipped to the exception it catches
when it catches the ImportError?

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From thomas@xs4all.net  Fri Aug  4 14:19:31 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 15:19:31 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test test_linuxaudiodev.py,1.1,1.2
In-Reply-To: <Pine.GSO.4.10.10008041610180.16446-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 04, 2000 at 04:11:38PM +0300
References: <20000804150858.K266@xs4all.nl> <Pine.GSO.4.10.10008041610180.16446-100000@sundial>
Message-ID: <20000804151931.L266@xs4all.nl>

On Fri, Aug 04, 2000 at 04:11:38PM +0300, Moshe Zadka wrote:

> > test_support creates a class 'TestSkipped', which has a docstring that
> > suggests it can be used in the same way as ImportError. However, it doesn't
> > work ! Is that intentional ? The easiest fix to make it work is probably
> > making TestSkipped a subclass of ImportError, rather than Error (which it
> > is, now.)

> Thanks for the tip, Thomas! I didn't know about it -- but I just read
> the regrtest.py code, and it seemed to be the only exception it catches.
> Why not just add test_support.TestSkipped to the exception it catches
> when it catches the ImportError?

Right. Done. Now to update all those tests that raise ImportError when they
*mean* 'TestSkipped' :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
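The fix settled on here is the plain "add it to the except tuple" pattern rather than making TestSkipped subclass ImportError. A minimal standalone sketch (hypothetical names mirroring test_support/regrtest, not the actual 1.6 code):

```python
class Error(Exception):
    """Base error for the test package (mirrors test_support.Error)."""

class TestSkipped(Error):
    """Raised to skip a test, without subclassing ImportError."""

def run_test(test):
    # Catch both exceptions in one tuple, as regrtest was changed to do.
    try:
        test()
    except (ImportError, TestSkipped) as exc:
        return "skipped: %s" % exc
    return "ok"

def needs_audio():
    raise TestSkipped("no audio device")

assert run_test(needs_audio) == "skipped: no audio device"
assert run_test(lambda: None) == "ok"
```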


From fdrake@beopen.com  Fri Aug  4 14:26:53 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 4 Aug 2000 09:26:53 -0400 (EDT)
Subject: [Python-Dev] curses progress
In-Reply-To: <200008040747.DAA02323@snark.thyrsus.com>
References: <200008040747.DAA02323@snark.thyrsus.com>
Message-ID: <14730.50333.391218.736370@cj42289-a.reston1.va.home.com>

Eric S. Raymond writes:
 > OK, I've added docs for curses.textpad and curses.wrapper.  Did we
 > ever settle on a final location in the distribution tree for the
 > curses HOWTO?

  Andrew is creating a new project on SourceForge.
  I think this is the right thing to do.  We may want to discuss
packaging, to make it easier for users to get to the documentation
they need; this will have to wait until after 1.6.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From guido@beopen.com  Fri Aug  4 15:59:35 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 09:59:35 -0500
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: Your message of "Fri, 04 Aug 2000 15:58:52 +1000."
 <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com>
Message-ID: <200008041459.JAA01621@cj20424-a.reston1.va.home.com>

> [Re forcing all extensions to use PythonExtensionInit_XXX]

[GvR]
> > I sort-of like this idea -- at least at the +0 level.

[MH]
> Since this email there have been some strong objections to this.  I too
> would weigh in at -1 for this, simply for the amount of work it would cost
> me personally!

OK.  Dead it is.  -1.

> Shall this be checked in to the 1.6 and 2.0 trees?

Yes, I'll do so.

> "Fatal Python error: Interpreter not initialized" might not be too helpful,
> but it's surely better than "PyThreadState_Get: no current thread"...

Yes.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Fri Aug  4 16:06:33 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:06:33 -0500
Subject: [Python-Dev] FW: submitting patches against 1.6a2
In-Reply-To: Your message of "Fri, 04 Aug 2000 03:06:21 -0400."
 <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com>
Message-ID: <200008041506.KAA01874@cj20424-a.reston1.va.home.com>

> Anyone competent with urllib care to check out this fellow's complaint?

It arrived on June 14, so I probably ignored it -- with 1000s of other
messages received while I was on vacation.  This was before we started
using the SF PM.

But I still have his email.  Someone else please look at this!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


Subject: [Patches] urllib.py patch
From: Paul Schreiber <paul@commerceflow.com>
To: patches@python.org
Date: Wed, 14 Jun 2000 16:52:02 -0700
Content-Type: multipart/mixed;
 boundary="------------3EE36A3787159ED881FD3EC3"

This is a multi-part message in MIME format.
--------------3EE36A3787159ED881FD3EC3
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

I confirm that, to the best of my knowledge and belief, this
contribution is free of any claims of third parties under
copyright, patent or other rights or interests ("claims").  To
the extent that I have any such claims, I hereby grant to CNRI a
nonexclusive, irrevocable, royalty-free, worldwide license to
reproduce, distribute, perform and/or display publicly, prepare
derivative versions, and otherwise use this contribution as part
of the Python software and its related documentation, or any
derivative versions thereof, at no cost to CNRI or its licensed
users, and to authorize others to do so.

I acknowledge that CNRI may, at its sole discretion, decide
whether or not to incorporate this contribution in the Python
software and its related documentation.  I further grant CNRI
permission to use my name and other identifying information
provided to CNRI by me for use in connection with the Python
software and its related documentation.

Patch description
-----------------
This addresses four issues:

(1) usernames and passwords in urls with special characters are now
decoded properly. i.e. http://foo%2C:bar@www.whatever.com/

(2) Basic Auth support has been added to HTTPS, like it was in HTTP.

(3) Version 1.92 sent the POSTed data, but did not deal with errors
(HTTP responses other than 200) properly. HTTPS now behaves the same way
HTTP does.

(4) made URL-checking behave the same way with HTTPS as it does with
HTTP (changed == to !=).


Paul Schreiber
--------------3EE36A3787159ED881FD3EC3
Content-Type: text/plain; charset=us-ascii;
 name="urllib-diff-2"
Content-Disposition: inline;
 filename="urllib-diff-2"
Content-Transfer-Encoding: 7bit

*** urllib.old	Tue Jun 13 18:27:02 2000
--- urllib.py	Tue Jun 13 18:33:27 2000
***************
*** 302,316 ****
          def open_https(self, url, data=None):
              """Use HTTPS protocol."""
              import httplib
              if type(url) is type(""):
                  host, selector = splithost(url)
!                 user_passwd, host = splituser(host)
              else:
                  host, selector = url
                  urltype, rest = splittype(selector)
!                 if string.lower(urltype) == 'https':
                      realhost, rest = splithost(rest)
!                     user_passwd, realhost = splituser(realhost)
                      if user_passwd:
                          selector = "%s://%s%s" % (urltype, realhost, rest)
                  #print "proxy via https:", host, selector
--- 302,325 ----
          def open_https(self, url, data=None):
              """Use HTTPS protocol."""
              import httplib
+             user_passwd = None
              if type(url) is type(""):
                  host, selector = splithost(url)
!                 if host:
!                     user_passwd, host = splituser(host)
!                     host = unquote(host)
!                 realhost = host
              else:
                  host, selector = url
                  urltype, rest = splittype(selector)
!                 url = rest
!                 user_passwd = None
!                 if string.lower(urltype) != 'https':
!                     realhost = None
!                 else:
                      realhost, rest = splithost(rest)
!                     if realhost:
!                         user_passwd, realhost = splituser(realhost)
                      if user_passwd:
                          selector = "%s://%s%s" % (urltype, realhost, rest)
                  #print "proxy via https:", host, selector
***************
*** 331,336 ****
--- 340,346 ----
              else:
                  h.putrequest('GET', selector)
              if auth: h.putheader('Authorization: Basic %s' % auth)
+             if realhost: h.putheader('Host', realhost)
              for args in self.addheaders: apply(h.putheader, args)
              h.endheaders()
              if data is not None:
***************
*** 340,347 ****
              if errcode == 200:
                  return addinfourl(fp, headers, url)
              else:
!                 return self.http_error(url, fp, errcode, errmsg, headers)
!   
      def open_gopher(self, url):
          """Use Gopher protocol."""
          import gopherlib
--- 350,360 ----
              if errcode == 200:
                  return addinfourl(fp, headers, url)
              else:
!                 if data is None:
!                     return self.http_error(url, fp, errcode, errmsg, headers)
!                 else:
!                     return self.http_error(url, fp, errcode, errmsg, headers, data)
! 
      def open_gopher(self, url):
          """Use Gopher protocol."""
          import gopherlib
***************
*** 872,878 ****
          _userprog = re.compile('^([^@]*)@(.*)$')
  
      match = _userprog.match(host)
!     if match: return match.group(1, 2)
      return None, host
  
  _passwdprog = None
--- 885,891 ----
          _userprog = re.compile('^([^@]*)@(.*)$')
  
      match = _userprog.match(host)
!     if match: return map(unquote, match.group(1, 2))
      return None, host
  
  _passwdprog = None


--------------3EE36A3787159ED881FD3EC3--


_______________________________________________
Patches mailing list
Patches@python.org
http://www.python.org/mailman/listinfo/patches
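Issue (1) of this patch (percent-decoding userinfo in URLs) is handled in modern Python by urllib.parse; a sketch of the equivalent behavior today, using the patch's own example URL:

```python
from urllib.parse import urlsplit, unquote

# Percent-escaped characters in the userinfo part, as in the
# http://foo%2C:bar@... example from the patch description.
u = urlsplit("http://foo%2C:bar@www.whatever.com/")

# unquote() is a no-op on already-plain text, so this holds whether or
# not the accessor itself decodes the escapes.
assert unquote(u.username) == "foo,"
assert u.password == "bar"
assert u.hostname == "www.whatever.com"
```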


From guido@beopen.com  Fri Aug  4 16:11:03 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:11:03 -0500
Subject: [Python-Dev] Go \x yourself
In-Reply-To: Your message of "Fri, 04 Aug 2000 01:38:12 MST."
 <Pine.LNX.4.10.10008040136490.5008-100000@localhost>
References: <Pine.LNX.4.10.10008040136490.5008-100000@localhost>
Message-ID: <200008041511.KAA01925@cj20424-a.reston1.va.home.com>

> I'm quite certain that this should be a SyntaxError, not a ValueError:
> 
>     >>> "\x1"
>     SyntaxError: two hex digits are required after \x
>     >>> "\x\x"
>     SyntaxError: two hex digits are required after \x
> 
> Otherwise, +1.  Sounds great.

No, problems with literal interpretations traditionally raise
"runtime" exceptions rather than syntax errors.  E.g.

>>> 111111111111111111111111111111111111
OverflowError: integer literal too large
>>> u'\u123'
UnicodeError: Unicode-Escape decoding error: truncated \uXXXX
>>>

Note that UnicodeError is a subclass of ValueError.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug  4 15:11:00 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 4 Aug 2000 17:11:00 +0300 (IDT)
Subject: [Python-Dev] FW: submitting patches against 1.6a2
In-Reply-To: <200008041506.KAA01874@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008041709450.16446-100000@sundial>

On Fri, 4 Aug 2000, Guido van Rossum wrote:

> > Anyone competent with urllib care to check out this fellow's complaint?
> 
> It arrived on June 14, so I probably ignored it -- with 1000s of other
> messages received while I was on vacation.  This was before we started
> using the SF PM.
> 
> But I still have his email.  Someone else please look at this!

AFAIK, those are the two urllib patches assigned to Jeremy.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From akuchlin@mems-exchange.org  Fri Aug  4 15:13:05 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 4 Aug 2000 10:13:05 -0400
Subject: [Python-Dev] FW: submitting patches against 1.6a2
In-Reply-To: <200008041506.KAA01874@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 04, 2000 at 10:06:33AM -0500
References: <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com> <200008041506.KAA01874@cj20424-a.reston1.va.home.com>
Message-ID: <20000804101305.A11929@kronos.cnri.reston.va.us>

On Fri, Aug 04, 2000 at 10:06:33AM -0500, Guido van Rossum wrote:
>It arrived on June 14, so I probably ignored it -- with 1000s of other
>messages received while I was on vacation.  This was before we started
>using the SF PM.

I think this is SF patch#100880 -- I entered it so it wouldn't get lost.

--amk


From guido@beopen.com  Fri Aug  4 16:26:45 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:26:45 -0500
Subject: [Python-Dev] PEP 203 Augmented Assignment
In-Reply-To: Your message of "Fri, 04 Aug 2000 15:01:35 +0200."
 <20000804150134.J266@xs4all.nl>
References: <20000725230322.N266@xs4all.nl> <200007270559.AAA04753@cj20424-a.reston1.va.home.com>
 <20000804150134.J266@xs4all.nl>
Message-ID: <200008041526.KAA02071@cj20424-a.reston1.va.home.com>

[Thomas]
> The question is in what order the expression
> 
> x += y
> 
> is evaluated. 
> 
> x = y
> 
> evaluates 'y' first, then 'x', but 
> 
> x + y
> 
> evaluates 'x' first, and then 'y'. 
> 
> x = x + y
> 
> Would thus evaluate 'x', then 'y', and then 'x' (for storing the result.)
> (The problem isn't with single-variable expressions like these examples, of
> course, but with expressions with side effects.)

Yes.  And note that the Python reference manual specifies the
execution order (or at least tries to) -- I figured that in a
user-friendly interpreted language, predictability is more important
than some optimizer being able to speed your code up a tiny bit by
rearranging evaluation order.

> I think it makes sense to make '+=' like '+', in that it evaluates the lhs
> first. However, '+=' is as much '=' as it is '+', so it also makes sense to
> evaluate the rhs first. There are plenty of arguments both ways, and both
> sides of my brain have been beating each other with spiked clubs for the
> better part of a day now ;) On the other hand, how important is this issue ?
> Does Python say anything about the order of argument evaluation ? Does it
> need to ?

I say that in x += y, x should be evaluated before y.

> After making up your mind about the above issue, there's another problem,
> and that's the generated bytecode.
[...]
> A lot shorter, and it only needs ROT_FOUR, not ROT_FIVE. An alternative
> solution is to drop ROT_FOUR too, and instead use a DUP_TOPX argument-opcode
> that duplicates the top 'x' items:

Sure.

> However, one part of me is still yelling that '+=' should evaluate its
> arguments like '=', not '+'. Which part should I lobotomize ? :)

That part.  If you see x+=y as shorthand for x=x+y, x gets evaluated
before y anyway!  We're saving the second evaluation of x, not the
first one!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Fri Aug  4 16:46:57 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:46:57 -0500
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: Your message of "Thu, 03 Aug 2000 17:21:04 EST."
 <14729.61520.11958.530601@beluga.mojam.com>
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net> <3989454C.5C9EF39B@lemburg.com> <200008031256.HAA06107@cj20424-a.reston1.va.home.com> <398979D0.5AF80126@lemburg.com>
 <14729.61520.11958.530601@beluga.mojam.com>
Message-ID: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>

[Skip]
> eh... I don't like these do two things at once kind of methods.  I see
> nothing wrong with
> 
>     >>> dict = {}
>     >>> dict['hello'] = dict.get('hello', [])
>     >>> dict['hello'].append('world')
>     >>> print dict
>     {'hello': ['world']}
> 
> or
> 
>     >>> d = dict['hello'] = dict.get('hello', [])
>     >>> d.insert(0, 'cruel')
>     >>> print dict
>     {'hello': ['cruel', 'world']}
> 
> for the obsessively efficiency-minded folks.

Good!  Two lines instead of three, and only two dict lookups in the
latter one.

> Also, we're talking about a method that would generally only be useful when
> dictionaries have values which were mutable objects.  Irregardless of how
> useful instances and lists are, I still find that my predominant day-to-day
> use of dictionaries is with strings as keys and values.  Perhaps that's just
> the nature of my work.

Must be.  I have used the above two idioms many times -- a dict of
lists is pretty common.  I believe that the fact that you don't need
it is the reason why you don't like it.

I believe that as long as we agree that

  dict['hello'] += 1

is clearer (less strain on the reader's brain) than

  dict['hello'] = dict['hello'] + 1

we might as well look for a clearer way to spell the above idiom.

My current proposal (violating my own embargo against posting proposed
names to the list :-) would be

  dict.default('hello', []).append('hello')
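The method proposed here was in fact added to released Pythons, though under the name `setdefault` rather than `default`. A quick comparison of the two idioms from this thread:

```python
d = {}

# The two-line idiom Skip posted:
d['hello'] = d.get('hello', [])
d['hello'].append('world')

# The proposed one-liner; in released Pythons the method was
# ultimately spelled setdefault rather than default:
d2 = {}
d2.setdefault('hello', []).append('world')

assert d == d2 == {'hello': ['world']}
```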

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From paul@prescod.net  Fri Aug  4 16:52:11 2000
From: paul@prescod.net (Paul Prescod)
Date: Fri, 04 Aug 2000 11:52:11 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>              <3986794E.ADBB938C@prescod.net>  <200008011820.NAA30284@cj20424-a.reston1.va.home.com> <004d01bffc50$522fa2a0$f2a6b5d4@hagrid>
Message-ID: <398AE6AB.9D8F943B@prescod.net>

Fredrik Lundh wrote:
> 
> ...
> 
> how about letting _winreg export all functions with their
> win32 names, and adding a winreg.py which looks some-
> thing like this:
> 
>     from _winreg import *
> 
>     class Key:
>         ....
> 
>     HKEY_CLASSES_ROOT = Key(...)
>     ...

To me, that would defeat the purpose. Have you looked at the "*"
exported by _winreg? The whole point is to impose some organization on
something that is totally disorganized (because that's how the C module
is).

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"


From skip@mojam.com (Skip Montanaro)  Fri Aug  4 19:07:28 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Fri, 4 Aug 2000 13:07:28 -0500 (CDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
 <14728.63466.263123.434708@anthem.concentric.net>
 <3989454C.5C9EF39B@lemburg.com>
 <200008031256.HAA06107@cj20424-a.reston1.va.home.com>
 <398979D0.5AF80126@lemburg.com>
 <14729.61520.11958.530601@beluga.mojam.com>
 <200008041546.KAA02168@cj20424-a.reston1.va.home.com>
Message-ID: <14731.1632.44037.499807@beluga.mojam.com>

    >> Also, we're talking about a method that would generally only be
    >> useful when dictionaries have values which were mutable objects.
    >> Irregardless of how useful instances and lists are, I still find that
    >> my predominant day-to-day use of dictionaries is with strings as keys
    >> and values.  Perhaps that's just the nature of my work.

    Guido> Must be.  I have used the above two idioms many times -- a dict
    Guido> of lists is pretty common.  I believe that the fact that you
    Guido> don't need it is the reason why you don't like it.

I do use lists in dicts as well, it's just that it seems to me that using
strings as values (especially because I use bsddb a lot and often want to
map dictionaries to files) dominates.  The two examples I posted are what
I've used for a long time.  I guess I just don't find them to be big
limitations.

Skip


From barry@scottb.demon.co.uk  Sat Aug  5 00:19:52 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Sat, 5 Aug 2000 00:19:52 +0100
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <01d701bffcd7$46a74a00$f2a6b5d4@hagrid>
Message-ID: <000d01bffe6a$7e4bab60$060210ac@private>

> > Yes indeed once the story of 1.6 and 2.0 is out I expect folks
> > will skip 1.6.   For example, if your win32 stuff is not ported then
> > Python 1.6 is not usable on Windows/NT.
> 
> "not usable"?
> 
> guess you haven't done much cross-platform development lately...

	True. On Unix I have an ISDN status monitor that depends on
	FreeBSD interfaces and PIL. On Windows I have an SCM
	solution that depends on COM to drive SourceSafe.

	Without Mark's COM support I cannot run any of my code on
	Windows.

> > Change the init function name to a new name PythonExtensionInit_ say.
> > Pass in the API version for the extension writer to check. If the
> > version is bad for this extension returns without calling any python
> 
> huh?  are you seriously proposing to break every single C extension
> ever written -- on each and every platform -- just to trap an error
> message caused by extensions linked against 1.5.2 on your favourite
> platform?

	What makes you think that a crash will not happen under Unix
	when you change the API? You just don't get the Windows crash.

	As this thread has pointed out you have no intention of checking
	for binary compatibility on the API as you move up versions.
 
> > Small code change in python core. But need to tell extension writers
> > what the new interface is and update all extensions within the python
> > CVS tree.
>
> you mean "update the source code for all extensions ever written."

	Yes, I'm aware of the impact.

> -1
> 


From gward@python.net  Sat Aug  5 01:53:09 2000
From: gward@python.net (Greg Ward)
Date: Fri, 4 Aug 2000 20:53:09 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 04, 2000 at 12:27:32PM +1000
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>
Message-ID: <20000804205309.A1013@beelzebub>

On 04 August 2000, Mark Hammond said:
> I would prefer python20_bcpp.lib, but that is not an issue.

Good suggestion: the contents of the library are more important than the 
format.  Rene, can you make this change and include it in your next
patch?  Or did you have some hidden, subtle reason for "bcpp_python20" as 
opposed to "python20_bcpp"?

> I am a little confused by the intention, tho.  Wouldn't it make sense to
> have Borland builds of the core create a Python20.lib, then we could keep
> the pragma in too?
> 
> If people want to use Borland for extensions, can't we ask them to use that
> same compiler to build the core too?  That would seem to make lots of the
> problems go away?

But that requires people to build all of Python from source, which I'm
guessing is a bit more bothersome than building an extension or two from 
source.  Especially since Python is already distributed as a very
easy-to-use binary installer for Windows, but most extensions are not.

Rest assured that we probably won't be making things *completely*
painless for those who do not toe Chairman Bill's party line and insist
on using "non-standard" Windows compilers.  They'll probably have to get
python20_bcpp.lib (or python20_gcc.lib, or python20_lcc.lib) on their
own -- whether downloaded or generated, I don't know.  But the
alternative is to include 3 or 4 python20_xxx.lib files in the standard
Windows distribution, which I think is silly.

> But assuming there are good reasons, I am happy.  It won't bother me for
> some time yet ;-) <just deleted a rant about the fact that anyone on
> Windows who values their time in more than cents-per-hour would use MSVC,
> but deleted it ;->

Then I won't even write my "it's not just about money, it's not even
about features, it's about the freedom to use the software you want to
use no matter what it says in Chairman Bill's book of wisdom" rant.

Windows: the Cultural Revolution of the 90s.  ;-)

        Greg
-- 
Greg Ward - geek-at-large                               gward@python.net
http://starship.python.net/~gward/
What happens if you touch these two wires tog--


From guido@beopen.com  Sat Aug  5 03:27:59 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 21:27:59 -0500
Subject: [Python-Dev] Python 1.6b1 is released!
Message-ID: <200008050227.VAA11161@cj20424-a.reston1.va.home.com>

Python 1.6b1, with the new CNRI open source license, is released today
from the python.org website.  Read all about it:

    http://www.python.org/1.6/

Here's a little background on the new license (also posted on
www.pythonlabs.com):

CNRI has funded Python development for five years and held copyright,
but never placed a CNRI-specific license on the software.  In order to
clarify the licensing, BeOpen.com has been working with CNRI to
produce a new CNRI license.  The result of these discussions (which
included Eric Raymond, Bruce Perens, Richard Stallman and Python
Consortium members) has produced the CNRI Open Source License, under
which Python 1.6b1 has been released.

Bob Weiner, CTO of BeOpen.com, on the result of the licensing
discussions: "Bob Kahn [CNRI's President] worked with us to understand
the particular needs of the Open Source community and Python users.
The result is a very open license."

The new CNRI license was approved by the Python Consortium members, at
a meeting of the Python Consortium on Friday, July 21, 2000 in
Monterey, California.

Eric Raymond, President of the Open Source Initiative (OSI), reports
that OSI's Board of Directors voted to certify the new CNRI license
[modulo minor editing] as fully Open Source compliant.

Richard Stallman, founder of the Free Software Foundation, is in
discussion with CNRI about the new license's compatibility with the
GPL.  We are hopeful that the remaining issues will be resolved in
favor of GPL compatibility before the release of Python 1.6 final.

We would like to thank all who graciously volunteered their time to
help make these results possible: Bob Kahn for traveling out west to
discuss these issues in person; Eric Raymond and Bruce Perens for
their useful contributions to the discussions; Bob Weiner for taking
care of the bulk of the negotiations; Richard Stallman for GNU; and
the Python Consortium representatives for making the consortium
meeting a success!

(And I would personally like to thank Tim Peters for keeping the
newsgroup informed and for significant editing of the text above.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From akuchlin@mems-exchange.org  Sat Aug  5 05:15:22 2000
From: akuchlin@mems-exchange.org (A.M. Kuchling)
Date: Sat, 5 Aug 2000 00:15:22 -0400
Subject: [Python-Dev] python-dev summary posted
Message-ID: <200008050415.AAA00811@207-172-146-87.s87.tnt3.ann.va.dialup.rcn.com>

I've posted the python-dev summary for July 16-31 to
comp.lang.python/python-list; interested people can go check it out.

--amk


From just@letterror.com  Sat Aug  5 09:03:33 2000
From: just@letterror.com (Just van Rossum)
Date: Sat, 05 Aug 2000 09:03:33 +0100
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <fc8mosgajj5db74oijjb8e1vbrrvgf0mi5@4ax.com> <bld7joah8z.fsf@bitdiddle.concentric.net> <9b13RLA800i5EwLY@jessikat.fsnet.co.uk> <8mg3au$rtb$1@nnrp1.deja.com>
Message-ID: <398BCA4F.17E23309@letterror.com>

[ CC-d to python-dev from c.l.py ]

Jeremy Hylton wrote:
> It is a conservative response.  JPython is an implementation of Python,
> and compatibility between Python and JPython is important.  It's not
> required for every language feature, of course; you can't load a Java
> class file in C Python.

Jeremy, have you ever *looked* at stackless? Even though it requires
extensive patches in the eval loop, all additional semantics are nicely
hidden in an extension module. The Java argument is a *very* poor one
because of this. No, you can't load a Java class in CPython, and yes,
"import continuation" fails under JPython. So what?

> I'm not sure what you mean by distinguishing between the semantics of
> continuations and the implementation of Stackless Python.  They are
> both issues!  In the second half of my earlier message, I observed that
> we would never add continuations without a PEP detailing their exact
> semantics.  I do not believe such a specification currently exists for
> stackless Python.

That's completely unfair. Stackless has been around *much* longer than
those silly PEPs. It seems stackless isn't in the same league as, say,
"adding @ to the print statement for something that is almost as
conveniently done with a function". I mean, jeez.

> The PEP would also need to document the C interface and how it affects
> people writing extensions and doing embedded work.  Python is a glue
> language and the effects on the glue interface are also important.

The stackless API is 100% b/w compatible. There are (or could/should be)
additional calls for extension writers and embedders that would like
to take advantage of stackless features, but full compatibility is
*there*. To illustrate this: for Windows as well as MacOS, there are
DLLs for stackless that you just put in place of the original
Python core DLLs, and *everything* just works.

Christian has done an amazing piece of work, and he's gotten much
praise from the community. I mean, if you *are* looking for a killer
feature to distinguish 1.6 from 2.0, I'd know where to look...

Just


From mal@lemburg.com  Sat Aug  5 10:35:06 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 05 Aug 2000 11:35:06 +0200
Subject: [Python-Dev] Python 1.6b1 out ?!
Message-ID: <398BDFCA.4D5A262D@lemburg.com>

Strange: either I missed it or Guido chose to release 1.6b1 
in silence, but I haven't seen any official announcement of the
release available from http://www.python.org/1.6/.

BTW, nice holiday, Guido ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From tim_one@email.msn.com  Sun Aug  6 00:34:43 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 5 Aug 2000 19:34:43 -0400
Subject: [Python-Dev] Python 1.6b1 out ?!
In-Reply-To: <398BDFCA.4D5A262D@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCECEGOAA.tim_one@email.msn.com>

[M.-A. Lemburg]
> Strange: either I missed it or Guido chose to release 1.6b1
> in silence, but I haven't seen any official announcement of the
> release available from http://www.python.org/1.6/.
>
> BTW, nice holiday, Guido ;-)

There's an announcement at the top of http://www.python.org/, though, and an
announcement about the new license at http://www.pythonlabs.com/.  Guido
also posted to comp.lang.python.  You probably haven't seen the latter if
you use the mailing list gateway, because many mailing lists at python.org
coincidentally got hosed at the same time due to a full disk.  Else your
news server simply hasn't gotten it yet (I saw it come across on
netnews.msn.com, but then Microsoft customers get everything first <wink>).




From thomas@xs4all.net  Sat Aug  5 16:18:30 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sat, 5 Aug 2000 17:18:30 +0200
Subject: [Python-Dev] UNPACK_LIST & UNPACK_TUPLE
Message-ID: <20000805171829.N266@xs4all.nl>

I'm a tad confused about the 'UNPACK_LIST' and 'UNPACK_TUPLE' opcodes. There
doesn't seem to be a difference between the two, yet the way they are
compiled is slightly different (but not much.) I can list all the
differences I can see, but I just don't understand them, and because of that
I'm not sure how to handle them in augmented assignment. UNPACK_LIST just
seems so redundant :)

Wouldn't it make sense to remove the difference between the two, or better
yet, remove UNPACK_LIST (and possibly rename UNPACK_TUPLE to UNPACK_SEQ ?)
We already lost bytecode compatibility anyway!
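For what it's worth, the two spellings can be compared by disassembling them. Opcode names have changed over the years; in later Pythons the merge proposed here happened, and both forms compile to a single UNPACK_SEQUENCE:

```python
import dis

def opcodes(src):
    """Return the set of opcode names a statement compiles to."""
    code = compile(src, '<src>', 'exec')
    return {instr.opname for instr in dis.get_instructions(code)}

tuple_style = opcodes("a, b, c = seq")
list_style  = opcodes("[a, b, c] = seq")

# In modern CPython both forms compile identically; the separate
# UNPACK_LIST/UNPACK_TUPLE opcodes discussed here are gone.
assert tuple_style == list_style
assert "UNPACK_SEQUENCE" in tuple_style
```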

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From esr@thyrsus.com  Sun Aug  6 00:46:00 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sat, 5 Aug 2000 19:46:00 -0400
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <398BCA4F.17E23309@letterror.com>; from just@letterror.com on Sat, Aug 05, 2000 at 09:03:33AM +0100
References: <fc8mosgajj5db74oijjb8e1vbrrvgf0mi5@4ax.com> <bld7joah8z.fsf@bitdiddle.concentric.net> <9b13RLA800i5EwLY@jessikat.fsnet.co.uk> <8mg3au$rtb$1@nnrp1.deja.com> <398BCA4F.17E23309@letterror.com>
Message-ID: <20000805194600.A7242@thyrsus.com>

Just van Rossum <just@letterror.com>:
> Christian has done an amazing piece of work, and he's gotten much
> praise from the community. I mean, if you *are* looking for a killer
> feature to distinguish 1.6 from 2.0, I'd know where to look...

I must say I agree.  Something pretty similar to Stackless Python is
going to have to happen anyway for the language to make its next major
advance in capability -- generators, co-routining, and continuations.

I also agree that this is a more important debate, and a harder set of
decisions, than the PEPs.  Which means we should start paying attention
to it *now*.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

I don't like the idea that the police department seems bent on keeping
a pool of unarmed victims available for the predations of the criminal
class.
         -- David Mohler, 1989, on being denied a carry permit in NYC


From bwarsaw@beopen.com  Sun Aug  6 00:50:04 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Sat, 5 Aug 2000 19:50:04 -0400 (EDT)
Subject: [Python-Dev] Python 1.6b1 out ?!
References: <398BDFCA.4D5A262D@lemburg.com>
 <LNBBLJKPBEHFEDALKOLCCECEGOAA.tim_one@email.msn.com>
Message-ID: <14732.43052.91330.426211@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one@email.msn.com> writes:

    TP> There's an announcement at the top of http://www.python.org/,
    TP> though, and an announcement about the new license at
    TP> http://www.pythonlabs.com/.  Guido also posted to
    TP> comp.lang.python.  You probably haven't seen the latter if you
    TP> use the mailing list gateway, because many mailing lists at
    TP> python.org coincidentally got hosed at the same time due to a
    TP> full disk.  Else your news server simply hasn't gotten it yet
    TP> (I saw it come across on netnews.msn.com, but then Microsoft
    TP> customers get everything first <wink>).

And you should soon see the announcement if you haven't already.  All
the mailing lists on py.org should be back on line now.  It'll take a
while to clear out the queue though.

-Barry


From bwarsaw@beopen.com  Sun Aug  6 00:52:05 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Sat, 5 Aug 2000 19:52:05 -0400 (EDT)
Subject: [Python-Dev] UNPACK_LIST & UNPACK_TUPLE
References: <20000805171829.N266@xs4all.nl>
Message-ID: <14732.43173.634118.381282@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    TW> I'm a tad confused about the 'UNPACK_LIST' and 'UNPACK_TUPLE'
    TW> opcodes. There doesn't seem to be a difference between the
    TW> two, yet the way they are compiled is slightly different (but
    TW> not much.) I can list all the differences I can see, but I
    TW> just don't understand them, and because of that I'm not sure
    TW> how to handle them in augmented assignment. UNPACK_LIST just
    TW> seems so redundant :)

    TW> Wouldn't it make sense to remove the difference between the
    TW> two, or better yet, remove UNPACK_LIST (and possibly rename
    TW> UNPACK_TUPLE to UNPACK_SEQ ?)  We already lost bytecode
    TW> compatibility anyway!

This is a historical artifact.  I don't remember what version it was,
but at one point there was a difference between

    a, b, c = gimme_a_tuple()

and

    [a, b, c] = gimme_a_list()

That difference was removed, and support was added for any sequence
unpacking.  If changing the bytecode is okay, then there doesn't seem
to be any reason to retain the differences.
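Barry's point — that the two target forms became interchangeable and accept any sequence — is easy to demonstrate in any recent Python:

```python
# Both target spellings now unpack any sequence, regardless of the
# source type; the old tuple/list distinction is gone.
a, b, c = [1, 2, 3]          # tuple-style target, list source
[d, e, f] = (1, 2, 3)        # list-style target, tuple source
g, h, i = "xyz"              # even strings unpack

assert (a, b, c) == (d, e, f) == (1, 2, 3)
assert (g, h, i) == ('x', 'y', 'z')
```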

-Barry


From jack@oratrix.nl  Sat Aug  5 22:14:08 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Sat, 05 Aug 2000 23:14:08 +0200
Subject: [Python-Dev] New SRE core dump (was: SRE 0.9.8 benchmarks)
In-Reply-To: Message by "Fredrik Lundh" <effbot@telia.com> ,
 Thu, 3 Aug 2000 19:19:03 +0200 , <007401bffd6e$ed9bbde0$f2a6b5d4@hagrid>
Message-ID: <20000805211413.E1224E2670@oratrix.oratrix.nl>

Fredrik,
could you add a PyOS_CheckStack() call to the recursive part of sre
(within #ifdef USE_STACKCHECK, of course)?
I'm getting really really nasty crashes on the Mac if I run the
regression tests...
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 


From jack@oratrix.nl  Sat Aug  5 22:41:15 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Sat, 05 Aug 2000 23:41:15 +0200
Subject: [Python-Dev] strftime()
Message-ID: <20000805214120.A55EEE2670@oratrix.oratrix.nl>

The test_strftime regression test has been failing on the Mac for
ages, and I finally got round to investigating the problem: the
MetroWerks library returns the strings "am" and "pm" for %p but the
regression test expects "AM" and "PM". According to the comments in
the source of the library (long live vendors who provide it! Yeah!)
this is C9X compatibility.

I can of course move the %p to the nonstandard expectations, but maybe 
someone has a better idea?
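One workaround on the Python side — a sketch, not what the test suite actually did — is to compare case-insensitively, since the C89 and C9X libraries differ here only in capitalization:

```python
import time

# %p may come back as "AM"/"PM" (most C89 libraries) or "am"/"pm"
# (C9X-style libraries such as MetroWerks); normalizing the case makes
# the check portable.  Assumes a locale (e.g. "C") where %p is non-empty.
stamp = time.strftime("%p", time.localtime())
assert stamp.upper() in ("AM", "PM")
```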
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++


From bwarsaw@beopen.com  Sun Aug  6 01:12:58 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Sat, 5 Aug 2000 20:12:58 -0400 (EDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <fc8mosgajj5db74oijjb8e1vbrrvgf0mi5@4ax.com>
 <bld7joah8z.fsf@bitdiddle.concentric.net>
 <9b13RLA800i5EwLY@jessikat.fsnet.co.uk>
 <8mg3au$rtb$1@nnrp1.deja.com>
 <398BCA4F.17E23309@letterror.com>
 <20000805194600.A7242@thyrsus.com>
Message-ID: <14732.44426.201651.690336@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr@thyrsus.com> writes:

    ESR> I must say I agree.  Something pretty similar to Stackless
    ESR> Python is going to have to happen anyway for the language to
    ESR> make its next major advance in capability -- generators,
    ESR> co-routining, and continuations.

Stackless definitely appeals to me from a coolness factor, though I
don't know how much I'd use those new capabilities that it allows.
The ability to embed Python on hardware that might otherwise not be
possible without Stackless is also an interesting thing to explore.

    ESR> I also agree that this is a more important debate, and a
    ESR> harder set of decisions, than the PEPs.  Which means we
    ESR> should start paying attention to it *now*.

Maybe a PEP isn't the right venue, but the semantics and externally
visible effects of Stackless need to be documented.  What if JPython
or Python .NET wanted to adopt those same semantics, either by doing
their implementation's equivalent of Stackless or by some other means?
We can't even think about doing that without a clear and complete
specification.

Personally, I don't see Stackless making it into 2.0 and possibly not
2.x.  But I agree it is something to seriously consider for Py3K.

-Barry


From tim_one@email.msn.com  Sun Aug  6 06:07:27 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 01:07:27 -0400
Subject: [Python-Dev] strftime()
In-Reply-To: <20000805214120.A55EEE2670@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEDOGOAA.tim_one@email.msn.com>

[Jack Jansen]
> The test_strftime regression test has been failing on the Mac for
> ages, and I finally got round to investigating the problem: the
> MetroWerks library returns the strings "am" and "pm" for %p but the
> regression test expects "AM" and "PM". According to the comments in
> the source of the library (long live vendors who provide it! Yeah!)
> this is C9X compatibility.

My copy of a draft C99 std agrees (7.23.3.5) with MetroWerks on this point
(i.e., that %p in the "C" locale becomes "am" or "pm").

> I can of course move the %p to the nonstandard expectations, but maybe
> someone has a better idea?

Not really.  If Python thinks this function is valuable, it "should" offer a
platform-independent implementation, but as nobody has time for that ...




From MarkH@ActiveState.com  Sun Aug  6 06:08:46 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Sun, 6 Aug 2000 15:08:46 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <000d01bffe6a$7e4bab60$060210ac@private>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEHPDDAA.MarkH@ActiveState.com>

[/F]
> > huh?  are you seriously proposing to break every single C extension
> > ever written -- on each and every platform -- just to trap an error
> > message caused by extensions linked against 1.5.2 on your favourite
> > platform?

[Barry]
> 	What makes you think that a crash will not happen under Unix
> 	when you change the API? You just don't get the Windows crash.
>
> 	As this thread has pointed out you have no intention of checking
> 	for binary compatibility on the API as you move up versions.

I intimated the following, but did not spell it out, so I will do so here
to clarify.

I was -1 on Barry's solution getting into 1.6, given the time frame.  I
hinted that the solution Guido recently checked in "if
(!Py_IsInitialized()) ..." would not be too great an impact even if Barry's
solution, or one like it, was eventually adopted.

So I think that the adoption of our half-solution (ie, we are really only
forcing a better error message - not even getting a traceback to indicate
_which_ module fails) need not preclude a better solution when we have more
time to implement it...

Mark.



From Moshe Zadka <moshez@math.huji.ac.il>  Sun Aug  6 07:23:48 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Sun, 6 Aug 2000 09:23:48 +0300 (IDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <20000805194600.A7242@thyrsus.com>
Message-ID: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>

On Sat, 5 Aug 2000, Eric S. Raymond wrote:

> I must say I agree.  Something pretty similar to Stackless Python is
> going to have to happen anyway for the language to make its next major
> advance in capability -- generators, co-routining, and continuations.
> 
> I also agree that this is a more important debate, and a harder set of
> decisions, than the PEPs.  Which means we should start paying attention
> to it *now*.

I tend to disagree. For a while now I've been keeping an eye on the guile
interpreter development (a very cool project, but unfortunately limping
along. It probably will be the .NET of free software, though). In guile,
they were able to implement continuations *without* what we call
stacklessness. Sure, it might look inefficient, but for most applications
(like co-routines) it's actually quite all right. What all that goes to
say is that we should treat stackless exactly like what it is -- an
implementation detail. Now, that's not putting down Christian's work -- on
the contrary, I think the Python implementation is very important. But
that alone should indicate there's no need for a PEP. I, for one, am for
it, because I happen to think it's a much better implementation. If it
also has the effect of making continuationsmodule.c easier to write, well,
that's not an issue in this discussion as far as I'm concerned.

brain-dumping-ly y'rs, Z.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From mal@lemburg.com  Sun Aug  6 09:55:55 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sun, 06 Aug 2000 10:55:55 +0200
Subject: [Python-Dev] Python 1.6b1 out ?!
References: <LNBBLJKPBEHFEDALKOLCCECEGOAA.tim_one@email.msn.com>
Message-ID: <398D281B.E7F118C0@lemburg.com>

Tim Peters wrote:
> 
> [M.-A. Lemburg]
> > Strange: either I missed it or Guido chose to release 1.6b1
> > in silence, but I haven't seen any official announcement of the
> > release available from http://www.python.org/1.6/.
> >
> > BTW, nice holiday, Guido ;-)
> 
> There's an announcement at the top of http://www.python.org/, though, and an
> announcement about the new license at http://www.pythonlabs.com/.  Guido
> also posted to comp.lang.python.  You probably haven't seen the latter if
> you use the mailing list gateway, because many mailing lists at python.org
> coincidentally got hosed at the same time due to a full disk.  Else your
> news server simply hasn't gotten it yet (I saw it come across on
> netnews.msn.com, but then Microsoft customers get everything first <wink>).

I saw the announcement on www.python.org, thanks. (I already
started to miss the usual 100+ Python messages I get into my mailbox
every day ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Sun Aug  6 13:20:56 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sun, 06 Aug 2000 14:20:56 +0200
Subject: [Python-Dev] Pickling using XML as output format
Message-ID: <398D5827.EE8938DD@lemburg.com>

Before starting to reinvent the wheel:

I need a pickle.py compatible module which essentially works
just like pickle.py, but uses XML as output format. I've already
looked at xml_pickle.py (see Parnassus), but this doesn't seem
to handle object references at all. Also, it depends on 
xml.dom which I'd rather avoid.

My idea was to rewrite the format used by pickle in an
XML syntax and then hard-code the DTD into a subclass
of the parser in xmllib.py.

Now, I'm very new to XML, so I may be missing something here...
would this be doable in a fairly sensible way (I'm thinking
of closely sticking to the pickle stream format) ?
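The object-reference problem that xml_pickle.py misses can be handled the same way pickle itself does: with a memo keyed on `id()`, registered before recursing so cycles and shared objects terminate. A rough sketch in modern Python (the element names and `xml_pickle` function are hypothetical, not any existing module's format):

```python
from xml.sax.saxutils import escape

def xml_pickle(obj, memo=None, out=None):
    """Serialize a limited object graph to XML, preserving shared refs
    via a pickle-style memo keyed on id()."""
    top = memo is None
    if top:
        memo, out = {}, []
    oid = id(obj)
    if oid in memo:                      # already emitted: reference it
        out.append('<ref id="%d"/>' % memo[oid])
    elif isinstance(obj, str):
        out.append('<str>%s</str>' % escape(obj))
    elif isinstance(obj, int):
        out.append('<int>%d</int>' % obj)
    elif isinstance(obj, list):
        memo[oid] = len(memo)            # register before recursing (cycles!)
        out.append('<list id="%d">' % memo[oid])
        for item in obj:
            xml_pickle(item, memo, out)
        out.append('</list>')
    else:
        raise TypeError("unsupported type: %r" % type(obj))
    if top:
        return ''.join(out)

shared = ['world']
doc = xml_pickle(['hello', shared, shared])
# The second occurrence of `shared` becomes a <ref/>, just as pickle
# would emit a memo reference instead of re-serializing the object.
assert doc.count('<ref') == 1
assert '<ref id="1"/>' in doc
```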

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From Moshe Zadka <moshez@math.huji.ac.il>  Sun Aug  6 13:46:09 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Sun, 6 Aug 2000 15:46:09 +0300 (IDT)
Subject: [Python-Dev] Pickling using XML as output format
In-Reply-To: <398D5827.EE8938DD@lemburg.com>
Message-ID: <Pine.GSO.4.10.10008061544180.20069-100000@sundial>

On Sun, 6 Aug 2000, M.-A. Lemburg wrote:

> Before starting to reinvent the wheel:

Ummmm......I'd wait for some DC guy to chime in: I think Zope had
something like that. You might want to ask around on the Zope lists
or search zope.org.

I'm not sure what it has and what it doesn't have, though.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From Moshe Zadka <moshez@math.huji.ac.il>  Sun Aug  6 14:22:09 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Sun, 6 Aug 2000 16:22:09 +0300 (IDT)
Subject: [Python-Dev] Warnings on gcc -Wall
Message-ID: <Pine.GSO.4.10.10008061612490.20069-100000@sundial>

As those of you with a firm eye on python-checkins noticed, I've been
trying to clear the source files (as many of them as I could get to
compile on my setup) from warnings. This is only with gcc -Wall: a future
project of mine is to enable much more warnings and try to clean them too.

There are currently two places where warnings still remain:

 -- readline.c -- readline/history.h is included only on BeOS, and
otherwise prototypes are declared by hand. Does anyone remember why? 

-- ceval.c, in ceval() gcc -Wall (wrongly) complains about opcode and
oparg which might be used before initialized. I've had a look at that
code, and I'm certain gcc's flow analysis is simply not good enough.
However, I would like to silence the warning, so I can get used to
building with -Wall -Werror and make sure to mind any warnings. Does
anyone see any problem with putting opcode=0 and oparg=0 near the top?

Any comments welcome, of course.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From thomas@xs4all.net  Sun Aug  6 15:00:26 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 6 Aug 2000 16:00:26 +0200
Subject: [Python-Dev] Warnings on gcc -Wall
In-Reply-To: <Pine.GSO.4.10.10008061612490.20069-100000@sundial>; from moshez@math.huji.ac.il on Sun, Aug 06, 2000 at 04:22:09PM +0300
References: <Pine.GSO.4.10.10008061612490.20069-100000@sundial>
Message-ID: <20000806160025.P266@xs4all.nl>

On Sun, Aug 06, 2000 at 04:22:09PM +0300, Moshe Zadka wrote:

>  -- readline.c -- readline/history.h is included only on BeOS, and
> otherwise prototypes are declared by hand. Does anyone remember why? 

Possibly because old versions of readline don't have history.h ?

> -- ceval.c, in ceval() gcc -Wall (wrongly) complains about opcode and
> oparg which might be used before initialized. I've had a look at that
> code, and I'm certain gcc's flow analysis is simply not good enough.
> However, I would like to silence the warning, so I can get used to
> building with -Wall -Werror and make sure to mind any warnings. Does
> anyone see any problem with putting opcode=0 and oparg=0 near the top?

Actually, I don't think this is true. 'opcode' and 'oparg' get filled inside
the permanent for-loop, but after the check on pending signals and
exceptions. I think it's theoretically possible to have 'things_to_do' set
the first time through the loop, ending in an exception and thereby
causing the jump to on_error, entering the branch on WHY_EXCEPTION, which
uses oparg and opcode. I'm not sure initializing opcode/oparg is the
right thing to do, but I'm not sure what is, either :-)

As for the checkins, I haven't seen some of the pending checkin-mails pass
by (I did some cleaning up of configure.in last night, for instance, after
the re-indent and grammar change in compile.c that *did* come through.)
Barry (or someone else ;) are those still waiting in the queue, or should we
consider them 'lost' ? 

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Moshe Zadka <moshez@math.huji.ac.il>  Sun Aug  6 15:13:10 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Sun, 6 Aug 2000 17:13:10 +0300 (IDT)
Subject: [Python-Dev] Warnings on gcc -Wall
In-Reply-To: <20000806160025.P266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008061703040.20069-100000@sundial>

On Sun, 6 Aug 2000, Thomas Wouters wrote:

> On Sun, Aug 06, 2000 at 04:22:09PM +0300, Moshe Zadka wrote:
> 
> >  -- readline.c -- readline/history.h is included only on BeOS, and
> > otherwise prototypes are declared by hand. Does anyone remember why? 
> 
> Possibly because old versions of readline don't have history.h ?

And it did have the history functions? If so, maybe we can include
<readline/readline.h> unconditionally, and switch on the readline version.
If not, I'd just declare support for earlier versions of readline
nonexistent and be done with it.

> 'opcode' and 'oparg' get filled inside
> the permanent for-loop, but after the check on pending signals and
> exceptions. I think it's theoretically possible to have 'things_to_do' on
> the first time through the loop, which end up in an exception, thereby
> causing the jump to on_error, entering the branch on WHY_EXCEPTION, which
> uses oparg and opcode. I'm not sure if initializing opcode/oparg is the
> right thing to do, though, but I'm not sure what is, either :-)

Probably by initializing them to some dummy value before the "goto
on_error", then checking for that dummy value in the relevant place.
You're right, of course; I hadn't noticed the goto.

> As for the checkins, I haven't seen some of the pending checkin-mails pass
> by (I did some cleaning up of configure.in last night, for instance, after
> the re-indent and grammar change in compile.c that *did* come through.)
> Barry (or someone else ;) are those still waiting in the queue, or should we
> consider them 'lost' ? 

I got a reject on two e-mails, but I didn't think of saving
them....oooops..well, no matter, most of them were trivial stuff.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From tismer@appliedbiometrics.com  Sun Aug  6 15:47:26 2000
From: tismer@appliedbiometrics.com (Christian Tismer)
Date: Sun, 06 Aug 2000 16:47:26 +0200
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>
Message-ID: <398D7A7E.2AB1BDF3@appliedbiometrics.com>


Moshe Zadka wrote:
> 
> On Sat, 5 Aug 2000, Eric S. Raymond wrote:
> 
> > I must say I agree.  Something pretty similar to Stackless Python is
> > going to have to happen anyway for the language to make its next major
> > advance in capability -- generators, co-routining, and continuations.
> >
> > I also agree that this is a more important debate, and a harder set of
> > decisions, than the PEPs.  Which means we should start paying attention
> > to it *now*.
> 
> I tend to disagree. For a while now I've been keeping an eye on the guile
> interpreter development (a very cool project, but unfortunately limping
> along. It probably will be the .NET of free software, though). In guile,
> they were able to implement continuations *without* what we call
> stacklessness. Sure, it might look inefficient, but for most applications
> (like co-routines) it's actually quite all right.

Despite the fact that I consider the Guile implementation a pile
of junk code that I would never dig into the way I did with Python*),
you are probably right. Stackless goes a bit too far, in the sense
that it implies abilities which are hard for other
implementations to achieve.

There are in fact other ways to implement coroutines and uthreads.
Stackless happens to achieve all of that and a lot more, and to
be very efficient. Therefore it would be a waste to go back to
a restricted implementation since it exists already. If stackless
didn't go so far, it would probably have been successfully
integrated already. I wanted it all and luckily got it all.

On the other hand, there is no need to enforce every Python
implementation to do the full continuation support. In CPython,
continuationmodule.c can be used for such purposes, and it can
be used as a basis for coroutine and generator implementations.
Using Guile's way to implement these would be a possible path
for JPython.
The point is to use only parts of the possibilities and not
enforce everything for every environment. There is just no point
in shrinking the current implementation down; not even a subset
would be helpful in JPython.

> What all that goes to
> say is that we should treat stackless exactly like it is -- an
> implementation detail. Now, that's not putting down Christian's work -- on
> the contrary, I think the Python implementation is very important. But
> that alone should indicate there's no need for a PEP. I, for one, am for
> it, because I happen to think it's a much better implementation. If it
> also has the effect of making continuationsmodule.c easier to write, well,
> that's not an issue in this discussion as far as I'm concerned.

A possible proposal could be this:

- incorporate Stackless into CPython, but don't demand it
  for every implementation
- implement coroutines and others with Stackless for CPython
  try alternative implementations for JPython if there are users
- do *not* make continuations a standard language feature until
  there is a portable way to get it everywhere

Still, I can't see the point with Java. There are enough
C extension which are not available for JPython, but it is
allowed to use them. Same with the continuationmodule, why
does it need to exist for Java, in order to allow it for
CPython?
This is like not implementing new browser features until
they can be implemented on my WAP phone. Nonsense.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com

*) sorry, feel free to disagree, but this was my impression when
   I read the whole code half a year ago.
   This is exactly what I do not want :-)


From Moshe Zadka <moshez@math.huji.ac.il>  Sun Aug  6 16:11:21 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Sun, 6 Aug 2000 18:11:21 +0300 (IDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <398D7A7E.2AB1BDF3@appliedbiometrics.com>
Message-ID: <Pine.GSO.4.10.10008061807230.20069-100000@sundial>

On Sun, 6 Aug 2000, Christian Tismer wrote:

> On the other hand, there is no need to enforce every Python
> implementation to do the full continuation support. In CPython,
> continuationmodule.c can be used for such purposes, and it can
> be used as a basis for coroutine and generator implementations.
> Using Guile's way to implement these would be a possible path
> for JPython.

Actually, you can't use Guile's way for JPython -- the guile folks
are doing some low-level semi-portable stuff in C...

> - incorporate Stackless into CPython, but don't demand it
>   for every implementation

Again, I want to say I don't think there's a meaning for "for every
implementation" -- Stackless is not part of the language definition,
it's part of the implementation. The whole Java/.NET is a red herring.

> - implement coroutines and others with Stackless for CPython

I think that should be done in a third-party module. But hey, if Guido
wants to maintain another module...

> - do *not* make continuations a standard language feature until
>   there is a portable way to get it everywhere

I'd go further and say "do *not* make continuations a standard language
feature" <wink>

> Still, I can't see the point with Java. There are enough
> C extension which are not available for JPython, but it is
> allowed to use them. Same with the continuationmodule, why
> does it need to exist for Java, in order to allow it for
> CPython?

My point exactly.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From tismer@appliedbiometrics.com  Sun Aug  6 16:22:39 2000
From: tismer@appliedbiometrics.com (Christian Tismer)
Date: Sun, 06 Aug 2000 17:22:39 +0200
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <Pine.GSO.4.10.10008061807230.20069-100000@sundial>
Message-ID: <398D82BF.85D0E5AB@appliedbiometrics.com>


Moshe Zadka wrote:

...
> > - implement coroutines and others with Stackless for CPython
> 
> I think that should be done in a third-party module. But hey, if Guido
> wants to maintain another module...

Right, like now. CPython has the necessary stackless hooks, nuts
and bolts, but nothing else, and no speed impact.

Then it just happens to be *possible* to write such an extension,
and it will be written, but this is no language feature.

> > - do *not* make continuations a standard language feature until
> >   there is a portable way to get it everywhere
> 
> I'd go further and say "do *not* make continuations a standard language
> feature" <wink>

This was my sentence in the first place, but when reviewing
the message, I could not resist plugging that in again <1.5 wink>

As discussed in a private thread with Just, some continuation
features can only be made "nice" if they are supported by
some language extension. I want to use Python in CS classes
to teach them continuations, therefore I need a backdoor :-)

and-there-will-always-be-a-version-on-my-site-that-goes-
   -beyond-the-standard - ly y'rs  - chris

-- 
Christian Tismer             :^)   <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com


From bwarsaw@beopen.com  Sun Aug  6 16:49:07 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Sun, 6 Aug 2000 11:49:07 -0400 (EDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>
 <398D7A7E.2AB1BDF3@appliedbiometrics.com>
Message-ID: <14733.35059.53619.98300@anthem.concentric.net>

>>>>> "CT" == Christian Tismer <tismer@appliedbiometrics.com> writes:

    CT> Still, I can't see the point with Java. There are enough C
    CT> extension which are not available for JPython, but it is
    CT> allowed to use them. Same with the continuationmodule, why
    CT> does it need to exist for Java, in order to allow it for
    CT> CPython?  This is like not implementing new browser features
    CT> until they can be implemented on my WAP phone. Nonsense.

It's okay if there are some peripheral modules that are available to
CPython but not JPython (include Python .NET here too), and vice
versa.  That'll just be the nature of things.  But whatever basic
language features Stackless allows one to do /from Python/ must be
documented.  That's the only way we'll be able to do one of these things:

- support the feature a different way in a different implementation
- agree that the feature is part of the Python language definition,
  but possibly not (yet) supported by all implementations.
- define the feature as implementation dependent (so people writing
  portable code know to avoid those features).

-Barry


From guido@beopen.com  Sun Aug  6 18:23:52 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 12:23:52 -0500
Subject: [Python-Dev] Warnings on gcc -Wall
In-Reply-To: Your message of "Sun, 06 Aug 2000 16:22:09 +0300."
 <Pine.GSO.4.10.10008061612490.20069-100000@sundial>
References: <Pine.GSO.4.10.10008061612490.20069-100000@sundial>
Message-ID: <200008061723.MAA14418@cj20424-a.reston1.va.home.com>

>  -- readline.c -- readline/history.h is included only on BeOS, and
> otherwise prototypes are declared by hand. Does anyone remember why? 

Please don't touch that module.  GNU readline is wacky.

> -- ceval.c, in ceval() gcc -Wall (wrongly) complains about opcode and
> oparg which might be used before initialized. I've had a look at that
> code, and I'm certain gcc's flow analysis is simply not good enough.
> However, I would like to silence the warning, so I can get used to
> building with -Wall -Werror and make sure to mind any warnings. Does
> anyone see any problem with putting opcode=0 and oparg=0 near the top?

No problem.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Sun Aug  6 18:34:34 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 12:34:34 -0500
Subject: [Python-Dev] strftime()
In-Reply-To: Your message of "Sun, 06 Aug 2000 01:07:27 -0400."
 <LNBBLJKPBEHFEDALKOLCAEDOGOAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCAEDOGOAA.tim_one@email.msn.com>
Message-ID: <200008061734.MAA14488@cj20424-a.reston1.va.home.com>

> [Jack Jansen]
> > The test_strftime regression test has been failing on the Mac for
> > ages, and I finally got round to investigating the problem: the
> > MetroWerks library returns the strings "am" and "pm" for %p but the
> > regression test expects "AM" and "PM". According to the comments in
> > the source of the library (long live vendors who provide it! Yeah!)
> > this is C9X compatibility.
> 
> My copy of a draft C99 std agrees (7.23.3.5) with MetroWerks on this point
> (i.e., that %p in the "C" locale becomes "am" or "pm").
> 
> > I can of course move the %p to the nonstandard expectations, but maybe
> > someone has a better idea?
> 
> Not really.  If Python thinks this function is valuable, it "should" offer a
> platform-independent implementation, but as nobody has time for that ...

No.  The test is too strict.  It should be fixed.
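
The relaxation could be as simple as comparing case-insensitively, so that both the usual "AM"/"PM" and MetroWerks' C99-style "am"/"pm" pass. An illustrative sketch, not the actual patch to test_strftime:

```python
import time

# Midnight UTC, so %p should yield the locale's "AM" designator in
# whatever capitalization the platform's C library prefers.
ampm = time.strftime('%p', time.gmtime(0))
assert ampm.upper() in ('AM', 'PM')
```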

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From just@letterror.com  Sun Aug  6 18:59:42 2000
From: just@letterror.com (Just van Rossum)
Date: Sun, 6 Aug 2000 18:59:42 +0100
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <14733.35059.53619.98300@anthem.concentric.net>
References: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>
 <398D7A7E.2AB1BDF3@appliedbiometrics.com>
Message-ID: <l03102800b5b354bd9114@[193.78.237.132]>

At 11:49 AM -0400 06-08-2000, Barry A. Warsaw wrote:
>It's okay if there are some peripheral modules that are available to
>CPython but not JPython (include Python .NET here too), and vice
>versa.  That'll just be the nature of things.  But whatever basic
>language features Stackless allows one to do /from Python/ must be
>documented.

The things stackless offers are no different from:

- os.open()
- os.popen()
- os.system()
- os.fork()
- threading (!!!)

These things are all doable /from Python/, yet their non-portability seems
hardly an issue for the Python Standard Library.

>That's the only way we'll be able to do one of these things:
>
>- support the feature a different way in a different implementation
>- agree that the feature is part of the Python language definition,
>  but possibly not (yet) supported by all implementations.

Honest (but possibly stupid) question: are extension modules part of the
language definition?

>- define the feature as implementation dependent (so people writing
>  portable code know to avoid those features).

It's an optional extension module, so this should be obvious. (As it
happens, it depends on a new and improved implementation of ceval.c, but
this is really beside the point.)

Just

PS: thanks to everybody who kept CC-ing me in this thread; it's much
appreciated as I'm not on python-dev.




From jeremy@alum.mit.edu  Sun Aug  6 19:54:56 2000
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Sun, 6 Aug 2000 14:54:56 -0400
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <20000805194600.A7242@thyrsus.com>
Message-ID: <AJEAKILOCCJMDILAPGJNOEFJCBAA.jeremy@alum.mit.edu>

Eric S. Raymond <esr@thyrsus.com> writes:
>Just van Rossum <just@letterror.com>:
>> Christian has done an amazing piece of work, and he's gotten much
>> praise from the community. I mean, if you *are* looking for a killer
>> feature to distinguish 1.6 from 2.0, I'd know where to look...
>
>I must say I agree.  Something pretty similar to Stackless Python is
>going to have to happen anyway for the language to make its next major
>advance in capability -- generators, co-routining, and continuations.
>
>I also agree that this is a more important debate, and a harder set of
>decisions, than the PEPs.  Which means we should start paying attention
>to it *now*.

The PEPs exist as a way to formalize important debates and hard decisions.
Without a PEP that offers a formal description of the changes, it is hard to
have a reasonable debate.  I would not be comfortable with the specification
for any feature from stackless being the implementation.

Given the current release schedule for Python 2.0, I don't see any
possibility of getting a PEP accepted in time.  The schedule, from PEP 200,
is:

    Tentative Release Schedule
        Aug. 14: All 2.0 PEPs finished / feature freeze
        Aug. 28: 2.0 beta 1
        Sep. 29: 2.0 final

Jeremy




From guido@beopen.com  Sun Aug  6 22:17:33 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 16:17:33 -0500
Subject: [Python-Dev] math.rint bites the dust
Message-ID: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>

After a brief consult with Tim, I've decided to drop math.rint() --
it's not standard C, can't be implemented in portable C, and its
naive (non-IEEE-754-aware) effect can easily be had in other ways.

If you disagree, speak up now!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From thomas@xs4all.net  Sun Aug  6 21:25:03 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 6 Aug 2000 22:25:03 +0200
Subject: [Python-Dev] math.rint bites the dust
In-Reply-To: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Aug 06, 2000 at 04:17:33PM -0500
References: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>
Message-ID: <20000806222502.S266@xs4all.nl>

On Sun, Aug 06, 2000 at 04:17:33PM -0500, Guido van Rossum wrote:

> After a brief consult with Tim, I've decided to drop math.rint() --
> it's not standard C, can't be implemented in portable C, and its
> naive (non-IEEE-754-aware) effect can easily be had in other ways.

I don't particularly disagree, since I hardly do anything with floating
point numbers, but how can something both not be implementable in portable C
*and* its effect easily be had in other ways?

I also recall someone who was implementing rint() on platforms that didn't
have it... Or did that idea get trashed because it wasn't portable enough?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From nowonder@nowonder.de  Sun Aug  6 23:49:06 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Sun, 06 Aug 2000 22:49:06 +0000
Subject: [Python-Dev] bug-fixes in cnri-16-start branch
Message-ID: <398DEB62.789B4C9C@nowonder.de>

I have a question on the right procedure for fixing a simple
bug in the 1.6 release branch.

Bug #111162 appeared because the tests for math.rint() are
already contained in the cnri-16-start revision of test_math.py
while the "try: ... except AttributeError: ..." construct which
was checked in shortly after was not.

Now the correct bugfix is already known (and has been
applied to the main branch). I have updated the test_math.py
file in my working version with "-r cnri-16-start" and
made the changes.

Now I probably should just commit, close the patch
(with an appropriate follow-up) and be happy.

did-I-get-that-right-or-does-something-else-have-to-be-done-ly y'rs
Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From tim_one@email.msn.com  Sun Aug  6 21:54:02 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 16:54:02 -0400
Subject: [Python-Dev] math.rint bites the dust
In-Reply-To: <20000806222502.S266@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEFJGOAA.tim_one@email.msn.com>

[Guido]
> After a brief consult with Tim, I've decided to drop math.rint() --
> it's not standard C, can't be implemented in portable C, and its
> naive (non-IEEE-754-aware) effect can easily be had in other ways.

[Thomas Wouters]
> I don't particularly disagree, since I hardly do anything with floating
> point numbers, but how can something both not be implementable in
> portable C *and* its effect easily be had in other ways ?

Can't.  rint is not in standard C (C89), but is in C99, where a conforming
implementation requires paying attention to all the details of the 754 fp
model.  It's a *non* 754-aware version of rint that can be easily had in
other ways (e.g., you can easily write a rint in Python that always rounds to
nearest/even, by building on math.floor and checking the sign bit, but
ignoring the possibilities of infinities, NaNs, current 754 rounding mode,
and correct treatment of (at least) the 754 inexact and underflow flags --
Python gives no way to get at any of those now, neither does current C, and
a correct rint from the C99 point of view has to deal with all of them).
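
Such a "had in other ways" rint might look like the sketch below: naive round-half-to-even built on math.floor, deliberately ignoring infinities, NaNs, the current rounding mode, and the inexact/underflow flags -- exactly the parts a conforming C99 rint would have to handle.

```python
import math

def naive_rint(x):
    # Round to nearest integer, ties to even -- no 754 awareness at all.
    f = float(math.floor(x))
    diff = x - f
    if diff > 0.5:
        return f + 1.0
    if diff < 0.5:
        return f
    # Exactly halfway: pick the even neighbour.
    return f if math.fmod(f, 2.0) == 0.0 else f + 1.0
```

For example, naive_rint(2.5) and naive_rint(3.5) both land on the even neighbour (2.0 and 4.0), which is the nearest/even behavior Tim describes.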

This is a case where I'm unwilling to support a function at all before it
can be supported correctly; I see no value in exposing current platforms'
divergent and incorrect implementations of rint, at least not in the math
module.  Code that uses them will fail to work at all on some platforms
(since rint is not in today's C, some platforms don't have it), and will
change meaning over time as the other platforms move toward C99 compliance.

> I also recall someone who was implementing rint() on platforms
> that didn't have it... Or did that idea get trashed because it wasn't
> portable enough ?

Bingo.

everyone's-welcome-to-right-their-own-incorrect-version<wink>-ly
    y'rs  - tim




From jack@oratrix.nl  Sun Aug  6 21:56:48 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Sun, 06 Aug 2000 22:56:48 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
Message-ID: <20000806205653.B0341E2670@oratrix.oratrix.nl>

Could the defenders of Stackless Python please explain _why_ this is
such a great idea? Just and Christian seem to swear by it, but I'd
like to hear of some simple examples of programming tasks that will be 
programmable in 50% less code with it (or 50% more understandable
code, for that matter).

And, similarly, could the detractors of Stackless Python explain why
it is such a bad idea. A lot of core-pythoneers seem to have
misgivings about it, even though issues of compatibility and
efficiency have been countered many times here by its champions (at
least, it seems that way to a clueless bystander like myself). I'd
really like to be able to take a firm standpoint myself, that's part
of my personality, but I really don't know which firm standpoint at
the moment:-)
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++


From tim_one@email.msn.com  Sun Aug  6 22:03:23 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 17:03:23 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFLGOAA.tim_one@email.msn.com>

[Jack Jansen]
> Could the defenders of Stackless Python please explain _why_ this is
> such a great idea? ...

But they already have, and many times.  That's why it needs a PEP:  so we
don't have to endure <wink> the exact same heated discussions multiple times
every year for eternity.

> ...
> And, similarly, could the detractors of Stackless Python explain why
> it is such a bad idea.

Ditto.

if-anyone-hasn't-yet-noticed-98%-of-advocacy-posts-go-straight-
    into-a-black-hole-ly y'rs  - tim




From thomas@xs4all.net  Sun Aug  6 22:05:45 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 6 Aug 2000 23:05:45 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>; from jack@oratrix.nl on Sun, Aug 06, 2000 at 10:56:48PM +0200
References: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <20000806230545.T266@xs4all.nl>

On Sun, Aug 06, 2000 at 10:56:48PM +0200, Jack Jansen wrote:

> Could the defenders of Stackless Python please explain _why_ this is
> such a great idea? Just and Christian seem to swear by it, but I'd
> like to hear of some simple examples of programming tasks that will be 
> programmable in 50% less code with it (or 50% more understandable
> code, for that matter).

That's *continuations*, not Stackless. Stackless itself is just a way of
implementing the Python bytecode eval loop with minimized use of the C
stack. It doesn't change any functionality except the internal dependence on
the C stack (which is limited on some platforms.) Stackless also makes a
number of things possible, like continuations.

Continuations can certainly reduce code, if used properly, and they can make
it a lot more readable if the choice is between continuations or threaded
spaghetti-code. It can, however, make code a lot less readable too, if used
improperly, or when viewed by someone who doesn't grok continuations.

I'm +1 on Stackless, +0 on continuations. (Continuations are cool, and
Pythonic in one sense (stackframes become even firster class citizens ;) but
not easy to learn or get used to.)

> And, similarly, could the detractors of Stackless Python explain why
> it is such a bad idea.

I think my main reservation towards Stackless is the change to ceval.c,
which is likely to be involved (I haven't looked at it, yet) -- but ceval.c
isn't a children's book now, and I think the added complexity (if any) is
worth the loss of some of the dependencies on the C stack.

fl.0,02-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From nowonder@nowonder.de  Mon Aug  7 00:18:22 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Sun, 06 Aug 2000 23:18:22 +0000
Subject: [Python-Dev] math.rint bites the dust
References: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>
Message-ID: <398DF23E.D1DDE196@nowonder.de>

Guido van Rossum wrote:
> 
> After a brief consult with Tim, I've decided to drop math.rint() --
> it's not standard C, can't be implemented in portable C, and its
> naive (non-IEEE-754-aware) effect can easily be had in other ways.

If this is because of Bug #111162, things can be fixed easily.
(as I said in another post just some minutes ago, I just
need to recommit the changes made after cnri-16-start.)

I wouldn't be terribly concerned about its (maybe temporary)
death, though. After I learned more about it I am sure I
want to use round() rather than math.rint().

floating-disap-point-ed-ly y'rs
Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From esr@thyrsus.com  Sun Aug  6 22:59:35 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 17:59:35 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>; from jack@oratrix.nl on Sun, Aug 06, 2000 at 10:56:48PM +0200
References: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <20000806175935.A14138@thyrsus.com>

Jack Jansen <jack@oratrix.nl>:
> Could the defenders of Stackless Python please explain _why_ this is
> such a great idea? Just and Christian seem to swear by it, but I'd
> like to hear of some simple examples of programming tasks that will be 
> programmable in 50% less code with it (or 50% more understandable
> code, for that matter).

My interest in Stackless is that I want to see Icon-style generators,
co-routining, and first-class continuations in Python.  Generators and
co-routining are wrappers around continuations.  Something
functionally equivalent to the Stackless mods is needed to get us
there, because using the processor stack makes it very hard to do
continuations properly.

In their full generality, first-class continuations are hard to think
about and to explain clearly, and I'm not going to try here.  A large
part of Guido's reluctance to introduce them is precisely because they
are so hard to think about; he thinks it's a recipe for trouble to stuff
into the language things that *he* has trouble understanding, let alone
other people.

He has a point, and part of the debate going on in the group that has
been tracking this stuff (Guido, Barry Warsaw, Jeremy Hylton, Fred
Drake, Eric Tiedemann and myself) is whether Python should expose
support for first-class continuations or only "safer" packagings such
as coroutining and generators.  So for the moment just think of
continuations as the necessary primitive to implement coroutining and
generators.

You can think of a generator as a function that, internally, is coded 
as a special kind of loop.  Let's say, for example, that you want a function
that returns successive entries in the list "squares of integers".  In 
Python-with-generators, that would look something like this.

def square_generator():
    i = 1
    while 1:
        yield i**2
        i = i + 1

Calling this function five times in succession would return 1, 4, 9,
16, 25.  Now what would be going on under the hood is that the new primitive
"yield" says "return the given value, and save a continuation of this
function to be run next time the function is called".  The continuation 
saves the program counter and the state of automatic variables (the stack)
to be restored on the next call -- thus, execution effectively resumes just
after the yield statement.
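[Editorial sketch: generators with exactly this "yield" spelling did later land
in Python, so the intended behaviour can be shown directly; the next() driver
below is one assumed way a caller would resume the suspended frame.]

```python
def square_generator():
    i = 1
    while 1:
        yield i**2       # hand back i**2 and freeze this frame here
        i = i + 1        # execution resumes on this line next time

# Resuming the suspended frame five times produces the first five squares.
g = square_generator()
squares = [next(g) for _ in range(5)]
print(squares)  # [1, 4, 9, 16, 25]
```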

This example probably does not look very interesting.  It's a very trivial
use of the facility.  But now suppose you had an analogous function 
implemented by a code loop that gets an X event and yields the event data!

Suddenly, X programs don't have to look like a monster loop with all the
rest of the code hanging off of them.  Instead, any function in the program
that needs to do stateful input parsing can just say "give me the next event"
and get it.  

In general, what generators let you do is invert control hierarchies
based on stateful loops or recursions.  This is extremely nice for
things like state machines and tree traversals -- you can bundle
the control loop away in a generator, interrupt it, and restart it
without losing your place.
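[Editorial sketch of the tree-traversal case, written with the generator
syntax Python later grew; the Node class is made up for the example.]

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node):
    # The traversal's control loop lives inside the generator; callers
    # just ask for "the next value", and the state of the recursion is
    # kept for them between calls.
    if node is not None:
        for v in inorder(node.left):
            yield v
        yield node.value
        for v in inorder(node.right):
            yield v

tree = Node(2, Node(1), Node(3))
print(list(inorder(tree)))  # [1, 2, 3]
```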

I want this feature a lot.  Guido has agreed in principle that we ought
to have generators, but there is not yet a well-defined path forward to
them.  Stackless may be the most promising route.

I was going to explain coroutines separately, but I realized while writing
this that the semantics of "yield" proposed above actually gives full
coroutining.  Two functions can ping-pong control back and forth among
themselves while retaining their individual stack states as a pair of
continuations.
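[Editorial sketch: with plain generators the ping-pong needs a small driving
loop in the caller -- full coroutines would transfer control directly -- so
this is an approximation under that assumption.]

```python
def ping(n):
    for i in range(n):
        yield ('ping', i)

def pong(n):
    for i in range(n):
        yield ('pong', i)

def run(*tasks):
    # Round-robin driver: resume each suspended frame in turn and
    # drop a task once it is exhausted.
    queue, trace = list(tasks), []
    while queue:
        task = queue.pop(0)
        try:
            trace.append(next(task))
        except StopIteration:
            continue
        queue.append(task)
    return trace

print(run(ping(2), pong(2)))
# [('ping', 0), ('pong', 0), ('ping', 1), ('pong', 1)]
```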
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"This country, with its institutions, belongs to the people who
inhabit it. Whenever they shall grow weary of the existing government,
they can exercise their constitutional right of amending it or their
revolutionary right to dismember it or overthrow it."
	-- Abraham Lincoln, 4 April 1861


From tim_one@email.msn.com  Sun Aug  6 23:07:45 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 18:07:45 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806175935.A14138@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEGAGOAA.tim_one@email.msn.com>

[ Eric S. Raymond]
> ...
> I want this feature [generators] a lot.  Guido has agreed in principle
> that we ought to have generators, but there is not yet a well-defined
> path forward to them.  Stackless may be the most promising route.

Actually, if we had a PEP <wink>, it would have recorded for all time that
Guido gave a detailed description of how to implement generators with minor
changes to the current code.  It would also record that Steven Majewski had
already done so some 5 or 6 years ago.  IMO, the real reason we don't have
generators already is that they keep getting hijacked by continuations
(indeed, Steven gave up on his patches as soon as he realized he couldn't
extend his approach to continuations).

> I was going to explain coroutines separately, but I realized while
> writing this that the semantics of "yield" proposed above actually
> gives full coroutining.

Well, the Icon semantics for "suspend"-- which are sufficient for Icon's
generators --are not sufficient for Icon's coroutines.  It's for that very
reason that Icon supports generators on all platforms (including JCon, their
moral equivalent of JPython), but supports coroutines only on platforms that
have the magical blob of platform-dependent machine-language cruft needed to
trick out the C stack at coroutine context switches (excepting JCon, where
coroutines are implemented as Java threads).

Coroutines are plain harder.  Generators are just semi-coroutines
(suspend/yield *always* return "to the caller", and that makes life 100x
easier in a conventional eval loop like Python's -- it's still "stack-like",
and the only novel thing needed is a way to resume a suspended frame but
still in call-like fashion).
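[Editorial sketch: the "always return to the caller" restriction shows up in
how nested generators must be written, again using the generator syntax
Python later adopted.]

```python
def inner():
    yield 'a'
    yield 'b'

def outer():
    yield 'start'
    # yield only suspends one frame: to pass inner's values on to
    # outer's caller, outer must explicitly loop and re-yield them.
    for v in inner():
        yield v

print(list(outer()))  # ['start', 'a', 'b']
```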

and-if-we-had-a-pep-every-word-of-this-reply-would-have-been-
    in-it-too<wink>-ly y'rs  - tim




From esr@thyrsus.com  Sun Aug  6 23:51:59 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 18:51:59 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEGAGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sun, Aug 06, 2000 at 06:07:45PM -0400
References: <20000806175935.A14138@thyrsus.com> <LNBBLJKPBEHFEDALKOLCCEGAGOAA.tim_one@email.msn.com>
Message-ID: <20000806185159.A14259@thyrsus.com>

Tim Peters <tim_one@email.msn.com>:
> [ Eric S. Raymond]
> > ...
> > I want this feature [generators] a lot.  Guido has agreed in principle
> > that we ought to have generators, but there is not yet a well-defined
> > path forward to them.  Stackless may be the most promising route.
> 
> Actually, if we had a PEP <wink>, it would have recorded for all time that
> Guido gave a detailed description of how to implement generators with minor
> changes to the current code.  It would also record that Steven Majewski had
> already done so some 5 or 6 years ago. 

Christian Tismer, over to you.  I am not going to presume to initiate
the continuations PEP when there's someone with a Python
implementation and extensive usage experience on the list.  However, I
will help with editing and critiques based on my experience with other
languages that have similar features, if you want.

>                                     IMO, the real reason we don't have
> generators already is that they keep getting hijacked by continuations
> (indeed, Steven gave up on his patches as soon as he realized he couldn't
> extend his approach to continuations).

This report of repeated "hijacking" doesn't surprise me a bit.  In fact,
if I'd thought about it I'd have *expected* it.  We know from experience
with other languages (notably Scheme) that call-with-current-continuation
is the simplest orthogonal primitive that this whole cluster of concepts can
be based on.  Implementors with good design taste are going to keep finding
their way back to it, and they're going to feel incompleteness and pressure
if they can't get there.

This is why I'm holding out for continuation objects and 
call-with-continuation to be an explicit Python builtin. We're going to get
there anyway; best to do it cleanly right away.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"Taking my gun away because I might shoot someone is like cutting my tongue
out because I might yell `Fire!' in a crowded theater."
        -- Peter Venetoklis


From esr@snark.thyrsus.com  Mon Aug  7 00:18:35 2000
From: esr@snark.thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 19:18:35 -0400
Subject: [Python-Dev] Adding a new class to the library?
Message-ID: <200008062318.TAA14335@snark.thyrsus.com>

I have a candidate for admission to the Python class library.  It's a
framework class for writing things like menu trees and object
browsers.  What's the correct approval procedure for such things?

In more detail, it supports manipulating a stack of sequence objects.
Each sequence object has an associated selection point (the currently
selected sequence member) and an associated viewport around it (a
range of indices or sequence members that are considered `visible').

There are methods to manipulate the object stack.  More importantly,
there are functions which move the selection point in the current
object around, and drag the viewport with it.  (This sort of
thing sounds simple, but is tricky for the same reason BitBlt is
tricky -- lots of funky boundary cases.)

I've used this as the framework for implementing the curses menu
interface for CML2.  It is well-tested and stable.  It might also
be useful for implementing other kinds of data browsers in any
situation where the concept of limited visibility around a selection
point makes sense.  Symbolic debuggers is an example that leaps to mind.

I am, of course, willing to fully document it.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"One of the ordinary modes, by which tyrants accomplish their purposes
without resistance, is, by disarming the people, and making it an
offense to keep arms."
        -- Constitutional scholar and Supreme Court Justice Joseph Story, 1840


From gmcm@hypernet.com  Mon Aug  7 00:34:44 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Sun, 6 Aug 2000 19:34:44 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <1246517606-99838203@hypernet.com>

Jack Jansen wrote:

> Could the defenders of Stackless Python please explain _why_ this
> is such a great idea? Just and Christian seem to swear by it, but
> I'd like to hear of some simple examples of programming tasks
> that will be programmable in 50% less code with it (or 50% more
> understandable code, for that matter).

Here's the complete code for the download of a file (the data 
connection of an FTP server):

    def _doDnStream(self, binary=0):
        mode = 'r'
        if binary:
            mode = mode + 'b'
        f = open(self.cmdconn.filename, mode)
        if self.cmdconn.filepos:
            #XXX check length of file
            f.seek(self.cmdconn.filepos, 0)
        while 1:
            if self.abort:
                break
            data = f.read(8192)
            sz = len(data)
            if sz:
                if not binary:
                    data = '\r\n'.join(data.split('\n'))
                self.write(data)
            if sz < 8192:
                break

[from the base class]
    def write(self, msg):
        while msg:
            sent = self.dispatcher.write(self.sock, msg)
            if sent == 0:
                raise IOError, "unexpected EOF"
            msg = msg[sent:]

Looks like blocking sockets, right? Wrong. That's a fully 
multiplexed socket. About a dozen lines of code (hidden in 
that dispatcher object) mean that I can write async without 
using a state machine. 

stackless-forever-ly y'rs

- Gordon


From guido@beopen.com  Mon Aug  7 02:32:59 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 20:32:59 -0500
Subject: [Python-Dev] Adding a new class to the library?
In-Reply-To: Your message of "Sun, 06 Aug 2000 19:18:35 -0400."
 <200008062318.TAA14335@snark.thyrsus.com>
References: <200008062318.TAA14335@snark.thyrsus.com>
Message-ID: <200008070132.UAA16111@cj20424-a.reston1.va.home.com>

> I have a candidate for admission to the Python class library.  It's a
> framework class for writing things like menu trees and object
> browsers.  What's the correct approval procedure for such things?
> 
> In more detail, it supports manipulating a stack of sequence objects.
> Each sequence object has an associated selection point (the currently
> selected sequence member) and an associated viewport around it (a
> range of indices or sequence members that are considered `visible').
> 
> There are methods to manipulate the object stack.  More importantly,
> there are functions which move the selection point in the current
> object around, and drag the viewport with it.  (This sort of
> thing sounds simple, but is tricky for the same reason BitBlt is
> tricky -- lots of funky boundary cases.)
> 
> I've used this as the framework for implementing the curses menu
> interface for CML2.  It is well-tested and stable.  It might also
> be useful for implementing other kinds of data browsers in any
> situation where the concept of limited visibility around a selection
> point makes sense.  Symbolic debuggers is an example that leaps to mind.
> 
> I am, of course, willing to fully document it.

Have a look at the tree widget in IDLE.  That's Tk specific, but I
believe there's a lot of GUI independent concepts in there.  IDLE's
path and object browsers are built on it.  How does this compare?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From esr@thyrsus.com  Mon Aug  7 01:52:53 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 20:52:53 -0400
Subject: [Python-Dev] Adding a new class to the library?
In-Reply-To: <200008070132.UAA16111@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Aug 06, 2000 at 08:32:59PM -0500
References: <200008062318.TAA14335@snark.thyrsus.com> <200008070132.UAA16111@cj20424-a.reston1.va.home.com>
Message-ID: <20000806205253.B14423@thyrsus.com>

Guido van Rossum <guido@beopen.com>:
> Have a look at the tree widget in IDLE.  That's Tk specific, but I
> believe there's a lot of GUI independent concepts in there.  IDLE's
> path and object browsers are built on it.  How does this compare?

Where is this in the CVS tree? I groveled for it but without success.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

To make inexpensive guns impossible to get is to say that you're
putting a money test on getting a gun.  It's racism in its worst form.
        -- Roy Innis, president of the Congress of Racial Equality (CORE), 1988


From greg@cosc.canterbury.ac.nz  Mon Aug  7 02:04:27 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 07 Aug 2000 13:04:27 +1200 (NZST)
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <200008041511.KAA01925@cj20424-a.reston1.va.home.com>
Message-ID: <200008070104.NAA12334@s454.cosc.canterbury.ac.nz>

BDFL:

> No, problems with literal interpretations traditionally raise
> "runtime" exceptions rather than syntax errors.  E.g.

What about using an exception that's a subclass of *both*
ValueError and SyntaxError?
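[Editorial sketch of such a class: the name LiteralError is made up, and
whether the real exception hierarchy would permit this particular multiple
inheritance is an assumption.]

```python
class LiteralError(ValueError, SyntaxError):
    """Hypothetical error for bad escapes, catchable as either base."""

def caught_as(exc_type):
    # An instance is caught by handlers written for either parent class.
    try:
        raise LiteralError("invalid \\x escape")
    except exc_type:
        return True

print(caught_as(ValueError), caught_as(SyntaxError))  # True True
```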

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim_one@email.msn.com  Mon Aug  7 02:16:44 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 21:16:44 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806185159.A14259@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEGEGOAA.tim_one@email.msn.com>

[Tim]
> IMO, the real reason we don't have generators already is that
> they keep getting hijacked by continuations (indeed, Steven gave
> up on his patches as soon as he realized he couldn't extend his
> approach to continuations).

[esr]
> This report of repeated "hijacking" doesn't surprise me a bit.  In
> fact, if I'd thought about it I'd have *expected* it.  We know from
> experience with other languages (notably Scheme) that call-with-
> current-continuation is the simplest orthogonal primitive that this
> whole cluster of concepts can be based on.  Implementors with good
> design taste are going to keep finding their way back to it, and
> they're going to feel incompleteness and pressure if they can't get
> there.

On the one hand, I don't think I know of a language *not* based on Scheme
that has call/cc (or a moral equivalent).  REBOL did at first, but after Joe
Marshal left, Carl Sassenrath ripped it out in favor of a more conventional
implementation.  Even the massive Common Lisp declined to adopt call/cc, the
reasons for which Kent Pitman has posted eloquently and often on
comp.lang.lisp (basically summarized by that continuations are, in Kent's
view, "a semantic mess" in the way Scheme exposed them -- btw, people should
look his stuff up, as he has good ideas for cleaning that mess w/o
sacrificing the power (and so the Lisp world splinters yet again?)).  So
call/cc remains "a Scheme thing" to me after all these years, and even there
by far the most common warning in the release notes for a new implementation
is that call/cc doesn't work correctly yet or at all (but, in the meantime,
here are 3 obscure variations that will work in hard-to-explain special
cases ...).  So, ya, we *do* have experience with this stuff, and it sure
ain't all good.

On the other hand, what implementors other than Schemeheads *do* keep
rediscovering is that generators are darned useful and can be implemented
easily without exotic views of the world.  CLU, Icon and Sather all fit in
that box, and their designers wouldn't touch continuations with a 10-foot
thick condom <wink>.

> This is why I'm holding out for continuation objects and
> call-with-continuation to be an explicit Python builtin. We're
> going to get there anyway; best to do it cleanly right away.

This can get sorted out in the PEP.  As I'm sure someone else has screamed
by now (because it's all been screamed before), Stackless and the
continuation module are distinct beasts (although the latter relies on the
former).  It would be a shame if the fact that it makes continuations
*possible* were to be held against Stackless.  It makes all sorts of things
possible, some of which Guido would even like if people stopped throwing
continuations in his face long enough for him to see beyond them <0.5
wink -- but he doesn't like continuations, and probably never will>.




From jeremy@alum.mit.edu  Mon Aug  7 02:39:46 2000
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Sun, 6 Aug 2000 21:39:46 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>

If someone is going to write a PEP, I hope they will explain how the
implementation deals with the various Python C API calls that can call back
into Python.

In the stackless implementation, builtin_apply is a thin wrapper around
builtin_apply_nr.  The wrapper checks the return value from builtin_apply_nr
for Py_UnwindToken.  If Py_UnwindToken is found, it calls
PyEval_Frame_Dispatch. In this case, builtin_apply returns whatever
PyEval_Frame_Dispatch returns; the frame dispatcher just executes stack
frames until it is ready to return.

How does this control flow at the C level interact with a Python API call
like PySequence_Tuple or PyObject_Compare that can start executing Python
code again?  Say there is a Python function call which in turn calls
PySequence_Tuple, which in turn calls a __getitem__ method on some Python
object, which in turn uses a continuation to transfer control.  After the
continuation is called, the Python function will never return and the
PySequence_Tuple call is no longer necessary, but there is still a call to
PySequence_Tuple on the C stack.  How does stackless deal with the return
through this function?

I expect that any C function that may cause Python code to be executed must
be wrapped the way apply was wrapped.  So in the example, PySequence_Tuple
may return Py_UnwindToken.  This adds an extra return condition that every
caller of PySequence_Tuple must check.  Currently, the caller must check for
NULL/exception in addition to a normal return.  With stackless, I assume the
caller would also need to check for "unwinding."

Is this analysis correct? Or is there something I'm missing?

I see that the current source release of stackless does not do anything
special to deal with C API calls that execute Python code.  For example,
PyDict_GetItem calls PyObject_Hash, which could in theory lead to a call on
a continuation, but neither caller nor callee does anything special to
account for the possibility.  Is there some other part of the implementation
that prevents this from being a problem?

Jeremy



From greg@cosc.canterbury.ac.nz  Mon Aug  7 02:50:32 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 07 Aug 2000 13:50:32 +1200 (NZST)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get"
 method
In-Reply-To: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>
Message-ID: <200008070150.NAA12345@s454.cosc.canterbury.ac.nz>

> dict.default('hello', []).append('hello')

Is this new method going to apply to dictionaries only,
or is it to be considered part of the standard mapping
interface?

If the latter, I wonder whether it would be better to
provide a builtin function instead. The more methods
are added to the mapping interface, the more complicated
it becomes to implement an object which fully complies
with the mapping interface. Operations which can be
carried out through the basic interface are perhaps
best kept "outside" the object, in a function or
wrapper object.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From bwarsaw@beopen.com  Mon Aug  7 03:25:54 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Sun, 6 Aug 2000 22:25:54 -0400 (EDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get"
 method
References: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>
 <200008070150.NAA12345@s454.cosc.canterbury.ac.nz>
Message-ID: <14734.7730.698860.642851@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg@cosc.canterbury.ac.nz> writes:

    >> dict.default('hello', []).append('hello')

    GE> Is this new method going to apply to dictionaries only,
    GE> or is it to be considered part of the standard mapping
    GE> interface?

I think we've settled on setdefault(), which is more descriptive, even
if it's a little longer.  I have posted SF patch #101102 which adds
setdefault() to both the dictionary object and UserDict (along with
the requisite test suite and doco changes).

-Barry


From pf@artcom-gmbh.de  Mon Aug  7 09:32:00 2000
From: pf@artcom-gmbh.de (Peter Funk)
Date: Mon, 7 Aug 2000 10:32:00 +0200 (MEST)
Subject: [Python-Dev] Who is the author of lib-tk/Tkdnd.py?
Message-ID: <m13LiKG-000DieC@artcom0.artcom-gmbh.de>

Hi,

I've some ideas (already implemented <0.5 wink>) for
generic Drag'n'Drop in Python/Tkinter applications.  
Before bothering the list here I would like to discuss this with 
the original author of Tkdnd.py.

Thank you for your attention, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
After all, Python is a programming language, not a psychic hotline. --Tim Peters


From mal@lemburg.com  Mon Aug  7 09:57:01 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 07 Aug 2000 10:57:01 +0200
Subject: [Python-Dev] Pickling using XML as output format
References: <Pine.GSO.4.10.10008061544180.20069-100000@sundial>
Message-ID: <398E79DD.3EB21D3A@lemburg.com>

Moshe Zadka wrote:
> 
> On Sun, 6 Aug 2000, M.-A. Lemburg wrote:
> 
> > Before starting to reinvent the wheel:
> 
> Ummmm......I'd wait for some DC guy to chime in: I think Zope had
> something like that. You might want to ask around on the Zope lists
> or search zope.org.
> 
> I'm not sure what it has and what it doesn't have, though.

I'll download the latest beta and check this out.

Thanks for the tip,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Mon Aug  7 10:15:08 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 07 Aug 2000 11:15:08 +0200
Subject: [Python-Dev] Go \x yourself
References: <200008070104.NAA12334@s454.cosc.canterbury.ac.nz>
Message-ID: <398E7E1C.84D28EA5@lemburg.com>

Greg Ewing wrote:
> 
> BDFL:
> 
> > No, problems with literal interpretations traditionally raise
> > "runtime" exceptions rather than syntax errors.  E.g.
> 
> What about using an exception that's a subclass of *both*
> ValueError and SyntaxError?

What would this buy you ?

Note that the contents of a literal string don't really have
anything to do with syntax. The \x escape sequences are
details of the codecs used for converting those literal
strings to Python string objects.

Perhaps we need a CodecError which is subclass of ValueError
and then make the UnicodeError a subclass of this CodecError ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From artcom0!pf@artcom-gmbh.de  Mon Aug  7 09:14:54 2000
From: artcom0!pf@artcom-gmbh.de (artcom0!pf@artcom-gmbh.de)
Date: Mon, 7 Aug 2000 10:14:54 +0200 (MEST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <14734.7730.698860.642851@anthem.concentric.net> from "Barry A. Warsaw" at "Aug 6, 2000 10:25:54 pm"
Message-ID: <m13Lj9u-000DieC@artcom0.artcom-gmbh.de>

Hi,

Guido:
>     >> dict.default('hello', []).append('hello')

Greg Ewing <greg@cosc.canterbury.ac.nz>:
>     GE> Is this new method going to apply to dictionaries only,
>     GE> or is it to be considered part of the standard mapping
>     GE> interface?
 
Barry A. Warsaw:
> I think we've settled on setdefault(), which is more descriptive, even
> if it's a little longer.  I have posted SF patch #101102 which adds
> setdefault() to both the dictionary object and UserDict (along with
> the requisite test suite and doco changes).

This didn't answer the question raised by Greg Ewing.  As far as I have
seen, the patch doesn't touch 'dbm', 'shelve' and so on, so judging from
the patch the answer is "applies to dictionaries only".

What about the other external mapping types already in the core,
like 'dbm', 'shelve' and so on?

If the patch doesn't add this new method to these other mapping types, 
this fact should at least be documented, similar to the methods 'items()' 
and 'values()' that are already unimplemented in 'dbm':
 """Dbm objects behave like mappings (dictionaries), except that 
    keys and values are always strings.  Printing a dbm object 
    doesn't print the keys and values, and the items() and values() 
    methods are not supported."""

I'm still -1 on the name:  Nobody would expect that a method 
called 'setdefault()' will actually return something useful.  Maybe 
it would be better to invent an absolutely obfuscated new name, so 
that everybody is forced to actually *READ* the documentation of this 
method, or nobody will guess what it is supposed to do or, even
worse, how to make clever use of it.

At least it would be a lot more likely that someone becomes curious 
about what a method called 'grezelbatz()' is supposed to do than that
someone will actually look up the documentation of a method called 'setdefault()'.

If the average Python programmer ever starts to use this method 
at all, then I believe it is very likely that we will see him/her
coding:
	dict.setdefault('key', [])
	dict['key'].append('bar')
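[Editorial note: for comparison, the chained form that the return value is
meant to enable collapses the two lines into one.]

```python
d = {}
# setdefault() stores the default only when the key is missing, and
# returns the (possibly pre-existing) value either way, so the result
# can be used directly.
d.setdefault('key', []).append('bar')
d.setdefault('key', []).append('baz')
print(d)  # {'key': ['bar', 'baz']}
```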

So I'm also still -1 on the concept.  I'm +0 on Greg's proposal that
it would be better to make this a builtin function that can be applied
to all mapping types.

Maybe it would be even better to delay this until, in Python 3000,
builtin types have become real classes, so that this method can be
inherited by all mapping types from an abstract mapping base class.

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)



From mal@lemburg.com  Mon Aug  7 11:07:09 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 07 Aug 2000 12:07:09 +0200
Subject: [Python-Dev] Pickling using XML as output format
References: <Pine.GSO.4.10.10008061544180.20069-100000@sundial> <398E79DD.3EB21D3A@lemburg.com>
Message-ID: <398E8A4D.CAA87E02@lemburg.com>

"M.-A. Lemburg" wrote:
> 
> Moshe Zadka wrote:
> >
> > On Sun, 6 Aug 2000, M.-A. Lemburg wrote:
> >
> > > Before starting to reinvent the wheel:
> >
> > Ummmm......I'd wait for some DC guy to chime in: I think Zope had
> > something like that. You might want to ask around on the Zope lists
> > or search zope.org.
> >
> > I'm not sure what it has and what it doesn't have, though.
> 
> I'll download the latest beta and check this out.

Ok, Zope has something called ppml.py which aims at converting
Python pickles to XML. It doesn't really pickle directly to XML
though and e.g. uses the Python encoding for various objects.

I guess, I'll start hacking away at my own xmlpickle.py
implementation with the goal of making Python pickles
editable using a XML editor.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From tismer@appliedbiometrics.com  Mon Aug  7 11:48:19 2000
From: tismer@appliedbiometrics.com (Christian Tismer)
Date: Mon, 07 Aug 2000 12:48:19 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
References: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>
Message-ID: <398E93F3.374B585A@appliedbiometrics.com>


Jeremy Hylton wrote:
> 
> If someone is going to write a PEP, I hope they will explain how the
> implementation deals with the various Python C API calls that can call back
> into Python.

He will.

> In the stackless implementation, builtin_apply is a thin wrapper around
> builtin_apply_nr.  The wrapper checks the return value from builtin_apply_nr
> for Py_UnwindToken.  If Py_UnwindToken is found, it calls
> PyEval_Frame_Dispatch. In this case, builtin_apply returns whatever
> PyEval_Frame_Dispatch returns; the frame dispatcher just executes stack
> frames until it is ready to return.

Correct.

> How does this control flow at the C level interact with a Python API call
> like PySequence_Tuple or PyObject_Compare that can start executing Python
> code again?  Say there is a Python function call which in turn calls
> PySequence_Tuple, which in turn calls a __getitem__ method on some Python
> object, which in turn uses a continuation to transfer control.  After the
> continuation is called, the Python function will never return and the
> PySequence_Tuple call is no longer necessary, but there is still a call to
> PySequence_Tuple on the C stack.  How does stackless deal with the return
> through this function?

Right. What you see here is the incompleteness of Stackless.
In order to get this "right", I would have to change many parts of
the implementation to allow for continuations in every (probably
even unwanted) place.
I could not do this.

Instead, these still occurring recursions are handled differently:
the continuation module guarantees that, in the context of
recursive interpreter calls, the given stack order of execution
is obeyed.  Violations of this simply cause an exception.

> I expect that any C function that may cause Python code to be executed must
> be wrapped the way apply was wrapped.  So in the example, PySequence_Tuple
> may return Py_UnwindToken.  This adds an extra return condition that every
> caller of PySequence_Tuple must check.  Currently, the caller must check for
> NULL/exception in addition to a normal return.  With stackless, I assume the
> caller would also need to check for "unwinding."

No, nobody but the few functions in the builtins implementation
and in ceval is allowed to return Py_UnwindToken. The
continuationmodule may produce it, since it knows the context
where it is called. eval_code is the main place where this
special value is checked for.

As I said, allowing this in any context would have been a huge
change to the whole implementation, and would probably also
have broken existing extensions which do not expect that
a standard function wants to do a callback.

> Is this analysis correct? Or is there something I'm missing?
> 
> I see that the current source release of stackless does not do anything
> special to deal with C API calls that execute Python code.  For example,
> PyDict_GetItem calls PyObject_Hash, which could in theory lead to a call on
> a continuation, but neither caller nor callee does anything special to
> account for the possibility.  Is there some other part of the implementation
> that prevents this from being a problem?

This is not a problem in itself, since inside the Stackless
modification of Python there are no places where unexpected
Py_UnwindTokens or continuations are produced; to that extent
it is a closed system. But with the continuation extension, it
is of course a major problem.

The final solution to the recursive interpreter/continuation
problem was found long after my paper was presented. The idea
is simple, solves everything, and shortened my implementation
substantially:

Whenever a recursive interpreter call takes place, the calling
frame gets a lock flag set. This flag says "this frame is wrapped
in a suspended eval_code call and cannot be a continuation".
continuationmodule always obeys this flag and prevents the
creation of continuations for such frames by raising an
exception. In other words: stack-like behavior is enforced
in situations where the C stack is involved.

So, a builtin or an extension *can* call a continuation, but
eventually it will have to come back to the calling point. If
it doesn't, one of the locked frames will eventually be touched
in the wrong C stack order. Through reference counting, that
touch triggers an attempt to create a continuation, and, as
described above, an exception is raised.
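The rule can be sketched in a few lines of illustrative Python (all names
here are invented for the sketch; the real mechanism lives in C, in ceval
and the continuation module):

```python
class FrameLockedError(RuntimeError):
    """Raised when trying to capture a continuation of a locked frame."""

class Frame:
    def __init__(self, name):
        self.name = name
        self.locked = False   # set while wrapped in a suspended eval_code call

def recursive_interpreter_call(frame, func):
    # A recursive C-level interpreter call locks the calling frame ...
    frame.locked = True
    try:
        return func()
    finally:
        frame.locked = False  # ... until control returns in stack order

def capture_continuation(frame):
    # continuationmodule's rule: locked frames cannot become continuations
    if frame.locked:
        raise FrameLockedError("frame %s is wrapped in a recursive "
                               "interpreter call" % frame.name)
    return ("continuation", frame)
```

While `func` runs inside `recursive_interpreter_call`, capturing a
continuation of that frame raises; before and after, it succeeds.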

This is probably the wrong place to explain it in more detail,
but none of it applies to the stackless core, which is just
responsible for the necessary support machinery.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com


From paul@prescod.net  Mon Aug  7 13:18:43 2000
From: paul@prescod.net (Paul Prescod)
Date: Mon, 07 Aug 2000 08:18:43 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <39866F8D.FCFA85CB@prescod.net> <1246975873-72274187@hypernet.com>
Message-ID: <398EA923.E5400D2B@prescod.net>

Gordon McMillan wrote:
> 
> ...
> 
> As a practical matter, it looks to me like winreg (under any but
> the most well-informed usage) may well leak handles. If so,
> that would be a disaster. But I don't have time to check it out.

I would be very surprised if that was the case. Perhaps you can outline
your thinking so that *I* can check it out.

I claim that:

_winreg never leaks Windows handles as long as _winreg handle objects
are destroyed.

winreg is written entirely in Python and destroys _winreg handles as
long as winreg key objects are destroyed.

winreg key objects are destroyed as long as there is no cycle.

winreg does not create cycles.

Therefore, winreg does not leak handles. I'm 98% confident of each
assertion...for a total confidence of 92%.
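For what it's worth, the compound figure checks out if the four
assertions are treated as independent (an illustrative back-of-the-envelope
check, not part of winreg itself):

```python
# Four independent assertions, each held with 98% confidence.
confidence = 0.98 ** 4
print(round(confidence * 100))  # ~92 percent
```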
-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"


From guido@beopen.com  Mon Aug  7 13:38:11 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 07:38:11 -0500
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: Your message of "Mon, 07 Aug 2000 13:50:32 +1200."
 <200008070150.NAA12345@s454.cosc.canterbury.ac.nz>
References: <200008070150.NAA12345@s454.cosc.canterbury.ac.nz>
Message-ID: <200008071238.HAA18076@cj20424-a.reston1.va.home.com>

> > dict.default('hello', []).append('hello')
> 
> Is this new method going to apply to dictionaries only,
> or is it to be considered part of the standard mapping
> interface?
> 
> If the latter, I wonder whether it would be better to
> provide a builtin function instead. The more methods
> are added to the mapping interface, the more complicated
> it becomes to implement an object which fully complies
> with the mapping interface. Operations which can be
> carried out through the basic interface are perhaps
> best kept "outside" the object, in a function or
> wrapper object.

The "mapping interface" has no firm definition.  You're free to
implement something without a default() method and call it a mapping.

In Python 3000, where classes and built-in types will be unified, of
course this will be fixed: there will be a "mapping" base class that
implements get() and default() in terms of other, more primitive
operations.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Moshe Zadka <moshez@math.huji.ac.il>  Mon Aug  7 12:45:45 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Mon, 7 Aug 2000 14:45:45 +0300 (IDT)
Subject: [Python-Dev] Minor compilation problem on HP-UX (1.6b1) (fwd)
Message-ID: <Pine.GSO.4.10.10008071444080.4113-100000@sundial>

I've answered him personally about the first part -- but the second part
is interesting (and even troubling)

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez

---------- Forwarded message ----------
Date: Mon, 7 Aug 2000 08:59:30 +0000 (UTC)
From: Eddy De Greef <degreef@imec.be>
To: python-list@python.org
Newsgroups: comp.lang.python
Subject: Minor compilation problem on HP-UX (1.6b1)

Hi,

when I compile version 1.6b1 on HP-UX-10, I get a few compilation errors 
in Python/getargs.c (undefined UCHAR_MAX etc). The following patch fixes this:

------------------------------------------------------------------------------
*** Python/getargs.c.orig       Mon Aug  7 10:19:55 2000
--- Python/getargs.c    Mon Aug  7 10:20:21 2000
***************
*** 8,13 ****
--- 8,14 ----
  #include "Python.h"
  
  #include <ctype.h>
+ #include <limits.h>
  
  
  int PyArg_Parse Py_PROTO((PyObject *, char *, ...));
------------------------------------------------------------------------------

I also have a suggestion to improve the speed on the HP-UX platform. 
By tuning the memory allocation algorithm (see the patch below), it is 
possible to obtain a speed improvement of up to 22% on non-trivial 
Python scripts, especially when lots of (small) objects have to be created. 
I'm aware that platform-specific features are undesirable for a 
multi-platform application such as Python, but 22% is quite a lot
for such a small modification ...
Maybe similar tricks can be used on other platforms too.

------------------------------------------------------------------------------
*** Modules/main.c.orig Mon Aug  7 10:02:09 2000
--- Modules/main.c      Mon Aug  7 10:02:37 2000
***************
*** 83,88 ****
--- 83,92 ----
        orig_argc = argc;       /* For Py_GetArgcArgv() */
        orig_argv = argv;
  
+ #ifdef __hpux
+       mallopt (M_MXFAST, 512);
+ #endif /* __hpux */
+ 
        if ((p = getenv("PYTHONINSPECT")) && *p != '\0')
                inspect = 1;
        if ((p = getenv("PYTHONUNBUFFERED")) && *p != '\0')
------------------------------------------------------------------------------

Regards,

Eddy
-- 
http://www.python.org/mailman/listinfo/python-list



From gmcm@hypernet.com  Mon Aug  7 13:00:10 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Mon, 7 Aug 2000 08:00:10 -0400
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <398EA923.E5400D2B@prescod.net>
Message-ID: <1246472883-102528128@hypernet.com>

Paul Prescod wrote:

> Gordon McMillan wrote:
> > 
> > ...
> > 
> > As a practical matter, it looks to me like winreg (under any
> > but the most well-informed usage) may well leak handles. If so,
> > that would be a disaster. But I don't have time to check it
> > out.
> 
> I would be very surprised if that was the case. Perhaps you can
> outline your thinking so that *I* can check it out.

Well, I saw RegKey.close nowhere referenced. I saw the 
method it calls in _winreg not getting triggered elsewhere. I 
missed that _winreg closes them another way on dealloc.

BTW, not all your hive names exist on every Windows 
platform (or build of _winreg).
 


- Gordon


From jack@oratrix.nl  Mon Aug  7 13:27:59 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Mon, 07 Aug 2000 14:27:59 +0200
Subject: [Python-Dev] Minor compilation problem on HP-UX (1.6b1) (fwd)
In-Reply-To: Message by Moshe Zadka <moshez@math.huji.ac.il> ,
 Mon, 7 Aug 2000 14:45:45 +0300 (IDT) , <Pine.GSO.4.10.10008071444080.4113-100000@sundial>
Message-ID: <20000807122800.8D0B1303181@snelboot.oratrix.nl>

> + #ifdef __hpux
> +       mallopt (M_MXFAST, 512);
> + #endif /* __hpux */
> + 

After reading this I went off and actually _read_ the mallopt manpage for the 
first time in my life, and it seems there's quite a few parameters there we 
might want to experiment with. Besides the M_MXFAST there's also M_GRAIN, 
M_BLKSIZ, M_MXCHK and M_FREEHD that could have significant impact on Python 
performance. I know that all the tweaks and tricks I did in the MacPython 
malloc implementation resulted in a speedup of 20% or more (including the 
cache-alignment code in dictobject.c).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From Vladimir.Marangozov@inrialpes.fr  Mon Aug  7 13:59:49 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 14:59:49 +0200 (CEST)
Subject: mallopt (Re: [Python-Dev] Minor compilation problem on HP-UX (1.6b1) (fwd))
In-Reply-To: <20000807122800.8D0B1303181@snelboot.oratrix.nl> from "Jack Jansen" at Aug 07, 2000 02:27:59 PM
Message-ID: <200008071259.OAA22446@python.inrialpes.fr>

Jack Jansen wrote:
> 
> 
> > + #ifdef __hpux
> > +       mallopt (M_MXFAST, 512);
> > + #endif /* __hpux */
> > + 
> 
> After reading this I went off and actually _read_ the mallopt manpage for the 
> first time in my life, and it seems there's quite a few parameters there we 
> might want to experiment with. Besides the M_MXFAST there's also M_GRAIN, 
> M_BLKSIZ, M_MXCHK and M_FREEHD that could have significant impact on Python 
> performance. I know that all the tweaks and tricks I did in the MacPython 
> malloc implementation resulted in a speedup of 20% or more (including the 
> cache-aligment code in dictobject.c).

To start with, try the optional object malloc I uploaded yesterday at SF.
[Patch #101104]

Tweaking mallopt and getting 20% speedup for some scripts is no surprise
at all. For me <wink>. It is not portable though.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From jeremy@beopen.com  Mon Aug  7 14:05:20 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 7 Aug 2000 09:05:20 -0400 (EDT)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEGEGOAA.tim_one@email.msn.com>
References: <20000806185159.A14259@thyrsus.com>
 <LNBBLJKPBEHFEDALKOLCIEGEGOAA.tim_one@email.msn.com>
Message-ID: <14734.46096.366920.827786@bitdiddle.concentric.net>

>>>>> "TP" == Tim Peters <tim_one@email.msn.com> writes:

  TP> On the one hand, I don't think I know of a language *not* based
  TP> on Scheme that has call/cc (or a moral equivalent).

ML also has call/cc, at least the Concurrent ML variant.

Jeremy


From jeremy@beopen.com  Mon Aug  7 14:10:14 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 7 Aug 2000 09:10:14 -0400 (EDT)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <398E93F3.374B585A@appliedbiometrics.com>
References: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>
 <398E93F3.374B585A@appliedbiometrics.com>
Message-ID: <14734.46390.190481.441065@bitdiddle.concentric.net>

>>>>> "CT" == Christian Tismer <tismer@appliedbiometrics.com> writes:

  >> If someone is going to write a PEP, I hope they will explain how
  >> the implementation deals with the various Python C API calls that
  >> can call back into Python.

  CT> He will.

Good!  You'll write a PEP.

  >> How does this control flow at the C level interact with a Python
  >> API call like PySequence_Tuple or PyObject_Compare that can start
  >> executing Python code again?  Say there is a Python function call
  >> which in turn calls PySequence_Tuple, which in turn calls a
  >> __getitem__ method on some Python object, which in turn uses a
  >> continuation to transfer control.  After the continuation is
  >> called, the Python function will never return and the
  >> PySequence_Tuple call is no longer necessary, but there is still a
  >> call to PySequence_Tuple on the C stack.  How does stackless deal
  >> with the return through this function?

  CT> Right. What you see here is the incompleteness of Stackless.  In
  CT> order to get this "right", I would have to change many parts of
  CT> the implementation, in order to allow for continuations in every
  CT> (probably even unwanted) place.  I could not do this.

  CT> Instead, these still-occurring recursions are handled
  CT> differently: continuationmodule guarantees that, in the
  CT> context of recursive interpreter calls, the given stack order
  CT> of execution is obeyed. Violations of this simply cause an
  CT> exception.

Let me make sure I understand: If I invoke a continuation when there
are extra C stack frames between the mainloop invocation that captured
the continuation and the call of the continuation, the interpreter
raises an exception?

If so, continuations don't sound like they would mix well with C
extension modules and callbacks.  I guess it also could not be used
inside methods that implement operator overloading.  Is there a simple
set of rules that describe the situations where they will not work?

Jeremy


From thomas@xs4all.net  Mon Aug  7 14:07:11 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 7 Aug 2000 15:07:11 +0200
Subject: [Python-Dev] augmented assignment
Message-ID: <20000807150711.W266@xs4all.nl>

--0OAP2g/MAC+5xKAE
Content-Type: text/plain; charset=us-ascii


I 'finished' the new augmented assignment patch yesterday, following the
suggestions made by Guido about using INPLACE_* bytecodes rather than
special GETSET_* opcodes.

I ended up with 13 new opcodes: INPLACE_* opcodes for the 11 binary
operation opcodes, DUP_TOPX which duplicates a number of stack items instead
of just the topmost item, and ROT_FOUR.

I thought I wouldn't need ROT_FOUR given DUP_TOPX, but I hadn't realized
that assignment needs the new value at the bottom of the 'stack', with the
objects used in the assignment above it. So ROT_FOUR is necessary in the
case of slice-assignment:

a[b:c] += i

LOAD a			[a]
LOAD b			[a, b]
LOAD c			[a, b, c]
DUP_TOPX 3		[a, b, c, a, b, c]
SLICE+3			[a, b, c, a[b:c]]
LOAD i			[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
ROT_FOUR		[result, a, b, c]
STORE_SLICE+3		[]

When (and if) the *SLICE opcodes are removed, ROT_FOUR can, too :)
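The net effect of the bytecode sequence above can be observed directly in
any Python that has augmented assignment (the list `a` here is just an
example):

```python
a = [1, 2, 3, 4]
a[1:3] += [9]     # load a[1:3] -> [2, 3]; INPLACE_ADD [9]; store back
print(a)          # [1, 2, 3, 9, 4]
```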

The patch is 'done' in my opinion, except for two tiny things:

- PyNumber_InPlacePower() takes just two arguments, not three.
Three-argument power() does such 'orrible things to coerce all the
arguments, and you can't do augmented-assignment-three-argument-power
anyway. If it were added, it would be for the API only, and I'm not
sure if it's worth it :P

- I still don't like the '_ab_' names :) I think __inplace_add__ or __iadd__
  is better, but that's just me.

The PEP is also 'done'. Feedback is more than welcome, including spelling
fixes and the like. I've attached the PEP to this mail, for convenience.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!

--0OAP2g/MAC+5xKAE
Content-Type: text/plain
Content-Disposition: attachment; filename="pep-0203.txt"

PEP: 203
Title: Augmented Assignments
Version: $Revision: 1.4 $
Owner: thomas@xs4all.net (Thomas Wouters)
Python-Version: 2.0
Status: Draft


Introduction

    This PEP describes the `augmented assignment' proposal for Python
    2.0.  This PEP tracks the status and ownership of this feature,
    slated for introduction in Python 2.0.  It contains a description
    of the feature and outlines changes necessary to support the
    feature.  This PEP summarizes discussions held in mailing list
    forums, and provides URLs for further information, where
    appropriate.  The CVS revision history of this file contains the
    definitive historical record.


Proposed semantics

    The proposed patch that adds augmented assignment to Python
    introduces the following new operators:
    
       += -= *= /= %= **= <<= >>= &= ^= |=
    
    They implement the same operator as their normal binary form, with
    the exception that the operation is done `in-place' whenever
    possible.
    
    They truly behave as augmented assignment, in that they perform
    all of the normal load and store operations, in addition to the
    binary operation they are intended to do. So, given the expression:
    
       x += y
    
    The object `x' is loaded, `y' is added to it, and the resulting
    object is stored back in the original place. The precise action
    performed on the two arguments depends on the type of `x', and
    possibly of `y'.

    The idea behind augmented assignment in Python is that it isn't
    just an easier way to write the common practice of storing the
    result of a binary operation in its left-hand operand, but also a
    way for the left-hand operand in question to know that it should
    operate 'on itself', rather than creating a modified copy of
    itself.

    To make this possible, a number of new `hooks' are added to Python
    classes and C extension types, which are called when the object in
    question is used as the left hand side of an augmented assignment
    operation. If the class or type does not implement the `in-place'
    hooks, the normal hooks for the particular binary operation are
    used.
    
    So, given an instance object `x', the expression
    
        x += y
    
    tries to call x.__add_ab__(y), which is the 'in-place' variant of
    __add__. If __add_ab__ is not present, x.__add__(y) is
    attempted, and finally y.__radd__(x) if __add__ is missing too. 
    There is no `right-hand-side' variant of __add_ab__, because that
    would require for `y' to know how to in-place modify `x', which is
    an unsafe assumption. The __add_ab__ hook should behave exactly
    like __add__, returning the result of the operation (which could
    be `self') which is to be stored in the variable `x'.
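A sketch of the intended class-side behaviour, spelled with `__iadd__` (one
of the alternative names mentioned in this PEP, and the one later Pythons
adopted; `Accumulator` is an invented example class):

```python
class Accumulator:
    def __init__(self, items=None):
        self.items = list(items or [])

    def __iadd__(self, other):
        # in-place hook: operate 'on itself' and return self,
        # which the assignment then stores back into the name
        self.items.append(other)
        return self

    def __add__(self, other):
        # ordinary hook: fall back to returning a modified copy
        return Accumulator(self.items + [other])

x = Accumulator()
before = x
x += 'spam'                    # calls x.__iadd__('spam')
print(x is before, x.items)    # True ['spam']
```

Note that `x` after the augmented assignment is still the same object,
whereas `x + 'eggs'` would produce a new one.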
 
    For C extension types, the `hooks' are members of the
    PyNumberMethods and PySequenceMethods structures, and are called
    in exactly the same manner as the existing non-inplace operations,
    including argument coercion. C methods should also take care to
    return a new reference to the result object, whether it's the same
    object or a new one. So if the original object is returned, it
    should be INCREF()'d appropriately.


New methods

    The proposed implementation adds the following 11 possible `hooks'
    which Python classes can implement to overload the augmented
    assignment operations:
    
        __add_ab__
        __sub_ab__
        __mul_ab__
        __div_ab__
        __mod_ab__
        __pow_ab__
        __lshift_ab__
        __rshift_ab__
        __and_ab__
        __xor_ab__
        __or_ab__
    
    The `__add_ab__' name is one proposed by Guido[1], and stands for `and
    becomes'. Other proposed names include `__iadd__', `__add_in__' and
    `__inplace_add__'.

    For C extension types, the following struct members are added:
    
    To PyNumberMethods:
        binaryfunc nb_inplace_add;
        binaryfunc nb_inplace_subtract;
        binaryfunc nb_inplace_multiply;
        binaryfunc nb_inplace_divide;
        binaryfunc nb_inplace_remainder;
        binaryfunc nb_inplace_power;
        binaryfunc nb_inplace_lshift;
        binaryfunc nb_inplace_rshift;
        binaryfunc nb_inplace_and;
        binaryfunc nb_inplace_xor;
        binaryfunc nb_inplace_or;

    To PySequenceMethods:
        binaryfunc sq_inplace_concat;
        intargfunc sq_inplace_repeat;

    In order to keep binary compatibility, the tp_flags TypeObject
    member is used to determine whether the TypeObject in question has
    allocated room for these slots. Until a clean break in binary
    compatibility is made (which may or may not happen before 2.0),
    code that wants to use one of the new struct members must first
    check that they are available with the 'PyType_HasFeature()' macro:
    
    if (PyType_HasFeature(x->ob_type, Py_TPFLAGS_HAVE_INPLACE_OPS) &&
        x->ob_type->tp_as_number && x->ob_type->tp_as_number->nb_inplace_add) {
            /* ... */

    This check must be made even before testing the method slots for
    NULL values! The macro only tests whether the slots are available,
    not whether they are filled with methods or not.


Implementation

    The current implementation of augmented assignment[2] adds, in
    addition to the methods and slots already covered, 13 new bytecodes
    and 13 new API functions.
    
    The API functions are simply in-place versions of the current
    binary-operation API functions:
    
        PyNumber_InPlaceAdd(PyObject *o1, PyObject *o2);
        PyNumber_InPlaceSubtract(PyObject *o1, PyObject *o2);
        PyNumber_InPlaceMultiply(PyObject *o1, PyObject *o2);
        PyNumber_InPlaceDivide(PyObject *o1, PyObject *o2);
        PyNumber_InPlaceRemainder(PyObject *o1, PyObject *o2);
        PyNumber_InPlacePower(PyObject *o1, PyObject *o2);
        PyNumber_InPlaceLshift(PyObject *o1, PyObject *o2);
        PyNumber_InPlaceRshift(PyObject *o1, PyObject *o2);
        PyNumber_InPlaceAnd(PyObject *o1, PyObject *o2);
        PyNumber_InPlaceXor(PyObject *o1, PyObject *o2);
        PyNumber_InPlaceOr(PyObject *o1, PyObject *o2);
        PySequence_InPlaceConcat(PyObject *o1, PyObject *o2);
        PySequence_InPlaceRepeat(PyObject *o, int count);

    They call either the Python class hooks (if either of the objects
    is a Python class instance) or the C type's number or sequence
    methods.

    The new bytecodes are:
        INPLACE_ADD
        INPLACE_SUBTRACT
        INPLACE_MULTIPLY
        INPLACE_DIVIDE
        INPLACE_REMAINDER
        INPLACE_POWER
        INPLACE_LEFTSHIFT
        INPLACE_RIGHTSHIFT
        INPLACE_AND
        INPLACE_XOR
        INPLACE_OR
        ROT_FOUR
        DUP_TOPX
    
    The INPLACE_* bytecodes mirror the BINARY_* bytecodes, except that
    they are implemented as calls to the 'InPlace' API functions. The
    other two bytecodes are 'utility' bytecodes: ROT_FOUR behaves like
    ROT_THREE except that the four topmost stack items are rotated.
    
    DUP_TOPX is a bytecode that takes a single argument, which should
    be an integer between 1 and 5 (inclusive) which is the number of
    items to duplicate in one block. Given a stack like this (where
    the right side of the list is the 'top' of the stack):

        [a, b, c, d, e, f, g]
    
    "DUP_TOPX 3" would duplicate the top 3 items, resulting in this
    stack:
    
        [a, b, c, d, e, f, g, e, f, g]

    DUP_TOPX with an argument of 1 is the same as DUP_TOP. The limit
    of 5 is purely an implementation limit. The implementation of
    augmented assignment requires only DUP_TOPX with an argument of 2
    and 3, and could do without this new opcode at the cost of a fair
    number of DUP_TOP and ROT_*.
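    The stack effect is easy to model with a plain Python list standing
    in for the value stack, with the end of the list as the stack top
    (`dup_topx` is just an illustrative helper, not the interpreter's
    actual code):

```python
def dup_topx(stack, n):
    # duplicate the top n items as one block (top of stack = end of list)
    assert 1 <= n <= 5, "implementation limit"
    return stack + stack[-n:]

print(dup_topx(['a', 'b', 'c', 'd', 'e', 'f', 'g'], 3))
# ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'e', 'f', 'g']
```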


Copyright

    This document has been placed in the public domain.


References

    [1] http://www.python.org/pipermail/python-list/2000-June/059556.html
    [2]
http://sourceforge.net/patch?func=detailpatch&patch_id=100699&group_id=5470



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

--0OAP2g/MAC+5xKAE--


From guido@beopen.com  Mon Aug  7 15:11:52 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 09:11:52 -0500
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: Your message of "Mon, 07 Aug 2000 10:14:54 +0200."
 <m13Lj9u-000DieC@artcom0.artcom-gmbh.de>
References: <m13Lj9u-000DieC@artcom0.artcom-gmbh.de>
Message-ID: <200008071411.JAA18437@cj20424-a.reston1.va.home.com>

> Guido:
> >     >> dict.default('hello', []).append('hello')
> 
> Greg Ewing <greg@cosc.canterbury.ac.nz>:
> >     GE> Is this new method going to apply to dictionaries only,
> >     GE> or is it to be considered part of the standard mapping
> >     GE> interface?
>  
> Barry A. Warsaw:
> > I think we've settled on setdefault(), which is more descriptive, even
> > if it's a little longer.  I have posted SF patch #101102 which adds
> > setdefault() to both the dictionary object and UserDict (along with
> > the requisite test suite and doco changes).

PF:
> This didn't answer the question raised by Greg Ewing.  AFAI have seen,
> the patch doesn't touch 'dbm', 'shelve' and so on.  So from the patch
> the answer is "applies to dictionaries only".

I replied to Greg Ewing already: it's not part of the required mapping
protocol.

> What is with the other external mapping types already in the core,
> like 'dbm', 'shelve' and so on?
> 
> If the patch doesn't add this new method to these other mapping types, 
> this fact should at least be documented, similar to the 'items()' 
> and 'values()' methods that are already unimplemented in 'dbm':
>  """Dbm objects behave like mappings (dictionaries), except that 
>     keys and values are always strings.  Printing a dbm object 
>     doesn't print the keys and values, and the items() and values() 
>     methods are not supported."""

Good point.

> I'm still -1 on the name:  Nobody would expect that a method 
> called 'setdefault()' will actually return something useful.  Maybe 
> it would be better to invent an absolutely obfuscated new name, so 
> that everybody is forced to actually *READ* the documentation of this 
> method, or nobody will guess what it is supposed to do or, even
> worse, how to make clever use of it.

I don't get your point.  Since when is it a requirement for a method
to convey its full meaning by just its name?  As long as the name
doesn't intuitively contradict the actual meaning it should be fine.

If you read code that does:

	dict.setdefault('key', [])
	dict['key'].append('bar')

you will have no problem understanding this.  There's no need for the
reader to know that this is suboptimal.  (Of course, if you're an
experienced Python user doing a code review, you might know that.  But
it's not needed to understand what goes on.)

Likewise, if you read code like this:

	dict.setdefault('key', []).append('bar')

it doesn't seem hard to guess what it does (under the assumption that
you already know the program works).  After all, there are at most
three things that setdefault() could *possibly* return:

1. None		-- but then the append() wouldn't work

2. dict		-- but append() is not a dict method so wouldn't work either

3. dict['key']	-- this is the only one that makes sense
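And option 3 is indeed what the patch implements; in any Python that has
`setdefault()`, the behaviour can be checked directly:

```python
d = {}
d.setdefault('key', []).append('bar')   # inserts [] first, then appends
d.setdefault('key', []).append('baz')   # key exists: default is ignored
print(d)                                # {'key': ['bar', 'baz']}
```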

> At least it would be a lot more likely that someone becomes curious 
> what a method called 'grezelbatz()' is supposed to do, than that someone
> will actually look up the documentation of a method called 'setdefault()'.

Bogus.  This would argue that we should give all methods obscure names.

> If the average Python programmer would ever start to use this method 
> at all, then I believe it is very likely that we will see him/her
> coding:
> 	dict.setdefault('key', [])
> 	dict['key'].append('bar')

And I have no problem with that.  It's still less writing than the
currently common idioms to deal with this!

> So I'm also still -1 on the concept.  I'm +0 on Gregs proposal, that
> it would be better to make this a builtin function, that can be applied
> to all mapping types.

Yes, and let's also make values(), items(), has_key() and get()
builtins instead of methods.  Come on!  Python is an OO language.

> Maybe it would be even better to delay this until in Python 3000
> builtin types may have become real classes, so that this method may
> be inherited by all mapping types from an abstract mapping base class.

Sure, but that's not an argument for not adding it to the dictionary
type today!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jack@oratrix.nl  Mon Aug  7 14:26:40 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Mon, 07 Aug 2000 15:26:40 +0200
Subject: mallopt (Re: [Python-Dev] Minor compilation problem on HP-UX
 (1.6b1) (fwd))
In-Reply-To: Message by Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
 ,
 Mon, 7 Aug 2000 14:59:49 +0200 (CEST) , <200008071259.OAA22446@python.inrialpes.fr>
Message-ID: <20000807132641.A60E6303181@snelboot.oratrix.nl>

Don't worry, Vladimir, I hadn't forgotten your malloc stuff:-) It's just that 
if mallopt is available in the standard C library, this may be a way to squeeze 
out a couple of extra percent of performance that the admin who installs 
Python needn't be aware of. And I don't think your allocator can be dropped 
into the standard distribution, because it has the potential problem of 
fragmenting the heap due to multiple malloc packages in one address space (at 
least, that was the problem when I last looked at it, which is admittedly more 
than a year ago).

And about mallopt not being portable: right, but I would assume that something 
like
#ifdef M_MXFAST
	mallopt(M_MXFAST, xxxx);
#endif
shouldn't do any harm if we set xxxx to be a size that will cause 80% or so of 
the python objects to fall into the M_MXFAST category 
(sizeof(PyObject)+sizeof(void *), maybe?). This doesn't sound 
platform-dependent...

Similarly, M_FREEHD sounds like it could speed up Python allocation, but this 
would need to be measured. Python allocation patterns shouldn't be influenced 
too much by platform, so again if this is good on one platform it is probably 
good on all.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From mark@per.dem.csiro.au  Mon Aug  7 22:34:42 2000
From: mark@per.dem.csiro.au (Mark Favas)
Date: Mon, 7 Aug 2000 21:34:42 WST
Subject: [Python-Dev] mallopt (Was: Minor compilation problem on HP-UX (1.6b1))
Message-ID: <200008071334.VAA15707@demperth.per.dem.csiro.au>

To add to Vladimir Marangozov's comments about mallopt, in terms of both
portability and utility (before too much time is expended)...

From the Tru64 Unix man page:

NOTES

  The mallopt() and mallinfo() functions are not supported for multithreaded
  applications.

  The mallopt() and mallinfo() functions are designed for tuning a specific
  algorithm.  The Tru64 UNIX operating system uses a new, more efficient
  algorithm.  The mallopt() and mallinfo() functions are provided for System
  V compatibility only and should not be used by new applications.  The
  behavior of the current malloc() and free() functions is not affected by
  calls to mallopt().  The structure returned by the mallinfo() function
  might not contain any useful information.

-- 
Mark Favas


From tismer@appliedbiometrics.com  Mon Aug  7 14:47:39 2000
From: tismer@appliedbiometrics.com (Christian Tismer)
Date: Mon, 07 Aug 2000 15:47:39 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
References: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>
 <398E93F3.374B585A@appliedbiometrics.com> <14734.46390.190481.441065@bitdiddle.concentric.net>
Message-ID: <398EBDFB.4ED9FAE7@appliedbiometrics.com>

[about recursion and continuations]

>   CT> Right. What you see here is the incompleteness of Stackless.  In
>   CT> order to get this "right", I would have to change many parts of
>   CT> the implementation, in order to allow for continuations in every
>   CT> (probably even unwanted) place.  I could not do this.
> 
>   CT> Instead, the situations of these still-occurring recursions are
>   CT> handled differently. continuationmodule guarantees that, in the
>   CT> context of recursive interpreter calls, the given stack order of
>   CT> execution is obeyed. Violations of this simply cause an
>   CT> exception.
> 
> Let me make sure I understand: If I invoke a continuation when there
> are extra C stack frames between the mainloop invocation that captured
> the continuation and the call of the continuation, the interpreter
> raises an exception?

Not always. Frames which are not currently bound by an
interpreter acting on them can always be jump targets.
Only those frames which are currently in the middle of
an opcode are forbidden.

> If so, continuations don't sound like they would mix well with C
> extension modules and callbacks.  I guess it also could not be used
> inside methods that implement operator overloading.  Is there a simple
> set of rules that describes the situations where they will not work?

Right. In order to mix well with C callbacks, extra work is
necessary. The C extension module must then play the same
frame-dribbling game as the eval loop does. An example can
be found in stackless map.
If the C extension does not do so, it restricts execution
order in the way I explained. This is not always needed,
and it is no new requirement for C developers. Only if they
want to support free continuation switching do they have to
implement it.

The simple rule for where continuations will not work at
the moment is: generally, they do not work across interpreter
recursions. At the least, these restrictions apply:

- you cannot run an import and jump off to the caller's frame
+ but you can save a continuation in your import and use it
  later, when this recursive interpreter is gone.

- all special class functions are restricted.
+ but you can for instance save a continuation in __init__
  and use it later, when the init recursion has gone.

Reducing all these restrictions is a major task, and there
are situations where it looks impossible without an extra
subinterpreter language. If you look into the implementation
of operators like __add__, you will see that there are
repeated method calls which all may cause other interpreters
to show up. I tried to find a way to roll these functions
out in a restartable way, but it is quite a mess. The
clean way to do it would be to have microcodes, and to allow
for continuations to be caught between them.

this-is-a-stackless-3000-feature - ly y'rs - chris

-- 
Christian Tismer             :^)   <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com


From Vladimir.Marangozov@inrialpes.fr  Mon Aug  7 15:00:08 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 16:00:08 +0200 (CEST)
Subject: mallopt (Re: [Python-Dev] Minor compilation problem on HP-UX
In-Reply-To: <20000807132641.A60E6303181@snelboot.oratrix.nl> from "Jack Jansen" at Aug 07, 2000 03:26:40 PM
Message-ID: <200008071400.QAA22652@python.inrialpes.fr>

Jack Jansen wrote:
> 
> Don't worry, Vladimir, I hadn't forgotten your malloc stuff:-)

Me? worried about mallocs? :-)

> if mallopt is available in the standard C library this may be a way
> to squeeze out a couple of extra percent of performance that the admin
> who installs Python needn't be aware of.

As long as you're maintaining a Mac-specific port of Python, you can
do this without problems on the Mac port.

> And I don't think your allocator can be dropped in 
> to the standard distribution, because it has the potential problem of 
> fragmenting the heap due to multiple malloc packages in one address
> space (at least, that was the problem when I last looked at it, which
> is admittedly more than a year ago).

Things have changed since then. Mainly on the Python side.
Have a look again.

> 
> And about mallopt not being portable: right, but I would assume that
> something like
> #ifdef M_MXFAST
> 	mallopt(M_MXFAST, xxxx);
> #endif
> shouldn't do any harm if we set xxxx to be a size that will cause 80%
> or so of the python objects to fall into the M_MXFAST category 

Which is exactly what pymalloc does, except that this applies for > 95% of
all allocations.

> (sizeof(PyObject)+sizeof(void *), maybe?). This doesn't sound 
> platform-dependent...

Indeed, I also use this trick to automatically tune the object allocator
for 64-bit platforms. I haven't tested it on such machines, as I don't
have access to them, but it should work.

> Similarly, M_FREEHD sounds like it could speed up Python allocation,
> but this would need to be measured. Python allocation patterns shouldn't
> be influenced too much by platform, so again if this is good on one
> platform it is probably good on all.

I am against any guesses in this domain. Measures and profiling evidence:
that's it.  Being able to make lazy decisions about Python's mallocs is
our main advantage. Anything else is wild hype <0.3 wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From gmcm@hypernet.com  Mon Aug  7 15:20:50 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Mon, 7 Aug 2000 10:20:50 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <14734.46390.190481.441065@bitdiddle.concentric.net>
References: <398E93F3.374B585A@appliedbiometrics.com>
Message-ID: <1246464444-103035791@hypernet.com>

Jeremy wrote:

> >>>>> "CT" == Christian Tismer <tismer@appliedbiometrics.com>
> >>>>> writes:
> 
>   >> If someone is going to write a PEP, I hope they will explain
>   how >> the implementation deals with the various Python C API
>   calls that >> can call back into Python.
> 
>   CT> He will.
> 
> Good!  You'll write a PEP.

Actually, "He" is me. While I speak terrible German, my 
Tismerish is pretty good (Tismerish to English is a *huge* 
jump <wink>).

But I can't figure out what the h*ll is being PEPed. We know 
that continuations / coroutines / generators have great value. 
We know that stackless is not continuations; it's some mods 
(mostly to ceval.c) that enable continuation.c. But the 
questions you're asking (after protesting that you want a 
formal spec, not a reference implementation) are all about 
Christian's implementation of continuation.c. (Well, OK, it's 
whether the stackless mods are enough to allow a perfect 
continuations implementation.)

Assuming that stackless can get along with GC, ceval.c and 
grammar changes (or Christian can make it so), it seems to 
me the PEPable issue is whether the value this can add is 
worth the price of a less linear implementation.

still-a-no-brainer-to-me-ly y'rs

- Gordon


From jack@oratrix.nl  Mon Aug  7 15:23:14 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Mon, 07 Aug 2000 16:23:14 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: Message by Christian Tismer <tismer@appliedbiometrics.com> ,
 Mon, 07 Aug 2000 15:47:39 +0200 , <398EBDFB.4ED9FAE7@appliedbiometrics.com>
Message-ID: <20000807142314.C0186303181@snelboot.oratrix.nl>

> > Let me make sure I understand: If I invoke a continuation when there
> > are extra C stack frames between the mainloop invocation that captured
> > the continuation and the call of the continuation, the interpreter
> > raises an exception?
> 
> Not always. Frames which are not currently bound by an
> interpreter acting on them can always be jump targets.
> Only those frames which are currently in the middle of
> an opcode are forbidden.

And how about the reverse? If I'm inside a Python callback from C code, will 
the Python code be able to use continuations? This is important, because there 
are a lot of GUI applications where almost all code is executed within a C 
callback. I'm pretty sure (and otherwise I'll be corrected within 
milliseconds:-) that this is the case for MacPython IDE and PythonWin (don't 
know about Idle).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From jeremy@beopen.com  Mon Aug  7 15:32:35 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 7 Aug 2000 10:32:35 -0400 (EDT)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <1246464444-103035791@hypernet.com>
References: <398E93F3.374B585A@appliedbiometrics.com>
 <1246464444-103035791@hypernet.com>
Message-ID: <14734.51331.820955.54653@bitdiddle.concentric.net>

Gordon,

Thanks for channeling Christian, if that's what writing a PEP on this
entails :-).

I am also a little puzzled about the subject of the PEP.  I think you
should hash it out with Barry "PEPmeister" Warsaw.  There are two
different issues -- the stackless implementation and the new control
structure exposed to programmers (e.g. continuations, coroutines,
iterators, generators, etc.).  It seems plausible to address these in
two different PEPs, possibly in competing PEPs (e.g. coroutines
vs. continuations).

Jeremy


From Vladimir.Marangozov@inrialpes.fr  Mon Aug  7 15:38:32 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 16:38:32 +0200 (CEST)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <1246464444-103035791@hypernet.com> from "Gordon McMillan" at Aug 07, 2000 10:20:50 AM
Message-ID: <200008071438.QAA22748@python.inrialpes.fr>

Gordon McMillan wrote:
> 
> But I can't figure out what the h*ll is being PEPed.
> ...
> Assuming that stackless can get along with GC,

As long as frames are not considered for GC, don't worry about GC.

> ceval.c and grammar changes (or Christian can make it so), it seems to 
> me the PEPable issue is whether the value this can add is 
> worth the price of a less linear implementation.

There's an essay + paper available, slides and an implementation.
What's the problem about formalizing this in a PEP and addressing
the controversial issues + explaining how they are dealt with?

I mean, if you're a convinced long-time Stackless user and everything
is obvious for you, this PEP should try to convince the rest of us -- 
so write it down and ask no more <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From tismer@appliedbiometrics.com  Mon Aug  7 15:50:42 2000
From: tismer@appliedbiometrics.com (Christian Tismer)
Date: Mon, 07 Aug 2000 16:50:42 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
References: <20000807142314.C0186303181@snelboot.oratrix.nl>
Message-ID: <398ECCC2.957A9F67@appliedbiometrics.com>


Jack Jansen wrote:
> 
> > > Let me make sure I understand: If I invoke a continuation when there
> > > are extra C stack frames between the mainloop invocation that captured
> > > the continuation and the call of the continuation, the interpreter
> > > raises an exception?
> >
> > Not always. Frames which are not currently bound by an
> > interpreter acting on them can always be jump targets.
> > Only those frames which are currently in the middle of
> > an opcode are forbidden.
> 
> And how about the reverse? If I'm inside a Python callback from C code, will
> the Python code be able to use continuations? This is important, because there
> are a lot of GUI applications where almost all code is executed within a C
> callback. I'm pretty sure (and otherwise I'll be corrected within
> milliseconds:-) that this is the case for MacPython IDE and PythonWin (don't
> know about Idle).

Without extra effort, this will be problematic. If C calls back
into Python, not by the trampoline scheme that stackless uses,
but by causing an interpreter recursion, then this interpreter
will be limited. It can jump to any other frame that is not held
by an interpreter on the C stack, but the calling frame of the
C extension for instance is locked. Touching it causes an
exception.
This need not necessarily be a problem. Assume you have one or a
couple of frames sitting around, caught as a continuation.
Your Python callback from C jumps to that continuation and does
something. Afterwards, it returns to the C callback.
Performing some cycles of an idle task may be a use of such
a thing.
But as soon as you want to leave the complete calling chain,
be able to modify it, return to a level above your callback
and such, you need to implement your callback in a different
way.
The scheme is rather simple and can be seen in the stackless
map implementation: You need to be able to store your complete
state information in a frame, and you need to provide an
execute function for your frame. Then you return the magic
Py_UnwindToken, and your prepared frame will be scheduled
like any pure Python function frame.

Summary: By default, C extensions are restricted to stackful
behavior. By giving them a stackless interface, you can
enable them completely for all continuation stuff.

cheers - chris

-- 
Christian Tismer             :^)   <mailto:tismer@appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com


From gmcm@hypernet.com  Mon Aug  7 16:28:01 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Mon, 7 Aug 2000 11:28:01 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <200008071438.QAA22748@python.inrialpes.fr>
References: <1246464444-103035791@hypernet.com> from "Gordon McMillan" at Aug 07, 2000 10:20:50 AM
Message-ID: <1246460413-103278281@hypernet.com>

Vladimir Marangozov wrote:
> Gordon McMillan wrote:
> > 
> > But I can't figure out what the h*ll is being PEPed.
> > ...
...
> 
> > ceval.c and grammar changes (or Christian can make it so), it
> > seems to me the PEPable issue is whether the value this can add
> > is worth the price of a less linear implementation.
> 
> There's an essay + paper available, slides and an implementation.

Of which the most up to date is the implementation. The 
slides / docs describe an earlier, more complex scheme.

> What's the problem about formalizing this in a PEP and addressing
> the controversial issues + explaining how they are dealt with?

That's sort of what I was asking. As far as I can tell, what's 
controversial is "continuations". That's not in scope. I would 
like to know what controversial issues there are that *are* in 
scope. 
 
> I mean, if you're a convinced long-time Stackless user and
> everything is obvious for you, this PEP should try to convince
> the rest of us -- so write it down and ask no more <wink>.

That's exactly wrong. If that were the case, I would be forced 
to vote -1 on any addition / enhancement to Python that I 
personally didn't plan on using.

- Gordon


From Vladimir.Marangozov@inrialpes.fr  Mon Aug  7 16:53:15 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 17:53:15 +0200 (CEST)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <1246460413-103278281@hypernet.com> from "Gordon McMillan" at Aug 07, 2000 11:28:01 AM
Message-ID: <200008071553.RAA22891@python.inrialpes.fr>

Gordon McMillan wrote:
> 
> > What's the problem about formalizing this in a PEP and addressing
> > the controversial issues + explaining how they are dealt with?
> 
> That's sort of what I was asking. As far as I can tell, what's 
> controversial is "continuations". That's not in scope. I would 
> like to know what controversial issues there are that *are* in 
> scope. 

Here's the context that might help you figure out what I'd
like to see in this PEP. I wasn't at the last conference; I read
the source and the essay years ago and had no idea that the most
up-to-date thing is the implementation -- which, btw, I refuse to
look at again without a clear summary of what this code does to
refresh my memory on the whole subject.

I'd like to see an overview of the changes, their expected impact on
the core, the extensions, and whatever else you judge worthy to write
about.

I'd like to see a summary of the reactions that have been emitted and
what issues are non-issues for you, and which ones are. I'd like to see
a first draft giving me a horizontal view on the subject in its entirety. 
Code examples are welcome, too. I can then start thinking about it
in a more structured way on this basis. I don't have such a basis right
now, because there's no up-to-date document in plain English that
allows me to do that. And without such a document, I won't do it.

it's-simple-<wink>'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From sjoerd@oratrix.nl  Mon Aug  7 17:19:59 2000
From: sjoerd@oratrix.nl (Sjoerd Mullender)
Date: Mon, 07 Aug 2000 18:19:59 +0200
Subject: [Python-Dev] SRE incompatibility
In-Reply-To: Your message of Wed, 05 Jul 2000 01:46:07 +0200.
 <002601bfe612$06e90ec0$f2a6b5d4@hagrid>
References: <20000704095542.8697B31047C@bireme.oratrix.nl>
 <002601bfe612$06e90ec0$f2a6b5d4@hagrid>
Message-ID: <20000807162000.5190631047C@bireme.oratrix.nl>

Is this problem ever going to be solved or is it too hard?
If it's too hard, I can fix xmllib to not depend on this.  This
incompatibility is the only reason I'm still not using sre.

In case you don't remember, the regexp that is referred to is
regexp = '(([a-z]+):)?([a-z]+)$'

On Wed, Jul 5 2000 "Fredrik Lundh" wrote:

> sjoerd wrote:
> 
> > >>> re.match(regexp, 'smil').group(0,1,2,3)
> > ('smil', None, 's', 'smil')
> > >>> import pre
> > >>> pre.match(regexp, 'smil').group(0,1,2,3)
> > ('smil', None, None, 'smil')
> > 
> > Needless to say, I am relying on the third value being None...
> 
> I've confirmed this (last night's fix should have solved this,
> but it didn't).  I'll post patches as soon as I have them...
> 
> </F>
> 
> 

-- Sjoerd Mullender <sjoerd.mullender@oratrix.com>


From pf@artcom-gmbh.de  Mon Aug  7 09:14:54 2000
From: pf@artcom-gmbh.de (Peter Funk)
Date: Mon, 7 Aug 2000 10:14:54 +0200 (MEST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <14734.7730.698860.642851@anthem.concentric.net> from "Barry A. Warsaw" at "Aug 6, 2000 10:25:54 pm"
Message-ID: <m13Li3i-000DieC@artcom0.artcom-gmbh.de>

Hi,

Guido:
>     >> dict.default('hello', []).append('hello')

Greg Ewing <greg@cosc.canterbury.ac.nz>:
>     GE> Is this new method going to apply to dictionaries only,
>     GE> or is it to be considered part of the standard mapping
>     GE> interface?
 
Barry A. Warsaw:
> I think we've settled on setdefault(), which is more descriptive, even
> if it's a little longer.  I have posted SF patch #101102 which adds
> setdefault() to both the dictionary object and UserDict (along with
> the requisite test suite and doco changes).

This didn't answer the question raised by Greg Ewing.  As far as I have
seen, the patch doesn't touch 'dbm', 'shelve' and so on, so from the patch
the answer is "applies to dictionaries only".

What about the other external mapping types already in the core,
like 'dbm', 'shelve' and so on?

If the patch doesn't add this new method to these other mapping types, 
this fact should at least be documented, similar to the methods 'items()' 
and 'values()' that are already unimplemented in 'dbm':
 """Dbm objects behave like mappings (dictionaries), except that 
    keys and values are always strings.  Printing a dbm object 
    doesn't print the keys and values, and the items() and values() 
    methods are not supported."""

I'm still -1 on the name: nobody would expect that a method 
called 'setdefault()' will actually return something useful.  Maybe 
it would be better to invent an absolutely obfuscated new name, so 
that everybody is forced to actually *READ* the documentation of this 
method, or nobody will guess what it is supposed to do or, even
worse, how to make clever use of it.

At least it would be a lot more likely that someone becomes curious 
about what a method called 'grezelbatz()' is supposed to do, than that
someone will actually look up the documentation of a method called
'setdefault()'.

If the average Python programmer ever starts to use this method 
at all, then I believe it is very likely that we will see him/her
coding:
	dict.setdefault('key', [])
	dict['key'].append('bar')
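For comparison, the one-call idiom the method is meant to enable (the whole point being that setdefault() returns the stored value):

```python
d = {}
# setdefault() inserts the default only when the key is absent and
# always returns the value now stored under the key, so the
# lookup-or-create and the update collapse into one expression:
d.setdefault('key', []).append('bar')
d.setdefault('key', []).append('baz')   # default ignored: key exists
print(d)  # {'key': ['bar', 'baz']}
```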

So I'm also still -1 on the concept.  I'm +0 on Greg's proposal that
it would be better to make this a builtin function that can be applied
to all mapping types.
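A builtin along the lines Greg proposes is easy to sketch (the exact code below is only an illustration, not anything from the patch; it relies on nothing beyond the mapping protocol, so it would cover dbm- and shelve-style mappings as well as plain dicts):

```python
def setdefault(mapping, key, default):
    """Return mapping[key], first inserting default if key is absent.

    Needs only __contains__/__getitem__/__setitem__, so it works on
    any mutable mapping, not just the dictionary type.
    """
    if key not in mapping:
        mapping[key] = default
    return mapping[key]

d = {}
setdefault(d, 'hello', []).append('hello')
print(d)  # {'hello': ['hello']}
```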

Maybe it would be even better to delay this until in Python 3000
builtin types may have become real classes, so that this method may
be inherited by all mapping types from an abstract mapping base class.

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)


From tim_one@email.msn.com  Mon Aug  7 22:52:18 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 7 Aug 2000 17:52:18 -0400
Subject: Fun with call/cc (was RE: [Python-Dev] Stackless Python - Pros and Cons)
In-Reply-To: <14734.46096.366920.827786@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEILGOAA.tim_one@email.msn.com>

[Tim]
> On the one hand, I don't think I know of a language *not* based
> on Scheme that has call/cc (or a moral equivalent).

[Jeremy Hylton]
> ML also has call/cc, at least the Concurrent ML variant.

So it does!  I've found 3 language lines that have full-blown call/cc (not
counting the early versions of REBOL, since they took it out later), and at
least one web page claiming "that's all, folks":

1. Scheme + derivatives (but not including most Lisps).

2. Standard ML + derivatives (but almost unique among truly
   functional languages):

   http://cm.bell-labs.com/cm/cs/what/smlnj/doc/SMLofNJ/pages/cont.html

   That page is pretty much incomprehensible on its own.  Besides
   callcc (no "/"), SML-NJ also has related "throw", "isolate",
   "capture" and "escape" functions.  At least some of them *appear*
   to be addressing Kent Pitman's specific complaints about the
   excruciating interactions between call/cc and unwind-protect in
   Scheme.

3. Unlambda.  This one is a hoot!  Don't know why I haven't bumped
   into it before:

   http://www.eleves.ens.fr:8080/home/madore/programs/unlambda/
   "Your Functional Programming Language Nightmares Come True"

   Unlambda is a deliberately obfuscated functional programming
   language, whose only data type is function and whose only
   syntax is function application:  no lambdas (or other "special
   forms"), no integers, no lists, no variables, no if/then/else,
   ...  call/cc is spelled with the single letter "c" in Unlambda,
   and the docs note "expressions including c function calls tend
   to be hopelessly difficult to track down.  This was, of course,
   the reason for including it in the language in the first place".

   Not all frivolous, though!  The page goes on to point out that
   writing an interpreter for Unlambda in something other than Scheme
   exposes many of the difficult issues (like implementing call/cc
   in a language that doesn't have any such notion -- which is,
   after all, almost all languages), in a language that's otherwise
   relentlessly simple-minded so doesn't bog you down with
   accidental complexities.

Doesn't mean call/cc sucks, but language designers *have* been avoiding it
in vast numbers -- despite that the Scheme folks have been pushing it (&
pushing it, & pushing it) in every real language they flee to <wink>.

BTW, lest anyone get the wrong idea, I'm (mostly) in favor of it!  It can't
possibly be sold on any grounds other than that "it works, for real Python
programmers with real programming problems they can't solve in other ways",
though.  Christian has been doing a wonderful (if slow-motion <wink>) job of
building that critical base of real-life users.




From guido@beopen.com  Tue Aug  8 00:03:46 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 18:03:46 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre.py,1.23,1.24 sre_compile.py,1.29,1.30 sre_parse.py,1.29,1.30
In-Reply-To: Your message of "Mon, 07 Aug 2000 13:59:08 MST."
 <200008072059.NAA11904@slayer.i.sourceforge.net>
References: <200008072059.NAA11904@slayer.i.sourceforge.net>
Message-ID: <200008072303.SAA31635@cj20424-a.reston1.va.home.com>

> -- reset marks if repeat_one tail doesn't match
>    (this should fix Sjoerd's xmllib problem)

Somebody please add a test case for this to test_re.py !!!
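A test along these lines would pin the fix down (a plain sketch; the exact test_re.py conventions differ):

```python
import re

# Sjoerd's pattern: when the optional "(prefix:)?" part does not
# participate in the match, groups 1 and 2 must both be None --
# the bug left group 2 holding a stale mark from backtracking.
regexp = '(([a-z]+):)?([a-z]+)$'

m = re.match(regexp, 'smil')
assert m.group(0, 1, 2, 3) == ('smil', None, None, 'smil')

# And when the prefix does participate, all groups must be filled in.
m = re.match(regexp, 'x:smil')
assert m.group(1, 2, 3) == ('x:', 'x', 'smil')
```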

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From esr@thyrsus.com  Mon Aug  7 23:13:02 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:13:02 -0400
Subject: [Python-Dev] Adding library modules to the core
Message-ID: <20000807181302.A27463@thyrsus.com>

A few days ago I asked about the procedure for adding a module to the
Python core library.  I have a framework class for things like menu systems
and symbolic debuggers I'd like to add.

Guido asked if this was similar to the TreeWidget class in IDLE.  I 
investigated and discovered that it is not, and told him so.  I am left
with a couple of related questions:

1. Has anybody got a vote on the menubrowser framework facility I described?

2. Do we have a procedure for vetting modules for inclusion in the stock
distribution?  If not, should we institute one?

3. I am willing to do a pass through the Vaults of Parnassus and other
sources for modules that seem both sufficiently useful and sufficiently
mature to be added.  I have in mind things like mimectl, PIL, and Vladimir's
shared-memory module.  

Now, assuming I do 3, would I need to go through the vote process
on each of these, or can I get a ukase from the BDFL authorizing me to
fold in stuff?

I realize I'm raising questions for which there are no easy answers.
But Python is growing.  The Python social machine needs to adapt to
make such decisions in a more timely and less ad-hoc fashion.  I'm not
attached to being the point person in this process, but somebody's gotta be.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Question with boldness even the existence of a God; because, if there
be one, he must more approve the homage of reason, than that of
blindfolded fear.... Do not be frightened from this inquiry from any
fear of its consequences. If it ends in the belief that there is no
God, you will find incitements to virtue in the comfort and
pleasantness you feel in its exercise...
	-- Thomas Jefferson, in a 1787 letter to his nephew


From esr@thyrsus.com  Mon Aug  7 23:24:03 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:24:03 -0400
Subject: Fun with call/cc (was RE: [Python-Dev] Stackless Python - Pros and Cons)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEILGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Aug 07, 2000 at 05:52:18PM -0400
References: <14734.46096.366920.827786@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCCEILGOAA.tim_one@email.msn.com>
Message-ID: <20000807182403.A27485@thyrsus.com>

Tim Peters <tim_one@email.msn.com>:
> Doesn't mean call/cc sucks, but language designers *have* been avoiding it
> in vast numbers -- despite that the Scheme folks have been pushing it (&
> pushing it, & pushing it) in every real language they flee to <wink>.

Yes, we have.  I haven't participated in conspiratorial huggermugger with
other ex-Schemers, but I suspect we'd all answer pretty much the same way.
Lots of people have been avoiding call/cc not because it sucks but because
the whole area is very hard to think about even if you have the right set
of primitives.
 
> BTW, lest anyone get the wrong idea, I'm (mostly) in favor of it!  It can't
> possibly be sold on any grounds other than that "it works, for real Python
> programmers with real programming problems they can't solve in other ways",
> though.  Christian has been doing a wonderful (if slow-motion <wink>) job of
> building that critical base of real-life users.

And it's now Christian's job to take the next step: supplying up-to-date
documentation on his patch and proposal as a PEP.

Suggestion: In order to satisfy the BDFL's conservative instincts, perhaps
it would be better to break the Stackless patch into two pieces -- one 
that de-stack-izes ceval, and one that implements new language features.
That way we can build a firm base for later exploration.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"Government is not reason, it is not eloquence, it is force; like fire, a
troublesome servant and a fearful master. Never for a moment should it be left
to irresponsible action."
	-- George Washington, in a speech of January 7, 1790


From thomas@xs4all.net  Mon Aug  7 23:23:35 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 00:23:35 +0200
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000807181302.A27463@thyrsus.com>; from esr@thyrsus.com on Mon, Aug 07, 2000 at 06:13:02PM -0400
References: <20000807181302.A27463@thyrsus.com>
Message-ID: <20000808002335.A266@xs4all.nl>

On Mon, Aug 07, 2000 at 06:13:02PM -0400, Eric S. Raymond wrote:

[ You didn't ask for votes on all these, but the best thing I can do is
vote :-]

> 1. Has anybody got a vote on the menubrowser framwork facility I described?

+0. I don't see any harm in adding it, but I can't envision a use for it,
myself.

> I have in mind things like mimectl,

+1. A nice complement to the current mime and message handling routines.

> PIL,

+0. The main reason I don't compile PIL myself is that it's such a hassle
to do each time, so I think adding it would be nice. However, I'm not sure
whether it's doable to add, or whether it would present a lot of problems
for 'strange' platforms and the like.

> and Vladimir's shared-memory module.

+1. Fits very nicely with the mmapmodule, even if it's supported on fewer
platforms.

But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
new PEP, 'enriching the Standard Library' ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From esr@thyrsus.com  Mon Aug  7 23:39:30 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:39:30 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808002335.A266@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 12:23:35AM +0200
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl>
Message-ID: <20000807183930.A27556@thyrsus.com>

Thomas Wouters <thomas@xs4all.net>:
> But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
> new PEP, 'enriching the Standard Library' ?

I think that leads in a sub-optimal direction.  Adding suitable modules
shouldn't be a one-shot or episodic event but a continuous process of 
incorporating the best work the community has done.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"Taking my gun away because I might shoot someone is like cutting my tongue
out because I might yell `Fire!' in a crowded theater."
        -- Peter Venetoklis


From esr@thyrsus.com  Mon Aug  7 23:42:24 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:42:24 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808002335.A266@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 12:23:35AM +0200
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl>
Message-ID: <20000807184224.B27556@thyrsus.com>

Thomas Wouters <thomas@xs4all.net>:
> On Mon, Aug 07, 2000 at 06:13:02PM -0400, Eric S. Raymond wrote:
> 
> [ You didn't ask for votes on all these, but the best thing I can do is
> vote :-]
> 
> > 1. Has anybody got a vote on the menubrowser framework facility I described?
> 
> +0. I don't see any harm in adding it, but I can't envision a use for it,
> myself.

I'll cheerfully admit that I think it's kind of borderline myself.  It works,
but it teeters on the edge of being too specialized for the core library.  I
might only +0 it myself :-).
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

As with the Christian religion, the worst advertisement for Socialism
is its adherents.
	-- George Orwell 


From thomas@xs4all.net  Mon Aug  7 23:38:39 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 00:38:39 +0200
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000807183930.A27556@thyrsus.com>; from esr@thyrsus.com on Mon, Aug 07, 2000 at 06:39:30PM -0400
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl> <20000807183930.A27556@thyrsus.com>
Message-ID: <20000808003839.Q13365@xs4all.nl>

On Mon, Aug 07, 2000 at 06:39:30PM -0400, Eric S. Raymond wrote:
> Thomas Wouters <thomas@xs4all.net>:
> > But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
> > new PEP, 'enriching the Standard Library' ?

> I think that leads in a sub-optimal direction.  Adding suitable modules
> shouldn't be a one-shot or episodic event but a continuous process of 
> incorporating the best work the community has done.

That depends on what the PEP does. PEPs can do two things (according to the
PEP that covers PEPs :): argue for a new feature/addition to the Python
language, or describe a standard or procedure of some sort. This PEP could
perhaps do both: describe a standard procedure for proposing and accepting a
new module in the library (and probably also removal, though that's a lot
trickier) AND do some catching-up on that process to get a few good modules
into the stdlib before 2.0 goes into a feature freeze (which is next week,
by the way.)

As for the procedure to add a new module, I think someone volunteering to
'adopt' the module and perhaps a few people reviewing it would about do it,
for the average module. Giving people a chance to say 'no!' of course.

-- 
Thomas Wouters <thomas@xs4all.net>



From esr@thyrsus.com  Mon Aug  7 23:59:54 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:59:54 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808003839.Q13365@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 12:38:39AM +0200
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl> <20000807183930.A27556@thyrsus.com> <20000808003839.Q13365@xs4all.nl>
Message-ID: <20000807185954.B27636@thyrsus.com>

Thomas Wouters <thomas@xs4all.net>:
> That depends on what the PEP does. PEPs can do two things (according to the
> PEP that covers PEPs :): argue for a new feature/addition to the Python
> language, or describe a standard or procedure of some sort. This PEP could
> perhaps do both: describe a standard procedure for proposing and accepting a
> new module in the library (and probably also removal, though that's a lot
> trickier) AND do some catching-up on that process to get a few good modules
> into the stdlib before 2.0 goes into a feature freeze (which is next week,
> by the way.)
> 
> As for the procedure to add a new module, I think someone volunteering to
> 'adopt' the module and perhaps a few people reviewing it would about do it,
> for the average module. Giving people a chance to say 'no!' of course.

Sounds like my cue to write a PEP.  What's the URL for the PEP on PEPs again?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

See, when the GOVERNMENT spends money, it creates jobs; whereas when the money
is left in the hands of TAXPAYERS, God only knows what they do with it.  Bake
it into pies, probably.  Anything to avoid creating jobs.
	-- Dave Barry


From bwarsaw@beopen.com  Mon Aug  7 23:58:42 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 18:58:42 -0400 (EDT)
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com>
Message-ID: <14735.16162.275037.583897@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr@thyrsus.com> writes:

    ESR> 1. Do we have a procedure for vetting modules for inclusion
    ESR> in the stock distribution?  If not, should we institute one?

Is there any way to use the SourceForge machinery to help here?  The
first step would be to upload a patch so at least the new stuff
doesn't get forgotten, and it's always easy to find the latest version
of the changes.

Also SF has a polling or voting tool, doesn't it?  I know nothing
about it, but perhaps there's some way to leverage it to test the
pulse of the community for any new module (with BDFL veto of course).

-Barry


From esr@thyrsus.com  Tue Aug  8 00:09:39 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 19:09:39 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <14735.16162.275037.583897@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 07, 2000 at 06:58:42PM -0400
References: <20000807181302.A27463@thyrsus.com> <14735.16162.275037.583897@anthem.concentric.net>
Message-ID: <20000807190939.A27730@thyrsus.com>

Barry A. Warsaw <bwarsaw@beopen.com>:
> Is there any way to use the SourceForge machinery to help here?  The
> first step would be to upload a patch so at least the new stuff
> doesn't get forgotten, and it's always easy to find the latest version
> of the changes.

Patch?  Eh?  In most cases, adding a library module will consist of adding
one .py and one .tex, with no changes to existing code.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The price of liberty is, always has been, and always will be blood.  The person
who is not willing to die for his liberty has already lost it to the first
scoundrel who is willing to risk dying to violate that person's liberty.  Are
you free? 
	-- Andrew Ford


From bwarsaw@beopen.com  Tue Aug  8 00:04:39 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 19:04:39 -0400 (EDT)
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com>
 <20000808002335.A266@xs4all.nl>
 <20000807183930.A27556@thyrsus.com>
 <20000808003839.Q13365@xs4all.nl>
 <20000807185954.B27636@thyrsus.com>
Message-ID: <14735.16519.185236.794662@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr@thyrsus.com> writes:

    ESR> Sounds like my cue to write a PEP.  What's the URL for the
    ESR> PEP on PEPs again?

http://python.sourceforge.net/peps/pep-0001.html

-Barry


From bwarsaw@beopen.com  Tue Aug  8 00:06:21 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 19:06:21 -0400 (EDT)
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com>
 <14735.16162.275037.583897@anthem.concentric.net>
 <20000807190939.A27730@thyrsus.com>
Message-ID: <14735.16621.369206.564320@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr@thyrsus.com> writes:

    ESR> Patch?  Eh?  In most cases, adding a library module will
    ESR> consist of adding one .py and one .tex, with no changes to
    ESR> existing code.

And there's no good way to put those into SF?  If the Patch Manager
isn't appropriate, what about the Task Manager (I dunno, I've never
looked at it).  The cool thing about using SF is that there's less of
a chance that this stuff will get buried in an inbox.

-Barry


From guido@beopen.com  Tue Aug  8 01:21:43 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 19:21:43 -0500
Subject: [Python-Dev] bug-fixes in cnri-16-start branch
In-Reply-To: Your message of "Sun, 06 Aug 2000 22:49:06 GMT."
 <398DEB62.789B4C9C@nowonder.de>
References: <398DEB62.789B4C9C@nowonder.de>
Message-ID: <200008080021.TAA31766@cj20424-a.reston1.va.home.com>

> I have a question on the right procedure for fixing a simple
> bug in the 1.6 release branch.
> 
> Bug #111162 appeared because the tests for math.rint() are
> already contained in the cnri-16-start revision of test_math.py
> while the "try: ... except AttributeError: ..." construct which
> was checked in shortly after was not.
> 
> Now the correct bugfix is already known (and has been
> applied to the main branch). I have updated the test_math.py
> file in my working version with "-r cnri-16-start" and
> made the changes.
> 
> Now I probably should just commit, close the patch
> (with an appropriate follow-up) and be happy.

That would work, except that I prefer to remove math.rint altogether,
as explained by Tim Peters.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From esr@snark.thyrsus.com  Tue Aug  8 00:31:21 2000
From: esr@snark.thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 19:31:21 -0400
Subject: [Python-Dev] Request for PEP number
Message-ID: <200008072331.TAA27825@snark.thyrsus.com>

In accordance with the procedures in PEP 1, I am applying to initiate PEP 2.  

Proposed title: Procedure for incorporating new modules into the core.

Abstract: This PEP describes review and voting procedures for 
incorporating candidate modules and extensions into the Python core.

Barry, could I get you to create a pep2@python.org mailing list for
this one?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

That the said Constitution shall never be construed to authorize
Congress to infringe the just liberty of the press or the rights of
conscience; or to prevent the people of the United states who are
peaceable citizens from keeping their own arms...
        -- Samuel Adams, in "Phila. Independent Gazetteer", August 20, 1789


From guido@beopen.com  Tue Aug  8 01:42:40 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 19:42:40 -0500
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: Your message of "Mon, 07 Aug 2000 18:13:02 -0400."
 <20000807181302.A27463@thyrsus.com>
References: <20000807181302.A27463@thyrsus.com>
Message-ID: <200008080042.TAA31856@cj20424-a.reston1.va.home.com>

[ESR]
> 1. Has anybody got a vote on the menubrowser framework facility I described?

Eric, as far as I can tell you haven't shown the code or given a
pointer to it.  I explained to you that your description left me in
the dark as to what it does.  Or did I miss a pointer?  It seems your
module doesn't even have a name!  This is a bad way to start a
discussion about the admission procedure.  Nothing has ever been
accepted into Python before the code was written and shown.

> 1. Do we have a procedure for vetting modules for inclusion in the stock
> distribution?  If not, should we institute one?

Typically, modules get accepted after extensive lobbying and agreement
from multiple developers.  The definition of "developer" is vague, and
I can't give a good rule -- not everybody who has been admitted to the
python-dev list has enough standing to make his opinion count!

Basically, I rely a lot on what various people say, but I have my own
bias about who I trust in what area.  I don't think I'll have to
publish a list of this bias, but one thing is clear: I'm not counting
votes!  Proposals and ideas get approved based on merit, not on how
many people argue for (or against) it.  I want Python to keep its
typical Guido-flavored style, and (apart from the occasional successful
channeling by TP) there's only one way to do that: let me be the final
arbiter.  I'm willing to be the bottleneck; it gives Python the
typical slow-flowing evolution that has served it well over the past
ten years.  (At the same time, I can't read all messages in every
thread on python-dev any more -- that's why substantial ideas need a
PEP to summarize the discussion.)

> 2. I am willing to do a pass through the Vaults of Parnassus and other
> sources for modules that seem both sufficiently useful and sufficiently
> mature to be added.  I have in mind things like mimectl, PIL, and Vladimir's
> shared-memory module.  

I don't know mimectl or Vladimir's module (how does it compare to
mmap?).  Regarding PIL, I believe the problem there is that it is a
large body of code maintained by a third party.  It should become part
of a SUMO release and of binary releases, but I see no advantage in
carrying it along in the core source distribution.

> Now, assuming I do 3, would I need to go through the vote process
> on each of these, or can I get a ukase from the BDFL authorizing me to
> fold in stuff?

Sorry, I don't write blank checks.

> I realize I'm raising questions for which there are no easy answers.
> But Python is growing.  The Python social machine needs to adapt to
> make such decisions in a more timely and less ad-hoc fashion.  I'm
> not attached to being the point person in this process, but
> somebody's gotta be.

Watch out though: if we open the floodgates now we may seriously
deteriorate the quality of the standard library, without doing much
good.

I'd much rather see an improved Vaults of Parnassus (where every
module uses distutils and installation becomes trivial) than a
fast-track process for including new code in the core.

That said, I think writing a bunch of thoughts up as a PEP makes a lot
of sense!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From esr@thyrsus.com  Tue Aug  8 02:23:34 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 21:23:34 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008080042.TAA31856@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Mon, Aug 07, 2000 at 07:42:40PM -0500
References: <20000807181302.A27463@thyrsus.com> <200008080042.TAA31856@cj20424-a.reston1.va.home.com>
Message-ID: <20000807212333.A27996@thyrsus.com>

--EVF5PPMfhYS0aIcm
Content-Type: text/plain; charset=us-ascii

Guido van Rossum <guido@beopen.com>:
> [ESR]
> > 1. Has anybody got a vote on the menubrowser framework facility I described?
> 
> Eric, as far as I can tell you haven't shown the code or given a
> pointer to it.  I explained to you that your description left me in
> the dark as to what it does.  Or did I miss a pointer?  It seems your
> module doesn't even have a name!  This is a bad way to start a
> discussion about the admission procedure.  Nothing has ever been
> accepted into Python before the code was written and shown.

Easily fixed.  Code's in an enclosure.
 
> > 1. Do we have a procedure for vetting modules for inclusion in the stock
> > distribution?  If not, should we institute one?
> 
> Typically, modules get accepted after extensive lobbying and agreement
> from multiple developers.  The definition of "developer" is vague, and
> I can't give a good rule -- not everybody who has been admitted to the
> python-dev list has enough standing to make his opinion count!

Understood, and I assume one of those insufficient-standing people is
*me*, given my short tenure on the list, and I cheerfully accept that.
The real problem I'm going after here is that this vague rule won't
scale well.

> Basically, I rely a lot on what various people say, but I have my own
> bias about who I trust in what area.  I don't think I'll have to
> publish a list of this bias, but one thing is clear: I'm not counting
> votes! 

I wasn't necessarily expecting you to.  I can't imagine writing a
procedure in which the BDFL doesn't retain a veto.

> I don't know mimectl or Vladimir's module (how does it compare to
> mmap?).

Different, as Thomas Wouters has already observed.  Vladimir's module is more
oriented towards supporting semaphores and exclusion.  At one point many months
ago, before Vladimir was on the list, I looked into it as a way to do exclusion
locking for shared shelves.  Vladimir and I even negotiated a license change
with INRIA so Python could use it.  That was my first pass at sharable 
shelves; it foundered on problems with the BSDDB 1.85 API.  But shm would
still be worth having in the core library, IMO.

The mimecntl module supports classes for representing MIME objects that
include MIME-structure-sensitive mutation operations.  Very strong candidate
for inclusion, IMO.

> > Now, assuming I do 3, would I need to go through the vote process
> > on each of these, or can I get a ukase from the BDFL authorizing me to
> > fold in stuff?
> 
> Sorry, I don't write blank checks.

And I wasn't expecting one.  I'll write up some thoughts about this in the PEP.
 
> > I realize I'm raising questions for which there are no easy answers.
> > But Python is growing.  The Python social machine needs to adapt to
> > make such decisions in a more timely and less ad-hoc fashion.  I'm
> > not attached to being the point person in this process, but
> > somebody's gotta be.
> 
> Watch out though: if we open the floodgates now we may seriously
> deteriorate the quality of the standard library, without doing much
> good.

The alternative is to end up with a Perl-like Case of the Missing Modules,
where lots of things Python writers should be able to count on as standard
builtins can't realistically be used, because the users they deliver to
aren't going to want to go through a download step.
 
> I'd much rather see an improved Vaults of Parnassus (where every
> module uses distutils and installation becomes trivial) than a
> fast-track process for including new code in the core.

The trouble is that I flat don't believe in this solution.  It works OK
for developers, who will be willing to do extra download steps -- but it
won't fly with end-user types.

> That said, I think writing a bunch of thoughts up as a PEP makes a lot
> of sense!

I've applied to initiate PEP 2.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Hoplophobia (n.): The irrational fear of weapons, correctly described by 
Freud as "a sign of emotional and sexual immaturity".  Hoplophobia, like
homophobia, is a displacement symptom; hoplophobes fear their own
"forbidden" feelings and urges to commit violence.  This would be
harmless, except that they project these feelings onto others.  The
sequelae of this neurosis include irrational and dangerous behaviors
such as passing "gun-control" laws and trashing the Constitution.

--EVF5PPMfhYS0aIcm
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="menubrowser.py"

# menubrowser.py -- framework class for abstract browser objects

from sys import stderr

class MenuBrowser:
    "Support abstract browser operations on a stack of indexable objects."
    def __init__(self, debug=0, errout=stderr):
        self.page_stack = []
        self.selection_stack = []
        self.viewbase_stack = []
        self.viewport_height = 0
        self.debug = debug
        self.errout = errout

    def match(self, a, b):
        "Browseable-object comparison."
        return a == b

    def push(self, browseable, selected=None):
        "Push a browseable object onto the location stack."
        if self.debug:
            self.errout.write("menubrowser.push(): pushing %s=@%d, selection=%s\n" % (browseable, id(browseable), `selected`))
        selnum = 0
        if selected is None:
            if self.debug:
                self.errout.write("menubrowser.push(): selection defaulted\n")
        else:
            for i in range(len(browseable)):
                selnum = len(browseable) - i - 1
                if self.match(browseable[selnum], selected):
                     break
            if self.debug:
                self.errout.write("menubrowser.push(): selection set to %d\n" % (selnum))
        self.page_stack.append(browseable)
        self.selection_stack.append(selnum)
        self.viewbase_stack.append(selnum - selnum % self.viewport_height)
        if self.debug:
            object = self.page_stack[-1]
            selection = self.selection_stack[-1]
            viewbase = self.viewbase_stack[-1]
            self.errout.write("menubrowser.push(): pushed %s=@%d->%d, selection=%d, viewbase=%d\n" % (object, id(object), len(self.page_stack), selection, viewbase))

    def pop(self):
        "Pop a browseable object off the location stack."
        if not self.page_stack:
            if self.debug:
                self.errout.write("menubrowser.pop(): stack empty\n")
            return None
        else:
            item = self.page_stack[-1]
            self.page_stack = self.page_stack[:-1]
            self.selection_stack = self.selection_stack[:-1]
            self.viewbase_stack = self.viewbase_stack[:-1]
            if self.debug:
                if len(self.page_stack) == 0:
                    self.errout.write("menubrowser.pop(): stack is empty.")
                else:
                    self.errout.write("menubrowser.pop(): new level %d, object=@%d, selection=%d, viewbase=%d\n" % (len(self.page_stack), id(self.page_stack[-1]), self.selection_stack[-1], self.viewbase_stack[-1]))
            return item

    def stackdepth(self):
        "Return the current stack depth."
        return len(self.page_stack)

    def list(self):
        "Return all elements of the current object that ought to be visible."
        if not self.page_stack:
            return None
        object = self.page_stack[-1]
        selection = self.selection_stack[-1]
        viewbase = self.viewbase_stack[-1]

        if self.debug:
            self.errout.write("menubrowser.list(): stack level %d. object @%d, listing %s\n" % (len(self.page_stack)-1, id(object), object[viewbase:viewbase+self.viewport_height]))

        # This requires a slice method
        return object[viewbase:viewbase+self.viewport_height]

    def top(self):
        "Return the top-of-stack menu"
        if self.debug >= 2:
            self.errout.write("menubrowser.top(): level=%d, @%d\n" % (len(self.page_stack)-1,id(self.page_stack[-1])))
        return self.page_stack[-1]

    def selected(self):
        "Return the currently selected element in the top menu."
        object = self.page_stack[-1]
        selection = self.selection_stack[-1]
        if self.debug:
            self.errout.write("menubrowser.selected(): at %d, object=@%d, %s\n" % (len(self.page_stack)-1, id(object), self.selection_stack[-1]))
        return object[selection]

    def viewbase(self):
        "Return the viewport base of the current menu."
        object = self.page_stack[-1]
        selection = self.selection_stack[-1]
        base = self.viewbase_stack[-1]
        if self.debug:
            self.errout.write("menubrowser.viewbase(): at level=%d, object=@%d, %d\n" % (len(self.page_stack)-1, id(object), base,))
        return base

    def thumb(self):
        "Return top and bottom boundaries of a thumb scaled to the viewport."
        object = self.page_stack[-1]
        windowscale = float(self.viewport_height) / float(len(object))
        thumb_top = self.viewbase() * windowscale
        thumb_bottom = thumb_top + windowscale * self.viewport_height - 1
        return (thumb_top, thumb_bottom)

    def move(self, delta=1, wrap=0):
        "Move the selection within the current object up or down."
        if delta == 0:
            return
        object = self.page_stack[-1]
        oldloc = self.selection_stack[-1]

        # Change the selection.  Requires a length method
        if oldloc + delta in range(len(object)):
            newloc = oldloc + delta
        elif wrap:
            newloc = (oldloc + delta) % len(object)
        elif delta > 0:
            newloc = len(object) - 1
        else:
            newloc = 0
        self.selection_stack[-1] = newloc

        # When the selection is moved out of the viewport, move the viewbase
        # just far enough to track it.
        oldbase = self.viewbase_stack[-1]
        if newloc in range(oldbase, oldbase + self.viewport_height):
            pass
        elif newloc < oldbase:
            self.viewbase_stack[-1] = newloc
        else:
            self.scroll(newloc - (oldbase + self.viewport_height) + 1)

        if self.debug:
            self.errout.write("menubrowser.move(): at level=%d, object=@%d, old selection=%d, new selection = %d, new base = %d\n" % (len(self.page_stack)-1, id(object), oldloc, newloc, self.viewbase_stack[-1]))

        return (oldloc != newloc)

    def scroll(self, delta=1, wrap=0):
        "Scroll the viewport up or down in the current object."
        if self.debug:
            self.errout.write("menubrowser.scroll(): delta=%d\n" % (delta,))
        object = self.page_stack[-1]
        if not wrap:
            oldbase = self.viewbase_stack[-1]
            if delta > 0 and oldbase+delta > len(object)-self.viewport_height:
                return
            elif delta < 0 and oldbase + delta < 0:
                return
        self.viewbase_stack[-1] = (self.viewbase_stack[-1] + delta) % len(object)

    def dump(self):
        "Dump the whole stack of objects."
        self.errout.write("Viewport height: %d\n" % (self.viewport_height,))
        for i in range(len(self.page_stack)):
            self.errout.write("Page: %d\n" % (i,))
            self.errout.write("Selection: %d\n" % (self.selection_stack[i],))
            self.errout.write(`self.page_stack[i]` + "\n")

    def next(self, wrap=0):
        return self.move(1, wrap)

    def previous(self, wrap=0):
        return self.move(-1, wrap)

    def page_down(self):
        return self.move(2*self.viewport_height-1)

    def page_up(self):
        return self.move(-(2*self.viewport_height-1))

if __name__ == '__main__': 
    import cmd, string, readline

    def itemgen(prefix, count):
        return map(lambda x, pre=prefix: pre + `x`, range(count))

    testobject = MenuBrowser()
    testobject.viewport_height = 6
    testobject.push(itemgen("A", 11))

    class browser(cmd.Cmd):
        def __init__(self):
            self.wrap = 0
            self.prompt = "browser> "

        def preloop(self):
            print "%d: %s (%d) in %s" %  (testobject.stackdepth(), testobject.selected(), testobject.viewbase(), testobject.list())

        def postloop(self):
            print "Goodbye."

        def postcmd(self, stop, line):
            self.preloop()
            return stop

        def do_quit(self, line):
            return 1

        def do_exit(self, line):
            return 1

        def do_EOF(self, line):
            return 1

        def do_list(self, line):
            testobject.dump()

        def do_n(self, line):
            testobject.next()

        def do_p(self, line):
            testobject.previous()

        def do_pu(self, line):
            testobject.page_up()

        def do_pd(self, line):
            testobject.page_down()

        def do_up(self, line):
            if string.strip(line):
                n = string.atoi(line)
            else:
                n = 1
            testobject.move(-n, self.wrap)

        def do_down(self, line):
            if string.strip(line):
                n = string.atoi(line)
            else:
                n = 1
            testobject.move(n, self.wrap)

        def do_s(self, line):
            if string.strip(line):
                n = string.atoi(line)
            else:
                n = 1
            testobject.scroll(n, self.wrap)

        def do_pop(self, line):
            testobject.pop()

        def do_gen(self, line):
            tokens = string.split(line)
            testobject.push(itemgen(tokens[0], string.atoi(tokens[1])))

        def do_dump(self, line):
            testobject.dump()

        def do_wrap(self, line):
            self.wrap = 1 - self.wrap
            if self.wrap:
                print "Wrap is now on."
            else:
                print "Wrap is now off."

        def emptyline(self):
            pass

    browser().cmdloop()

--EVF5PPMfhYS0aIcm--


From MarkH@ActiveState.com  Tue Aug  8 02:36:24 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Tue, 8 Aug 2000 11:36:24 +1000
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000807181302.A27463@thyrsus.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMELLDDAA.MarkH@ActiveState.com>

> Guido asked if this was similar to the TreeWidget class in IDLE.  I
> investigated and discovered that it is not, and told him so.  I am left
> with a couple of related questions:
>
> 1. Has anybody got a vote on the menubrowser framework facility I
> described?

I would have to give it a -1.  It probably should only be a -0, but I
dropped it a level in the interests of keeping the library small and
relevant.

In a nutshell, it is proposed as a "framework class for abstract browser
objects", but I don't see how.  It looks like a reasonable framework for a
particular kind of browser built for a text based system.  I can not see
how a GUI browser could take advantage of it.

For example:
* How does a "page" concept make sense in a high-res GUI?  Why do we have a
stack of pages?
* What is a "viewport height" - is that a measure of pixels?  If not, what
font are you assuming?  (sorry - obviously rhetorical, given my "text only"
comments above.)
* How does a "thumb position" relate to the scroll bars that existing GUI
widgets almost certainly have?

etc.

While I am sure you find it useful, I don't see how it helps anyone else,
so I don't see how it qualifies as a standard module.

If it is designed as part of a "curses" package, then I would be +0 - I
would happily defer to your (or someone else's) judgement regarding its
relevance in that domain.

Obviously, there is a reasonable chance I am missing the point....

Mark.



From bwarsaw@beopen.com  Tue Aug  8 03:34:18 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 22:34:18 -0400 (EDT)
Subject: [Python-Dev] Request for PEP number
References: <200008072331.TAA27825@snark.thyrsus.com>
Message-ID: <14735.29098.168698.86981@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr@snark.thyrsus.com> writes:

    ESR> In accordance with the procedures in PEP 1, I am applying to
    ESR> initiate PEP 2.

    ESR> Proposed title: Procedure for incorporating new modules into
    ESR> the core.

    ESR> Abstract: This PEP will describe review and voting
    ESR> procedures for incorporating candidate modules and extensions
    ESR> into the Python core.

Done.

    ESR> Barry, could I get you to create a pep2@python.org mailing
    ESR> list for this one?

We decided not to create separate mailing lists for each PEP.

-Barry


From greg@cosc.canterbury.ac.nz  Tue Aug  8 04:08:48 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 08 Aug 2000 15:08:48 +1200 (NZST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A
 small proposed change to dictionaries' "get" method...)
In-Reply-To: <m13Lj9u-000DieC@artcom0.artcom-gmbh.de>
Message-ID: <200008080308.PAA12740@s454.cosc.canterbury.ac.nz>

artcom0!pf@artcom-gmbh.de:
>	dict.setdefault('key', [])
>	dict['key'].append('bar')

I would agree with this more if it said

   dict.setdefault([])
   dict['key'].append('bar')

But I have a problem with all of these proposals: they require
implicitly making a copy of the default value, which violates
the principle that Python never copies anything unless you
tell it to. The default "value" should really be a thunk, not
a value, e.g.

   dict.setdefault(lambda: [])
   dict['key'].append('bar')

or

   dict.get_or_add('key', lambda: []).append('bar')

But I don't really like that, either, because lambdas look
ugly to me, and I don't want to see any more builtin
constructs that more-or-less require their use.

I keep thinking that the solution to this lies somewhere
in the direction of short-circuit evaluation techniques and/or
augmented assignment, but I can't quite see how yet.
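
The thunk idea above can be sketched as a plain helper function (a
hypothetical illustration only -- `get_or_add` is not an existing dict
method, and the name is just the one floated in this message):

```python
# Hypothetical helper illustrating the thunk-based default: the default
# is a zero-argument callable, invoked only when the key is missing, so
# no default object is built unless it is actually needed.
def get_or_add(d, key, thunk):
    if key not in d:
        d[key] = thunk()      # the thunk runs only on a miss
    return d[key]

d = {}
get_or_add(d, 'key', lambda: []).append('bar')
get_or_add(d, 'key', lambda: []).append('baz')  # hit: thunk not called again
# d is now {'key': ['bar', 'baz']}
```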

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From esr@thyrsus.com  Tue Aug  8 04:30:03 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 23:30:03 -0400
Subject: [Python-Dev] Request for PEP number
In-Reply-To: <14735.29098.168698.86981@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 07, 2000 at 10:34:18PM -0400
References: <200008072331.TAA27825@snark.thyrsus.com> <14735.29098.168698.86981@anthem.concentric.net>
Message-ID: <20000807233003.A28267@thyrsus.com>

Barry A. Warsaw <bwarsaw@beopen.com>:
>     ESR> Barry, could I get you to create a pep2@python.org mailing
>     ESR> list for this one?
> 
> We decided not to create separate mailing lists for each PEP.

OK, where should discussion take place?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

A ``decay in the social contract'' is detectable; there is a growing
feeling, particularly among middle-income taxpayers, that they are not
getting back, from society and government, their money's worth for
taxes paid. The tendency is for taxpayers to try to take more control
of their finances ..
	-- IRS Strategic Plan, (May 1984)


From tim_one@email.msn.com  Tue Aug  8 04:44:05 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 7 Aug 2000 23:44:05 -0400
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <200008080308.PAA12740@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com>

> artcom0!pf@artcom-gmbh.de:
> >	dict.setdefault('key', [])
> >	dict['key'].append('bar')
>

[Greg Ewing]
> I would agree with this more if it said
>
>    dict.setdefault([])
>    dict['key'].append('bar')

Ha!  I *told* Guido people would think that's the proper use of something
named setdefault <0.9 wink>.

> But I have a problem with all of these proposals: they require
> implicitly making a copy of the default value, which violates
> the principle that Python never copies anything unless you
> tell it to.

But they don't.  The occurrence of an, e.g., [] literal in Python source
*always* leads to a fresh list being created whenever the line of code
containing it is executed.  That behavior is guaranteed by the Reference
Manual.  In that respect

    dict.get('hi', [])
or
    dict.getorset('hi', []).append(42)  # getorset is my favorite

is exactly the same as

    x = []

No copy of anything is made; the real irritation is that because arguments
are always evaluated, we end up mucking around allocating an empty list
regardless of whether it's needed; which you partly get away from via your:

 The default "value" should really be a thunk, not
> a value, e.g.
>
>    dict.setdefault(lambda: [])
>    dict['key'].append('bar')
>
> or
>
>    dict.get_or_add('key', lambda: []).append('bar')

except that lambda is also an executable expression and so now we end up
creating an anonymous function dynamically regardless of whether it's
needed.
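
Both points can be checked directly -- a `[]` literal yields a fresh
list on every evaluation, and an argument default is constructed whether
or not the lookup ends up needing it (a small illustrative check;
`tracked_list` is just a stand-in):

```python
# A [] literal creates a new list object each time it is evaluated;
# no copying is involved.
a = []
b = []
assert a is not b           # two distinct fresh lists

# Arguments are evaluated eagerly: the default is constructed even
# when the key is present and the default goes unused.
made = []
def tracked_list():
    made.append('built')
    return []

d = {'hi': [1]}
d.get('hi', tracked_list())  # default built anyway, then discarded
assert made == ['built']
```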

> But I don't really like that, either, because lambdas look
> ugly to me, and I don't want to see any more builtin
> constructs that more-or-less require their use.

Ditto.

> I keep thinking that the solution to this lies somewhere
> in the direction of short-circuit evaluation techniques and/or
> augmented assignment, but I can't quite see how yet.

If new *syntax* were added, the compiler could generate short-circuiting
code.  Guido will never go for this <wink>, but to make it concrete, e.g.,

    dict['key']||[].append('bar')
    count[word]||0 += 1

I found that dict.get(...) already confused my brain at times because my
*eyes* want to stop at "[]" when scanning code for dict references.
".get()" just doesn't stick out as much; setdefault/default/getorset won't
either.

can't-win-ly y'rs  - tim




From esr@thyrsus.com  Tue Aug  8 04:55:14 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 23:55:14 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMELLDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Tue, Aug 08, 2000 at 11:36:24AM +1000
References: <20000807181302.A27463@thyrsus.com> <ECEPKNMJLHAPFFJHDOJBMELLDDAA.MarkH@ActiveState.com>
Message-ID: <20000807235514.C28267@thyrsus.com>

Mark Hammond <MarkH@ActiveState.com>:
> For example:
> * How does a "page" concept make sense in a high-res GUI?  Why do we have a
> stack of pages?
> * What is a "viewport height" - is that a measure of pixels?  If not, what
> font are you assuming?  (sorry - obviously rhetorical, given my "text only"
> comments above.)
> * How does a "thumb position" relate to scroll bars that existing GUI
> widgets almost certainly have.

It's not designed for use with graphical browsers.  Here are three contexts
that could use it:

* A menu tree being presented through a window or viewport (this is how it's
  being used now).

* A symbolic debugger that can browse text around a current line.

* A database browser for a sequential record-based file format.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Under democracy one party always devotes its chief energies
to trying to prove that the other party is unfit to rule--and
both commonly succeed, and are right... The United States
has never developed an aristocracy really disinterested or an
intelligentsia really intelligent. Its history is simply a record
of vacillations between two gangs of frauds. 
	--- H. L. Mencken


From tim_one@email.msn.com  Tue Aug  8 05:52:20 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 8 Aug 2000 00:52:20 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008080042.TAA31856@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEJJGOAA.tim_one@email.msn.com>

[Guido]
> ...
> Nothing has ever been accepted into Python before the code
> was written and shown.

C'mon, admit it:  you were sooooo appalled by the thread that led to the
creation of tabnanny.py that you decided at once it would end up in the
distribution, just so you could justify skipping all the dozens of tedious
long messages in which The Community developed The General Theory of
Tab-Space Equivalence ab initio.  It was just too much of a
stupid-yet-difficult hack to resist <wink>.

> ...
> I want Python to keep its typical Guido-flavored style,

So do most of us, most of the time.  Paradoxically, it may be easier to
stick to that as Python's popularity zooms beyond the point where it's even
*conceivable* that "votes" make any sense.

> and (apart from the occasional successful channeling by TP) there's
> only one way to do that: let me be the final arbiter.

Well, there's only one *obvious* way to do it.  That's what keeps it
Pythonic.

> I'm willing to be the bottleneck, it gives Python the typical slow-
> flowing evolution that has served it well over the past ten years.

Except presumably for 2.0, where we decided at the last second to change
large patches from "postponed" to "gotta have it".  Consistency is the
hobgoblin ...

but-that's-pythonic-too-ly y'rs  - tim




From Moshe Zadka <moshez@math.huji.ac.il>  Tue Aug  8 06:42:30 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Tue, 8 Aug 2000 08:42:30 +0300 (IDT)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808002335.A266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008080836470.1417-100000@sundial>

On Tue, 8 Aug 2000, Thomas Wouters wrote:

> > PIL,
> 
> +0. The main reason I don't compile PIL myself is because it's such a hassle
> to do it each time, so I think adding it would be nice. However, I'm not
> sure if it's doable to add, whether it would present a lot of problems for
> 'strange' platforms and the like.
> 
> > and Vladimir's shared-memory module.
> 
> +1. Fits very nicely with the mmapmodule, even if it's supported on less
> platforms.
> 
> But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
> new PEP, 'enriching the Standard Library' ?

PIL is definitely in PEP 206. The others are not yet there. Please note
that a central database of "useful modules" shipped as distutils .tar.gz
files (or maybe .zip, now that Python has zipfile.py) + a simple tool to
download and install them would serve most needs here. The main reason
for the "batteries included" PEP is reliance on external libraries,
which do not mesh as well with the distutils.

expect-a-change-of-direction-in-the-pep-ly y'rs, Z.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From pf@artcom-gmbh.de  Tue Aug  8 09:00:29 2000
From: pf@artcom-gmbh.de (Peter Funk)
Date: Tue, 8 Aug 2000 10:00:29 +0200 (MEST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com> from Tim Peters at "Aug 7, 2000 11:44: 5 pm"
Message-ID: <m13M4JJ-000DieC@artcom0.artcom-gmbh.de>

Hi Tim!

Tim Peters:
[...]
> Ha!  I *told* Guido people would think that's the proper use of something
> named setdefault <0.9 wink>.
[...]
>     dict.getorset('hi', []).append(42)  # getorset is my favorite

'getorset' is a *MUCH* better name than 'default' or 'setdefault'.

Regards, Peter


From R.Liebscher@gmx.de  Tue Aug  8 10:26:47 2000
From: R.Liebscher@gmx.de (Rene Liebscher)
Date: Tue, 08 Aug 2000 11:26:47 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub>
Message-ID: <398FD257.CFDC3B74@gmx.de>

Greg Ward wrote:
> 
> On 04 August 2000, Mark Hammond said:
> > I would prefer python20_bcpp.lib, but that is not an issue.
> 
> Good suggestion: the contents of the library are more important than the
> format.  Rene, can you make this change and include it in your next
> patch?  Or did you have some hidden, subtle reason for "bcpp_python20" as
> opposed to "python20_bcpp"?
OK, it is no problem to change it.
> 
> > I am a little confused by the intention, tho.  Wouldnt it make sense to
> > have Borland builds of the core create a Python20.lib, then we could keep
> > the pragma in too?
> >
> > If people want to use Borland for extensions, can't we ask them to use that
> > same compiler to build the core too?  That would seem to make lots of the
> > problems go away?
> 
> But that requires people to build all of Python from source, which I'm
> guessing is a bit more bothersome than building an extension or two from
> source.  Especially since Python is already distributed as a very
> easy-to-use binary installer for Windows, but most extensions are not.
> 
> Rest assured that we probably won't be making things *completely*
> painless for those who do not toe Chairman Bill's party line and insist
> on using "non-standard" Windows compilers.  They'll probably have to get
> python20_bcpp.lib (or python20_gcc.lib, or python20_lcc.lib) on their
> own -- whether downloaded or generated, I don't know.  But the
> alternative is to include 3 or 4 python20_xxx.lib files in the standard
> Windows distribution, which I think is silly.
(GCC uses libpython20.a)
It is not necessary to include the libraries for all compilers. The only
thing that is necessary is a def-file for the library. Every compiler I
know has a program to create an import library from a def-file.
BCC55 can even convert python20.lib into its own format. (The program is
called "coff2omf". BCC55 uses the OMF format for its libraries, which is
different from MSVC's COFF format. (This answers your question, Tim?))
Maybe there should be a file in the distribution which explains what to
do if someone wants to use another compiler, especially how to build an
import library for that compiler, or at least some general information
about what you need to do.
(Or should it be included in the 'Ext' documentation?)

kind regards

Rene Liebscher


From Vladimir.Marangozov@inrialpes.fr  Tue Aug  8 11:00:35 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Tue, 8 Aug 2000 12:00:35 +0200 (CEST)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 07, 2000 07:42:40 PM
Message-ID: <200008081000.MAA29344@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> I don't know mimectl or Vladimir's module (how does it compare to
> mmap?).

To complement ESR:

- written 3 years ago
- exports a file-like interface, defines 2 object types: shm & sem
- resembles buffer but lacks the slice interface.
- has all sysV shared memory bells & whistles + native semaphore support

http://sirac.inrialpes.fr/~marangoz/python/shm

Technically, mmap is often built on top of shared memory OS facilities.
Adding slices + Windows code for shared mem & semaphores + a simplified
unified interface might be a plan.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From mal@lemburg.com  Tue Aug  8 11:46:25 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 08 Aug 2000 12:46:25 +0200
Subject: [Python-Dev] Adding library modules to the core
References: <200008081000.MAA29344@python.inrialpes.fr>
Message-ID: <398FE501.FFF09FAE@lemburg.com>

Vladimir Marangozov wrote:
> 
> Guido van Rossum wrote:
> >
> > I don't know mimectl or Vladimir's module (how does it compare to
> > mmap?).
> 
> To complement ESR:
> 
> - written 3 years ago
> - exports a file-like interface, defines 2 object types: shm & sem
> - resembles buffer but lacks the slice interface.
> - has all sysV shared memory bells & whistles + native semaphore support
> 
> http://sirac.inrialpes.fr/~marangoz/python/shm
> 
> Technically, mmap is often built on top of shared memory OS facilities.
> Adding slices + Windows code for shared mem & semaphores + a simplified
> unified interface might be a plan.

I would be +1 if you could get it to work on Windows, +0
otherwise.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From R.Liebscher@gmx.de  Tue Aug  8 12:41:12 2000
From: R.Liebscher@gmx.de (Rene Liebscher)
Date: Tue, 08 Aug 2000 13:41:12 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de>
Message-ID: <398FF1D8.A91A8C02@gmx.de>

Rene Liebscher wrote:
> 
> Greg Ward wrote:
> >
> > On 04 August 2000, Mark Hammond said:
> > > I would prefer python20_bcpp.lib, but that is not an issue.
> >
> > Good suggestion: the contents of the library are more important than the
> > format.  Rene, can you make this change and include it in your next
> > patch?  Or did you have some hidden, subtle reason for "bcpp_python20" as
> > opposed to "python20_bcpp"?
> OK, it is no problem to change it.
I forgot to ask which name you would like for debug libraries

	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"

Maybe we should use "bcpp_python20_d.lib", and use the name schema which
I suggested first.


kind regards
 
Rene Liebscher


From skip@mojam.com (Skip Montanaro)  Tue Aug  8 14:24:06 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Tue, 8 Aug 2000 08:24:06 -0500 (CDT)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <m13M4JJ-000DieC@artcom0.artcom-gmbh.de>
References: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com>
 <m13M4JJ-000DieC@artcom0.artcom-gmbh.de>
Message-ID: <14736.2550.586217.758500@beluga.mojam.com>

    >> dict.getorset('hi', []).append(42)  # getorset is my favorite

    Peter> 'getorset' is a *MUCH* better name compared to 'default' or
    Peter> 'setdefault'.

Shouldn't that be getorsetandget?  After all, it doesn't just set or get:
it gets, but if the key is undefined, it sets, then gets.

I know I'll be shouted down, but I still vote against a method that both
sets and gets dict values.  I don't think the abbreviation in the source is
worth the obfuscation of the code.

Skip



From akuchlin@mems-exchange.org  Tue Aug  8 14:31:29 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Tue, 8 Aug 2000 09:31:29 -0400
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <14736.2550.586217.758500@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 08, 2000 at 08:24:06AM -0500
References: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com> <m13M4JJ-000DieC@artcom0.artcom-gmbh.de> <14736.2550.586217.758500@beluga.mojam.com>
Message-ID: <20000808093129.A18519@kronos.cnri.reston.va.us>

On Tue, Aug 08, 2000 at 08:24:06AM -0500, Skip Montanaro wrote:
>I know I'll be shouted down, but I still vote against a method that both
>sets and gets dict values.  I don't think the abbreviation in the source is
>worth the obfuscation of the code.

-1 from me, too.  A shortcut that only saves a line or two of code
isn't worth the obscurity of the name.

("Ohhh, I get it.  Back on that old minimalism kick, Andrew?"

"Not back on it.  Still on it.")

--amk



From effbot@telia.com  Tue Aug  8 16:10:28 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 8 Aug 2000 17:10:28 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <ECEPKNMJLHAPFFJHDOJBMEHPDDAA.MarkH@ActiveState.com>
Message-ID: <00c901c0014a$cc7c9be0$f2a6b5d4@hagrid>

mark wrote:
> So I think that the adoption of our half-solution (ie, we are really only
> forcing a better error message - not even getting a traceback to indicate
> _which_ module fails)

note that the module name is available to the Py_InitModule4
module (for obvious reasons ;-), so it's not that difficult to
improve the error message.

how about:

...

static char not_initialized_error[] =
"ERROR: Module %.200s loaded an uninitialized interpreter!\n\
  This Python has API version %d, module %.200s has version %d.\n";

...

    if (!Py_IsInitialized()) {
        char message[500];
        sprintf(message, not_initialized_error, name, PYTHON_API_VERSION,
            name, module_api_version);
        Py_FatalError(message);
    }

</F>



From pf@artcom-gmbh.de  Tue Aug  8 15:48:32 2000
From: pf@artcom-gmbh.de (Peter Funk)
Date: Tue, 8 Aug 2000 16:48:32 +0200 (MEST)
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <14736.2550.586217.758500@beluga.mojam.com> from Skip Montanaro at "Aug 8, 2000  8:24: 6 am"
Message-ID: <m13MAgC-000DieC@artcom0.artcom-gmbh.de>

Hi,

Tim Peters:
>     >> dict.getorset('hi', []).append(42)  # getorset is my favorite
> 
>     Peter> 'getorset' is a *MUCH* better name compared to 'default' or
>     Peter> 'setdefault'.
 
Skip Montanaro:
> Shouldn't that be getorsetandget?  After all, it doesn't just set or get it
> gets, but if it's undefined, it sets, then gets.

That would defeat the main purpose of this method: abbreviation.
This name is simply too long.

> I know I'll be shouted down, but I still vote against a method that both
> sets and gets dict values.  I don't think the abbreviation in the source is
> worth the obfuscation of the code.

Yes.  
But I got the impression that Patch#101102 can't be avoided any more.  
So in this situation Tims '.getorset()' is the lesser of two evils 
compared to '.default()' or '.setdefault()'.

BTW: 
I think the "informal" mapping interface should get more explicit
documentation.  The language reference only mentions the 'len()'
builtin function and indexing.  But the section about mappings
contains the sentence: "The extension modules dbm, gdbm, bsddb provide
additional examples of mapping types."

On the other hand section "2.1.6 Mapping Types" of the library reference
says: "The following operations are defined on mappings ..." and then
lists all methods including 'get()', 'update()', 'copy()' ...

Unfortunately only a small subset of these methods actually works on
a dbm mapping:

>>> import dbm
>>> d = dbm.open("piff", "c")
>>> d.get('foo', [])
Traceback (innermost last):
  File "<stdin>", line 1, in ?
  AttributeError: get
>>> d.copy()
Traceback (innermost last):
  File "<stdin>", line 1, in ?
  AttributeError: copy
   
That should be documented.

Regards, Peter


From trentm@ActiveState.com  Tue Aug  8 16:18:12 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Tue, 8 Aug 2000 08:18:12 -0700
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <398FF1D8.A91A8C02@gmx.de>; from R.Liebscher@gmx.de on Tue, Aug 08, 2000 at 01:41:12PM +0200
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de>
Message-ID: <20000808081811.A10965@ActiveState.com>

On Tue, Aug 08, 2000 at 01:41:12PM +0200, Rene Liebscher wrote:
> I forgot to ask which name you would like for debug libraries
> 
> 	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"
> 
> may be we should use "bcpp_python20_d.lib", and use the name schema
> which 
> I suggested first.

Python20 is most important so it should go first. Then I suppose it is
debatable whether 'd' or 'bcpp' should come first. My preference is
"python20_bcpp_d.lib" because this would maintain the pattern that the
basename of debug-built libs, etc. end in "_d".

Generally speaking this would give a name spec of

python<version>(_<metadata>)*(_d)?.lib
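
That name spec can be expressed as a regular expression (an illustrative
sketch only, not anything in the build system; the pattern and examples
are assumptions):

```python
import re

# Illustrative pattern for: python<version>(_<metadata>)*(_d)?.lib
# The negative lookahead (?!d\.) keeps a trailing "_d" from being
# swallowed as a metadata tag, so it is matched as the debug marker.
lib_name = re.compile(r"python\d+(?:_(?!d\.)[A-Za-z0-9]+)*(?:_d)?\.lib$")

assert lib_name.match("python20.lib")
assert lib_name.match("python20_bcpp.lib")
assert lib_name.match("python20_bcpp_d.lib")
assert not lib_name.match("bcpp_python20.lib")
```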


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From thomas@xs4all.net  Tue Aug  8 16:22:17 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 17:22:17 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000808081811.A10965@ActiveState.com>; from trentm@ActiveState.com on Tue, Aug 08, 2000 at 08:18:12AM -0700
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de> <20000808081811.A10965@ActiveState.com>
Message-ID: <20000808172217.G266@xs4all.nl>

On Tue, Aug 08, 2000 at 08:18:12AM -0700, Trent Mick wrote:
> On Tue, Aug 08, 2000 at 01:41:12PM +0200, Rene Liebscher wrote:
> > I forgot to ask which name you would like for debug libraries

> > 	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"

> > may be we should use "bcpp_python20_d.lib", and use the name schema
> > which I suggested first.

> Python20 is most important so it should go first.

To clarify something Rene said earlier (I appear to have deleted that mail
even though I had intended to reply to it :P) 'gcc' names its libraries
'libpython<version>.{so,a}' because that's the UNIX convention: libraries
are named 'lib<name>.<libtype>', where libtype is '.a' for static libraries
and '.so' for dynamic (ELF, in any case) ones, and you link with -l<name>,
without the 'lib' in front of it. The 'lib' is UNIX-imposed, not something
gcc or Guido made up.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From trentm@ActiveState.com  Tue Aug  8 16:26:03 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Tue, 8 Aug 2000 08:26:03 -0700
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000808172217.G266@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 05:22:17PM +0200
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de> <20000808081811.A10965@ActiveState.com> <20000808172217.G266@xs4all.nl>
Message-ID: <20000808082603.B10965@ActiveState.com>

On Tue, Aug 08, 2000 at 05:22:17PM +0200, Thomas Wouters wrote:
> On Tue, Aug 08, 2000 at 08:18:12AM -0700, Trent Mick wrote:
> > On Tue, Aug 08, 2000 at 01:41:12PM +0200, Rene Liebscher wrote:
> > > I forgot to ask which name you would like for debug libraries
> 
> > > 	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"
> 
> > > may be we should use "bcpp_python20_d.lib", and use the name schema
> > > which I suggested first.
> 
> > Python20 is most important so it should go first.
> 
> To clarify something Rene said earlier (I appear to have deleted that mail
> eventhough I had intended to reply to it :P) 'gcc' names its libraries
> 'libpython<version>.{so,a}' because that's the UNIX convention: libraries
> are named 'lib<name>.<libtype>', where libtype is '.a' for static libraries
> and '.so' for dynamic (ELF, in any case) ones, and you link with -l<name>,
> without the 'lib' in front of it. The 'lib' is UNIX-imposed, not something
> gcc or Guido made up.
> 

Yes, you are right. I was being a Windows bigot there for an email. :)


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From thomas@xs4all.net  Tue Aug  8 16:35:24 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 17:35:24 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000808082603.B10965@ActiveState.com>; from trentm@ActiveState.com on Tue, Aug 08, 2000 at 08:26:03AM -0700
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de> <20000808081811.A10965@ActiveState.com> <20000808172217.G266@xs4all.nl> <20000808082603.B10965@ActiveState.com>
Message-ID: <20000808173524.H266@xs4all.nl>

On Tue, Aug 08, 2000 at 08:26:03AM -0700, Trent Mick wrote:

[ Discussion about what to call the Borland version of python20.dll:
  bcpp_python20.dll or python20_bcpp.dll. Rene brought up that gcc calls
  "its" library libpython.so, and Thomas points out that that isn't Python's
  decision. ]

> Yes, you are right. I was being a Windows bigot there for an email. :)

And rightly so ! :) I think the 'python20_bcpp' name is more Windows-like,
and if there is some area in which Python should try to stay as platform
specific as possible, it's platform specifics such as libraries :)

Would Windows users(*) when seeing 'bcpp_python20.dll' be thinking "this is a
bcpp specific library of python20", or would they be thinking "this is a
bcpp library for use with python20" ? I'm more inclined to think the second,
myself :-)

*) And the 'user' in this context is the extension-writer and
python-embedder, of course.
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From esr@thyrsus.com  Tue Aug  8 16:46:55 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Tue, 8 Aug 2000 11:46:55 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008081000.MAA29344@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Aug 08, 2000 at 12:00:35PM +0200
References: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> <200008081000.MAA29344@python.inrialpes.fr>
Message-ID: <20000808114655.C29686@thyrsus.com>

Vladimir Marangozov <Vladimir.Marangozov@inrialpes.fr>:
> Guido van Rossum wrote:
> > 
> > I don't know mimectl or Vladimir's module (how does it compare to
> > mmap?).
> 
> To complement ESR:
> 
> - written 3 years ago
> - exports a file-like interface, defines 2 object types: shm & sem
> - resembles buffer but lacks the slice interface.
> - has all sysV shared memory bells & whistles + native semaphore support
> 
> http://sirac.inrialpes.fr/~marangoz/python/shm
> 
> Technically, mmap is often built on top of shared memory OS facilities.
> Adding slices + Windows code for shared mem & semaphores + a simplified
> unified interface might be a plan.

Vladimir, I suggest that the most useful thing you could do to advance
the process at this point would be to document shm in core-library style.

At the moment, core Python has nothing (with the weak and nonportable 
exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
shm would address a real gap in the language.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Are we at last brought to such a humiliating and debasing degradation,
that we cannot be trusted with arms for our own defence?  Where is the
difference between having our arms in our own possession and under our
own direction, and having them under the management of Congress?  If
our defence be the *real* object of having those arms, in whose hands
can they be trusted with more propriety, or equal safety to us, as in
our own hands?
        -- Patrick Henry, speech of June 9 1788


From tim_one@email.msn.com  Tue Aug  8 16:46:00 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 8 Aug 2000 11:46:00 -0400
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <14736.2550.586217.758500@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEKNGOAA.tim_one@email.msn.com>

[Skip Montanaro, on .getorset]
> Shouldn't that be getorsetandget?  After all, it doesn't just set or
> get: it gets, but if the key is undefined, it sets, then gets.

It's mnemonic enough for me.  You can take comfort in that Guido seems to
like "default" better, and is merely incensed by arguments about names
<wink>.

> I know I'll be shouted down, but I still vote against a method that both
> sets and gets dict values.  I don't think the abbreviation in the
> source is worth the obfuscation of the code.

So this is at least your second vote, while I haven't voted at all?  I
protest.

+1 from me.  I'd use it a lot.  Yes, I'm one of those who probably has more
dicts mapping to lists than to strings, and

    if dict.has_key(thing):
        dict[thing].append(newvalue)
    else:
        dict[thing] = [newvalue]

litters my code -- talk about obfuscated!  Of course I know shorter ways to
spell that, but I find them even more obscure than the above.  Easing a
common operation is valuable, firmly in the tradition of the list.extend(),
list.pop(), dict.get(), 3-arg getattr() and no-arg "raise" extensions.  The
*semantics* are clear and non-controversial and frequently desired, they're
simply clumsy to spell now.
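With the proposed method, that four-line dance collapses to one line.  A
sketch (variable names are mine; spelling assumes the setdefault() name
that was under discussion):

```python
pairs = [("a", 1), ("b", 2), ("a", 3)]

# The idiom from the message (has_key() spelled as `in` so this runs today):
old = {}
for thing, newvalue in pairs:
    if thing in old:
        old[thing].append(newvalue)
    else:
        old[thing] = [newvalue]

# The proposed one-liner replacement: setdefault() inserts the default
# list on first sight of the key and returns it either way.
new = {}
for thing, newvalue in pairs:
    new.setdefault(thing, []).append(newvalue)
```

Both loops build the same mapping of keys to lists of values.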

The usual ploy in cases like this is to add the new gimmick and call it
"experimental".  Phooey.  Add it or don't.

for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim




From guido@beopen.com  Tue Aug  8 17:51:27 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 08 Aug 2000 11:51:27 -0500
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: Your message of "Tue, 08 Aug 2000 11:46:55 -0400."
 <20000808114655.C29686@thyrsus.com>
References: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> <200008081000.MAA29344@python.inrialpes.fr>
 <20000808114655.C29686@thyrsus.com>
Message-ID: <200008081651.LAA01319@cj20424-a.reston1.va.home.com>

> At the moment, core Python has nothing (with the weak and nonportable 
> exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> shm would address a real gap in the language.

If it also works on Windows.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Vladimir.Marangozov@inrialpes.fr  Tue Aug  8 16:58:27 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Tue, 8 Aug 2000 17:58:27 +0200 (CEST)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808114655.C29686@thyrsus.com> from "Eric S. Raymond" at Aug 08, 2000 11:46:55 AM
Message-ID: <200008081558.RAA30190@python.inrialpes.fr>

Eric S. Raymond wrote:
>
> At the moment, core Python has nothing (with the weak and nonportable
> exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> shm would address a real gap in the language.

There's a Semaphore class in Lib/threading.py. Are there any problems
with it? I haven't used it, but threading.py has thread mutexes and
semaphores on top of them, so as long as you don't need IPC, they should
be fine. Or am I missing something?
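For reference, a minimal sketch of the threading.Semaphore API in question
(the demo names are my own; note this coordinates threads within a single
process only, which is exactly the limitation at issue):

```python
import threading

# At most two threads may hold the semaphore at once.
sem = threading.Semaphore(2)
done = []

def worker(n):
    sem.acquire()
    try:
        done.append(n)    # "critical section"
    finally:
        sem.release()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```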

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From esr@thyrsus.com  Tue Aug  8 17:07:15 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Tue, 8 Aug 2000 12:07:15 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008081558.RAA30190@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Aug 08, 2000 at 05:58:27PM +0200
References: <20000808114655.C29686@thyrsus.com> <200008081558.RAA30190@python.inrialpes.fr>
Message-ID: <20000808120715.A29873@thyrsus.com>

Vladimir Marangozov <Vladimir.Marangozov@inrialpes.fr>:
> Eric S. Raymond wrote:
> >
> > At the moment, core Python has nothing (with the weak and nonportable
> > exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> > shm would address a real gap in the language.
> 
> There's a Semaphore class in Lib/threading.py. Are there any problems
> with it? I haven't used it, but threading.py has thread mutexes and
> semaphores on top of them, so as long as you don't need IPC, they should
> be fine. Or am I missing something?

If I'm not mistaken, that's semaphores across a thread bundle within
a single process. It's semaphores visible across processes that I 
don't think we currently have a facility for.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The people cannot delegate to government the power to do anything
which would be unlawful for them to do themselves.
	-- John Locke, "A Treatise Concerning Civil Government"


From esr@thyrsus.com  Tue Aug  8 17:07:58 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Tue, 8 Aug 2000 12:07:58 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008081651.LAA01319@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Tue, Aug 08, 2000 at 11:51:27AM -0500
References: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> <200008081000.MAA29344@python.inrialpes.fr> <20000808114655.C29686@thyrsus.com> <200008081651.LAA01319@cj20424-a.reston1.va.home.com>
Message-ID: <20000808120758.B29873@thyrsus.com>

Guido van Rossum <guido@beopen.com>:
> > At the moment, core Python has nothing (with the weak and nonportable 
> > exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> > shm would address a real gap in the language.
> 
> If it also works on Windows.

As usual, I expect Unix to lead and Windows to follow.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Government is actually the worst failure of civilized man. There has
never been a really good one, and even those that are most tolerable
are arbitrary, cruel, grasping and unintelligent.
	-- H. L. Mencken 


From guido@beopen.com  Tue Aug  8 18:18:49 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 08 Aug 2000 12:18:49 -0500
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: Your message of "Tue, 08 Aug 2000 11:46:00 -0400."
 <LNBBLJKPBEHFEDALKOLCIEKNGOAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCIEKNGOAA.tim_one@email.msn.com>
Message-ID: <200008081718.MAA01681@cj20424-a.reston1.va.home.com>

Enough said.  I've checked it in and closed Barry's patch.  Since
'default' is a Java reserved word, I decided that that would not be a
good name for it after all, so I stuck with setdefault().

> for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim

Hm.  Predictably, I'm worried about adding 'as' as a reserved word.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From effbot@telia.com  Tue Aug  8 17:17:01 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 8 Aug 2000 18:17:01 +0200
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com><14735.16162.275037.583897@anthem.concentric.net><20000807190939.A27730@thyrsus.com> <14735.16621.369206.564320@anthem.concentric.net>
Message-ID: <001101c00155$cc86ad00$f2a6b5d4@hagrid>

barry wrote:
> And there's no good way to put those into SF?  If the Patch Manager
> isn't appropriate, what about the Task Manager (I dunno, I've never
> looked at it).  The cool thing about using SF is that there's less of
> a chance that this stuff will get buried in an inbox.

why not just switch it on, and see what happens.  I'd prefer
to get a concise TODO list on the login page, rather than having
to look in various strange places (like PEP-160 and PEP-200 ;-)

</F>



From gmcm@hypernet.com  Tue Aug  8 18:51:51 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Tue, 8 Aug 2000 13:51:51 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808120715.A29873@thyrsus.com>
References: <200008081558.RAA30190@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Aug 08, 2000 at 05:58:27PM +0200
Message-ID: <1246365382-108994225@hypernet.com>

Eric Raymond wrote:
> Vladimir Marangozov <Vladimir.Marangozov@inrialpes.fr>:

> > There's a Semaphore class in Lib/threading.py. Are there any
> > problems with it? I haven't used it, but threading.py has
> > thread mutexes and semaphores on top of them, so as long as you
> > don't need IPC, they should be fine. Or am I missing something?
> 
> If I'm not mistaken, that's semaphores across a thread bundle
> within a single process. It's semaphores visible across processes
> that I don't think we currently have a facility for. 

There's the interprocess semaphore / mutex stuff in 
win32event... oh, never mind...

- Gordon


From ping@lfw.org  Tue Aug  8 21:29:52 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Tue, 8 Aug 2000 13:29:52 -0700 (PDT)
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <m13MAgC-000DieC@artcom0.artcom-gmbh.de>
Message-ID: <Pine.LNX.4.10.10008081256050.497-100000@skuld.lfw.org>

On Tue, 8 Aug 2000, Peter Funk wrote:
> 
> Unfortunately only a small subset of these methods actually works on
> a dbm mapping:
> 
> >>> import dbm
> >>> d = dbm.open("piff", "c")
> >>> d.get('foo', [])
> Traceback (innermost last):
>   File "<stdin>", line 1, in ?
>   AttributeError: get

I just got burned (again!) because neither the cgi.FieldStorage()
nor the cgi.FormContentDict() support .get().

I've submitted a patch that adds FieldStorage.get() and makes
FormContentDict a subclass of UserDict (the latter nicely eliminates
almost all of the code in FormContentDict).
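Until such a patch is in, the usual workaround can be sketched as follows
(hypothetical helper name, not part of the cgi module):

```python
def form_get(form, key, default=None):
    # Emulate dict.get() on mapping-like objects (such as the pre-patch
    # cgi classes) that only support __getitem__.
    try:
        return form[key]
    except KeyError:
        return default
```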

(I know it says we're supposed to use FieldStorage, but i rarely if
ever need to use file-upload forms, so SvFormContentDict() is still
by far the most useful to me of the 17 different form implementations
<wink> in the cgi module, i don't care what anyone says...)

By the way, when/why did all of the documentation at the top of
cgi.py get blown away?


-- ?!ng

"All models are wrong; some models are useful."
    -- George Box



From effbot@telia.com  Tue Aug  8 21:46:15 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 8 Aug 2000 22:46:15 +0200
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
References: <Pine.LNX.4.10.10008081256050.497-100000@skuld.lfw.org>
Message-ID: <015901c00179$b718cba0$f2a6b5d4@hagrid>

ping wrote:
> By the way, when/why did all of the documentation at the top of
> cgi.py get blown away?

    Date: Thu, 3 Aug 2000 13:57:47 -0700
    From: Jeremy Hylton <jhylton@users.sourceforge.net>
    To: python-checkins@python.org
    Subject: [Python-checkins] CVS: python/dist/src/Lib cgi.py,1.48,1.49

    Update of /cvsroot/python/python/dist/src/Lib
    In directory slayer.i.sourceforge.net:/tmp/cvs-serv2916

    Modified Files:
     cgi.py
    Log Message:
    Remove very long doc string (it's all in the docs)
    Modify parse_qsl to interpret 'a=b=c' as key 'a' and value 'b=c'
    (which matches Perl's CGI.pm)

</F>



From tim_one@email.msn.com  Wed Aug  9 05:57:02 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 9 Aug 2000 00:57:02 -0400
Subject: [Python-Dev] Task Manager on SourceForge
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com>

Under the "what the heck" theory, I enabled the Task Manager on the Python
project -- beware the 6-hour delay!  Created two "subprojects" in it, P1.6
and P2, for tasks generally related to finishing the Python 1.6 and 2.0
releases, respectively.

Don't know anything more about it.  It appears you can set up a web of tasks
under a "subproject", with fields for who's assigned, percent complete,
status, hours of work, priority, start & end dates, and a list of tasks each
task depends on.

If anyone can think of a use for it, be my guest <wink>.

I *suspect* everyone already has admin privileges for the Task Manager, but
can't be sure.  Today I couldn't fool either Netscape or IE5 into displaying
the user-permissions Admin page correctly.  Everyone down to "lemburg" does
have admin privs for TaskMan, but from the middle of MAL's line on down
it's all empty space for me.




From greg@cosc.canterbury.ac.nz  Wed Aug  9 06:27:24 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 09 Aug 2000 17:27:24 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
Message-ID: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz>

I think I've actually found a syntax for lockstep
iteration that looks reasonable (or at least not
completely unreasonable) and is backward compatible:

   for (x in a, y in b):
      ...

Not sure what the implications are for the parser
yet.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From MarkH@ActiveState.com  Wed Aug  9 07:39:30 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Wed, 9 Aug 2000 16:39:30 +1000
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEOGDDAA.MarkH@ActiveState.com>

>    for (x in a, y in b):
>       ...

Hmmm.  Until someone smarter than me shoots it down for some obvious reason
<wink>, it certainly appeals to me.

My immediate reaction _is_ lockstep iteration, and that is the first time I
can say that.  Part of the reason is that it looks like a tuple unpack,
which I think of as a "lockstep/parallel/atomic" operation...

Mark.



From jack@oratrix.nl  Wed Aug  9 09:31:27 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 09 Aug 2000 10:31:27 +0200
Subject: [Python-Dev] A question for the Python Secret Police
Message-ID: <20000809083127.7FFF6303181@snelboot.oratrix.nl>

A question for the Python Secret Police (or P. Inquisition, or whoever 
else:-).

Is the following morally allowed:

package1/mod.py:
class Foo:
    def method1(self):
        ...

package2/mod.py:
from package1.mod import *

class Foo(Foo):
    def method2(self):
        ...

(The background is that the modules are machine-generated and contain
AppleEvent classes. There's a large set of standard classes, such as
Standard_Suite, and applications can signal that they implement
Standard_Suite with a couple of extensions to it. So, in the
Application-X Standard_Suite I'd like to import everything from the
standard Standard_Suite and override/add those methods that are
specific to Application X)
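A self-contained sketch of the pattern (both classes in one scope here
rather than two packages, and the method bodies are mine, but the name
rebinding works the same way):

```python
class Foo:
    def method1(self):
        return "standard"

class Foo(Foo):  # the base-class expression sees the OLD binding of Foo
    def method2(self):
        return "extended"

f = Foo()
# f has both the inherited method1 and the new method2.
```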
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++





From tim_one@email.msn.com  Wed Aug  9 08:15:02 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 9 Aug 2000 03:15:02 -0400
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <200008081718.MAA01681@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>

[Tim]
>> for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim

[Guido]
> Hm.  Predictably, I'm worried about adding 'as' as a reserved word.

But it doesn't need to be, right?  That is, change the stuff following
'import' in

    'from' dotted_name 'import' ('*' | NAME (',' NAME)*)

to

    ('*' | NAME [NAME NAME] (',' NAME [NAME NAME])*)

and verify that whenever the 3-NAME form triggers, the middle NAME is
exactly "as".  The grammar in the Reference Manual can still
advertise it as a syntactic constraint; if a particular implementation
happens to need to treat it as a semantic constraint due to parser
limitations (and CPython specifically would), the user will never know it.

It doesn't interfere with using "as" as a regular NAME elsewhere.  Anyone
pointing out that the line

    from as import as as as

would then be legal will be shot.  Fortran had no reserved words of any
kind, and nobody abused that in practice.  Users may be idiots, but they're
not infants <wink>.
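The semantic check described above could be sketched in Python like this
(a hypothetical helper, not actual CPython parser code):

```python
def check_import_clause(tokens):
    # A clause after 'import' is either NAME, or NAME NAME NAME where
    # the middle NAME must literally be "as".  Returns the pair
    # (imported name, local binding).
    if len(tokens) == 1:
        return tokens[0], tokens[0]
    if len(tokens) == 3 and tokens[1] == "as":
        return tokens[0], tokens[2]
    raise SyntaxError("invalid import clause")
```

So "as" never needs to become a reserved word; it only has to appear in
the right position.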




From thomas@xs4all.net  Wed Aug  9 09:42:32 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 10:42:32 +0200
Subject: [Python-Dev] Task Manager on SourceForge
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 09, 2000 at 12:57:02AM -0400
References: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com>
Message-ID: <20000809104232.I266@xs4all.nl>

On Wed, Aug 09, 2000 at 12:57:02AM -0400, Tim Peters wrote:

> Don't know anything more about it.  It appears you can set up a web of tasks
> under a "subproject", with fields for who's assigned, percent complete,
> status, hours of work, priority, start & end dates, and a list of tasks each
> task depends on.

Well, it seems mildly useful... It's missing some things that would make it
fairly useful (per-subtask and per-project todo-lists, where you can say 'I
need help with this' and such things, and the ability to attach patches to
subtasks (which would be useful for 'my' task of adding augmented
assignment ;) and probably more) but I can imagine why SF didn't include all
that (yet) -- it's a lot of work to do right, and I'm not sure if SF has
many projects of the size that need a project manager like this ;)

But unless Guido and the rest of the PyLab team want to keep an overview of
what us overseas or at least other-state lazy bums are doing by trusting us
to keep a webpage up to date rather than informing the mailing list, I don't
think we'll see much use for it. If you *do* want such an overview, it might
be useful. In which case I'll send out some RFE's on my wishes for the
project manager ;)
 
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From ping@lfw.org  Wed Aug  9 10:37:07 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Wed, 9 Aug 2000 02:37:07 -0700 (PDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>

On Wed, 9 Aug 2000, Greg Ewing wrote:
> 
>    for (x in a, y in b):
>       ...

It looks nice, but i'm pretty sure it won't fly.  (x in a, y in b)
is a perfectly valid expression.  For compatibility the parser must
also accept

    for (x, y) in list_of_pairs:

and since the thing after the open-paren can be arbitrarily long,
how is the parser to know whether the lockstep form has been invoked?

Besides, i think Guido has Pronounced quite firmly on zip().

I would much rather petition now to get indices() and irange() into
the built-ins... please pretty please?
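For reference, the requested built-ins behave roughly like these
pure-Python sketches (names follow mx.Tools; the exact return types are an
assumption -- lists here for clarity):

```python
def indices(seq):
    # The valid indexes of seq, i.e. range(len(seq)) under a nicer name.
    return list(range(len(seq)))

def irange(seq):
    # (index, element) pairs for lockstep iteration over seq.
    return list(zip(range(len(seq)), seq))
```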


-- ?!ng

"All models are wrong; some models are useful."
    -- George Box



From thomas@xs4all.net  Wed Aug  9 12:06:45 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 13:06:45 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEOGDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Wed, Aug 09, 2000 at 04:39:30PM +1000
References: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz> <ECEPKNMJLHAPFFJHDOJBKEOGDDAA.MarkH@ActiveState.com>
Message-ID: <20000809130645.J266@xs4all.nl>

On Wed, Aug 09, 2000 at 04:39:30PM +1000, Mark Hammond wrote:

> >    for (x in a, y in b):
> >       ...

> Hmmm.  Until someone smarter than me shoots it down for some obvious reason
> <wink>, it certainly appeals to me.

The only objection I can bring up is that parentheses are almost always
optional, in Python, and this kind of violates it. Suddenly the presence of
parentheses changes the entire expression, not just the grouping of it. Oh,
and there is the question of whether 'for (x in a):' is allowed, too (it
isn't, currently.)

I'm not entirely sure that the parser will swallow this, however, because
'for (x in a, y in b) in z:' *is* valid syntax... so it might be ambiguous.
Then again, it can probably be worked around. It might not be too pretty,
but it can be worked around ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Wed Aug  9 12:29:13 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 13:29:13 +0200
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 09, 2000 at 03:15:02AM -0400
References: <200008081718.MAA01681@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>
Message-ID: <20000809132913.K266@xs4all.nl>

[Tim]
> for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim

[Guido]
> Hm.  Predictably, I'm worried about adding 'as' as a reserved word.

[Tim]
> But it doesn't need to be, right?  That is, change the stuff following
> 'import' in
>     'from' dotted_name 'import' ('*' | NAME (',' NAME)*)
> to
>     ('*' | NAME [NAME NAME] (',' NAME [NAME NAME])*)

I'm very, very much +1 on this.  The fact that (for example) 'from' is a
reserved word bothers me no end.  If no one is going to comment anymore on
range literals or augmented assignment, I might just tackle this ;)

> Anyone pointing out that the line
>     from as import as as as
> would then be legal will be shot. 

"Cool, that would make 'from from import as as as' a legal sta"<BANG>

Damned American gun laws ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@beopen.com  Wed Aug  9 13:30:43 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 07:30:43 -0500
Subject: [Python-Dev] Task Manager on SourceForge
In-Reply-To: Your message of "Wed, 09 Aug 2000 00:57:02 -0400."
 <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com>
Message-ID: <200008091230.HAA23379@cj20424-a.reston1.va.home.com>

> Under the "what the heck" theory, I enabled the Task Manager on the Python
> project -- beware the 6-hour delay!  Created two "subprojects" in it, P1.6
> and P2, for tasks generally related to finishing the Python 1.6 and 2.0
> releases, respectively.

Beauuuutiful!

> Don't know anything more about it.  It appears you can set up a web of tasks
> under a "subproject", with fields for who's assigned, percent complete,
> status, hours of work, priority, start & end dates, and a list of tasks each
> task depends on.
> 
> If anyone can think of a use for it, be my guest <wink>.

I played with it a bit.  I added three tasks under 1.6 that need to be
done.

> I *suspect* everyone already has admin privileges for the Task Manager, but
> can't be sure.  Today I couldn't fool either Netscape or IE5 into displaying
> the user-permissions Admin page correctly.  Everyone down to "lemburg" does
> have admin privs for TaskMan, but from the middle of MAL's line on on down
> it's all empty space for me.

That must be a Windows limitation on how many popup menus you can
have.  Stupid Windows :-) !  This looks fine on Linux in Netscape (is
there any other browser :-) ?) and indeed the permissions are set
correctly.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido@beopen.com  Wed Aug  9 13:42:49 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 07:42:49 -0500
Subject: [Python-Dev] A question for the Python Secret Police
In-Reply-To: Your message of "Wed, 09 Aug 2000 10:31:27 +0200."
 <20000809083127.7FFF6303181@snelboot.oratrix.nl>
References: <20000809083127.7FFF6303181@snelboot.oratrix.nl>
Message-ID: <200008091242.HAA23451@cj20424-a.reston1.va.home.com>

> A question for the Python Secret Police (or P. Inquisition, or whoever 
> else:-).

That would be the Namespace Police in this case.

> Is the following morally allowed:
> 
> package1/mod.py:
> class Foo:
>     def method1(self):
>         ...
> 
> package2/mod.py:
> from package1.mod import *
> 
> class Foo(Foo):
>     def method2(self):
>         ...

I see no problem with this.  It's totally well-defined and I don't
expect I'll ever have a reason to disallow it.  Future picky compilers
or IDEs might warn about a redefined name, but I suppose you can live
with that given that it's machine-generated.

> (The background is that the modules are machine-generated and contain
> AppleEvent classes. There's a large set of standard classes, such as
> Standard_Suite, and applications can signal that they implement
> Standard_Suite with a couple of extensions to it. So, in the
> Application-X Standard_Suite I'd like to import everything from the
> standard Standard_Suite and override/add those methods that are
> specific to Application X)

That actually looks like a *good* reason to do exactly what you
propose.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido@beopen.com  Wed Aug  9 13:49:43 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 07:49:43 -0500
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: Your message of "Wed, 09 Aug 2000 02:37:07 MST."
 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
Message-ID: <200008091249.HAA23481@cj20424-a.reston1.va.home.com>

> On Wed, 9 Aug 2000, Greg Ewing wrote:
> > 
> >    for (x in a, y in b):
> >       ...

No, for exactly the reasons Ping explained.  Let's give this a rest okay?

> I would much rather petition now to get indices() and irange() into
> the built-ins... please pretty please?

I forget what indices() was -- is it the moral equivalent of keys()?
That's range(len(s)), I don't see a need for a new function.  In fact
I think indices() would reduce readability because you have to guess
what it means.  Everybody knows range() and len(); not everybody will
know indices() because it's not needed that often.

If irange(s) is zip(range(len(s)), s), I see how that's a bit
unwieldy.  In the past there were syntax proposals, e.g. ``for i
indexing s''.  Maybe you and Just can draft a PEP?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Wed Aug  9 13:58:00 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 09 Aug 2000 14:58:00 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com>
Message-ID: <39915558.A68D7792@lemburg.com>

Guido van Rossum wrote:
> 
> > On Wed, 9 Aug 2000, Greg Ewing wrote:
> > >
> > >    for (x in a, y in b):
> > >       ...
> 
> No, for exactly the reasons Ping explained.  Let's give this a rest okay?
> 
> > I would much rather petition now to get indices() and irange() into
> > the built-ins... please pretty please?
> 
> I forget what indices() was -- is it the moreal equivalent of keys()?

indices() and irange() are both builtins which originated from
mx.Tools. See:

	http://starship.python.net/crew/lemburg/mxTools.html

* indices(object) is the same as tuple(range(len(object))) - only faster
and using a more intuitive and less convoluted name.

* irange(object[,indices]) (in its mx.Tools version) creates a
tuple of tuples (index, object[index]). indices defaults to
indices(object) if not given; otherwise only the indexes found
in indices are used to create the tuples -- and this even works
with arbitrary keys, since the PyObject_GetItem() API is used.

Typical use is:

for i,value in irange(sequence):
    sequence[i] = value + 1


In practice I found that I could always use irange() where indices()
would have been used, since I typically need the indexed
sequence object anyway.
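A pure-Python sketch of those mx.Tools semantics (my approximation of the
described behaviour, not the C implementation):

```python
def irange(obj, ind=None):
    # A tuple of (index, obj[index]) pairs.  With an explicit index list
    # this works for arbitrary keys, mirroring the PyObject_GetItem()
    # behaviour mentioned above.
    if ind is None:
        ind = range(len(obj))
    return tuple((i, obj[i]) for i in ind)
```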

> That's range(len(s)), I don't see a need for a new function.  In fact
> I think indices() would reduce readability because you have to guess
> what it means.  Everybody knows range() and len(); not everybody will
> know indices() because it's not needed that often.
> 
> If irange(s) is zip(range(len(s)), s), I see how that's a bit
> unwieldy.  In the past there were syntax proposals, e.g. ``for i
> indexing s''.  Maybe you and Just can draft a PEP?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From nowonder@nowonder.de  Wed Aug  9 16:19:02 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Wed, 09 Aug 2000 15:19:02 +0000
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
References: <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>
Message-ID: <39917666.87C823E9@nowonder.de>

Tim Peters wrote:
> 
> But it doesn't need to be, right?  That is, change the stuff following
> 'import' in
> 
>     'from' dotted_name 'import' ('*' | NAME (',' NAME)*)
> 
> to
> 
>     ('*' | NAME [NAME NAME] (',' NAME [NAME NAME])*)

What about doing the same for the regular import?

import_stmt: 'import' dotted_name [NAME NAME] (',' dotted_name [NAME
NAME])* | 'from' dotted_name 'import' ('*' | NAME (',' NAME)*)

"import as as as"-isn't-that-impressive-though-ly y'rs
Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From just@letterror.com  Wed Aug  9 16:01:18 2000
From: just@letterror.com (Just van Rossum)
Date: Wed, 9 Aug 2000 16:01:18 +0100
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008091249.HAA23481@cj20424-a.reston1.va.home.com>
References: Your message of "Wed, 09 Aug 2000 02:37:07 MST."
 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
Message-ID: <l03102802b5b71c40f9fc@[193.78.237.121]>

At 7:49 AM -0500 09-08-2000, Guido van Rossum wrote:
>In the past there were syntax proposals, e.g. ``for i
>indexing s''.  Maybe you and Just can draft a PEP?

PEP:            1716099-3
Title:          Index-enhanced sequence iteration
Version:        $Revision: 1.1 $
Owner:          Someone-with-commit-rights
Python-Version: 2.0
Status:         Incomplete

Introduction

    This PEP proposes a way to more conveniently iterate over a
    sequence and its indices.

Features

    It adds an optional clause to the 'for' statement:

        for <index> indexing <element> in <seq>:
            ...

    This is equivalent to (see the zip() PEP):

        for <index>, <element> in zip(range(len(seq)), seq):
            ...

    Except no new list is created.

Mechanism

    The index of the current element in a for-loop already
    exists in the implementation, however, it is not reachable
    from Python. The new 'indexing' keyword merely exposes the
    internal counter.

Implementation

    Implementation should be trivial for anyone named Guido,
    Tim or Thomas.  Just better not try.

Advantages:

    Less code needed for this common operation, which is
    currently most often written as:

        for index in range(len(seq)):
            element = seq[index]
            ...

Disadvantages:

    It will break that one person's code that uses "indexing"
    as a variable name.

Copyright

    This document has been placed in the public domain.

Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:




From thomas@xs4all.net  Wed Aug  9 17:15:39 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 18:15:39 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>; from just@letterror.com on Wed, Aug 09, 2000 at 04:01:18PM +0100
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <20000809181539.M266@xs4all.nl>

On Wed, Aug 09, 2000 at 04:01:18PM +0100, Just van Rossum wrote:

> PEP:            1716099-3
> Title:          Index-enhanced sequence iteration
> Version:        $Revision: 1.1 $
> Owner:          Someone-with-commit-rights

I'd be willing to adopt this PEP, if the other two PEPs on my name don't
need extensive rewrites anymore.

> Features
> 
>     It adds an optional clause to the 'for' statement:
> 
>         for <index> indexing <element> in <seq>:

Ever since I saw the implementation of FOR_LOOP I've wanted this, but I
never could think up a backwards compatible and readable syntax for it ;P

> Disadvantages:

>     It will break that one person's code that uses "indexing"
>     as a variable name.

This needn't be true, if it's done in the same way as Tim proposed the 'form
from import as as as' syntax change ;)

for_stmt: 'for' exprlist [NAME exprlist] 'in' testlist ':' suite ['else' ':' suite]

If the 5th subnode of the expression is 'in', the 3rd should be 'indexing'
and the 4th would be the variable to assign the index number to. If it's
':', the loop is index-less.

(this is just a quick and dirty example; 'exprlist' is probably not the
right subnode for the indexing variable, because it can't be a tuple or
anything like that.)
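For readers following the thread, the behaviour the proposed clause would give can be emulated in plain Python with a helper (the `indexing` function name here is hypothetical; the whole point of the PEP is to make it syntax instead):

```python
# Sketch: what "for <index> indexing <element> in <seq>" would bind on
# each iteration, emulated with a hypothetical helper function.
def indexing(seq):
    pairs = []
    for i in range(len(seq)):
        pairs.append((i, seq[i]))
    return pairs

result = []
for i, elem in indexing(["a", "b", "c"]):
    result.append((i, elem))
print(result)   # [(0, 'a'), (1, 'b'), (2, 'c')]
```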

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From skip@mojam.com (Skip Montanaro)  Wed Aug  9 17:40:27 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Wed, 9 Aug 2000 11:40:27 -0500 (CDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <20000809181539.M266@xs4all.nl>
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
 <200008091249.HAA23481@cj20424-a.reston1.va.home.com>
 <l03102802b5b71c40f9fc@[193.78.237.121]>
 <20000809181539.M266@xs4all.nl>
Message-ID: <14737.35195.31385.867664@beluga.mojam.com>

    >> Disadvantages:

    >> It will break that one person's code that uses "indexing" as a
    >> variable name.

    Thomas> This needn't be true, if it's done in the same way as Tim
    Thomas> proposed the 'form from import as as as' syntax change ;)

Could this be extended to many/most/all current instances of keywords in
Python?  As Tim pointed out, Fortran has no keywords.  It annoys me that I
(for example) can't define a method named "print".

Skip



From nowonder@nowonder.de  Wed Aug  9 19:49:53 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Wed, 09 Aug 2000 18:49:53 +0000
Subject: [Python-Dev] cannot commit 1.6 changes
Message-ID: <3991A7D0.4D2479C7@nowonder.de>

I have taken care of removing all occurrences of math.rint()
from the 1.6 sources. The commit worked fine for the Doc,
Include and Module directory, but cvs won't let me commit
the changes to config.h.in, configure.in, configure:

cvs server: sticky tag `cnri-16-start' for file `config.h.in' is not a
branch
cvs server: sticky tag `cnri-16-start' for file `configure' is not a
branch
cvs server: sticky tag `cnri-16-start' for file `configure.in' is not a
branch
cvs [server aborted]: correct above errors first!

What am I missing?

confused-ly y'rs Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From esr@thyrsus.com  Wed Aug  9 19:03:21 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Wed, 9 Aug 2000 14:03:21 -0400
Subject: [Python-Dev] Un-stalling Berkeley DB support
Message-ID: <20000809140321.A836@thyrsus.com>

I'm still interested in getting support for the version 3 Berkeley DB
into the core.  This is one of my top three Python priorities currently, along
with drafting PEP 2 and overhauling the curses HOWTO.  (I'd sure like to see
shm get in, too, but that's blocked on Vladimir writing suitable
documentation.)

I'd like to get the necessary C extension in before 2.0 freeze, if
possible.  I've copied its author.  Again, the motivation here is to make
shelving transactional, with useful read-many/write-once guarantees.
Thousands of CGI programmers would thank us for this.

When we last discussed this subject, there was general support for the
functionality, but a couple of people went "bletch!" about SWIG-generated
code (there was unhappiness about pointers being treated as strings).

Somebody said something about having SWIG patches to address this.  Is this
the only real issue with SWIG-generated code?  If so, we can pursue two paths:
(1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
extension that meets our cleanliness criteria, and (2) press the SWIG guy 
to incorporate these patches in his next release.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"The best we can hope for concerning the people at large is that they be
properly armed."
        -- Alexander Hamilton, The Federalist Papers at 184-188


From akuchlin@mems-exchange.org  Wed Aug  9 19:09:55 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Wed, 9 Aug 2000 14:09:55 -0400
Subject: [Python-Dev] py-howto project now operational
Message-ID: <20000809140955.C4838@kronos.cnri.reston.va.us>

I've just gotten around to setting up the checkin list for the Python
HOWTO project on SourceForge (py-howto.sourceforge.net), so the
project is now fully operational.  People who want to update the
HOWTOs -- such as ESR with the curses HOWTO -- can now go ahead and
make changes.

And this is the last you'll hear about the HOWTOs on python-dev;
please use the Doc-SIG mailing list (doc-sig@python.org) for further
discussion of the HOWTOs.

--amk



From thomas@xs4all.net  Wed Aug  9 19:28:54 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 20:28:54 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <14737.35195.31385.867664@beluga.mojam.com>; from skip@mojam.com on Wed, Aug 09, 2000 at 11:40:27AM -0500
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]> <20000809181539.M266@xs4all.nl> <14737.35195.31385.867664@beluga.mojam.com>
Message-ID: <20000809202854.N266@xs4all.nl>

On Wed, Aug 09, 2000 at 11:40:27AM -0500, Skip Montanaro wrote:

>     >> Disadvantages:

>     >> It will break that one person's code that uses "indexing" as a
>     >> variable name.

>     Thomas> This needn't be true, if it's done in the same way as Tim
>     Thomas> proposed the 'form from import as as as' syntax change ;)

> Could this be extended to many/most/all current instances of keywords in
> Python?  As Tim pointed out, Fortran has no keywords.  It annoys me that I
> (for example) can't define a method named "print".

No. I just (in the trainride from work to home ;) wrote a patch that adds
'from x import y as z' and 'import foo as fee', and came to the conclusion
that we can't make 'from' a non-reserved word, for instance. Because if we
change

'from' dotted_name 'import' NAME*

into

NAME dotted_name 'import' NAME*

the parser won't know how to parse other expressions that start with NAME,
like 'NAME = expr' or 'NAME is expr'. I know this because I tried it and it
didn't work :-) So we can probably make most names that are *part* of a
statement non-reserved words, but not those that uniquely identify a
statement. That doesn't leave many words, except perhaps for the 'in' in
'for' -- but 'in' is already a reserved word for other purposes ;)

As for the patch that adds 'as' (as a non-reserved word) to both imports,
I'll upload it to SF after I rewrite it a bit ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From bckfnn@worldonline.dk  Wed Aug  9 20:43:58 2000
From: bckfnn@worldonline.dk (Finn Bock)
Date: Wed, 09 Aug 2000 19:43:58 GMT
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <20000809202854.N266@xs4all.nl>
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]> <20000809181539.M266@xs4all.nl> <14737.35195.31385.867664@beluga.mojam.com> <20000809202854.N266@xs4all.nl>
Message-ID: <3991acc4.10990753@smtp.worldonline.dk>

[Skip Montanaro]
> Could this be extended to many/most/all current instances of keywords in
> Python?  As Tim pointed out, Fortran has no keywords.  It annoys me that I
> (for example) can't define a method named "print".

[Thomas Wouters]
>No. I just (in the trainride from work to home ;) wrote a patch that adds
>'from x import y as z' and 'import foo as fee', and came to the conclusion
>that we can't make 'from' a non-reserved word, for instance. Because if we
>change
>
>'from' dotted_name 'import' NAME*
>
>into
>
>NAME dotted_name 'import' NAME*
>
>the parser won't know how to parse other expressions that start with NAME,
>like 'NAME = expr' or 'NAME is expr'. I know this because I tried it and it
>didn't work :-) So we can probably make most names that are *part* of a
>statement non-reserved words, but not those that uniquely identify a
>statement. That doesn't leave many words, except perhaps for the 'in' in
>'for' -- but 'in' is already a reserved word for other purposes ;)

Just a datapoint.

JPython goes a bit further in its attempt to unreserve reserved words in
certain cases:

- after "def"
- after a dot "."
- after "import"
- after "from" (in an import stmt)
- and as argument names

This allow JPython to do:

   from from import x
   def def(): pass
   x.exec(from=1, to=2)


This feature was added to ease JPython's integration with existing Java
libraries. IIRC it was remarked that CPython could also make use of such
a feature when integrating with e.g. Tk or COM.


regards,
finn


From nascheme@enme.ucalgary.ca  Wed Aug  9 21:11:04 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Wed, 9 Aug 2000 14:11:04 -0600
Subject: [Python-Dev] test_fork1 on SMP? (was Re: [Python Dev] test_fork1 failing --with-threads (for some people)...)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEBDGNAA.tim_one@email.msn.com>; from Tim Peters on Mon, Jul 31, 2000 at 04:42:50AM -0400
References: <14724.22554.818853.722906@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCOEBDGNAA.tim_one@email.msn.com>
Message-ID: <20000809141104.A10805@keymaster.enme.ucalgary.ca>

On Mon, Jul 31, 2000 at 04:42:50AM -0400, Tim Peters wrote:
> It's a baffler!  AFAIK, nobody yet has thought of a way that a fork can
> screw up the state of the locks in the *parent* process (it must be easy to
> see how they can get screwed up in a child, because two of us already did
> <wink>).

If I add Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS around fork()
in posixmodule then the child is the process which always seems to hang.
The child is hanging at:

#0  0x4006d58b in __sigsuspend (set=0xbf7ffac4)
    at ../sysdeps/unix/sysv/linux/sigsuspend.c:48
#1  0x4001f1a0 in pthread_cond_wait (cond=0x8264e1c, mutex=0x8264e28)
    at restart.h:49
#2  0x806f3c3 in PyThread_acquire_lock (lock=0x8264e18, waitflag=1)
    at thread_pthread.h:311
#3  0x80564a8 in PyEval_RestoreThread (tstate=0x8265a78) at ceval.c:178
#4  0x80bf274 in posix_fork (self=0x0, args=0x8226ccc) at ./posixmodule.c:1659
#5  0x8059460 in call_builtin (func=0x82380e0, arg=0x8226ccc, kw=0x0)
    at ceval.c:2376
#6  0x8059378 in PyEval_CallObjectWithKeywords (func=0x82380e0, arg=0x8226ccc, 
    kw=0x0) at ceval.c:2344
#7  0x80584f2 in eval_code2 (co=0x8265e98, globals=0x822755c, locals=0x0, 
    args=0x8226cd8, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, 
    owner=0x0) at ceval.c:1682
#8  0x805974b in call_function (func=0x8264ddc, arg=0x8226ccc, kw=0x0)
    at ceval.c:2498
#9  0x805936b in PyEval_CallObjectWithKeywords (func=0x8264ddc, arg=0x8226ccc, 
    kw=0x0) at ceval.c:2342
#10 0x80af26a in t_bootstrap (boot_raw=0x8264e00) at ./threadmodule.c:199
#11 0x4001feca in pthread_start_thread (arg=0xbf7ffe60) at manager.c:213

Since there is only one thread in the child this should not be
happening.  Can someone explain this?  I have tested this on both an SMP
Linux machine and a UP Linux machine.

   Neil


From thomas@xs4all.net  Wed Aug  9 21:27:50 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 22:27:50 +0200
Subject: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd)
Message-ID: <20000809222749.O266@xs4all.nl>

For those of you not on the patches list, here's the summary of the patch I
just uploaded to SF. In short, it adds "import x as y" and "from module
import x as y", in the way Tim proposed this morning. (Probably late last
night for most of you.)

----- Forwarded message from noreply@sourceforge.net -----

This patch adds the oft-proposed 'import as' syntax, to both 'import module'
and 'from module import ...', but without making 'as' a reserved word (by
using the technique Tim Peters proposed on python-dev.)

'import spam as egg' is a very simple patch to compile.c, which doesn't need
changes to the VM, but 'from spam import dog as meat' needs a new bytecode,
which this patch calls 'FROM_IMPORT_AS'. The bytecode loads an object from a
module onto the stack, so a STORE_NAME can store it later. This can't be
done by the normal FROM_IMPORT opcode, because it needs to take the special
case of '*' into account. Also, because it uses 'STORE_NAME', it's now
possible to mix 'import' and 'global', like so:

global X
from foo import X as X

The patch still generates the old code for

from foo import X

(without 'as') mostly to save on bytecode size, and for the 'compatibility'
with mixing 'global' and 'from .. import'... I'm not sure what's the best
thing to do.

The patch doesn't include a test suite or documentation, yet.

-------------------------------------------------------
For more info, visit:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101135&group_id=5470

----- End forwarded message -----
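For reference, the two forms the forwarded patch adds behave like this once applied (stdlib modules used purely as illustration; both forms later shipped with Python):

```python
# The two 'import as' forms the patch implements; under Tim's scheme
# 'as' remains a non-reserved word.
import os as operating_system          # plain 'import x as y'
from os import path as os_path         # 'from x import y as z'

# Both names are bound in the current namespace, nothing else is.
print(operating_system.sep == os_path.sep)   # True
```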

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From greg@mad-scientist.com  Wed Aug  9 21:27:33 2000
From: greg@mad-scientist.com (Gregory P . Smith)
Date: Wed, 9 Aug 2000 13:27:33 -0700
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809140321.A836@thyrsus.com>; from esr@thyrsus.com on Wed, Aug 09, 2000 at 02:03:21PM -0400
References: <20000809140321.A836@thyrsus.com>
Message-ID: <20000809132733.C2019@mad-scientist.com>

On Wed, Aug 09, 2000 at 02:03:21PM -0400, Eric S. Raymond wrote:
> 
> When we last discussed this subject, there was general support for the
> functionality, but a couple of people went "bletch!" about SWIG-generated
> code (there was unhappiness about pointers being treated as strings).
> 
> Somebody said something about having SWIG patches to address this.  Is this
> the only real issue with SWIG-generated code?  If so, we can pursue two paths:
> (1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
> extension that meets our cleanliness criteria, and (2) press the SWIG guy 
> to incorporate these patches in his next release.

I'm not surprised to see the "bletch!" for SWIG's string/pointer things,
they are technically gross.  Anyone know what SWIG v1.3a3 does (v1.3
is a total rewrite from v1.1)?  py-bsddb3 as distributed was built
using SWIG v1.1-883.  In the meantime, if someone knows of a version of
SWIG that does this better, try using it to build bsddb3 (just pass a
SWIG=/usr/spam/eggs/bin/swig to the Makefile).  If you run into problems,
send them and a copy of that swig my way.

I'll take a quick look at SWIG v1.3alpha3 here and see what that does.

Greg


From skip@mojam.com (Skip Montanaro)  Wed Aug  9 21:41:57 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Wed, 9 Aug 2000 15:41:57 -0500 (CDT)
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809132733.C2019@mad-scientist.com>
References: <20000809140321.A836@thyrsus.com>
 <20000809132733.C2019@mad-scientist.com>
Message-ID: <14737.49685.902542.576229@beluga.mojam.com>

>>>>> "Greg" == Gregory P Smith <greg@mad-scientist.com> writes:

    Greg> On Wed, Aug 09, 2000 at 02:03:21PM -0400, Eric S. Raymond wrote:
    >> 
    >> When we last discussed this subject, there was general support for
    >> the functionality, but a couple of people went "bletch!" about
    >> SWIG-generated code (there was unhappiness about pointers being
    >> treated as strings).
    ...
    Greg> I'm not surprised to see the "bletch!" for SWIG's string/pointer
    Greg> things, they are technically gross.

We're talking about a wrapper around a single smallish library (probably <
20 exposed functions), right?  Seems to me that SWIG is the wrong tool to
use here.  It's for wrapping massive libraries automatically.  Why not just
recode the current SWIG-generated module manually?

What am I missing?

-- 
Skip Montanaro, skip@mojam.com, http://www.mojam.com/, http://www.musi-cal.com/
"To get what you want you must commit yourself for sometime" - fortune cookie


From nascheme@enme.ucalgary.ca  Wed Aug  9 21:49:25 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Wed, 9 Aug 2000 14:49:25 -0600
Subject: [Python-Dev] Re: [Patches] [Patch #101135] 'import x as y' and 'from x import y as z'
In-Reply-To: <200008092014.NAA08040@delerium.i.sourceforge.net>; from noreply@sourceforge.net on Wed, Aug 09, 2000 at 01:14:52PM -0700
References: <200008092014.NAA08040@delerium.i.sourceforge.net>
Message-ID: <20000809144925.A11242@keymaster.enme.ucalgary.ca>

On Wed, Aug 09, 2000 at 01:14:52PM -0700, noreply@sourceforge.net wrote:
> Patch #101135 has been updated. 
> 
> Project: 
> Category: core (C code)
> Status: Open
> Summary: 'import x as y' and 'from x import y as z'

+1.  This is much more useful and clear than setdefault (which I was -1
on, not that it matters).

  Neil


From esr@thyrsus.com  Wed Aug  9 22:03:51 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Wed, 9 Aug 2000 17:03:51 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101135] 'import x as y' and 'from x import y as z'
In-Reply-To: <20000809144925.A11242@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Wed, Aug 09, 2000 at 02:49:25PM -0600
References: <200008092014.NAA08040@delerium.i.sourceforge.net> <20000809144925.A11242@keymaster.enme.ucalgary.ca>
Message-ID: <20000809170351.A1550@thyrsus.com>

Neil Schemenauer <nascheme@enme.ucalgary.ca>:
> On Wed, Aug 09, 2000 at 01:14:52PM -0700, noreply@sourceforge.net wrote:
> > Patch #101135 has been updated. 
> > 
> > Project: 
> > Category: core (C code)
> > Status: Open
> > Summary: 'import x as y' and 'from x import y as z'
> 
> +1.  This is much more useful and clear than setdefault (which I was -1
> on, not that it matters).

I'm +0 on this.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The most foolish mistake we could possibly make would be to permit 
the conquered Eastern peoples to have arms.  History teaches that all 
conquerors who have allowed their subject races to carry arms have 
prepared their own downfall by doing so.
        -- Hitler, April 11 1942, revealing the real agenda of "gun control"


From greg@mad-scientist.com  Wed Aug  9 22:16:39 2000
From: greg@mad-scientist.com (Gregory P . Smith)
Date: Wed, 9 Aug 2000 14:16:39 -0700
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809140321.A836@thyrsus.com>; from esr@thyrsus.com on Wed, Aug 09, 2000 at 02:03:21PM -0400
References: <20000809140321.A836@thyrsus.com>
Message-ID: <20000809141639.D2019@mad-scientist.com>

On Wed, Aug 09, 2000 at 02:03:21PM -0400, Eric S. Raymond wrote:
> 
> When we last discussed this subject, there was general support for the
> functionality, but a couple of people went "bletch!" about SWIG-generated
> code (there was unhappiness about pointers being treated as strings).
> 
> Somebody said something about having SWIG patches to address this.  Is this
> the only real issue with SWIG-generated code?  If so, we can pursue two paths:
> (1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
> extension that meets our cleanliness criteria, and (2) press the SWIG guy 
> to incorporate these patches in his next release.

Out of curiosity, I just made a version of py-bsddb3 that uses SWIG
v1.3alpha3 instead of SWIG v1.1-883.  It looks like 1.3a3 is still
using strings for pointerish things.  One thing to note that may calm
some people's sense of "eww gross, pointer strings" is that programmers
should never see them.  They are "hidden" behind the Python shadow class.
The pointer strings are only contained within the shadow objects "this"
member.

example:

  >>> from bsddb3.db import *
  >>> e = DbEnv()
  >>> e
  <C DbEnv instance at _807eea8_MyDB_ENV_p>
  >>> e.this
  '_807eea8_MyDB_ENV_p'

Anyways, the update if anyone is curious about a version using the more
recent swig is on the py-bsddb3 web site:

http://electricrain.com/greg/python/bsddb3/


Greg



From guido@beopen.com  Wed Aug  9 23:29:58 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 17:29:58 -0500
Subject: [Python-Dev] cannot commit 1.6 changes
In-Reply-To: Your message of "Wed, 09 Aug 2000 18:49:53 GMT."
 <3991A7D0.4D2479C7@nowonder.de>
References: <3991A7D0.4D2479C7@nowonder.de>
Message-ID: <200008092229.RAA24802@cj20424-a.reston1.va.home.com>

> I have taken care of removing all occurrences of math.rint()
> from the 1.6 sources. The commit worked fine for the Doc,
> Include and Module directory, but cvs won't let me commit
> the changes to config.h.in, configure.in, configure:
> 
> cvs server: sticky tag `cnri-16-start' for file `config.h.in' is not a
> branch
> cvs server: sticky tag `cnri-16-start' for file `configure' is not a
> branch
> cvs server: sticky tag `cnri-16-start' for file `configure.in' is not a
> branch
> cvs [server aborted]: correct above errors first!
> 
> What am I missing?

The error message is right.  Somehow whoever set those tags on those
files did not make them branch tags -- I think it was Fred, but I don't
know why he did that.  The quickest way to fix this
is to issue the command

  cvs tag -F -b -r <revision> cnri-16-start <file>

for each file, where <revision> is the revision where the tag should
be and <file> is the file.  Note that -F means "force" (otherwise you
get a complaint because the tag is already defined) and -b means
"branch" which makes the tag a branch tag.  I *believe* that branch
tags are recognized because they have the form
<major>.<minor>.0.<branch> but I'm not sure this is documented.

I already did this for you for these three files!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Wed Aug  9 23:43:35 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 17:43:35 -0500
Subject: [Python-Dev] test_fork1 on SMP? (was Re: [Python Dev] test_fork1 failing --with-threads (for some people)...)
In-Reply-To: Your message of "Wed, 09 Aug 2000 14:11:04 CST."
 <20000809141104.A10805@keymaster.enme.ucalgary.ca>
References: <14724.22554.818853.722906@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCOEBDGNAA.tim_one@email.msn.com>
 <20000809141104.A10805@keymaster.enme.ucalgary.ca>
Message-ID: <200008092243.RAA24914@cj20424-a.reston1.va.home.com>

> If I add Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS around fork()
> in posixmodule then the child is the process which always seems to hang.

I first thought that the lock should be released around the fork too,
but later I realized that that was exactly wrong: if you release the
lock before you fork, another thread will likely grab the lock before
you fork; then in the child the lock is held by that other thread but
that thread doesn't exist, so when the main thread tries to get the
lock back it hangs in the Py_END_ALLOW_THREADS.
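Guido's scenario can be simulated with an ordinary lock standing in for the interpreter lock (a minimal sketch, not the actual posixmodule code): a thread that acquires the lock and then ceases to exist leaves the lock permanently held, which is exactly the child's view after the fork.

```python
import threading

gil = threading.Lock()   # stand-in for the interpreter lock

def grabber():
    gil.acquire()        # another thread grabs the lock just before the fork
    # ...and this thread does not exist in the forked child,
    # yet the lock stays held

t = threading.Thread(target=grabber)
t.start()
t.join()                 # the thread is gone, but the lock is still locked

# The sole remaining thread tries to re-acquire, as in Py_END_ALLOW_THREADS:
print(gil.acquire(blocking=False))   # False -- a blocking acquire would hang
```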

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From ping@lfw.org  Wed Aug  9 23:06:15 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Wed, 9 Aug 2000 15:06:15 -0700 (PDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008091249.HAA23481@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org>

On Wed, 9 Aug 2000, Guido van Rossum wrote:
> I forget what indices() was -- is it the moral equivalent of keys()?

Yes, it's range(len(s)).

> If irange(s) is zip(range(len(s)), s), I see how that's a bit
> unwieldy.  In the past there were syntax proposals, e.g. ``for i
> indexing s''.  Maybe you and Just can draft a PEP?

In the same vein as zip(), i think it's much easier to just toss in
a couple of built-ins than try to settle on a new syntax.  (I already
uploaded a patch to add indices() and irange() to the built-ins,
immediately after i posted my first message on this thread.)

Surely a PEP isn't required for a couple of built-in functions that
are simple and well understood?  You can just call thumbs-up or
thumbs-down and be done with it.
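The two proposed built-ins are tiny; in pure Python they amount to the following (the names `indices` and `irange` are the ones proposed in this thread -- neither was ultimately adopted):

```python
# Pure-Python equivalents of the two proposed built-ins.
def indices(s):
    return range(len(s))

def irange(s):
    return zip(range(len(s)), s)

s = ["a", "b", "c"]
print(list(indices(s)))   # [0, 1, 2]
print(list(irange(s)))    # [(0, 'a'), (1, 'b'), (2, 'c')]
```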


-- ?!ng

"All models are wrong; some models are useful."
    -- George Box



From klm@digicool.com  Wed Aug  9 23:05:57 2000
From: klm@digicool.com (Ken Manheimer)
Date: Wed, 9 Aug 2000 18:05:57 -0400 (EDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <Pine.LNX.4.21.0008091739020.1282-100000@korak.digicool.com>

On Wed, 9 Aug 2000, Just van Rossum wrote:

> PEP:            1716099-3
> Title:          Index-enhanced sequence iteration
> [...]
>     It adds an optional clause to the 'for' statement:
> 
>         for <index> indexing <element> in <seq>:
>             ...
> [...]
> Disadvantages:
> 
>     It will break that one person's code that uses "indexing"
>     as a variable name.

      It creates a new 'for' variant, increasing challenge for beginners 
      (and the befuddled, like me) of tracking the correct syntax.

I could see that disadvantage being justified by a more significant change
- lockstep iteration would qualify, for me (though it's circumventing this
drawback with zip()).  List comprehensions have that weight, and analogize
elegantly against the existing slice syntax.  I don't think the 'indexing'
benefits are of that order, not enough so to double the number of 'for'
forms, even if there are some performance gains over the (syntactically
equivalent) zip(), so, sorry, but i'm -1.

Ken
klm@digicool.com



From klm@digicool.com  Wed Aug  9 23:13:37 2000
From: klm@digicool.com (Ken Manheimer)
Date: Wed, 9 Aug 2000 18:13:37 -0400 (EDT)
Subject: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import
 y as z' (fwd)
In-Reply-To: <20000809222749.O266@xs4all.nl>
Message-ID: <Pine.LNX.4.21.0008091808390.1282-100000@korak.digicool.com>

On Wed, 9 Aug 2000, Thomas Wouters wrote:

> For those of you not on the patches list, here's the summary of the patch I
> just uploaded to SF. In short, it adds "import x as y" and "from module
> import x as y", in the way Tim proposed this morning. (Probably late last
> night for most of you.)

I guess the criterion i used in my thumbs down on 'indexing' is very
subjective, because i would say the added functionality of 'import x as y'
*does* satisfy my added-functionality test, and i'd be +1.  (I think the
determining thing is the ability to avoid name collisions without any
gross shuffle.)

I also really like the non-keyword basis for the, um, keyword.

Ken
klm@digicool.com



From guido@beopen.com  Thu Aug 10 00:14:19 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 18:14:19 -0500
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: Your message of "Wed, 09 Aug 2000 15:06:15 MST."
 <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org>
References: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org>
Message-ID: <200008092314.SAA25157@cj20424-a.reston1.va.home.com>

> On Wed, 9 Aug 2000, Guido van Rossum wrote:
> > I forget what indices() was -- is it the moral equivalent of keys()?

[Ping]
> Yes, it's range(len(s)).
> 
> > If irange(s) is zip(range(len(s)), s), I see how that's a bit
> > unwieldy.  In the past there were syntax proposals, e.g. ``for i
> > indexing s''.  Maybe you and Just can draft a PEP?
> 
> In the same vein as zip(), i think it's much easier to just toss in
> a couple of built-ins than try to settle on a new syntax.  (I already
> uploaded a patch to add indices() and irange() to the built-ins,
> immediately after i posted my first message on this thread.)
> 
> Surely a PEP isn't required for a couple of built-in functions that
> are simple and well understood?  You can just call thumbs-up or
> thumbs-down and be done with it.

-1 for indices

-0 for irange

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From thomas@xs4all.net  Wed Aug  9 23:15:10 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 00:15:10 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>; from just@letterror.com on Wed, Aug 09, 2000 at 04:01:18PM +0100
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <20000810001510.P266@xs4all.nl>

On Wed, Aug 09, 2000 at 04:01:18PM +0100, Just van Rossum wrote:

> Features

>     It adds an optional clause to the 'for' statement:
> 
>         for <index> indexing <element> in <seq>:
>             ...

> Implementation
> 
>     Implementation should be trivial for anyone named Guido,
>     Tim or Thomas.

Well, to justify that vote of confidence <0.4 wink> I wrote a quick hack
that adds this to Python for loops. It can be found on SF, patch #101138.
It's small, but it works. I'll iron out any bugs if there's enough positive
feelings towards this kind of syntax change.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Wed Aug  9 23:22:55 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 00:22:55 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008092314.SAA25157@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 09, 2000 at 06:14:19PM -0500
References: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org> <200008092314.SAA25157@cj20424-a.reston1.va.home.com>
Message-ID: <20000810002255.Q266@xs4all.nl>

On Wed, Aug 09, 2000 at 06:14:19PM -0500, Guido van Rossum wrote:

> -1 for indices
> 
> -0 for irange

The same for me, though I prefer 'for i indexing x in l' over 'irange()'. 

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From beazley@schlitz.cs.uchicago.edu  Wed Aug  9 23:34:16 2000
From: beazley@schlitz.cs.uchicago.edu (David M. Beazley)
Date: Wed,  9 Aug 2000 17:34:16 -0500 (CDT)
Subject: [Python-Dev] Python-Dev digest, Vol 1 #737 - 17 msgs
In-Reply-To: <20000809221115.AC4E61D182@dinsdale.python.org>
References: <20000809221115.AC4E61D182@dinsdale.python.org>
Message-ID: <14737.55249.87871.538988@schlitz.cs.uchicago.edu>

python-dev-request@python.org writes:
 > 
 > I'd like to get the necessary C extension in before 2.0 freeze, if
 > possible.  I've copied its author.  Again, the motivation here is to make
 > shelving transactional, with useful read-many/write-once guarantees.
 > Thousands of CGI programmers would thank us for this.
 > 
 > When we last discussed this subject, there was general support for the
 > functionality, but a couple of people went "bletch!" about SWIG-generated
 > code (there was unhappiness about pointers being treated as strings).
 > 
 > Somebody said something about having SWIG patches to address this.  Is this
 > the only real issue with SWIG-generated code?  If so, we can pursue
 > two paths:

Well, as the guilty party on the SWIG front, I can say that the
current development version of SWIG is using CObjects instead of
strings (well, actually I lie---you have to compile the wrappers with
-DSWIG_COBJECT_TYPES to turn that feature on).  Just as a general
aside on this topic, I did a number of experiments comparing the
performance of using CObjects vs.the gross string-pointer hack about 6
months ago.  Strangely enough, there was virtually no-difference in
runtime performance and if recall correctly, the string hack might
have even been just a little bit faster. Go figure :-).

Overall, the main difference between SWIG1.3 and SWIG1.1 is in runtime 
performance of the wrappers as well as various changes to reduce the
amount of wrapper code.   However, 1.3 is also very much an alpha release
right now---if you're going to use that, make sure you thoroughly test 
everything.

On the subject of the Berkeley DB module, I would definitely like to 
see a module for this.  If there is anything I can do to either modify
the behavior of SWIG or to build an extension module by hand, let me know.

Cheers,

Dave




From MarkH@ActiveState.com  Thu Aug 10 00:03:19 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Thu, 10 Aug 2000 09:03:19 +1000
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <14737.35195.31385.867664@beluga.mojam.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com>

[Skip laments...]
> Could this be extended to many/most/all current instances of
> keywords in Python?  As Tim pointed out, Fortran has no
> keywords.  It annoys me that I (for example) can't define
> a method named "print".

Sometimes it is worse than annoying!

In the COM and CORBA worlds, it can be a showstopper - if an external
object happens to expose a method or property named after a Python keyword,
then you simply can not use it!

This has led to COM support having to check _every_ attribute name it sees
externally, and mangle it if it is a keyword.

Stronger support exists in .NET.  The .NET framework explicitly dictates
that a compliant language _must_ have a way of overriding its own keywords
when calling external methods (it was either that, or try and dictate a
union of reserved words they can ban).

Eg, C# allows you to surround a keyword with brackets.  ie, I believe
something like:

object.[if]

Would work in C# to provide access to an attribute named "if"

Unfortunately, Python COM is a layer on top of CPython, and Python .NET
still uses the CPython parser - so in neither of these cases is there a
simple hack I can use to work around it at the parser level.

Needless to say, as this affects the 2 major technologies I work with
currently, I would like an official way to work around Python keywords!

Mark.



From guido@beopen.com  Thu Aug 10 01:12:59 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 19:12:59 -0500
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: Your message of "Thu, 10 Aug 2000 09:03:19 +1000."
 <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com>
Message-ID: <200008100012.TAA25968@cj20424-a.reston1.va.home.com>

> [Skip laments...]
> > Could this be extended to many/most/all current instances of
> > keywords in Python?  As Tim pointed out, Fortran has no
> > keywords.  It annoys me that I (for example) can't define
> > a method named "print".
> 
> Sometimes it is worse than annoying!
> 
> In the COM and CORBA worlds, it can be a showstopper - if an external
> object happens to expose a method or property named after a Python keyword,
> then you simply can not use it!
> 
> This has led to COM support having to check _every_ attribute name it sees
> externally, and mangle it if it is a keyword.
> 
> Stronger support exists in .NET.  The .NET framework explicitly dictates
> that a compliant language _must_ have a way of overriding its own keywords
> when calling external methods (it was either that, or try and dictate a
> union of reserved words they can ban).
> 
> Eg, C# allows you to surround a keyword with brackets.  ie, I believe
> something like:
> 
> object.[if]
> 
> Would work in C# to provide access to an attribute named "if"
> 
> Unfortunately, Python COM is a layer on top of CPython, and Python .NET
> still uses the CPython parser - so in neither of these cases is there a
> simple hack I can use to work around it at the parser level.
> 
> Needless to say, as this affects the 2 major technologies I work with
> currently, I would like an official way to work around Python keywords!

The JPython approach should be added to CPython.  This effectively
turns off keywords directly after ".", "def" and in a few other
places.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From MarkH@ActiveState.com  Thu Aug 10 00:17:35 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Thu, 10 Aug 2000 09:17:35 +1000
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008092314.SAA25157@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEBFDEAA.MarkH@ActiveState.com>

Guido commented yesterday that he doesn't tally votes (yay), but obviously
he still issues them!  It made me think of a Dutch Crocodile Dundee on a
visit to New York, muttering to his harassers as he whips something out
from under his clothing...

> -1 for indices

"You call that a -1,  _this_ is a -1"

:-)

[Apologies to anyone who hasn't seen the knife scene in the aforementioned
movie ;-]

Mark.



From MarkH@ActiveState.com  Thu Aug 10 00:21:33 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Thu, 10 Aug 2000 09:21:33 +1000
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <200008100012.TAA25968@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com>

[Guido]
> The JPython approach should be added to CPython.  This effectively
> turns off keywords directly after ".", "def" and in a few other
> places.

Excellent.  I saw a reference to this after I sent my mail.

I'd be happy to help, in a long, drawn out, when-I-find-time kind of way.
What is the general strategy - is it simply to maintain a state in the
parser?  Where would I start to look into?

Mark.



From guido@beopen.com  Thu Aug 10 01:36:30 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 19:36:30 -0500
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: Your message of "Thu, 10 Aug 2000 09:21:33 +1000."
 <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com>
Message-ID: <200008100036.TAA26235@cj20424-a.reston1.va.home.com>

> [Guido]
> > The JPython approach should be added to CPython.  This effectively
> > turns off keywords directly after ".", "def" and in a few other
> > places.
> 
> Excellent.  I saw a reference to this after I sent my mail.
> 
> I'd be happy to help, in a long, drawn out, when-I-find-time kind of way.
> What is the general strategy - is it simply to maintain a state in the
> parser?  Where would I start to look into?
> 
> Mark.

Alas, I'm not sure how easy it will be.  The parser generator will
probably have to be changed to allow you to indicate not to do a
resword lookup at certain points in the grammar.  I don't know where
to start. :-(

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug 10 02:12:59 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 10 Aug 2000 04:12:59 +0300 (IDT)
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809141639.D2019@mad-scientist.com>
Message-ID: <Pine.GSO.4.10.10008100411500.26961-100000@sundial>

On Wed, 9 Aug 2000, Gregory P . Smith wrote:

> Out of curiosity, I just made a version of py-bsddb3 that uses SWIG
> v1.3alpha3 instead of SWIG v1.1-883.  It looks like 1.3a3 is still
> using strings for pointerish things.  One thing to note that may calm
> some people's sense of "eww gross, pointer strings" is that programmers
> should never see them.  They are "hidden" behind the python shadow class.
> The pointer strings are only contained within the shadow object's "this"
> member.

It's not "ewww gross", it's "dangerous!". This makes Python "not safe",
since users can access random memory locations.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug 10 02:28:00 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 10 Aug 2000 04:28:00 +0300 (IDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com>
Message-ID: <Pine.GSO.4.10.10008100425430.26961-100000@sundial>

On Thu, 10 Aug 2000, Mark Hammond wrote:

> Sometimes it is worse than annoying!
> 
> In the COM and CORBA worlds, it can be a showstopper - if an external
> object happens to expose a method or property named after a Python keyword,
> then you simply can not use it!

How about this (simple, but relatively unannoying) convention:

To COM name:
	- remove last "_", if any

From COM name:
	- add "_" if it's a keyword
	- add "_" if last character is "_"
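
Moshe's round-trip convention can be sketched as a pair of tiny helpers (illustrative only -- the function names are made up, and the keyword test uses today's `keyword` module rather than the 1.x reserved-word list):

```python
import keyword

def to_com_name(py_name):
    # Going to COM: strip one trailing "_", if any, so the mangled
    # Python-side name maps back to the real COM name.
    if py_name.endswith("_"):
        return py_name[:-1]
    return py_name

def from_com_name(com_name):
    # Coming from COM: append "_" if the name is a Python keyword,
    # or if it already ends in "_" (keeps the round trip unambiguous).
    if keyword.iskeyword(com_name) or com_name.endswith("_"):
        return com_name + "_"
    return com_name
```

So `class` coming in from COM becomes `class_` on the Python side and maps back to `class` going out, and a genuine trailing underscore survives the round trip as well.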

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From greg@cosc.canterbury.ac.nz  Thu Aug 10 02:29:38 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 13:29:38 +1200 (NZST)
Subject: [Python-Dev] A question for the Python Secret Police
In-Reply-To: <20000809083127.7FFF6303181@snelboot.oratrix.nl>
Message-ID: <200008100129.NAA13775@s454.cosc.canterbury.ac.nz>

Jack Jansen <jack@oratrix.nl>:

> Is the following morally allowed:
>   class Foo(Foo):

Well, the standard admonitions against 'import *' apply.
Whether using 'import *' or not, though, in the interests 
of clarity I think I would write it as

   class Foo(package1.mod.Foo):

On the other hand, the funkiness factor of it does
have a certain appeal!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Thu Aug 10 02:56:55 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 13:56:55 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
Message-ID: <200008100156.NAA13782@s454.cosc.canterbury.ac.nz>

> It looks nice, but i'm pretty sure it won't fly.

It will! Try it:

>>> for (x in a, y in b):
  File "<stdin>", line 1
    for (x in a, y in b):
                        ^
SyntaxError: invalid syntax

> how is the parser to know whether the lockstep form has been
> invoked?

The parser doesn't have to know as long as the compiler can
tell, and clearly one of them can.

> Besides, i think Guido has Pronounced quite firmly on zip().

That won't stop me from gently trying to change his mind
one day. The existence of zip() doesn't preclude something
more elegant being adopted in a future version.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Thu Aug 10 03:12:08 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 14:12:08 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <20000809130645.J266@xs4all.nl>
Message-ID: <200008100212.OAA13789@s454.cosc.canterbury.ac.nz>

Thomas Wouters <thomas@xs4all.net>:

> The only objection I can bring up is that parentheses are almost always
> optional, in Python, and this kind of violates it.

They're optional around tuple constructors, but this is not
a tuple constructor.

The parentheses around function arguments aren't optional
either, and nobody complains about that.

> 'for (x in a, y in b) in z:' *is* valid syntax...

But it's not valid Python:

>>> for (x in a, y in b) in z:
...   print x,y
... 
SyntaxError: can't assign to operator

> It might not be too pretty, but it can be worked around ;)

It wouldn't be any uglier than what's currently done with
the LHS of an assignment, which is parsed as a general
expression and treated specially later on.

There's-more-to-the-Python-syntax-than-what-it-says-in-
the-Grammar-file-ly,

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Thu Aug 10 03:19:32 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 14:19:32 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <200008100219.OAA13793@s454.cosc.canterbury.ac.nz>

Just van Rossum <just@letterror.com>:

>        for <index> indexing <element> in <seq>:

The idea is good, but I don't like this particular syntax much. It
seems to be trying to do too much at once, giving you both an index
and an element.  Also, the wording reminds me unpleasantly of COBOL
for some reason.

Some time ago I suggested

   for <index> over <sequence>:

as a way of getting hold of the index, and as a direct
replacement for 'for i in range(len(blarg))' constructs.
It could also be used for lockstep iteration applications,
e.g.

   for i over a:
      frobulate(a[i], b[i])

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Thu Aug 10 03:23:50 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 14:23:50 +1200 (NZST)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <200008100036.TAA26235@cj20424-a.reston1.va.home.com>
Message-ID: <200008100223.OAA13796@s454.cosc.canterbury.ac.nz>

BDFL:

> The parser generator will probably have to be changed to allow you to
> indicate not to do a resword lookup at certain points in the grammar.

Isn't it the scanner which recognises reserved words?

In that case, just make it not do that for the next
token after certain tokens.
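
That one-token-of-state trick can be sketched in a few lines of Python (purely illustrative, not the actual CPython tokenizer; it uses the modern `keyword` module and takes a pre-split token list for simplicity):

```python
import keyword

# Tokens after which the following word must NOT be treated as a keyword.
SUPPRESS_AFTER = {".", "def"}

def classify(tokens):
    """Tag keywords as KEYWORD and everything else (identifiers and
    punctuation alike, for brevity) as NAME -- except that keyword
    recognition is switched off right after '.' or 'def'."""
    prev = None
    result = []
    for tok in tokens:
        if keyword.iskeyword(tok) and prev not in SUPPRESS_AFTER:
            result.append((tok, "KEYWORD"))
        else:
            result.append((tok, "NAME"))
        prev = tok
    return result
```

With this in place, something like `obj.class` or `def if(...)` would scan cleanly, which is exactly what the COM/CORBA wrappers need.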

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From billtut@microsoft.com  Thu Aug 10 04:24:11 2000
From: billtut@microsoft.com (Bill Tutt)
Date: Wed, 9 Aug 2000 20:24:11 -0700
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka
 !)
Message-ID: <58C671173DB6174A93E9ED88DCB0883D0A611B@red-msg-07.redmond.corp.microsoft.com>

The parser actually recognizes keywords atm.

We could change that so that each keyword is a token.
Then you would have something like:

keyword_allowed_name: KEY1 | KEY2 | KEY3 | ... | KEYN | NAME
and then tweak func_def like so:
func_def:  DEF keyword_allowed_name parameters ':' suite

I haven't pondered whether this would cause the DFA to fall into a
degenerate case.

Wondering where the metagrammar source file went to,
Bill


 -----Original Message-----
From: 	Greg Ewing [mailto:greg@cosc.canterbury.ac.nz] 
Sent:	Wednesday, August 09, 2000 7:24 PM
To:	python-dev@python.org
Subject:	Re: [Python-Dev] Python keywords (was Lockstep iteration -
eureka!)

BDFL:

> The parser generator will probably have to be changed to allow you to
> indicate not to do a resword lookup at certain points in the grammar.

Isn't it the scanner which recognises reserved words?

In that case, just make it not do that for the next
token after certain tokens.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://www.python.org/mailman/listinfo/python-dev


From guido@beopen.com  Thu Aug 10 05:44:45 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 23:44:45 -0500
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka !)
In-Reply-To: Your message of "Wed, 09 Aug 2000 20:24:11 MST."
 <58C671173DB6174A93E9ED88DCB0883D0A611B@red-msg-07.redmond.corp.microsoft.com>
References: <58C671173DB6174A93E9ED88DCB0883D0A611B@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <200008100444.XAA27348@cj20424-a.reston1.va.home.com>

> The parser actually recognizes keywords atm.
> 
> We could change that so that each keyword is a token.
> Then you would have something like:
> 
> keyword_allowed_name: KEY1 | KEY2 | KEY3 | ... | KEYN | NAME
> and then tweak func_def like so:
> func_def:  DEF keyword_allowed_name parameters ':' suite
> 
> I haven't pondered whether this would cause the DFA to fall into a
> degenerate case.

This would be a good and simple approach.

> Wondering where the metagrammar source file went to,

It may not have existed; I may have handcrafted the metagrammar.c
file.

I believe the metagrammar was something like this:

MSTART: RULE*
RULE: NAME ':' RHS
RHS: ITEM ('|' ITEM)*
ITEM: (ATOM ['*' | '?'])+
ATOM: NAME | STRING | '(' RHS ')' | '[' RHS ']'

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From nowonder@nowonder.de  Thu Aug 10 08:02:12 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 07:02:12 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135]
 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl>
Message-ID: <39925374.59D974FA@nowonder.de>

Thomas Wouters wrote:
> 
> For those of you not on the patches list, here's the summary of the patch I
> just uploaded to SF. In short, it adds "import x as y" and "from module
> import x as y", in the way Tim proposed this morning. (Probably late last
> night for most of you.)

-1 on the implementation. Although it looked okay on a first visual
   inspection, it builds a segfaulting python executable on linux:
      make distclean && ./configure && make test
      segfaults the first time python is started to run regrtest.py.
   Reversing the patch and doing a simple 'make test' has everything
   running again.

+1 on the idea, though. It just seems sooo natural. My first
   reaction before applying the patch was to test whether Python
   already did this <0.25 wink - really did it>

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From nowonder@nowonder.de  Thu Aug 10 08:21:13 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 07:21:13 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de>
Message-ID: <399257E9.E399D52D@nowonder.de>

Peter Schneider-Kamp wrote:
> 
> -1 on the implementation. Although it looked okay on a first visual
>    inspection, it builds a segfaulting python executable on linux:
>       make distclean && ./configure && make test
>    segfaults when first time starting python to run regrtest.py.
>    Reversing the patch and doing a simple 'make test' has everything
>    running again.

Also note the following problems:

nowonder@mobility:~/python/python/dist/src > ./python
Python 2.0b1 (#12, Aug 10 2000, 07:17:46)  [GCC 2.95.2 19991024
(release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> from string import join
Speicherzugriffsfehler
nowonder@mobility:~/python/python/dist/src > ./python
Python 2.0b1 (#12, Aug 10 2000, 07:17:46)  [GCC 2.95.2 19991024
(release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> from string import join as j
  File "<stdin>", line 1
    from string import join as j
                             ^
SyntaxError: invalid syntax
>>>  

I think the problem is in compile.c, but that's just my bet.

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From thomas@xs4all.net  Thu Aug 10 06:24:19 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 07:24:19 +0200
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd))
In-Reply-To: <39925374.59D974FA@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 07:02:12AM +0000
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de>
Message-ID: <20000810072419.A17171@xs4all.nl>

On Thu, Aug 10, 2000 at 07:02:12AM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:
> > 
> > For those of you not on the patches list, here's the summary of the patch I
> > just uploaded to SF. In short, it adds "import x as y" and "from module
> > import x as y", in the way Tim proposed this morning. (Probably late last
> > night for most of you.)

> -1 on the implementation. Although it looked okay on a first visual
>    inspection, it builds a segfaulting python executable on linux:
>       make distclean && ./configure && make test
>    segfaults the first time python is started to run regrtest.py.
>    Reversing the patch and doing a simple 'make test' has everything
>    running again.

Try running 'make' in 'Grammar/' first. None of my patches that touch
Grammar include the changes to graminit.h and graminit.c, because they can
be quite lengthy (on the order of several thousand lines, in this case, if
I'm not mistaken.) So the same goes for the 'indexing for', 'range literal'
and 'augmented assignment' patches ;)

If it still goes crashy crashy after you re-make the grammar, I'll, well,
I'll, I'll make Baldrick eat one of his own dirty socks ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From nowonder@nowonder.de  Thu Aug 10 08:37:44 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 07:37:44 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl>
Message-ID: <39925BC8.17CD051@nowonder.de>

Thomas Wouters wrote:
> 
> If it still goes crashy crashy after you re-make the grammar, I'll, well,
> I'll, I'll make Baldrick eat one of his own dirty socks ;)

I just found that out for myself. The SyntaxError in the
second example led my way ...

Sorry for the hassle, but next time please remind me that
I have to remake the grammar.

+1 on the implementation now.

perversely-minded-note:
What about 'from string import *, join as j'?
I think that would make sense, but as we are not fond of
the star in any case maybe we don't need that.

Peter

P.S.: I'd like to see Baldrick do that. What the heck is
      a Baldrick? I am longing for breakfast, so I hope
      I can eat it. Mjam.
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From thomas@xs4all.net  Thu Aug 10 06:55:10 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 07:55:10 +0200
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd))
In-Reply-To: <39925BC8.17CD051@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 07:37:44AM +0000
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de>
Message-ID: <20000810075510.B17171@xs4all.nl>

On Thu, Aug 10, 2000 at 07:37:44AM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:

> > If it still goes crashy crashy after you re-make the grammar, I'll, well,
> > I'll, I'll make Baldrick eat one of his own dirty socks ;)

> I just found that out for myself. The SyntaxError in the
> second example led my way ...

> Sorry for the hassle, but next time please remind me that
> I have to remake the grammar.

It was late last night, and I have to force myself not to write essays when
submitting a patch in the first place ;-P How about we fix the dependencies
so that the grammar gets re-made when necessary ? Or is there a good reason
not to do that ?

> perversely-minded-note:
> What about 'from string import *, join as j'?
> I think that would make sense, but as we are not fond of
> the star in any case maybe we don't need that.

'join as j' ? What would it do ? Import all symbols from 'string' into a
new namespace 'j' ? How about you do 'import string as j' instead ? It means
you will still be able to do 'j._somevar', which you probably wouldn't in
your example, but I don't think that's enough reason :P

> P.S.: I'd like to see Baldrick do that. What the heck is
>       a Baldrick? I am longing for breakfast, so I hope
>       I can eat it. Mjam.

Sorry :) They've been doing re-runs of Blackadder (1st through 4th, they're
nearly done) on one of the Belgian channels, and it happens to be one of my
favorite comedy shows ;) It's a damned sight funnier than Crocodile Dundee,
hey, Mark ? <nudge> <nudge> <wink> <wink> :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From nowonder@nowonder.de  Thu Aug 10 09:10:13 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 08:10:13 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de> <20000810075510.B17171@xs4all.nl>
Message-ID: <39926365.909B2835@nowonder.de>

Thomas Wouters wrote:
> 
> 'join as j' ? What would it do ? Import all symbols from 'string' into a
> new namespace 'j' ? How about you do 'import string as j' instead ? It means
> you will still be able to do 'j._somevar', which you probably wouldn't in
> your example, but I don't think that's enough reason :P

Okay, your misunderstanding of the semantics I had in mind is
reason enough <0.5 wink>.

from string import *, join as j
(or equivalently)
from string import join as j, *

would (in my book) import all "public" symbols from string
and assign j = join.

Assume we have a Tkinter app (where all the tutorials
do a 'from Tkinter import *') and we don't like
'createtimerhandle'. Then the following would give
us tk_timer instead while still importing all the stuff
from Tkinter with their regular names:

from Tkinter import *, createtimerhandle as tk_timer

An even better way of doing this would be if it not only
gave you another name but also did not import the
original one. In this example our expression
would import all the symbols from Tkinter but would
rename createtimerhandle as tk_timer. In this way you
could still use * if you have a namespace clash. E.g.:

from Tkinter import *, mainloop as tk_mainloop

def mainloop():
  <do some really useful stuff calling tk_mainloop()>

if __name__ == '__main__':
  mainloop()

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From thomas@xs4all.net  Thu Aug 10 07:23:16 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 08:23:16 +0200
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd))
In-Reply-To: <39926365.909B2835@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 08:10:13AM +0000
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de> <20000810075510.B17171@xs4all.nl> <39926365.909B2835@nowonder.de>
Message-ID: <20000810082316.C17171@xs4all.nl>

On Thu, Aug 10, 2000 at 08:10:13AM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:

> > 'join as j' ? What would it do ? Import all symbols from 'string' into a
> > new namespace 'j' ? How about you do 'import string as j' instead ? It means
> > you will still be able to do 'j._somevar', which you probably wouldn't in
> > your example, but I don't think that's enough reason :P

> Okay, your misunderstanding of the semantics I had in mind are
> reason enough <0.5 wink>.

> from string import *, join as j
> (or equivalently)
> from string import join as j, *

Ahh, like that :) Well, I'd say 'no'. "from module import *" has only one
legitimate use, as far as I'm concerned, and that's taking over all symbols
without prejudice, to encapsulate another module. It shouldn't be used in
code that attempts to stay readable, so 'import join as j' is insanity ;-)
If you really must do the above, do it in two import statements.

> An even better way of doing this were if it would not
> only give you another name but if it would not import
> the original one. In this example our expression
> would import all the symbols from Tkinter but would
> rename createtimerhandle as tk_timer. In this way you
> could still use * if you have a namespace clash. E.g.:

No, that isn't possible. You can't pass a list of names to 'FROM_IMPORT *'
to omit loading them. (That's also the reason the patch needs a new opcode,
because you can't pass both the name to be imported from a module and the
name it should be stored at, to the FROM_IMPORT bytecode :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From nowonder@nowonder.de  Thu Aug 10 09:52:31 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 08:52:31 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de> <20000810075510.B17171@xs4all.nl> <39926365.909B2835@nowonder.de> <20000810082316.C17171@xs4all.nl>
Message-ID: <39926D4F.83CAE9C2@nowonder.de>

Thomas Wouters wrote:
> 
> On Thu, Aug 10, 2000 at 08:10:13AM +0000, Peter Schneider-Kamp wrote:
> > An even better way of doing this would be if it not only
> > gave you another name but also did not import the
> > original one. In this example our expression
> > would import all the symbols from Tkinter but would
> > rename createtimerhandle as tk_timer. In this way you
> > could still use * if you have a namespace clash. E.g.:
> 
> No, that isn't possible. You can't pass a list of names to 'FROM_IMPORT *'
> to omit loading them. (That's also the reason the patch needs a new opcode,
> because you can't pass both the name to be imported from a module and the
> name it should be stored at, to the FROM_IMPORT bytecode :)

Yes, it is possible. But as you correctly point out, not
without some serious changes to compile.c and ceval.c.

As we both agree (trying to channel you) it is not worth it
to make 'from import *' more usable, I think we should stop
this discussion before somebody thinks we seriously want
to do this.

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From mal@lemburg.com  Thu Aug 10 09:36:07 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 10 Aug 2000 10:36:07 +0200
Subject: [Python-Dev] Un-stalling Berkeley DB support
References: <20000809140321.A836@thyrsus.com>
Message-ID: <39926977.F8495AAD@lemburg.com>

"Eric S. Raymond" wrote:
> [Berkeley DB 3]
> When we last discussed this subject, there was general support for the
> functionality, but a couple of people went "bletch!" about SWIG-generated
> code (there was unhappiness about pointers being treated as strings).

AFAIK, recent versions of SWIG now make proper use of PyCObjects
to store pointers. Don't know how well this works though: I've
had a report that the new support can cause core dumps.
 
> Somebody said something about having SWIG patches to address this.  Is this
> the only real issue with SWIG-generated code?  If so, we can pursue two paths:
> (1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
> extension that meets our cleanliness criteria, and (2) press the SWIG guy
> to incorporate these patches in his next release.

Perhaps these patches are what I was talking about above ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From sjoerd@oratrix.nl  Thu Aug 10 11:59:06 2000
From: sjoerd@oratrix.nl (Sjoerd Mullender)
Date: Thu, 10 Aug 2000 12:59:06 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib mailbox.py,1.20,1.21
In-Reply-To: Your message of Wed, 09 Aug 2000 20:05:30 -0700.
 <200008100305.UAA05018@slayer.i.sourceforge.net>
References: <200008100305.UAA05018@slayer.i.sourceforge.net>
Message-ID: <20000810105907.713B331047C@bireme.oratrix.nl>

On Wed, Aug 9 2000 Guido van Rossum wrote:

>           files = os.listdir(self.dirname)
> !         list = []
>           for f in files:
>               if pat.match(f):
> !                 list.append(f)
> !         list = map(long, list)
> !         list.sort()

Isn't this just:
	list = os.listdir(self.dirname)
	list = filter(pat.match, list)
	list = map(long, list)
	list.sort()

Or even shorter:
	list = map(long, filter(pat.match, os.listdir(self.dirname)))
	list.sort()
(Although I can and do see the advantage of the slightly longer
version.)
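Sjoerd's pipeline still reads naturally in today's Python, where filter/map return iterators and the old long type has been folded into int. A self-contained sketch (the file names are made up to stand in for os.listdir(self.dirname)):

```python
import re

# Hypothetical contents of an MH-style mailbox directory: numbered
# message files plus entries the pattern should filter out.
files = ["1", "2", "10", ".mh_sequences", "drafts"]
pat = re.compile(r"[0-9]+$")

# The short form from the message, modernized: int replaces long, and
# sorted() realizes the filter/map iterators into a sorted list.
msgs = sorted(map(int, filter(pat.match, files)))
print(msgs)  # [1, 2, 10]
```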

-- Sjoerd Mullender <sjoerd.mullender@oratrix.com>


From gward@mems-exchange.org  Thu Aug 10 13:38:02 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Thu, 10 Aug 2000 08:38:02 -0400
Subject: [Python-Dev] Adding library modules to the core
Message-ID: <20000810083802.A7912@ludwig.cnri.reston.va.us>

[hmmm, this bounced 'cause the root partition on python.org was
full... let's try again, shall we?]

On 07 August 2000, Eric S. Raymond said:
> A few days ago I asked about the procedure for adding a module to the
> Python core library.  I have a framework class for things like menu systems
> and symbolic debuggers I'd like to add.
> 
> Guido asked if this was similar to the TreeWidget class in IDLE.  I 
> investigated and discovered that it is not, and told him so.  I am left
> with a couple of related questions:

Well, I just ploughed through this entire thread, and no one came up
with an idea I've been toying with for a while: the Python Advanced
Library.

This would be the place for well-known, useful, popular, tested, robust,
stable, documented module collections that are just too big or too
specialized to go in the core.  Examples: PIL, mxDateTime, mxTextTools,
mxODBC, ExtensionClass, ZODB, and anything else that I use in my daily
work and wish that we didn't have to maintain separate builds of.  ;-)

Obviously this would be most useful as an RPM/Debian package/Windows
installer/etc., so that non-developers could be told, "You need to
install Python 1.6 and the Python Advanced Library 1.0 from ..." and
that's *it*.

Thoughts?  Candidates for admission?  Proposed requirements for admission?

        Greg
-- 
Greg Ward - software developer                gward@mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367


From gward@mems-exchange.org  Thu Aug 10 14:47:48 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Thu, 10 Aug 2000 09:47:48 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <Pine.GSO.4.10.10008101557580.1582-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 10, 2000 at 04:00:51PM +0300
References: <20000810083802.A7912@ludwig.cnri.reston.va.us> <Pine.GSO.4.10.10008101557580.1582-100000@sundial>
Message-ID: <20000810094747.C7912@ludwig.cnri.reston.va.us>

[cc'd to python-dev, since I think this belongs out in the open: Moshe,
if you really meant to keep this private, go ahead and slap my wrist]

On 10 August 2000, Moshe Zadka said:
> Greg, this sounds very close to PEP-206. Please let me know if you see
> any useful collaboration with it.

They're definitely related, and I think we're trying to address the same
problem -- but in a different way.

If I read the PEP (http://python.sourceforge.net/peps/pep-0206.html)
correctly, you want to fatten the standard Python distribution
considerably, first by adding lots of third-party C libraries to it, and
second by adding lots of third-party Python libraries ("module
distributions") to it.  This has the advantage of making all these
goodies immediately available in a typical Python installation.  But it
has a couple of serious disadvantages:
  * makes Python even harder to build and install; why should I have
    to build half a dozen major C libraries just to get a basic
    Python installation working?
  * all these libraries are redundant on modern free Unices -- at
    least the Linux distributions that I have experience with all
    include zlib, Tcl/Tk, libjpeg, and ncurses out of the box.
    Including copies of them with Python throws out one of the advantages
    of having all these installed as shared libraries, namely that
    there only has to be one copy of each in memory.
  * tell me again: what was the point of the Distutils if we just
    throw "everything useful" into the standard distribution?

Anyways, my idea -- the Python Advanced Library -- is to make all of
these goodies available as a single download, *separate* from Python
itself.  It could well be that the Advanced Library would be larger
than the Python distribution.  (Especially if Tcl/Tk migrates from the
standard Windows installer to the Advanced Library.)

Advantages:
  * keeps the standard distribution relatively small and focussed;
    IMHO the "big framework" libraries (PIL, NumPy, etc.) don't
    belong in the standard library.  (I think there could someday
    be a strong case for moving Tkinter out of the standard library
    if the Advanced Library really takes off.)
  * relieves licensing problems in the Python distribution; if something
    can't be included with Python for licence reasons, then put
    it in the Advanced Library
  * can have variations on the PAL for different platforms.  Eg. could
    have an RPM or Debian package that just requires libjpeg,
    libncurses, libtcl, libtk etc. for the various Linuces, and an
    enormous installer with separate copies of absolutely everything for
    Windows
  * excellent test case for the Distutils ;-)
  * great acronym: the Python Advanced Library is your PAL.

Sounds worth a PEP to me; I think it should be distinct from (and in
competition with!) PEP 206.

        Greg
-- 
Greg Ward - software developer                gward@mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367


From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug 10 15:09:23 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 10 Aug 2000 17:09:23 +0300 (IDT)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000810094747.C7912@ludwig.cnri.reston.va.us>
Message-ID: <Pine.GSO.4.10.10008101707240.1582-100000@sundial>

On Thu, 10 Aug 2000, Greg Ward wrote:

> Sounds worth a PEP to me; I think it should be distinct from (and in
> competition with!) PEP 206.

That's sort of why I wanted to keep this off Python-Dev: I don't think
so (I don't really want competing PEPs), I'd rather we hashed out our
differences in private and come up with a unified PEP to save everyone
on Python-Dev a lot of time. 

So let's keep the conversation off python-dev until we either reach
a consensus or agree to disagree.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From mal@lemburg.com  Thu Aug 10 15:28:34 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 10 Aug 2000 16:28:34 +0200
Subject: [Python-Dev] Adding library modules to the core
References: <Pine.GSO.4.10.10008101707240.1582-100000@sundial>
Message-ID: <3992BC12.BFA16AAC@lemburg.com>

Moshe Zadka wrote:
> 
> On Thu, 10 Aug 2000, Greg Ward wrote:
> 
> > Sounds worth a PEP to me; I think it should be distinct from (and in
> > competition with!) PEP 206.
> 
> That's sort of why I wanted to keep this off Python-Dev: I don't think
> so (I don't really want competing PEPs), I'd rather we hashed out our
> differences in private and come up with a unified PEP to save everyone
> on Python-Dev a lot of time.
> 
> So let's keep the conversation off python-dev until we either reach
> a consensus or agree to disagree.

Just a side note: As I recall Guido is not willing to include
all these third party tools to the core distribution, but rather
to a SUMO Python distribution, which then includes Python +
all those nice goodies available to the Python Community.

Maintaining this SUMO distribution should, IMHO, be left to
a commercial entity like e.g. ActiveState or BeOpen to ensure
quality and robustness -- this is not an easy job, believe me.
I've tried something like this before: it was called Python
PowerTools and should still be available at:

  http://starship.python.net/crew/lemburg/PowerTools-0.2.zip

I never got far, though, due to the complexity of getting
all that Good Stuff under one umbrella.

Perhaps you ought to retarget your PEP 206, Moshe ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug 10 15:30:40 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 10 Aug 2000 17:30:40 +0300 (IDT)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <3992BC12.BFA16AAC@lemburg.com>
Message-ID: <Pine.GSO.4.10.10008101729280.17061-100000@sundial>

On Thu, 10 Aug 2000, M.-A. Lemburg wrote:

> Just a side note: As I recall Guido is not willing to include
> all these third party tools to the core distribution, but rather
> to a SUMO Python distribution, which then includes Python +
> all those nice goodies available to the Python Community.

Yes, that's correct. 

> Maintaining this SUMO distribution should, IMHO, be left to
> a commercial entity like e.g. ActiveState or BeOpen to ensure
> quality and robustness -- this is not an easy job, believe me.

Well, I'm hoping that distutils will make this easier.

> Perhaps you ought to retarget your PEP 206, Moshe ?!

I'm sorry -- I'm too foolhardy. 

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From nowonder@nowonder.de  Thu Aug 10 18:00:14 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 17:00:14 +0000
Subject: [Python-Dev] 2nd thought: fully qualified host names
Message-ID: <3992DF9E.BF5A080C@nowonder.de>

Hi Guido!

After submitting the patch to smtplib, I got a bad feeling
about only trying to get the FQDN for the localhost case.

Shouldn't _get_fqdn_hostname() try to get the FQDN
for every argument passed? Currently it does so only
for len(name) == 0.

I think (but couldn't immediately find a reference) it
is required by some RFC. There is at least an Internet-Draft
by the IETF that says it is required, and a lot of
references (mostly from Postfix) to some RFC, too.

Of course, automatically trying to get the fully
qualified domain name would mean that the programmer
loses some flexibility (by losing responsibility).

If that is a problem I would make _get_fqdn_hostname
a public function (and choose a better name). helo()
and ehlo() could still call it for the local host case.

or-should-I-just-leave-things-as-they-are-ly y'rs
Peter

P.S.: I am cc'ing the list so everyone and Thomas can
      rush in and provide their RFC knowledge.
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From guido@beopen.com  Thu Aug 10 17:14:20 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 10 Aug 2000 11:14:20 -0500
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: Your message of "Thu, 10 Aug 2000 17:00:14 GMT."
 <3992DF9E.BF5A080C@nowonder.de>
References: <3992DF9E.BF5A080C@nowonder.de>
Message-ID: <200008101614.LAA28785@cj20424-a.reston1.va.home.com>

> Hi Guido!
> 
> After submitting the patch to smtplib, I got a bad feeling
> about only trying to get the FQDN for the localhost case.
> 
> Shouldn't _get_fqdn_hostname() try to get the FQDN
> for every argument passed? Currently it does so only
> for len(name) == 0.
> 
> I think (but couldn't immediately find a reference) it
> is required by some RFC. There is at least an Internet-Draft
> by the IETF that says it is required, and a lot of
> references (mostly from Postfix) to some RFC, too.
> 
> Of course, automatically trying to get the fully
> qualified domain name would mean that the programmer
> loses some flexibility (by losing responsibility).
> 
> If that is a problem I would make _get_fqdn_hostname
> a public function (and choose a better name). helo()
> and ehlo() could still call it for the local host case.
> 
> or-should-I-just-leave-things-as-they-are-ly y'rs
> Peter
> 
> P.S.: I am cc'ing the list so everyone and Thomas can
>       rush in and provide their RFC knowledge.

Good idea -- I don't know anything about SMTP!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From thomas@xs4all.net  Thu Aug 10 16:40:26 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 17:40:26 +0200
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: <200008101614.LAA28785@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 10, 2000 at 11:14:20AM -0500
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com>
Message-ID: <20000810174026.D17171@xs4all.nl>

On Thu, Aug 10, 2000 at 11:14:20AM -0500, Guido van Rossum wrote:

> > After submitting the patch to smtplib, I got a bad feeling
> > about only trying to get the FQDN for the localhost case.
> > for len(name) == 0

> > I think (but couldn't immediately find a reference) it
> > is required by some RFC. There is at least an Internet-Draft
> > by the IETF that says it is required, and a lot of
> > references (mostly from Postfix) to some RFC, too.

If this is for helo() and ehlo(), screw it. No sane mailer, technician or
abuse desk employee pays any attention whatsoever to the HELO message,
except possibly for debugging.

The only use I've ever had for the HELO message is with clients that set up a
WinGate or similar braindead port-forwarding service on their dial-in
machine, and then buy one of our products, batched-SMTP. They then get their
mail passed to them via SMTP when they dial in... except that these
*cough*users*cough* redirect their SMTP port to *our* smtp server, creating
a confusing mail loop. We first noticed that because their server connected
to our server using *our* HELO message ;)

> > If that is a problem I would make _get_fqdn_hostname
> > a public function (and choose a better name). helo()
> > and ehlo() could still call it for the local host case.

I don't think this is worth the trouble. Assembling a FQDN is tricky at
best, and it's not needed in that many cases. (Sometimes you can break
something by trying to FQDN a name and getting it wrong ;) Where would this
function be used ? In SMTP chats ? Not necessary. A 'best guess' is enough
-- the remote SMTP server won't listen to you, anyway, and provide the
ipaddress and its reverse DNS entry in the mail logs. Mailers that rely on
the HELO message are (rightly!) considered insecure, spam-magnets, and are a
thankfully dying race.

Of course, if anyone else needs a FQDN, it might be worth exposing this
algorithm.... but smtplib doesn't seem like the proper place ;P

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From nowonder@nowonder.de  Thu Aug 10 19:13:04 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 18:13:04 +0000
Subject: [Python-Dev] open 'Accepted' patches
Message-ID: <3992F0B0.C8CBF85B@nowonder.de>

Changing the patch view at sf to 'Accepted' in order to find
my patch, I was surprised by the amount of patches that have
been accepted and are still lying around. In an insane attack
of self-destructiveness I decided to bring up the issue<wink>.

I know there can be a lot of issues with patches relative to
another patch etc., but letting them rot won't improve the
situation. "Checked in they should be." <PYoda> If there
are still problems with them or they have already been
checked in, the status should at least be 'Postponed',
'Out of Date', 'Rejected', 'Open' or 'Closed'.

Here is a list of the open 'Accepted' patches that have had
no comment for more than a week and which are not obviously
checked in yet (those that are, I have closed):

patch# | summary                             | last comment
-------+-------------------------------------+--------------
100510 | largefile support for Win64 (and...)| 2000-Jul-31
100511 | test largefile support (test_lar...)| 2000-Jul-31
100851 | traceback.py, with unicode except...| 2000-Aug-01
100874 | Better error message with Unbound...| 2000-Jul-26
100955 | ptags, eptags: regex->re, 4 char ...| 2000-Jul-26
100978 | Minor updates for BeOS R5           | 2000-Jul-25
100994 | Allow JPython to use more tests     | 2000-Jul-27

If I should review, adapt and/or check in some of these,
please tell me which ones.

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From thomas@xs4all.net  Thu Aug 10 17:30:10 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 18:30:10 +0200
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: <200008101614.LAA28785@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 10, 2000 at 11:14:20AM -0500
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com>
Message-ID: <20000810183010.E17171@xs4all.nl>

On Thu, Aug 10, 2000 at 11:14:20AM -0500, Guido van Rossum wrote:

> > P.S.: I am cc'ing the list so everyone and Thomas can
> >       rush in and provide their RFC knowledge.

Oh, I forgot to point out: I have some RFC knowledge, but decided not to use
it in the case of the HELO message ;) I do have a lot of hands-on experience
with SMTP, and I know for a fact that very few MUAs that talk SMTP send a FQDN
in the HELO message. I think that sending the FQDN when we can (like we do,
now) is a good idea, but I don't see a reason to force the HELO message to
be a FQDN. 

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug 10 17:43:41 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 10 Aug 2000 19:43:41 +0300 (IDT)
Subject: [Python-Dev] open 'Accepted' patches
In-Reply-To: <3992F0B0.C8CBF85B@nowonder.de>
Message-ID: <Pine.GSO.4.10.10008101941220.19610-100000@sundial>

(Meta: seems every now and again, a developer has a fit of neurosis. I
think this is a good thing)

On Thu, 10 Aug 2000, Peter Schneider-Kamp wrote:

> patch# | summary                             | last comment
> -------+-------------------------------------+--------------
...
> 100955 | ptags, eptags: regex->re, 4 char ...| 2000-Jul-26

This is the only one I actually know about: Jeremy, Guido has approved it,
I assigned it to you for final eyeballing -- shouldn't be *too* hard to
check it in...
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From DavidA@ActiveState.com  Thu Aug 10 17:47:54 2000
From: DavidA@ActiveState.com (David Ascher)
Date: Thu, 10 Aug 2000 09:47:54 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] FYI: Software Carpentry winners announced
Message-ID: <Pine.WNT.4.21.0008100945480.1052-100000@loom>

I wanted to make sure that everyone here knew that the Software Carpentry
winners were announced, and that our very own Ping won in the Track
category.  Winners in the Config and Build categories were Lindsay Todd
(SapCat) and Steven Knight (sccons) respectively.  Congrats to all.

--david

http://software-carpentry.codesourcery.com/entries/second-round/results.html



From trentm@ActiveState.com  Thu Aug 10 17:50:15 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Thu, 10 Aug 2000 09:50:15 -0700
Subject: [Python-Dev] open 'Accepted' patches
In-Reply-To: <3992F0B0.C8CBF85B@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 06:13:04PM +0000
References: <3992F0B0.C8CBF85B@nowonder.de>
Message-ID: <20000810095015.A28562@ActiveState.com>

On Thu, Aug 10, 2000 at 06:13:04PM +0000, Peter Schneider-Kamp wrote:
> 
> Here is a list of the open 'Accepted' patches that have had
> no comment for more than a week and which are not obviously
> checked in yet (those that are, I have closed):
> 
> patch# | summary                             | last comment
> -------+-------------------------------------+--------------
> 100510 | largefile support for Win64 (and...)| 2000-Jul-31
> 100511 | test largefile support (test_lar...)| 2000-Jul-31

These two are mine. For a while I just thought that they had been checked in.
Guido poked me to check them in a week or so ago and I will this week.


Trent


-- 
Trent Mick
TrentM@ActiveState.com


From nowonder@nowonder.de  Fri Aug 11 00:29:28 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 23:29:28 +0000
Subject: [Python-Dev] 2nd thought: fully qualified host names
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl>
Message-ID: <39933AD8.B8EF5D59@nowonder.de>

Thomas Wouters wrote:
> 
> If this is for helo() and ehlo(), screw it. No sane mailer, technician or
> abuse desk employee pays any attention what so ever to the HELO message,
> except possibly for debugging.

Well, there are some MTAs (like Postfix) that seem to care. Postfix has
an option called "reject_non_fqdn_hostname" with the following description:

"""
Reject the request when the hostname in the client HELO (EHLO) command is not in 
fully-qualified domain form, as required by the RFC. The non_fqdn_reject_code
specifies the response code to rejected requests (default: 504)."""

The submitter of the bug which was addressed by the patch I checked in had
a problem with mailman and a postfix program that seemed to have this option
turned on.

What I am proposing for smtplib is to send every name given to
helo (or ehlo) through the guessing framework of gethostbyaddr()
if possible. Could this hurt anything?
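For illustration only: "fully-qualified domain form" amounts, roughly, to requiring a dotted name. This looks_fqdn helper is a hypothetical sketch of such a check, not Postfix's actual test:

```python
def looks_fqdn(name):
    # Hypothetical approximation of "fully-qualified domain form":
    # at least two non-empty dot-separated labels.
    parts = name.split(".")
    return len(parts) >= 2 and all(parts)

print(looks_fqdn("mail"))              # False -> would draw the 504 response
print(looks_fqdn("mail.example.com"))  # True
```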

> Of course, if anyone else needs a FQDN, it might be worth exposing this
> algorithm.... but smtplib doesn't seem like the proper place ;P

Agreed. Where could it go?

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From nowonder@nowonder.de  Fri Aug 11 00:34:38 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 23:34:38 +0000
Subject: [Python-Dev] 2nd thought: fully qualified host names
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810183010.E17171@xs4all.nl>
Message-ID: <39933C0E.7A84D6E2@nowonder.de>

Thomas Wouters wrote:
> 
> Oh, I forgot to point out: I have some RFC knowledge, but decided not to use
> it in the case of the HELO message ;) I do have a lot of hands-on experience
> with SMTP, and I know for a fact that very few MUAs that talk SMTP send a FQDN
> in the HELO message. I think that sending the FQDN when we can (like we do,
> now) is a good idea, but I don't see a reason to force the HELO message to
> be a FQDN.

I don't want to force anything. I think it's time for some
code to speak for itself, rather than me trying to
speak for it <0.8 wink>:

def _get_fqdn_hostname(name):
    name = string.strip(name)
    if len(name) == 0:
        name = socket.gethostname()
    try:
        hostname, aliases, ipaddrs = socket.gethostbyaddr(name)
    except socket.error:
        pass
    else:
        aliases.insert(0, hostname)
        for name in aliases:
            if '.' in name:
                break
        else:
            name = hostname
    return name

This is the same function as the one I checked into
smtplib.py with the exception of executing the try-block
also for names with len(name) != 0.
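For comparison, the same logic in today's Python (string.strip() is now a string method, and socket.error is an alias of OSError); the standard library's socket.getfqdn() ended up implementing essentially this algorithm:

```python
import socket

def get_fqdn_hostname(name=""):
    """Best-effort FQDN lookup, modernized from the function above."""
    name = name.strip()
    if not name:
        name = socket.gethostname()
    try:
        hostname, aliases, ipaddrs = socket.gethostbyaddr(name)
    except socket.error:
        pass  # DNS unavailable or name unknown: return the name as given
    else:
        aliases.insert(0, hostname)
        for name in aliases:
            if "." in name:  # first dotted name wins
                break
        else:
            name = hostname
    return name

print(get_fqdn_hostname())  # e.g. 'myhost.example.com' (site-dependent)
```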

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From bckfnn@worldonline.dk  Thu Aug 10 23:17:47 2000
From: bckfnn@worldonline.dk (Finn Bock)
Date: Thu, 10 Aug 2000 22:17:47 GMT
Subject: [Python-Dev] Freezing unicode codecs.
Message-ID: <3993287a.1852013@smtp.worldonline.dk>

While porting the unicode API and the encoding modules to JPython I came
across a problem which may also (or maybe not) exists in CPython.

jpythonc is a compiler for JPython which tries to track dependencies
between modules in an attempt to detect which modules an application or
applet uses. I have the impression that some of the freeze tools for
CPython do something similar.

A call to unicode("abc", "cp1250") and "abc".encode("cp1250") will cause
the encodings.cp1250 module to be loaded as a side effect. The freeze
tools will have a hard time figuring this out by scanning the python
source.
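The dynamic lookup is easy to demonstrate in CPython: the codec name is ordinary run-time data, so no static scan of the source can spot the import it triggers:

```python
import sys

name = "cp" + "1250"       # codec chosen at run time; invisible to a scanner
data = "abc".encode(name)  # side effect: imports the encodings.cp1250 module

print("encodings.cp1250" in sys.modules)  # True
print(data)                               # b'abc'
```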


For JPython I'm leaning towards making it a requirement that the
encodings must be loaded explicitly from somewhere in the application. Adding


   import encodings.cp1250

somewhere in the application will allow jpythonc to include this python
module in the frozen application.

How does CPython solve this?


PS. The latest release of the JPython errata have full unicode support
and includes the "sre" module and unicode codecs.

    http://sourceforge.net/project/filelist.php?group_id=1842


regards,
finn


From thomas@xs4all.net  Thu Aug 10 23:50:13 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 00:50:13 +0200
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: <39933AD8.B8EF5D59@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 11:29:28PM +0000
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl> <39933AD8.B8EF5D59@nowonder.de>
Message-ID: <20000811005013.F17171@xs4all.nl>

On Thu, Aug 10, 2000 at 11:29:28PM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:

> > If this is for helo() and ehlo(), screw it. No sane mailer, technician or
> > abuse desk employee pays any attention whatsoever to the HELO message,
> > except possibly for debugging.

> Well, there are some MTAs (like Postfix) that seem to care. Postfix has
> an option called "reject_non_fqdn_hostname" with the following description:

> """
> Reject the request when the hostname in the client HELO (EHLO) command is not in 
> fully-qualified domain form, as required by the RFC. The non_fqdn_reject_code
> specifies the response code to rejected requests (default: 504)."""

> The submitter of the bug which was addressed by the patch I checked in had
> a problem with mailman and a postfix program that seemed to have this option
> turned on.

Fine, the patch addresses that. When the hostname passed to smtplib is ""
(which is the default), it should be turned into a FQDN. I agree. However,
if someone passed in a name, we do not know if they even *want* the name
turned into a FQDN. In the face of ambiguity, refuse the temptation to
guess.

Turning on this Postfix feature (which is completely along the lines of
Postfix, and I applaud Wietse(*) for supplying it ;) is a tricky decision at
best. Like I said in the other email, there are a *lot* of MUAs and MTAs and
other throw-away-programs-gone-commercial that don't speak proper SMTP, and
don't even pretend to send a FQDN. Most Windows clients send the machine's
netbios name, for crying out loud. Turning this on would break all those
clients, and more. I'm not too worried about it breaking Python scripts that
are explicitly setting the HELO response -- those scripts are probably doing
it for a reason.

To note, I haven't seen software that uses smtplib that does supply their
own HELO message, except for a little script I saw that was *explicitly*
setting the HELO message in order to test the SMTP server on the other end.
That instance would certainly have been broken by rewriting the name into a
FQDN.

> > Of course, if anyone else needs a FQDN, it might be worth exposing this
> > algorithm.... but smtplib doesn't seem like the proper place ;P

> Agreed. Where could it go?

On second thought, I can't imagine anyone needing such a function outside of
smtplib. FQDN's are nice for reporting URIs to the outside world, but for
connecting to a certain service you simply pass the hostname you got (which
can be an ipaddress) through to the OS-provided network layer. Kind of like
not doing type checking on the objects passed to your function, but instead
assuming it conforms to an interface and will work correctly or fail
obviously when used as an object of a certain type.

So, make it an exposed function on smtplib, for those people who don't want
to set the HELO message to "", but do want it to be rewritten into a FQDN.

(*) Amazing how all good software came to be through Dutch people. Even
Linux: if it wasn't for Tanenbaum, it wouldn't be what it is today :-)

PS: I'm talking as a sysadmin for a large ISP here, not as a user-friendly
library-implementor. We won't be able to turn on this postfix feature for
many, many years, and I wouldn't advise anyone who expects mail to be sent
from the internet to a postfix machine to enable it, either. But if your
mailserver is internal-only, or with fixed entrypoints that are running
reliable software, I can imagine people turning it on. It would please me no
end if we could turn this on ! I spend on average an hour a day closing
customer-accounts and helping them find out why their mailserver sucks. And
I still discover new mailserver software and new ways for them to suck, it's
really amazing ;)

that-PS-was-a-bit-long-for-a-signoff-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Fri Aug 11 01:44:06 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 10 Aug 2000 20:44:06 -0400
Subject: Keyword abuse  (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <14737.35195.31385.867664@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOECAGPAA.tim_one@email.msn.com>

[Skip Montanaro]
> Could this be extended to many/most/all current instances of
> keywords in Python?  As Tim pointed out, Fortran has no keywords.
> It annoys me that I (for example) can't define a method named "print".

This wasn't accidental in Fortran, though:  X3J3 spent many tedious hours
fiddling the grammar to guarantee it was always possible.  Python wasn't
designed with this in mind, and e.g. there's no meaningful way to figure out
whether

    raise

is an expression or a "raise stmt" in the absence of keywords.  Fortran is
very careful to make sure such ambiguities can't arise.

A *reasonable* thing is to restrict global keywords to special tokens that
can begin a line.  There's real human and machine parsing value in being
able to figure out what *kind* of stmt a line represents from its first
token.  So letting "print" be a variable name too would, IMO, really suck.

But after that, I don't think users have any problem understanding that
different stmt types can have different syntax.  For example, if "@" has a
special meaning in "print" statements, big deal.  Nobody splits a spleen over
seeing

    a   b, c, d

when "a" happens to be "exec" or "print" today, despite that most stmts
don't allow that syntax, and even between "exec" and "print" it has very
different meanings.  Toss in "global", "del" and "import" too for other
twists on what the "b, c, d" part can look like and mean.

As far as I'm concerned, each stmt type can have any syntax it damn well
likes!   Identifiers with special meaning *after* a keyword-introduced stmt
can usually be anything at all without making them global keywords (be it
"as" after "import", or "indexing" after "for", or ...).  The only thing
Python is missing then is a lisp stmt <wink>:

    lisp (setq a (+ a 1))

Other than that, the JPython hack looks cool too.
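As a side note, the global-keyword set is introspectable from Python itself; a minimal modern-Python sketch using the stdlib keyword module ("indexing" here is just the stmt-specific name floated in this thread, not a real keyword):

```python
import keyword

# Global keywords can never be used as identifiers...
assert keyword.iskeyword("for")
# ...but a stmt-specific name like "indexing" need not be one at all.
assert not keyword.iskeyword("indexing")
print(len(keyword.kwlist), "global keywords in this Python")
```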

Note that SSKs (stmt-specific keywords) present a new problem to colorizers
(or moral equivalents like font-lock), and to other tools that do more than
a trivial parse.

the-road-to-p3k-has-toll-booths-ly y'rs  - tim




From tim_one@email.msn.com  Fri Aug 11 01:44:08 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 10 Aug 2000 20:44:08 -0400
Subject: PEP praise (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCAECBGPAA.tim_one@email.msn.com>

[Ka-Ping Yee]
> ...
> Surely a PEP isn't required for a couple of built-in functions that
> are simple and well understood?  You can just call thumbs-up or
> thumbs-down and be done with it.

Only half of that is true, and even then only partially:  if the verdict is
thumbs-up, *almost* cool, except that newcomers delight in pestering "but
how come it wasn't done *my* way instead?".  You did a bit of that yourself
in your day, you know <wink>.  We're hoping the stream of newcomers never
ends, but the group of old-timers willing and able to take an hour or two to
explain the past in detail is actually dwindling (heck, you can count the
Python-Dev members chipping in on Python-Help with a couple of fingers, and
if anything fewer still active on c.l.py).

If it's thumbs-down, in the absence of a PEP it's much worse:  it will just
come back again, and again, and again, and again.  The sheer repetition in
these endlessly recycled arguments all but guarantees that most old-timers
ignore these threads completely.

A prime purpose of the PEPs is to be the community's collective memory, pro
or con, so I don't have to be <wink>.  You surely can't believe this is the
first time these particular functions have been pushed for core adoption!?
If not, why do we need to have the same arguments all over again?  It's not
because we're assholes, and neither because there's anything truly new here,
it's simply because a mailing list has no coherent memory.

Not so much as a comma gets changed in an ANSI or ISO std without an
elaborate pile of proposal paperwork and formal reviews.  PEPs are a very
lightweight mechanism compared to that.  And it would take you less time to
write a PEP for this than I alone spent reading the 21 msgs waiting for me
in this thread today.  Multiply the savings by billions <wink>.

world-domination-has-some-scary-aspects-ly y'rs  - tim




From Vladimir.Marangozov@inrialpes.fr  Fri Aug 11 02:59:30 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 03:59:30 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
Message-ID: <200008110159.DAA09540@python.inrialpes.fr>

I'm looking at preventing core dumps due to recursive calls. With
simple nested-call counters for every function in object.c, limited to
500 levels of recursion, I think this works okay for repr, str and
print. It solves most of the complaints, like:

class Crasher:
	def __str__(self): print self

print Crasher()

With such protection, instead of a core dump, we'll get an exception:

RuntimeError: Recursion too deep


So far, so good. 500 nested calls to repr, str or print are likely
to be programming bugs. Now I wonder whether it's a good idea to do
the same thing for getattr and setattr, to avoid crashes like:

class Crasher:
	def __getattr__(self, x): return self.x 

Crasher().bonk

Solving this the same way is likely to slow things down a bit, but
would prevent the crash. OTOH, in a complex object hierarchy with
tons of delegation and/or lookup dispatching, 500 nested calls is
probably not enough. Or am I wondering too much? Opinions?
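For illustration, here is roughly the failure mode and the exception we'd want instead, as it behaves on a modern CPython (where the generic interpreter limit does catch this case; RecursionError is the modern spelling of the RuntimeError discussed here):

```python
class Crasher:
    def __getattr__(self, name):
        return self.x   # looks "x" up again -> infinite delegation

try:
    Crasher().bonk
except RecursionError as exc:   # a RuntimeError subclass
    print("no core dump, just:", exc)
```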

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From bwarsaw@beopen.com  Fri Aug 11 04:00:32 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 10 Aug 2000 23:00:32 -0400 (EDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
References: <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com>
 <200008100036.TAA26235@cj20424-a.reston1.va.home.com>
Message-ID: <14739.27728.960099.342321@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

    GvR> Alas, I'm not sure how easy it will be.  The parser generator
    GvR> will probably have to be changed to allow you to indicate not
    GvR> to do a resword lookup at certain points in the grammar.  I
    GvR> don't know where to start. :-(

Yet another reason why it would be nice to (eventually) merge the
parsing technology in CPython and JPython.

i-don't-wanna-work-i-jes-wanna-bang-on-my-drum-all-day-ly y'rs,
-Barry


From MarkH@ActiveState.com  Fri Aug 11 07:15:00 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 11 Aug 2000 16:15:00 +1000
Subject: [Python-Dev] Patches and checkins for 1.6
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>

I would like a little guidance on how to handle patches during this 1.6
episode.

My understanding of CVS tells me that 1.6 has forked from the main
development tree.  Any work done in the 1.6 branch will need to also be
done in the main branch.  Is this correct?

If so, it means that all patches assigned to me need to be applied and
tested twice, which involves completely refetching the entire tree, and
rebuilding the world?

Given that 1.6 appears to be mainly an exercise in posturing by CNRI, is it
reasonable that I hold some patches off while I'm working with 1.6, and
check them in when I move back to the main branch?  Surely no one will
stick with 1.6 in the long (or even medium) term, once all active
development of that code ceases?

Of course, this wouldn't include critical bugs, but no one is mad enough to
assign them to me anyway <wink>

Confused-and-in-need-of-a-machine-upgrade ly,

Mark.



From tim_one@email.msn.com  Fri Aug 11 07:48:56 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 11 Aug 2000 02:48:56 -0400
Subject: [Python-Dev] Patches and checkins for 1.6
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIECPGPAA.tim_one@email.msn.com>

[Mark Hammond]
> I would like a little guidance on how to handle patches during this
> 1.6 episode.
>
> My understanding of CVS tells me that 1.6 has forked from the
> main development tree.  Any work done in the 1.6 branch will need
> to also be done in the main branch.  Is this correct?

Don't look at me -- I first ran screaming in terror from CVS tricks more
than a decade ago, and haven't looked back.  OTOH, I don't know of *any*
work done in the 1.6 branch yet that needs also to be done in the 2.0
branch.  Most of what Fred Drake has been doing is in the other direction,
and the rest has been fixing buglets unique to 1.6.

> If so, it means that all patches assigned to me need to be applied
> and tested twice, which involves completely refetching the entire
> tree, and rebuilding the world?

Patches with new features should *not* go into the 1.6 branch at all!  1.6
is meant to reflect only work that CNRI has clear claims to, plus whatever
bugfixes are needed to make that a good release.  Actual cash dollars for
Unicode development were funneled through CNRI, and that's why the Unicode
features are getting backstitched into it.  They're unique, though.

> Given that 1.6 appears to be mainly an exercise in posturing by
> CNRI,

Speaking on behalf of BeOpen PythonLabs, 1.6 is a milestone in Python
development, worthy of honor, praise and repeated downloading by all.  We at
BeOpen PythonLabs regret the unfortunate misconceptions that have arisen
about its true nature, and fully support CNRI's wise decision to force a
release of Python 1.6 in the public interest.

> is it reasonable that I hold some patches off while I'm working
> with 1.6, and check them in when I move back to the main branch?

I really don't know what you're doing.  If you find a bug in 1.6 that's also
a bug in 2.0, it should go without saying that we'd like that fixed ASAP in
2.0 as well.  But since that went without saying, and you seem to be saying
something else, I'm not sure what you're asking.  If you're asking whether
you're allowed to maximize your own efficiency, well, only Guido can force
you to do something self-damaging <wink>.

> Surely no one will stick with 1.6 in the long (or even
> medium) term, once all active development of that code ceases?

Active development of the 1.6 code has already ceased, far as I can tell.
Maybe some more Unicode patches?  Other than that, just bugfixes as needed.
It's down to a trickle.  We're aiming for a quick beta cycle on 1.6b1, and--
last I heard, and barring scads of fresh bug reports --intending to release
1.6 final next.  Then bugs opened against 1.6 will be answered by "fixed in
2.0".

> Of course, this wouldn't include critical bugs, but no one is mad
> enough to assign them to me anyway <wink>
>
> Confused-and-in-need-of-a-machine-upgrade ly,

And we'll buy you one, too, if you promise to use it to fix the test_fork1
family of bugs on SMP Linux boxes!

don't-forget-that-patches-to-1.6-still-need-cnri-release-forms!-
    and-that-should-clarify-everything-ly y'rs  - tim




From gstein@lyra.org  Fri Aug 11 08:07:29 2000
From: gstein@lyra.org (Greg Stein)
Date: Fri, 11 Aug 2000 00:07:29 -0700
Subject: [Python-Dev] Patches and checkins for 1.6
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 11, 2000 at 04:15:00PM +1000
References: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>
Message-ID: <20000811000729.M19525@lyra.org>

On Fri, Aug 11, 2000 at 04:15:00PM +1000, Mark Hammond wrote:
>...
> If so, it means that all patches assigned to me need to be applied and
> tested twice, which involves completely refetching the entire tree, and
> rebuilding the world?

Just fetch two trees.

c:\src16
c:\src20

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From mal@lemburg.com  Fri Aug 11 09:04:48 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 11 Aug 2000 10:04:48 +0200
Subject: [Python-Dev] Freezing unicode codecs.
References: <3993287a.1852013@smtp.worldonline.dk>
Message-ID: <3993B3A0.28500B22@lemburg.com>

Finn Bock wrote:
> 
> While porting the unicode API and the encoding modules to JPython I came
> across a problem which may also (or may not) exist in CPython.
> 
> jpythonc is a compiler for jpython which tries to track dependencies
> between modules in an attempt to detect which modules an application or
> applet uses. I have the impression that some of the freeze tools for
> CPython do something similar.
> 
> A call to unicode("abc", "cp1250") and "abc".encode("cp1250") will cause
> the encodings.cp1250 module to be loaded as a side effect. The freeze
> tools will have a hard time figuring this out by scanning the python
> source.
> 
> For JPython I'm leaning towards making it a requirement that the
> encodings must be loaded explicitly from somewhere in the application. Adding
> 
>    import encodings.cp1250
> 
> somewhere in the application will allow jpythonc to include this python
> module in the frozen application.
> 
> How does CPython solve this?

It doesn't. The design of the codec registry is such that it
uses search functions which then locate and load the codecs.
These search functions can implement whatever scheme they desire,
both for the lookup and for loading the codec, e.g. they
could get the data from a ZIP archive.

This design was chosen to allow drop-in configuration of the
Python codecs. Applications can easily add new codecs to the
registry by registering a new search function (and without
having to copy files into the encodings Lib subdir).
 
When it comes to making an application freezable, I'd suggest
adding explicit imports to some freeze support module in the
application. There are other occasions where this is needed
too, e.g. for packages using lazy import of modules such
as mx.DateTime.

This module would then make sure freeze.py finds the right
modules to include in its output.
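The registry still works this way in today's CPython; a small sketch of a drop-in search function (the alias name "my_rot13" is made up, and rot13 is just a convenient built-in codec to delegate to):

```python
import codecs

def search(name):
    # Called with the normalized codec name; return None so that
    # the next registered search function gets a chance.
    if name == "my_rot13":
        return codecs.lookup("rot13")
    return None

codecs.register(search)
print(codecs.encode("abc", "my_rot13"))   # -> 'nop'
```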

> PS. The latest release of the JPython errata have full unicode support
> and includes the "sre" module and unicode codecs.
> 
>     http://sourceforge.net/project/filelist.php?group_id=1842

Cool :-)
 
> regards,
> finn
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From nowonder@nowonder.de  Fri Aug 11 11:29:04 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Fri, 11 Aug 2000 10:29:04 +0000
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl>
Message-ID: <3993D570.7578FE71@nowonder.de>

After sleeping over it, I noticed that at least
BaseHTTPServer and ftplib also use a similar
algorithm to get a fully qualified domain name.

Together with smtplib there are four occurrences
of the algorithm (two in BaseHTTPServer). I think
it would be good to have one implementation
instead of four.
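For reference, the shared algorithm is small; a sketch of the common logic (close to what smtplib does, and to the socket.getfqdn that did eventually land in the stdlib; the function name here is just the one proposed in this thread):

```python
import socket

def get_fqdn(name=""):
    """Best-effort fully qualified domain name for name (default: this host)."""
    name = name.strip()
    if not name or name == "0.0.0.0":
        name = socket.gethostname()
    try:
        hostname, aliases, _ = socket.gethostbyaddr(name)
    except OSError:          # socket.error in the Python of this era
        return name
    # prefer the first name that looks fully qualified
    for candidate in [hostname] + aliases:
        if "." in candidate:
            return candidate
    return hostname
```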

First I thought it could be socket.get_fqdn(),
but it seems a bit troublesome to write it in C.

Should this go somewhere? If yes, where should
it go?

I'll happily prepare a patch as soon as I know
where to put it.

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug 11 09:40:08 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 11 Aug 2000 11:40:08 +0300 (IDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <3993D570.7578FE71@nowonder.de>
Message-ID: <Pine.GSO.4.10.10008111136390.27824-100000@sundial>

On Fri, 11 Aug 2000, Peter Schneider-Kamp wrote:

> First I thought it could be socket.get_fqdn(),
> but it seems a bit troublesome to write it in C.
> 
> Should this go somewhere?

Yes. We need some OnceAndOnlyOnce mentality here...

> If yes, where should
> it go?

Good question. You'll notice that SimpleHTTPServer imports shutil for
copyfileobj, because I had no good answer to a similar question. GS seems
to think "put it somewhere" is a good enough answer. I think I might
agree.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From barry@scottb.demon.co.uk  Fri Aug 11 12:42:11 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Fri, 11 Aug 2000 12:42:11 +0100
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008110159.DAA09540@python.inrialpes.fr>
Message-ID: <000401c00389$2fa577b0$060210ac@private>

Why not set a limit in the interpreter? Fixing this for every call in object.c
seems like a lot of hard work and will always leave holes.

For embedding Python, being able to control the recursion depth of the interpreter
is very useful. I would want to be able to set, from C, the max call depth limit
and the current call depth limit. I'd expect Python to set a min call depth limit.

		Barry


> -----Original Message-----
> From: python-dev-admin@python.org [mailto:python-dev-admin@python.org]On
> Behalf Of Vladimir Marangozov
> Sent: 11 August 2000 03:00
> To: Python core developers
> Subject: [Python-Dev] Preventing recursion core dumps
> 
> 
> 
> I'm looking at preventing core dumps due to recursive calls. With
> simple nested call counters for every function in object.c, limited to
> 500 levels deep recursions, I think this works okay for repr, str and
> print. It solves most of the complaints, like:
> 
> class Crasher:
> 	def __str__(self): print self
> 
> print Crasher()
> 
> With such protection, instead of a core dump, we'll get an exception:
> 
> RuntimeError: Recursion too deep
> 
> 
> So far, so good. 500 nested calls to repr, str or print are likely
> to be programming bugs. Now I wonder whether it's a good idea to do
> the same thing for getattr and setattr, to avoid crashes like:
> 
> class Crasher:
> 	def __getattr__(self, x): return self.x 
> 
> Crasher().bonk
> 
> Solving this the same way is likely to slow things down a bit, but
> would prevent the crash. OTOH, in a complex object hierarchy with
> tons of delegation and/or lookup dispatching, 500 nested calls is
> probably not enough. Or am I wondering too much? Opinions?
> 
> -- 
>        Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
> http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252
> 


From guido@beopen.com  Fri Aug 11 13:47:09 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 07:47:09 -0500
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: Your message of "Fri, 11 Aug 2000 03:59:30 +0200."
 <200008110159.DAA09540@python.inrialpes.fr>
References: <200008110159.DAA09540@python.inrialpes.fr>
Message-ID: <200008111247.HAA03687@cj20424-a.reston1.va.home.com>

> I'm looking at preventing core dumps due to recursive calls. With
> simple nested call counters for every function in object.c, limited to
> 500 levels deep recursions, I think this works okay for repr, str and
> print. It solves most of the complaints, like:
> 
> class Crasher:
> 	def __str__(self): print self
> 
> print Crasher()
> 
> With such protection, instead of a core dump, we'll get an exception:
> 
> RuntimeError: Recursion too deep
> 
> 
> So far, so good. 500 nested calls to repr, str or print are likely
> to be programming bugs. Now I wonder whether it's a good idea to do
> the same thing for getattr and setattr, to avoid crashes like:
> 
> class Crasher:
> 	def __getattr__(self, x): return self.x 
> 
> Crasher().bonk
> 
> Solving this the same way is likely to slow things down a bit, but
> would prevent the crash. OTOH, in a complex object hierarchy with
> tons of delegation and/or lookup dispatching, 500 nested calls is
> probably not enough. Or am I wondering too much? Opinions?

In your examples there's recursive Python code involved.  There's
*already* a generic recursion check for that, but the limit is too
high (the latter example segfaults for me too, while a simple def f():
f() gives a RuntimeError).

It seems better to tune the generic check than to special-case str,
repr, and getattr.
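That generic check is tunable from Python; a minimal sketch of what tuning it means in practice (the limit shown is illustrative, not any release's default, and modern Python spells the exception RecursionError):

```python
import sys

old = sys.getrecursionlimit()
sys.setrecursionlimit(100)      # illustrative value, far below the default

def f():
    f()

try:
    f()
except RecursionError:          # a RuntimeError subclass
    print("recursion checked, no segfault")
finally:
    sys.setrecursionlimit(old)  # restore the generic limit
```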

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Fri Aug 11 13:55:29 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 07:55:29 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi
 ng amount of data sent.
Message-ID: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>

I just noticed this.  Is this true?  Shouldn't we change send() to
raise an error instead of returning a small number?  (The number of
bytes written can be an attribute of the exception.)

Don't look at me for implementing this, sorry, no time...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

------- Forwarded Message

Date:    Thu, 10 Aug 2000 16:39:48 -0700
From:    noreply@sourceforge.net
To:      scott@chronis.pobox.com, 0@delerium.i.sourceforge.net,
	 python-bugs-list@python.org
Subject: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi
	  ng amount of data sent.

Bug #111620, was updated on 2000-Aug-10 16:39
Here is a current snapshot of the bug.

Project: Python
Category: Library
Status: Open
Resolution: None
Bug Group: None
Priority: 5
Summary: lots of use of send() without verifying amount of data sent.

Details: a quick grep of the standard python library (below) shows that there
is lots of unchecked use of the send() function.  Every unix system I've ever
used states that send() returns the number of bytes sent, which can be <
length(<string>).  Using socket.send(s) without verifying that the return
value is equal to the length of s is careless and can result in loss of data.

I just submitted a patch for smtplib's use of send(), have patched a piece of
Zope the same way, and get the feeling that it's becoming standard to call
send() without checking that the amount of data sent is the intended amount.
While this is OK for a quick script, I don't feel it's OK for library code or
anything that might be used in production.

scott

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=111620&group_id=5470

_______________________________________________
Python-bugs-list maillist  -  Python-bugs-list@python.org
http://www.python.org/mailman/listinfo/python-bugs-list

------- End of Forwarded Message



From gmcm@hypernet.com  Fri Aug 11 13:32:44 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 08:32:44 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of d
In-Reply-To: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>
Message-ID: <1246125329-123433164@hypernet.com>

[bug report] 
> Details: a quick grep of the standard python library (below)
> shows that there is lots of unchecked use of the send() 
> function.
[Guido]
> I just noticed this.  Is this true?  Shouldn't we change send()
> to raise an error instead of returning a small number?  (The
> number of bytes written can be an attribute of the exception.)

No way! You'd break 90% of my sockets code! People who 
don't want to code proper sends / recvs can use that sissy 
makefile junk.

- Gordon


From thomas@xs4all.net  Fri Aug 11 13:31:43 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 14:31:43 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of data sent.
In-Reply-To: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 11, 2000 at 07:55:29AM -0500
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>
Message-ID: <20000811143143.G17171@xs4all.nl>

On Fri, Aug 11, 2000 at 07:55:29AM -0500, Guido van Rossum wrote:

> I just noticed this.  Is this true?  Shouldn't we change send() to
> raise an error instead of returning a small number?  (The number of
> bytes written can be an attribute of the exception.)

This would break a lot of code. (probably all that use send, with or without
return-code checking.) I would propose a 'send_all' or some such instead,
which would keep sending until either a real error occurs, or all data is
sent (possibly with a timeout ?). And most uses of send could be replaced by
send_all, both in the std. library and in user code.
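A sketch of what such a send_all could look like (the name and the zero-return check are assumptions; this is essentially the loop that later shipped as socket.sendall):

```python
def send_all(sock, data):
    """Keep calling send() until every byte is out or an error is raised."""
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise OSError("socket connection broken")
        total += sent
    return total
```

Most call sites could then swap sock.send(data) for send_all(sock, data) without other changes.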

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Vladimir.Marangozov@inrialpes.fr  Fri Aug 11 13:39:36 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 14:39:36 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111247.HAA03687@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 11, 2000 07:47:09 AM
Message-ID: <200008111239.OAA15818@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> It seems better to tune the generic check than to special-case str,
> repr, and getattr.

Right. This would be a step forward, at least for recursive Python code
(which is the most common complaint).  Reducing the current value
by half, i.e. setting MAX_RECURSION_DEPTH = 5000 works for me (Linux & AIX)

Agreement on 5000?

Doesn't solve the problem for C code (extensions) though...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Vladimir.Marangozov@inrialpes.fr  Fri Aug 11 14:19:38 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 15:19:38 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <000401c00389$2fa577b0$060210ac@private> from "Barry Scott" at Aug 11, 2000 12:42:11 PM
Message-ID: <200008111319.PAA16192@python.inrialpes.fr>

Barry Scott wrote:
> 
> Why not set a limit in the interpreter? Fixing this for every call in object.c
> seems like a lot of hard work and will always leave holes.

Indeed.

> 
> For embedding Python, being able to control the recursion depth of the
> interpreter is very useful. I would want to be able to set, from C, the
> max call depth limit and the current call depth limit.

Except exporting MAX_RECURSION_DEPTH as a variable (Py_MaxRecursionDepth)
I don't see what you mean by current call depth limit.

> I'd expect Python to set a min call depth limit.

I don't understand this. Could you elaborate?
Are you implying the introduction of a public function
(ex. Py_SetRecursionDepth) that does some value checks?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From paul@prescod.net  Fri Aug 11 14:19:05 2000
From: paul@prescod.net (Paul Prescod)
Date: Fri, 11 Aug 2000 08:19:05 -0500
Subject: [Python-Dev] Lockstep iteration - eureka!
References: Your message of "Wed, 09 Aug 2000 02:37:07 MST."
 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <3993FD49.C7E71108@prescod.net>

Just van Rossum wrote:
> 
> ...
>
>        for <index> indexing <element> in <seq>:
>            ...

 
Let me throw out another idea. What if sequences just had .items()
methods?

j=range(0,10)

for index, element in j.items():
    ...

While we wait for the sequence "base class" we could provide helper
functions that make the implementation of both eager and lazy versions
easier.
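A lazy helper along these lines is easy to sketch (the function name items is the one from the proposal; the pairing itself is what later shipped as the built-in enumerate):

```python
def items(seq):
    # lazy (index, element) pairs for any iterable
    i = 0
    for element in seq:
        yield (i, element)
        i += 1

for index, element in items(range(0, 10)):
    print(index, element)
```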

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"


From guido@beopen.com  Fri Aug 11 15:19:33 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 09:19:33 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of data sent.
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:31:43 +0200."
 <20000811143143.G17171@xs4all.nl>
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>
 <20000811143143.G17171@xs4all.nl>
Message-ID: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>

> > I just noticed this.  Is this true?  Shouldn't we change send() to
> > raise an error instead of returning a small number?  (The number of
> > bytes written can be an attribute of the exception.)
> 
> This would break a lot of code. (probably all that use send, with or without
> return-code checking.) I would propose a 'send_all' or some such instead,
> which would keep sending until either a real error occurs, or all data is
> sent (possibly with a timeout ?). And most uses of send could be replaced by
> send_all, both in the std. library and in user code.

Really?!?!

I just read the man page for send() (Red Hat linux 6.1) and it doesn't
mention sending fewer than all bytes at all.  In fact, while it says
that the return value is the number of bytes sent, it at least
*suggests* that it will return an error whenever not everything can be
sent -- even in non-blocking mode.

Under what circumstances can send() return a smaller number?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From paul@prescod.net  Fri Aug 11 14:25:27 2000
From: paul@prescod.net (Paul Prescod)
Date: Fri, 11 Aug 2000 08:25:27 -0500
Subject: [Python-Dev] Winreg update
Message-ID: <3993FEC7.4E38B4F1@prescod.net>

I am in transit so I don't have time for a lot of back and forth email
relating to winreg. It also seems that there are a lot of people (let's
call them "back seat coders") who have vague ideas of what they want but
don't want to spend a bunch of time in a long discussion about registry
arcana. Therefore I am endeavouring to make it as easy and fast to
contribute to the discussion as possible. 

I'm doing this through a Python Module Proposal format. This can also
serve as the basis of documentation.

This is really easy so I want
some real feedback this time. Distutils people, this means you! Mark! I
would love to hear from Bill Tutt, Greg Stein and anyone else who claims some
knowledge of Windows!

If you're one of the people who has asked for winreg in the core then
you should respond. It isn't (IMO) sufficient to put in a hacky API to
make your life easier. You need to give something to get something. You
want windows registry support in the core -- fine, let's do it properly.

Even people with a minimal understanding of the registry should be able
to contribute: the registry isn't rocket surgery. I'll include a short
primer in this email.

All you need to do is read this email and comment on whether you agree
with the overall principle and then give your opinion on fifteen
possibly controversial issues. The "overall principle" is to steal
shamelessly from Microsoft's new C#/VB/OLE/ActiveX/CLR API instead of
innovating for Python. That allows us to avoid starting the debate from
scratch. It also eliminates the feature that Mark complained about
(which was a Python-specific innovation).

The fifteen issues are mostly extensions to the API to make it easier
(convenience extensions) or more powerful (completeness extensions).
Many of them are binary: "do this, don't do that." Others are choices:
e.g. "Use tuples", "Use lists", "Use an instance".

I will try to make sense of the various responses. Some issues will have
strong consensus and I'll close those quickly. Others will require more
(hopefully not much!) discussion.

Windows Registry Primer:
========================

There are things called "keys". They aren't like Python dictionary
keys, so don't think of them that way. Keys have a set of subkeys
indexed by name.
Keys also have a list of "values". Values have names. Every value has a
type. In some type-definition syntax:

key is (name: string, 
     subkeys: (string : key), 
     values: (string : value ))

value is ( name: string,
       type: enumeration,
       data: (depends on enumeration) )

That's the basic model. There are various helper facilities provided by
the APIs, but really, the model is as above.
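
The model can also be sketched as plain Python data. This is an
illustrative in-memory analogue only, not the proposed API:

```python
# Illustrative in-memory analogue of the model above: a key has a name,
# named subkeys (which are themselves keys), and named, typed values.

def make_key(name):
    return {"name": name, "subkeys": {}, "values": {}}

def make_value(name, typename, data):
    # typename stands in for a member of the REG_* type enumeration
    return {"name": name, "type": typename, "data": data}

root = make_key("HKEY_LOCAL_MACHINE")
software = make_key("SOFTWARE")
root["subkeys"]["SOFTWARE"] = software
software["values"]["Version"] = make_value("Version", "REG_SZ", "2.0")
```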

=========================================================================
Python Module Proposal
Title: Windows registry
Version: $Revision: 1.0$
Owner: paul@prescod.net (Paul Prescod)
Python-Version: 2.0
Status: Incomplete

Overview

    It is convenient for Windows users to know that a Python module to
    access the registry is always available whenever Python is installed
    on Windows.  This is especially useful for installation programs.
    There is a Windows registry module from the win32 extensions to
    Python. It is based directly on the original Microsoft APIs. This
    means that there are many backwards compatibility hacks, "reserved"
    parameters and other legacy features that are not interesting to
    most Python programmers. Microsoft is moving to a higher-level API
    for languages other than C, as part of Microsoft's Common Language
    Runtime (CLR) initiative. This newer, higher-level API serves as
    the basis for the module described herein.

    This higher level API would be implemented in Python and based upon 
    the low-level API. They would not be in competition: a user would 
    choose based on their preferences and needs.

Module Exports

    These are taken directly from the Common Language Runtime:

    ClassesRoot     The Windows Registry base key HKEY_CLASSES_ROOT.
    CurrentConfig   The Windows Registry base key HKEY_CURRENT_CONFIG.
    CurrentUser     The Windows Registry base key HKEY_CURRENT_USER.
    LocalMachine    The Windows Registry base key HKEY_LOCAL_MACHINE.
    DynData         The Windows Registry base key HKEY_DYN_DATA.
    PerformanceData The Windows Registry base key HKEY_PERFORMANCE_DATA.
    Users           The Windows Registry base key HKEY_USERS.

    RegistryKey     Registry key class (important class in module)

RegistryKey class Data Members

    These are taken directly from the Common Language Runtime:

    Name            Retrieves the name of the key. 
                    [Issue: full path or just name within parent?]
    SubKeyCount     Retrieves the count of subkeys.
    ValueCount      Retrieves the count of values in the key.

RegistryKey Methods

    These are taken directly from the Common Language Runtime:

    Close()
        Closes this key and flushes it to disk if the contents have 
        been modified.

    CreateSubKey( subkeyname )
        Creates a new subkey or opens an existing subkey.

     [Issue: SubKey_full_path]: Should it be possible to create a
        subkey deeply:

        >>> LocalMachine.CreateSubKey( r"foo\bar\baz" )

        Presumably the result of this issue would also apply to every
        other method that takes a subkey parameter.

        It is not clear what the CLR API says yet (Mark?). If it says
        "yes" then we would follow it of course. If it says "no" then
        we could still consider the feature as an extension.

       [Yes] allow subkey parameters to be full paths
       [No]  require them to be a single alphanumeric name, no slashes
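
        If the [Yes] option is chosen, a deep CreateSubKey could be
        reduced to repeated single-name creation by splitting on
        backslashes. A hypothetical sketch against an in-memory
        stand-in (not the real RegistryKey class):

```python
# Hypothetical sketch: reduce a deep CreateSubKey to repeated
# single-name creation by splitting the subkey path on backslashes.
# "tree" is an in-memory stand-in for a key's subkey mapping.

def create_subkey_deep(tree, path):
    for name in path.split("\\"):
        tree = tree.setdefault(name, {})
    return tree

registry = {}
create_subkey_deep(registry, r"foo\bar\baz")
```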

    DeleteSubKey( subkeyname )
        Deletes the specified subkey. To delete subkeys and all their 
        children (recursively), use DeleteSubKeyTree.

    DeleteSubKeyTree( subkeyname )
        Recursively deletes a subkey and any child subkeys. 

    DeleteValue( valuename )
        Deletes the specified value from this key.

    __cmp__( other )
        Determines whether the specified key is the same key as the
        current key.

    GetSubKeyNames()
        Retrieves an array of strings containing all the subkey names.

    GetValue( valuename )
        Retrieves the specified value.

     Registry types are converted according to the following table:

         REG_NONE: None
         REG_SZ: UnicodeType
         REG_MULTI_SZ: [UnicodeType, UnicodeType, ...]
         REG_DWORD: IntegerType
         REG_DWORD_LITTLE_ENDIAN: IntegerType
         REG_DWORD_BIG_ENDIAN: IntegerType
         REG_EXPAND_SZ: Same as REG_SZ
         REG_RESOURCE_LIST: Same as REG_BINARY
         REG_FULL_RESOURCE_DESCRIPTOR: Same as REG_BINARY
         REG_RESOURCE_REQUIREMENTS_LIST: Same as REG_BINARY
         REG_LINK: Same as REG_BINARY??? [Issue: more info needed!]

         REG_BINARY: StringType or array.array( 'c' )
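
     As a rough illustration, the table could be implemented like
     this. A hedged sketch only: the REG_* types are plain strings
     here, and the binary-like types pass through unconverted:

```python
# Hedged sketch of the conversion table above.  Registry types are
# represented as strings; binary-like types pass through unchanged.

def convert_reg_data(regtype, raw):
    if regtype == "REG_NONE":
        return None
    if regtype in ("REG_SZ", "REG_EXPAND_SZ"):
        return str(raw)
    if regtype == "REG_MULTI_SZ":
        return [str(s) for s in raw]
    if regtype in ("REG_DWORD", "REG_DWORD_LITTLE_ENDIAN",
                   "REG_DWORD_BIG_ENDIAN"):
        return int(raw)
    # REG_BINARY, REG_LINK and the resource types: no conversion
    return raw
```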

     [Issue: REG_BINARY Representation]:
         How should binary data be represented as Python data?

         [String] The win32 module uses "string".
         [Array] I propose that an array of bytes would be better.

         One benefit of the array representation is that it allows
         SetValue to detect string data as REG_SZ and array.array('c')
         data as REG_BINARY.

    [Issue: Type getting method]
         Should there be a companion method called GetType that fetches 
         the type of a registry value? Otherwise client code would not
         be able to distinguish between (e.g.) REG_SZ and
         REG_EXPAND_SZ.

         [Yes] Add GetType( string )
         [No]  Do not add GetType

    GetValueNames()
        Retrieves a list of strings containing all the value names.

    OpenRemoteBaseKey( machinename, name )
        Opens a new RegistryKey that represents the requested key on a 
        foreign machine.

    OpenSubKey( subkeyname )
        Retrieves a subkey.

    SetValue( valuename, value )
        Sets the specified value.

        Types are automatically mapped according to the following
        algorithm:

          None: REG_NONE
          StringType: REG_SZ
          UnicodeType: REG_SZ
          [UnicodeType, UnicodeType, ...]: REG_MULTI_SZ
          [StringType, StringType, ...]: REG_MULTI_SZ
          IntegerType: REG_DWORD
          array.array('c'): REG_BINARY
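
        The mapping algorithm might look like this. A hedged sketch:
        the REG_* names are plain strings rather than real module
        constants, and any array.array is treated as binary (the
        proposal uses typecode 'c'):

```python
import array

# Hedged sketch of the automatic type mapping described above.

def guess_reg_type(value):
    if value is None:
        return "REG_NONE"
    if isinstance(value, str):
        return "REG_SZ"
    if isinstance(value, array.array):
        return "REG_BINARY"
    if isinstance(value, int):
        return "REG_DWORD"
    if isinstance(value, list) and all(isinstance(s, str) for s in value):
        return "REG_MULTI_SZ"
    raise TypeError("no registry type mapping for %r" % (value,))
```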

       [Issue: OptionalTypeParameter]

          Should there be an optional parameter that allows you to
          specify the type explicitly? Presume that the types are
          constants in the winreg module (perhaps strings or
          integers).

          [Yes] Allow other types to be specified
          [No]  People who want more control should use the underlying 
                win32 module.

Proposed Extensions

    The API above is a direct transliteration of the .NET API. It is
    somewhat underpowered in some senses and also is not entirely
    Pythonic. It is a good start as a basis for consensus, however,
    and these proposed extensions can be voted up or down individually.

    Two extensions are just the convenience functions (OpenRemoteKey
    and the top-level functions). Other extensions attempt to extend
    the API to support ALL features of the underlying API so that users
    never have to switch from one API to another to get a particular
    feature.

    Convenience Extension: OpenRemoteKey

        It is not clear to me why Microsoft restricts remote key opening
        to base keys. Why does it not allow a full path like this:

        >>> winreg.OpenRemoteKey( "machinename", 
                             r"HKEY_LOCAL_MACHINE\SOFTWARE\Python" )

        [Issue: Add_OpenRemoteKey]:
              [Yes] add OpenRemoteKey
              [No]  do not add

        [Issue: Remove_OpenRemoteBaseKey]
              [Remove] It's redundant!
              [Retain] For backwards compatibility

    Convenience Extension: Top-level Functions

        A huge number of registry-manipulating programs treat the
        registry namespace as "flat" and go directly to the interesting
        registry key.  These top-level functions allow the Python user
        to skip the OO key objects and go directly to what
        they want:

        key=OpenKey( keypath, machinename=None )
        key=CreateKey( keypath, machinename=None )
        DeleteKey( keypath, machinename=None )
        val=GetValue( keypath, valname, machinename=None )
        SetValue( keypath, valname, valdata, machinename=None )

        [Yes] Add these functions
        [No] Do not add
        [Variant] I like the idea but would change the function
                  signatures
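
        One way the flat key paths could work: split off the base-key
        name and delegate to the corresponding base key. A minimal
        sketch, with the base keys as stand-in strings rather than
        real RegistryKey objects:

```python
# Hypothetical sketch: resolve a flat key path into a base key plus a
# subkey path, as the proposed top-level functions would need to do.

BASE_KEYS = {
    "HKEY_CLASSES_ROOT": "ClassesRoot",
    "HKEY_CURRENT_CONFIG": "CurrentConfig",
    "HKEY_CURRENT_USER": "CurrentUser",
    "HKEY_LOCAL_MACHINE": "LocalMachine",
    "HKEY_USERS": "Users",
}

def split_keypath(keypath):
    parts = keypath.split("\\", 1)
    base = BASE_KEYS[parts[0]]      # KeyError for an unknown base key
    subpath = parts[1] if len(parts) > 1 else ""
    return base, subpath
```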


    Completeness Extension: Type names

        If the type extensions are added to SetValue and GetValue then
        we need to decide how to represent types. It is fairly clear
        that they should be represented as constants in the module. The
        names of those constants could be the cryptic (but standard)
        Microsoft names or more descriptive, conventional names.

	Microsoft Names:

            REG_NONE
            REG_SZ
            REG_EXPAND_SZ
            REG_BINARY
            REG_DWORD
            REG_DWORD_LITTLE_ENDIAN
            REG_DWORD_BIG_ENDIAN
            REG_LINK
            REG_MULTI_SZ
            REG_RESOURCE_LIST
            REG_FULL_RESOURCE_DESCRIPTOR
            REG_RESOURCE_REQUIREMENTS_LIST

	Proposed Descriptive Names:

            NONE
            STRING
            EXPANDABLE_TEMPLATE_STRING
            BINARY_DATA
            INTEGER
            LITTLE_ENDIAN_INTEGER
            BIG_ENDIAN_INTEGER
            LINK
            STRING_LIST
            RESOURCE_LIST
            FULL_RESOURCE_DESCRIPTOR
            RESOURCE_REQUIREMENTS_LIST
             
        We could also allow both. One set would be aliases for the
        other.

        [Issue: TypeNames]:
            [MS Names]: Use the Microsoft names
            [Descriptive Names]: Use the more descriptive names
            [Both]: Use both

    Completeness Extension: Type representation

        No matter what the types are called, they must have values.

	The simplest thing would be to use the integers provided by the
	Microsoft header files.  Unfortunately integers are not at all
	self-describing so getting from the integer value to something
	human readable requires some sort of switch statement or mapping.
 
        An alternative is to use strings and map them internally to the 
        Microsoft integer constants.

        A third option is to use object instances. These instances would
        be useful for introspection and would have the following 
        attributes:

            msname (e.g. REG_SZ)
            friendlyname (e.g. String)
            msinteger (e.g. 6 )

        They would have only the following method:

            def __repr__( self ):
                "Return a useful representation of the type object"
                return "<RegType %d: %s %s>" % \
                  (self.msinteger, self.msname, self.friendlyname )

        A final option is a tuple with the three attributes described
        above.
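
        For concreteness, the instance option might look like this
        (a sketch; 1 is the standard Microsoft integer for REG_SZ):

```python
# Sketch of the "object instances" option: a small type object with
# the three introspective attributes described above.

class RegType:
    def __init__(self, msname, friendlyname, msinteger):
        self.msname = msname
        self.friendlyname = friendlyname
        self.msinteger = msinteger

    def __repr__(self):
        "Return a useful representation of the type object"
        return "<RegType %d: %s %s>" % (
            self.msinteger, self.msname, self.friendlyname)

REG_SZ = RegType("REG_SZ", "String", 1)   # 1 is the standard MS value
```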

        [Issue: Type_Representation]:
            [Integers]: Use Microsoft integers
            [Strings]: Use string names
            [Instances]: Use object instances with three introspective 
                         attributes
            [Tuples]: Use 3-tuples

    Completeness Extension: Type Namespace

        Should the types be declared at the top level of the module
        (and thus show up in a "dir" or "from winreg import *"), or
        should they live in their own dictionary, perhaps called
        "types" or "regtypes"? They could also be attributes of some
        instance.

        [Issue: Type_Namespace]:
            [Module]: winreg.REG_SZ
            [Dictionary]: winreg.types["REG_SZ"]
            [Instance]: winreg.types.REG_SZ

    Completeness Extension: Saving/Loading Keys

        The underlying win32 registry API allows the loading and saving
        of keys to filenames. Therefore these could be implemented
        easily as methods:

            def save( self, filename ):
                "Save a key to a filename"
                _winreg.SaveKey( self.handle, filename )

            def load( self, subkey, filename ):
                "Load a key from a filename"
                return _winreg.LoadKey( self.handle, subkey,
                                        filename )

            >>> key.OpenSubKey("Python").save( "Python.reg" )
            >>> key.load( "Python", "Python.reg" )

        [Issue: Save_Load_Keys]
            [Yes] Support the saving and loading of keys
            [No]  Do not add these methods

    Completeness Extension: Security Access Flags

        The underlying win32 registry API allows security flags to be
        applied to the OpenSubKey method. The flags are:

             "KEY_ALL_ACCESS"
             "KEY_CREATE_LINK"
             "KEY_CREATE_SUB_KEY"
             "KEY_ENUMERATE_SUB_KEYS"
             "KEY_EXECUTE"
             "KEY_NOTIFY"
             "KEY_QUERY_VALUE"
             "KEY_READ"
             "KEY_SET_VALUE"

        These are not documented in the underlying API but should be for
        this API. This documentation would be derived from the Microsoft
        documentation. They would be represented as integer or string
        constants in the Python API and used something like this:

        key=key.OpenSubKey( subkeyname, winreg.KEY_READ )

        [Issue: Security_Access_Flags]
             [Yes] Allow the specification of security access flags.
             [No]  Do not allow this specification.

        [Issue: Security_Access_Flags_Representation]
             [Integer] Use the Microsoft integers
             [String]  Use string values
             [Tuples] Use (string, integer) tuples
             [Instances] Use instances with "name", "msinteger"
                         attributes

        [Issue: Security_Access_Flags_Location]
             [Top-Level] winreg.KEY_READ
             [Dictionary] winreg.flags["KEY_READ"]
             [Instance] winreg.flags.KEY_READ

    Completeness Extension: Flush

        The underlying win32 registry API has a flush method for keys.
        The documentation is as follows:

            """Writes all the attributes of a key to the registry.

            It is not necessary to call RegFlushKey to change a key.
            Registry changes are flushed to disk by the registry using
            its lazy flusher.  Registry changes are also flushed to
            disk at system shutdown.  Unlike CloseKey(), the
            FlushKey() method returns only when all the data has been
            written to the registry.  An application should only call
            FlushKey() if it requires absolute certainty that registry
            changes are on disk."""

    If all completeness extensions are implemented, the author believes
    that this API will be as complete as the underlying API so
    programmers can choose which to use based on familiarity rather 
    than feature-completeness.


-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"


From guido@beopen.com  Fri Aug 11 15:28:09 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 09:28:09 -0500
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:39:36 +0200."
 <200008111239.OAA15818@python.inrialpes.fr>
References: <200008111239.OAA15818@python.inrialpes.fr>
Message-ID: <200008111428.JAA04464@cj20424-a.reston1.va.home.com>

> > It seems better to tune the generic check than to special-case str,
> > repr, and getattr.
> 
> Right. This would be a step forward, at least for recursive Python code
> (which is the most common complaint).  Reducing the current value
> by half, i.e. setting MAX_RECURSION_DEPTH = 5000 works for me (Linux & AIX)
> 
> Agreement on 5000?

No, the __getattr__ example still dumps core for me.  With 4000 it is
fine, but this indicates that this is totally the wrong approach: I
can change the available stack size with ulimit -s and cause a core
dump anyway.  Or there could be a longer path through the C code where
more C stack is used per recursion.

We could set the maximum to 1000 and assume a "reasonable" stack size,
but that doesn't make me feel comfortable either.

It would be good if there were a way to sense the remaining available
stack space, even if it isn't portable.  Any Linux experts out there?

> Doesn't solve the problem for C code (extensions) though...

That wasn't what started this thread.  Bugs in extensions are just that.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From gvwilson@nevex.com  Fri Aug 11 14:39:38 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Fri, 11 Aug 2000 09:39:38 -0400 (EDT)
Subject: [Python-Dev] PEP 0211: Linear Algebra Operators
Message-ID: <Pine.LNX.4.10.10008110936390.13482-200000@akbar.nevex.com>


--168427786-36668808-966001178=:13482
Content-Type: TEXT/PLAIN; charset=US-ASCII

Hi, everyone.  Please find attached the latest version of PEP-0211,
"Adding New Linear Algebra Operators to Python".  As I don't have write
access to the CVS repository, I'd be grateful if someone could check this
in for me.  Please send comments directly to me (gvwilson@nevex.com); I'll
summarize, update the PEP, and re-post.

Thanks,
Greg

--168427786-36668808-966001178=:13482
Content-Type: TEXT/PLAIN; charset=US-ASCII; name="pep-0211.txt"
Content-Transfer-Encoding: BASE64
Content-ID: <Pine.LNX.4.10.10008110939380.13482@akbar.nevex.com>
Content-Description: 
Content-Disposition: attachment; filename="pep-0211.txt"

UEVQOiAyMTENClRpdGxlOiBBZGRpbmcgTmV3IExpbmVhciBBbGdlYnJhIE9w
ZXJhdG9ycyB0byBQeXRob24NClZlcnNpb246ICRSZXZpc2lvbiQNCk93bmVy
OiBndndpbHNvbkBuZXZleC5jb20gKEdyZWcgV2lsc29uKQ0KUHl0aG9uLVZl
cnNpb246IDIuMQ0KQ3JlYXRlZDogMTUtSnVsLTIwMDANClN0YXR1czogRHJh
ZnQNClBvc3QtSGlzdG9yeToNCg0KDQpJbnRyb2R1Y3Rpb24NCg0KICAgIFRo
aXMgUEVQIGRlc2NyaWJlcyBhIHByb3Bvc2FsIHRvIGFkZCBsaW5lYXIgYWxn
ZWJyYSBvcGVyYXRvcnMgdG8NCiAgICBQeXRob24gMi4wLiAgSXQgZGlzY3Vz
c2VzIHdoeSBzdWNoIG9wZXJhdG9ycyBhcmUgZGVzaXJhYmxlLCBhbmQNCiAg
ICBhbHRlcm5hdGl2ZXMgdGhhdCBoYXZlIGJlZW4gY29uc2lkZXJlZCBhbmQg
ZGlzY2FyZGVkLiAgVGhpcyBQRVANCiAgICBzdW1tYXJpemVzIGRpc2N1c3Np
b25zIGhlbGQgaW4gbWFpbGluZyBsaXN0IGZvcnVtcywgYW5kIHByb3ZpZGVz
DQogICAgVVJMcyBmb3IgZnVydGhlciBpbmZvcm1hdGlvbiwgd2hlcmUgYXBw
cm9wcmlhdGUuICBUaGUgQ1ZTIHJldmlzaW9uDQogICAgaGlzdG9yeSBvZiB0
aGlzIGZpbGUgY29udGFpbnMgdGhlIGRlZmluaXRpdmUgaGlzdG9yaWNhbCBy
ZWNvcmQuDQoNCg0KUHJvcG9zYWwNCg0KICAgIEFkZCBhIHNpbmdsZSBuZXcg
aW5maXggYmluYXJ5IG9wZXJhdG9yICdAJyAoImFjcm9zcyIpLCBhbmQNCiAg
ICBjb3JyZXNwb25kaW5nIHNwZWNpYWwgbWV0aG9kcyAiX19hY3Jvc3NfXygp
IiBhbmQgIl9fcmFjcm9zc19fKCkiLg0KICAgIFRoaXMgb3BlcmF0b3Igd2ls
bCBwZXJmb3JtIG1hdGhlbWF0aWNhbCBtYXRyaXggbXVsdGlwbGljYXRpb24g
b24NCiAgICBOdW1QeSBhcnJheXMsIGFuZCBnZW5lcmF0ZSBjcm9zcy1wcm9k
dWN0cyB3aGVuIGFwcGxpZWQgdG8gYnVpbHQtaW4NCiAgICBzZXF1ZW5jZSB0
eXBlcy4gIE5vIGV4aXN0aW5nIG9wZXJhdG9yIGRlZmluaXRpb25zIHdpbGwg
YmUgY2hhbmdlZC4NCg0KDQpCYWNrZ3JvdW5kDQoNCiAgICBDb21wdXRlcnMg
d2VyZSBpbnZlbnRlZCB0byBkbyBhcml0aG1ldGljLCBhcyB3YXMgdGhlIGZp
cnN0DQogICAgaGlnaC1sZXZlbCBwcm9ncmFtbWluZyBsYW5ndWFnZSwgRm9y
dHJhbi4gIFdoaWxlIEZvcnRyYW4gd2FzIGENCiAgICBncmVhdCBhZHZhbmNl
IG9uIGl0cyBtYWNoaW5lLWxldmVsIHByZWRlY2Vzc29ycywgdGhlcmUgd2Fz
IHN0aWxsIGENCiAgICB2ZXJ5IGxhcmdlIGdhcCBiZXR3ZWVuIGl0cyBzeW50
YXggYW5kIHRoZSBub3RhdGlvbiB1c2VkIGJ5DQogICAgbWF0aGVtYXRpY2lh
bnMuICBUaGUgbW9zdCBpbmZsdWVudGlhbCBlZmZvcnQgdG8gY2xvc2UgdGhp
cyBnYXAgd2FzDQogICAgQVBMIFsxXToNCg0KICAgICAgICBUaGUgbGFuZ3Vh
Z2UgW0FQTF0gd2FzIGludmVudGVkIGJ5IEtlbm5ldGggRS4gSXZlcnNvbiB3
aGlsZSBhdA0KICAgICAgICBIYXJ2YXJkIFVuaXZlcnNpdHkuIFRoZSBsYW5n
dWFnZSwgb3JpZ2luYWxseSB0aXRsZWQgIkl2ZXJzb24NCiAgICAgICAgTm90
YXRpb24iLCB3YXMgZGVzaWduZWQgdG8gb3ZlcmNvbWUgdGhlIGluaGVyZW50
IGFtYmlndWl0aWVzDQogICAgICAgIGFuZCBwb2ludHMgb2YgY29uZnVzaW9u
IGZvdW5kIHdoZW4gZGVhbGluZyB3aXRoIHN0YW5kYXJkDQogICAgICAgIG1h
dGhlbWF0aWNhbCBub3RhdGlvbi4gSXQgd2FzIGxhdGVyIGRlc2NyaWJlZCBp
biAxOTYyIGluIGENCiAgICAgICAgYm9vayBzaW1wbHkgdGl0bGVkICJBIFBy
b2dyYW1taW5nIExhbmd1YWdlIiAoaGVuY2UgQVBMKS4NCiAgICAgICAgVG93
YXJkcyB0aGUgZW5kIG9mIHRoZSBzaXh0aWVzLCBsYXJnZWx5IHRocm91Z2gg
dGhlIGVmZm9ydHMgb2YNCiAgICAgICAgSUJNLCB0aGUgY29tcHV0ZXIgY29t
bXVuaXR5IGdhaW5lZCBpdHMgZmlyc3QgZXhwb3N1cmUgdG8NCiAgICAgICAg
QVBMLiBJdmVyc29uIHJlY2VpdmVkIHRoZSBUdXJpbmcgQXdhcmQgaW4gMTk4
MCBmb3IgdGhpcyB3b3JrLg0KDQogICAgQVBMJ3Mgb3BlcmF0b3JzIHN1cHBv
cnRlZCBib3RoIGZhbWlsaWFyIGFsZ2VicmFpYyBvcGVyYXRpb25zLCBzdWNo
DQogICAgYXMgdmVjdG9yIGRvdCBwcm9kdWN0IGFuZCBtYXRyaXggbXVsdGlw
bGljYXRpb24sIGFuZCBhIHdpZGUgcmFuZ2UNCiAgICBvZiBzdHJ1Y3R1cmFs
IG9wZXJhdGlvbnMsIHN1Y2ggYXMgc3RpdGNoaW5nIHZlY3RvcnMgdG9nZXRo
ZXIgdG8NCiAgICBjcmVhdGUgYXJyYXlzLiAgSXRzIG5vdGF0aW9uIHdhcyBl
eGNlcHRpb25hbGx5IGNyeXB0aWM6IG1hbnkgb2YNCiAgICBpdHMgc3ltYm9s
cyBkaWQgbm90IGV4aXN0IG9uIHN0YW5kYXJkIGtleWJvYXJkcywgYW5kIGV4
cHJlc3Npb25zDQogICAgaGFkIHRvIGJlIHJlYWQgcmlnaHQgdG8gbGVmdC4N
Cg0KICAgIE1vc3Qgc3Vic2VxdWVudCB3b3JrIG9uIG51bWVyaWNhbCBsYW5n
dWFnZXMsIHN1Y2ggYXMgRm9ydHJhbi05MCwNCiAgICBNQVRMQUIsIGFuZCBN
YXRoZW1hdGljYSwgaGFzIHRyaWVkIHRvIHByb3ZpZGUgdGhlIHBvd2VyIG9m
IEFQTA0KICAgIHdpdGhvdXQgdGhlIG9ic2N1cml0eS4gIFB5dGhvbidzIE51
bVB5IFsyXSBoYXMgbW9zdCBvZiB0aGUNCiAgICBmZWF0dXJlcyB0aGF0IHVz
ZXJzIG9mIHN1Y2ggbGFuZ3VhZ2VzIGV4cGVjdCwgYnV0IHRoZXNlIGFyZQ0K
ICAgIHByb3ZpZGVkIHRocm91Z2ggbmFtZWQgZnVuY3Rpb25zIGFuZCBtZXRo
b2RzLCByYXRoZXIgdGhhbg0KICAgIG92ZXJsb2FkZWQgb3BlcmF0b3JzLiAg
VGhpcyBtYWtlcyBOdW1QeSBjbHVtc2llciB0aGFuIGl0cw0KICAgIGNvbXBl
dGl0b3JzLg0KDQogICAgT25lIHdheSB0byBtYWtlIE51bVB5IG1vcmUgY29t
cGV0aXRpdmUgaXMgdG8gcHJvdmlkZSBncmVhdGVyDQogICAgc3ludGFjdGlj
IHN1cHBvcnQgaW4gUHl0aG9uIGl0c2VsZiBmb3IgbGluZWFyIGFsZ2VicmEu
ICBUaGlzDQogICAgcHJvcG9zYWwgdGhlcmVmb3JlIGV4YW1pbmVzIHRoZSBy
ZXF1aXJlbWVudHMgdGhhdCBuZXcgbGluZWFyDQogICAgYWxnZWJyYSBvcGVy
YXRvcnMgaW4gUHl0aG9uIG11c3Qgc2F0aXNmeSwgYW5kIHByb3Bvc2VzIGEg
c3ludGF4DQogICAgYW5kIHNlbWFudGljcyBmb3IgdGhvc2Ugb3BlcmF0b3Jz
Lg0KDQoNClJlcXVpcmVtZW50cw0KDQogICAgVGhlIG1vc3QgaW1wb3J0YW50
IHJlcXVpcmVtZW50IGlzIHRoYXQgdGhlcmUgYmUgbWluaW1hbCBpbXBhY3Qg
b24NCiAgICB0aGUgZXhpc3RpbmcgZGVmaW5pdGlvbiBvZiBQeXRob24uICBU
aGUgcHJvcG9zYWwgbXVzdCBub3QgYnJlYWsNCiAgICBleGlzdGluZyBwcm9n
cmFtcywgZXhjZXB0IHBvc3NpYmx5IHRob3NlIHRoYXQgdXNlIE51bVB5Lg0K
DQogICAgVGhlIHNlY29uZCBtb3N0IGltcG9ydGFudCByZXF1aXJlbWVudCBp
cyB0byBiZSBhYmxlIHRvIGRvIGJvdGgNCiAgICBlbGVtZW50d2lzZSBhbmQg
bWF0aGVtYXRpY2FsIG1hdHJpeCBtdWx0aXBsaWNhdGlvbiB1c2luZyBpbmZp
eA0KICAgIG5vdGF0aW9uLiAgVGhlIG5pbmUgY2FzZXMgdGhhdCBtdXN0IGJl
IGhhbmRsZWQgYXJlOg0KDQogICAgICAgIHw1IDZ8ICogICA5ICAgPSB8NDUg
NTR8ICAgICAgTVM6IG1hdHJpeC1zY2FsYXIgbXVsdGlwbGljYXRpb24NCiAg
ICAgICAgfDcgOHwgICAgICAgICAgIHw2MyA3MnwNCg0KICAgICAgICAgIDkg
ICAqIHw1IDZ8ID0gfDQ1IDU0fCAgICAgIFNNOiBzY2FsYXItbWF0cml4IG11
bHRpcGxpY2F0aW9uDQogICAgICAgICAgICAgICAgfDcgOHwgICB8NjMgNzJ8
DQoNCiAgICAgICAgfDIgM3wgKiB8NCA1fCA9IHw4IDE1fCAgICAgICBWRTog
dmVjdG9yIGVsZW1lbnR3aXNlIG11bHRpcGxpY2F0aW9uDQoNCg0KICAgICAg
ICB8MiAzfCAqICB8NHwgID0gICAyMyAgICAgICAgIFZEOiB2ZWN0b3IgZG90
IHByb2R1Y3QNCiAgICAgICAgICAgICAgICAgfDV8DQoNCiAgICAgICAgIHwy
fCAgKiB8NCA1fCA9IHwgOCAxMHwgICAgICBWTzogdmVjdG9yIG91dGVyIHBy
b2R1Y3QNCiAgICAgICAgIHwzfCAgICAgICAgICAgIHwxMiAxNXwNCg0KICAg
ICAgICB8MSAyfCAqIHw1IDZ8ID0gfCA1IDEyfCAgICAgIE1FOiBtYXRyaXgg
ZWxlbWVudHdpc2UgbXVsdGlwbGljYXRpb24NCiAgICAgICAgfDMgNHwgICB8
NyA4fCAgIHwyMSAzMnwNCg0KICAgICAgICB8MSAyfCAqIHw1IDZ8ID0gfDE5
IDIyfCAgICAgIE1NOiBtYXRoZW1hdGljYWwgbWF0cml4IG11bHRpcGxpY2F0
aW9uDQogICAgICAgIHwzIDR8ICAgfDcgOHwgICB8NDMgNTB8DQoNCiAgICAg
ICAgfDEgMnwgKiB8NSA2fCA9IHwxOSAyMnwgICAgICBWTTogdmVjdG9yLW1h
dHJpeCBtdWx0aXBsaWNhdGlvbg0KICAgICAgICAgICAgICAgIHw3IDh8DQoN
CiAgICAgICAgfDUgNnwgKiAgfDF8ICA9ICAgfDE3fCAgICAgICBNVjogbWF0
cml4LXZlY3RvciBtdWx0aXBsaWNhdGlvbg0KICAgICAgICB8NyA4fCAgICB8
MnwgICAgICB8MjN8DQoNCiAgICBOb3RlIHRoYXQgMS1kaW1lbnNpb25hbCB2
ZWN0b3JzIGFyZSB0cmVhdGVkIGFzIHJvd3MgaW4gVk0sIGFzDQogICAgY29s
dW1ucyBpbiBNViwgYW5kIGFzIGJvdGggaW4gVkQgYW5kIFZPLiAgQm90aCBh
cmUgc3BlY2lhbCBjYXNlcw0KICAgIG9mIDItZGltZW5zaW9uYWwgbWF0cmlj
ZXMgKE54MSBhbmQgMXhOIHJlc3BlY3RpdmVseSkuICBJdCBtYXkNCiAgICB0
aGVyZWZvcmUgYmUgcmVhc29uYWJsZSB0byBkZWZpbmUgdGhlIG5ldyBvcGVy
YXRvciBvbmx5IGZvcg0KICAgIDItZGltZW5zaW9uYWwgYXJyYXlzLCBhbmQg
cHJvdmlkZSBhbiBlYXN5IChhbmQgZWZmaWNpZW50KSB3YXkgZm9yDQogICAg
dXNlcnMgdG8gY29udmVydCAxLWRpbWVuc2lvbmFsIHN0cnVjdHVyZXMgdG8g
Mi1kaW1lbnNpb25hbC4NCiAgICBCZWhhdmlvciBvZiBhIG5ldyBtdWx0aXBs
aWNhdGlvbiBvcGVyYXRvciBmb3IgYnVpbHQtaW4gdHlwZXMgbWF5DQogICAg
dGhlbjoNCg0KICAgIChhKSBiZSBhIHBhcnNpbmcgZXJyb3IgKHBvc3NpYmxl
IG9ubHkgaWYgYSBjb25zdGFudCBpcyBvbmUgb2YgdGhlDQogICAgICAgIGFy
Z3VtZW50cywgc2luY2UgbmFtZXMgYXJlIHVudHlwZWQgaW4gUHl0aG9uKTsN
Cg0KICAgIChiKSBnZW5lcmF0ZSBhIHJ1bnRpbWUgZXJyb3I7IG9yDQoNCiAg
ICAoYykgYmUgZGVyaXZlZCBieSBwbGF1c2libGUgZXh0ZW5zaW9uIGZyb20g
aXRzIGJlaGF2aW9yIGluIHRoZQ0KICAgICAgICB0d28tZGltZW5zaW9uYWwg
Y2FzZS4NCg0KICAgIFRoaXJkLCBzeW50YWN0aWMgc3VwcG9ydCBzaG91bGQg
YmUgY29uc2lkZXJlZCBmb3IgdGhyZWUgb3RoZXINCiAgICBvcGVyYXRpb25z
Og0KDQogICAgICAgICAgICAgICAgICAgICAgICAgVA0KICAgIChhKSB0cmFu
c3Bvc2l0aW9uOiAgQSAgID0+IEFbaiwgaV0gZm9yIEFbaSwgal0NCg0KICAg
ICAgICAgICAgICAgICAgICAgICAgIC0xDQogICAgKGIpIGludmVyc2U6ICAg
ICAgICBBICAgPT4gQScgc3VjaCB0aGF0IEEnICogQSA9IEkgKHRoZSBpZGVu
dGl0eSBtYXRyaXgpDQoNCiAgICAoYykgc29sdXRpb246ICAgICAgIEEvYiA9
PiB4ICBzdWNoIHRoYXQgQSAqIHggPSBiDQogICAgICAgICAgICAgICAgICAg
ICAgICBBXGIgPT4geCAgc3VjaCB0aGF0IHggKiBBID0gYg0KDQogICAgV2l0
aCByZWdhcmQgdG8gKGMpLCBpdCBpcyB3b3J0aCBub3RpbmcgdGhhdCB0aGUg
dHdvIHN5bnRheGVzIHVzZWQNCiAgICB3ZXJlIGludmVudGVkIGJ5IHByb2dy
YW1tZXJzLCBub3QgbWF0aGVtYXRpY2lhbnMuICBNYXRoZW1hdGljaWFucw0K
ICAgIGRvIG5vdCBoYXZlIGEgc3RhbmRhcmQsIHdpZGVseS11c2VkIG5vdGF0
aW9uIGZvciBtYXRyaXggc29sdXRpb24uDQoNCiAgICBJdCBpcyBhbHNvIHdv
cnRoIG5vdGluZyB0aGF0IGRvemVucyBvZiBtYXRyaXggaW52ZXJzaW9uIGFu
ZA0KICAgIHNvbHV0aW9uIGFsZ29yaXRobXMgYXJlIHdpZGVseSB1c2VkLiAg
TUFUTEFCIGFuZCBpdHMga2luIGJpbmQNCiAgICB0aGVpciBpbnZlcnNpb24g
YW5kL29yIHNvbHV0aW9uIG9wZXJhdG9ycyB0byBvbmUgd2hpY2ggaXMNCiAg
ICByZWFzb25hYmx5IHJvYnVzdCBpbiBtb3N0IGNhc2VzLCBhbmQgcmVxdWly
ZSB1c2VycyB0byBjYWxsDQogICAgZnVuY3Rpb25zIG9yIG1ldGhvZHMgdG8g
YWNjZXNzIG90aGVycy4NCg0KICAgIEZvdXJ0aCwgY29uZnVzaW9uIGJldHdl
ZW4gUHl0aG9uJ3Mgbm90YXRpb24gYW5kIHRob3NlIG9mIE1BVExBQg0KICAg
IGFuZCBGb3J0cmFuLTkwIHNob3VsZCBiZSBhdm9pZGVkLiAgSW4gcGFydGlj
dWxhciwgbWF0aGVtYXRpY2FsDQogICAgbWF0cml4IG11bHRpcGxpY2F0aW9u
IChjYXNlIE1NKSBzaG91bGQgbm90IGJlIHJlcHJlc2VudGVkIGFzICcuKics
DQogICAgc2luY2U6DQoNCiAgICAoYSkgTUFUTEFCIHVzZXMgcHJlZml4LScu
JyBmb3JtcyB0byBtZWFuICdlbGVtZW50d2lzZScsIGFuZCByYXcNCiAgICAg
ICAgZm9ybXMgdG8gbWVhbiAibWF0aGVtYXRpY2FsIiBbNF07IGFuZA0KDQog
ICAgKGIpIGV2ZW4gaWYgdGhlIFB5dGhvbiBwYXJzZXIgY2FuIGJlIHRhdWdo
dCBob3cgdG8gaGFuZGxlIGRvdHRlZA0KICAgICAgICBmb3JtcywgJzEuKkEn
IHdpbGwgc3RpbGwgYmUgdmlzdWFsbHkgYW1iaWd1b3VzIFs0XS4NCg0KICAg
IE9uZSBhbnRpLXJlcXVpcmVtZW50IGlzIHRoYXQgbmV3IG9wZXJhdG9ycyBh
cmUgbm90IG5lZWRlZCBmb3INCiAgICBhZGRpdGlvbiwgc3VidHJhY3Rpb24s
IGJpdHdpc2Ugb3BlcmF0aW9ucywgYW5kIHNvIG9uLCBzaW5jZQ0KICAgIG1h
dGhlbWF0aWNpYW5zIGFscmVhZHkgdHJlYXQgdGhlbSBlbGVtZW50d2lzZS4N
Cg0KDQpQcm9wb3NhbDoNCg0KICAgIFRoZSBtZWFuaW5ncyBvZiBhbGwgZXhp
c3Rpbmcgb3BlcmF0b3JzIHdpbGwgYmUgdW5jaGFuZ2VkLiAgSW4NCiAgICBw
YXJ0aWN1bGFyLCAnQSpCJyB3aWxsIGNvbnRpbnVlIHRvIGJlIGludGVycHJl
dGVkIGVsZW1lbnR3aXNlLg0KICAgIFRoaXMgdGFrZXMgY2FyZSBvZiB0aGUg
Y2FzZXMgTVMsIFNNLCBWRSwgYW5kIE1FLCBhbmQgZW5zdXJlcw0KICAgIG1p
bmltYWwgaW1wYWN0IG9uIGV4aXN0aW5nIHByb2dyYW1zLg0KDQogICAgQSBu
ZXcgb3BlcmF0b3IgJ0AnIChwcm9ub3VuY2VkICJhY3Jvc3MiKSB3aWxsIGJl
IGFkZGVkIHRvIFB5dGhvbiwNCiAgICBhbG9uZyB3aXRoIHR3byBzcGVjaWFs
IG1ldGhvZHMsICJfX2Fjcm9zc19fKCkiIGFuZA0KICAgICJfX3JhY3Jvc3Nf
XygpIiwgd2l0aCB0aGUgdXN1YWwgc2VtYW50aWNzLg0KDQogICAgTnVtUHkg
d2lsbCBvdmVybG9hZCAiQCIgdG8gcGVyZm9ybSBtYXRoZW1hdGljYWwgbXVs
dGlwbGljYXRpb24gb2YNCiAgICBhcnJheXMgd2hlcmUgc2hhcGVzIHBlcm1p
dCwgYW5kIHRvIHRocm93IGFuIGV4Y2VwdGlvbiBvdGhlcndpc2UuDQogICAg
VGhlIG1hdHJpeCBjbGFzcydzIGltcGxlbWVudGF0aW9uIG9mICJAIiB3aWxs
IHRyZWF0IGJ1aWx0LWluDQogICAgc2VxdWVuY2UgdHlwZXMgYXMgaWYgdGhl
eSB3ZXJlIGNvbHVtbiB2ZWN0b3JzLiAgVGhpcyB0YWtlcyBjYXJlIG9mDQog
ICAgdGhlIGNhc2VzIE1NIGFuZCBNVi4NCg0KICAgIEFuIGF0dHJpYnV0ZSAi
VCIgd2lsbCBiZSBhZGRlZCB0byB0aGUgTnVtUHkgYXJyYXkgdHlwZSwgc3Vj
aCB0aGF0DQogICAgIm0uVCIgaXM6DQoNCiAgICAoYSkgdGhlIHRyYW5zcG9z
ZSBvZiAibSIgZm9yIGEgMi1kaW1lbnNpb25hbCBhcnJheQ0KDQogICAgKGIp
IHRoZSAxeE4gbWF0cml4IHRyYW5zcG9zZSBvZiAibSIgaWYgIm0iIGlzIGEg
MS1kaW1lbnNpb25hbA0KICAgICAgICBhcnJheTsgb3INCg0KICAgIChjKSBh
IHJ1bnRpbWUgZXJyb3IgZm9yIGFuIGFycmF5IHdpdGggcmFuayA+PSAzLg0K
DQogICAgVGhpcyBhdHRyaWJ1dGUgd2lsbCBhbGlhcyB0aGUgbWVtb3J5IG9m
IHRoZSBiYXNlIG9iamVjdC4gIE51bVB5J3MNCiAgICAidHJhbnNwb3NlKCki
IGZ1bmN0aW9uIHdpbGwgYmUgZXh0ZW5kZWQgdG8gdHVybiBidWlsdC1pbiBz
ZXF1ZW5jZQ0KICAgIHR5cGVzIGludG8gcm93IHZlY3RvcnMuICBUaGlzIHRh
a2VzIGNhcmUgb2YgdGhlIFZNLCBWRCwgYW5kIFZPDQogICAgY2FzZXMuICBX
ZSBwcm9wb3NlIGFuIGF0dHJpYnV0ZSBiZWNhdXNlOg0KDQogICAgKGEpIHRo
ZSByZXN1bHRpbmcgbm90YXRpb24gaXMgc2ltaWxhciB0byB0aGUgJ3N1cGVy
c2NyaXB0IFQnIChhdA0KICAgICAgICBsZWFzdCwgYXMgc2ltaWxhciBhcyBB
U0NJSSBhbGxvd3MpLCBhbmQNCg0KICAgIChiKSBpdCBzaWduYWxzIHRoYXQg
dGhlIHRyYW5zcG9zaXRpb24gYWxpYXNlcyB0aGUgb3JpZ2luYWwgb2JqZWN0
Lg0KDQogICAgTm8gbmV3IG9wZXJhdG9ycyB3aWxsIGJlIGRlZmluZWQgdG8g
bWVhbiAic29sdmUgYSBzZXQgb2YgbGluZWFyDQogICAgZXF1YXRpb25zIiwg
b3IgImludmVydCBhIG1hdHJpeCIuICBJbnN0ZWFkLCBOdW1QeSB3aWxsIGRl
ZmluZSBhDQogICAgdmFsdWUgImludiIsIHdoaWNoIHdpbGwgYmUgcmVjb2du
aXplZCBieSB0aGUgZXhwb25lbnRpYXRpb24NCiAgICBvcGVyYXRvciwgc3Vj
aCB0aGF0ICJBICoqIGludiIgaXMgdGhlIGludmVyc2Ugb2YgIkEiLiAgVGhp
cyBpcw0KICAgIHNpbWlsYXIgaW4gc3Bpcml0IHRvIE51bVB5J3MgZXhpc3Rp
bmcgIm5ld2F4aXMiIHZhbHVlLg0KDQogICAgKE9wdGlvbmFsKSBXaGVuIGFw
cGxpZWQgdG8gc2VxdWVuY2VzLCB0aGUgb3BlcmF0b3Igd2lsbCByZXR1cm4g
YQ0KICAgIGxpc3Qgb2YgdHVwbGVzIGNvbnRhaW5pbmcgdGhlIGNyb3NzLXBy
b2R1Y3Qgb2YgdGhlaXIgZWxlbWVudHMgaW4NCiAgICBsZWZ0LXRvLXJpZ2h0
IG9yZGVyOg0KDQogICAgPj4+IFsxLCAyXSBAICgzLCA0KQ0KICAgIFsoMSwg
MyksICgxLCA0KSwgKDIsIDMpLCAoMiwgNCldDQoNCiAgICA+Pj4gWzEsIDJd
 @ (3, 4) @ (5, 6)
    [(1, 3, 5), (1, 3, 6),
     (1, 4, 5), (1, 4, 6),
     (2, 3, 5), (2, 3, 6),
     (2, 4, 5), (2, 4, 6)]

    This will require the same kind of special support from the parser
    as chained comparisons (such as "a<b<c<=d").  However, it would
    permit the following:

    >>> for (i, j) in [1, 2] @ [3, 4]:
    >>>     print i, j
    1 3
    1 4
    2 3
    2 4

    as a short-hand for the common nested loop idiom:

    >>> for i in [1, 2]:
    >>>    for j in [3, 4]:
    >>>        print i, j

    Response to the 'lockstep loop' questionnaire [5] indicated that
    newcomers would be comfortable with this (so comfortable, in fact,
    that most of them interpreted most multi-loop 'zip' syntaxes [6]
    as implementing single-stage nesting).


Alternatives:

    01. Don't add new operators --- stick to functions and methods.

    Python is not primarily a numerical language.  It is not worth
    complexifying the language for this special case --- NumPy's
    success is proof that users can and will use functions and methods
    for linear algebra.

    On the positive side, this maintains Python's simplicity.  Its
    weakness is that support for real matrix multiplication (and, to a
    lesser extent, other linear algebra operations) is frequently
    requested, as functional forms are cumbersome for lengthy
    formulas, and do not respect the operator precedence rules of
    conventional mathematics.  In addition, the method form is
    asymmetric in its operands.

    02. Introduce prefixed forms of existing operators, such as "@*"
        or "~*", or use boxed forms, such as "[*]" or "%*%".

    There are (at least) three objections to this.  First, either form
    seems to imply that all operators exist in both forms.  This is
    more new entities than the problem merits, and would require the
    addition of many new overloadable methods, such as __at_mul__.

    Second, while it is certainly possible to invent semantics for
    these new operators for built-in types, this would be a case of
    the tail wagging the dog, i.e. of letting the existence of a
    feature "create" a need for it.

    Finally, the boxed forms make human parsing more complex, e.g.:

        A[*] = B    vs.    A[:] = B

    03. (From Moshe Zadka [7], and also considered by Huaiyu Zhu [8]
        in his proposal [9]) Retain the existing meaning of all
        operators, but create a behavioral accessor for arrays, such
        that:

            A * B

        is elementwise multiplication (ME), but:

            A.m() * B.m()

        is mathematical multiplication (MM).  The method "A.m()" would
        return an object that aliased A's memory (for efficiency), but
        which had a different implementation of __mul__().

    The advantage of this method is that it has no effect on the
    existing implementation of Python: changes are localized in the
    Numeric module.  The disadvantages are:

    (a) The semantics of "A.m() * B", "A + B.m()", and so on would
        have to be defined, and there is no "obvious" choice for them.

    (b) Aliasing objects to trigger different operator behavior feels
        less Pythonic than either calling methods (as in the existing
        Numeric module) or using a different operator.  This PEP is
        primarily about look and feel, and about making Python more
        attractive to people who are not already using it.

    04. (From a proposal [9] by Huaiyu Zhu [8]) Introduce a "delayed
        inverse" attribute, similar to the "transpose" attribute
        advocated in the third part of this proposal.  The expression
        "a.I" would be a delayed handle on the inverse of the matrix
        "a", which would be evaluated in context as required.  For
        example, "a.I * b" and "b * a.I" would solve sets of linear
        equations, without actually calculating the inverse.

    The main drawback of this proposal is its reliance on lazy
    evaluation, and even more on "smart" lazy evaluation (i.e. the
    operation performed depends on the context in which the evaluation
    is done).  The BDFL has so far resisted introducing LE into
    Python.


Related Proposals

    0203 :  Augmented Assignments

            If new operators for linear algebra are introduced, it may
            make sense to introduce augmented assignment forms for
            them.

    0207 :  Rich Comparisons

            It may become possible to overload comparison operators
            such as '<' so that an expression such as 'A < B' returns
            an array, rather than a scalar value.

    0209 :  Adding Multidimensional Arrays

            Multidimensional arrays are currently an extension to
            Python, rather than a built-in type.


Acknowledgments:

    I am grateful to Huaiyu Zhu [8] for initiating this discussion,
    and for some of the ideas and terminology included below.


References:

    [1] http://www.acm.org/sigapl/whyapl.htm
    [2] http://numpy.sourceforge.net
    [3] PEP-0203.txt "Augmented Assignments"
    [4] http://bevo.che.wisc.edu/octave/doc/octave_9.html#SEC69
    [5] http://www.python.org/pipermail/python-dev/2000-July/013139.html
    [6] PEP-0201.txt "Lockstep Iteration"
    [7] Moshe Zadka is 'moshez@math.huji.ac.il'.
    [8] Huaiyu Zhu is 'hzhu@users.sourceforge.net'.
    [9] http://www.python.org/pipermail/python-list/2000-August/112529.html
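
For readers who want to experiment, the cross-product iteration discussed in this
draft (a hypothetical "@" operator looping over all index combinations) can be
emulated in plain Python with nested loops; the `cross` helper below is a sketch
invented for illustration, not part of any proposal:

```python
def cross(*seqs):
    """Yield tuples from the cartesian product of the given sequences,
    i.e. the nested-loop expansion a cross-product operator would give."""
    if not seqs:
        yield ()
        return
    head, rest = seqs[0], seqs[1:]
    for item in head:
        # recurse on the remaining sequences and prepend the current item
        for tail in cross(*rest):
            yield (item,) + tail

for i, j in cross([1, 2], [3, 4]):
    print(i, j)     # 1 3 / 1 4 / 2 3 / 2 4, as in the example above
```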


From fredrik@pythonware.com  Fri Aug 11 14:55:01 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Fri, 11 Aug 2000 15:55:01 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>             <20000811143143.G17171@xs4all.nl>  <200008111419.JAA03948@cj20424-a.reston1.va.home.com>
Message-ID: <016d01c0039b$bfb99a40$0900a8c0@SPIFF>

guido wrote:
> I just read the man page for send() (Red Hat linux 6.1) and it doesn't
> mention sending fewer than all bytes at all.  In fact, while it says
> that the return value is the number of bytes sent, it at least
> *suggests* that it will return an error whenever not everything can be
> sent -- even in non-blocking mode.
> 
> Under what circumstances can send() return a smaller number?

never, it seems:

    The length of the message to be sent is specified by the
    length argument. If the message is too long to pass through
    the underlying protocol, send() fails and no data is transmitted.

    Successful completion of a call to send() does not guarantee
    delivery of the message. A return value of -1 indicates only
    locally-detected errors.

    If space is not available at the sending socket to hold the message
    to be transmitted and the socket file descriptor does not have
    O_NONBLOCK set, send() blocks until space is available. If space
    is not available at the sending socket to hold the message to be
    transmitted and the socket file descriptor does have O_NONBLOCK
    set, send() will fail.

    (from SUSv2)

iow, it either blocks or fails.

</F>



From fredrik@pythonware.com  Fri Aug 11 15:01:17 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Fri, 11 Aug 2000 16:01:17 +0200
Subject: [Python-Dev] Preventing recursion core dumps
References: <000401c00389$2fa577b0$060210ac@private>
Message-ID: <018a01c0039c$9f1949b0$0900a8c0@SPIFF>

barry wrote:
> For embedding Python, being able to control the recursion depth of the
> interpreter is very useful. I would want to be able to set, from C, the
> max call depth limit and the current call depth limit. I'd expect Python
> to set a min call depth limit.

+1 (on concept, at least).

</F>



From thomas@xs4all.net  Fri Aug 11 15:08:51 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 16:08:51 +0200
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111428.JAA04464@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 11, 2000 at 09:28:09AM -0500
References: <200008111239.OAA15818@python.inrialpes.fr> <200008111428.JAA04464@cj20424-a.reston1.va.home.com>
Message-ID: <20000811160851.H17171@xs4all.nl>

On Fri, Aug 11, 2000 at 09:28:09AM -0500, Guido van Rossum wrote:

> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

getrlimit and getrusage do what you want, I think. getrusage() fills a
struct rusage:


            struct rusage
            {
                 struct timeval ru_utime; /* user time used */
                 struct timeval ru_stime; /* system time used */
                 long ru_maxrss;          /* maximum resident set size */
                 long ru_ixrss;      /* integral shared memory size */
                 long ru_idrss;      /* integral unshared data size */
                 long ru_isrss;      /* integral unshared stack size */
                 long ru_minflt;          /* page reclaims */
                 long ru_majflt;          /* page faults */
                 long ru_nswap;      /* swaps */
                 long ru_inblock;         /* block input operations */
                 long ru_oublock;         /* block output operations */
                 long ru_msgsnd;          /* messages sent */
                 long ru_msgrcv;          /* messages received */
                 long ru_nsignals;        /* signals received */
                 long ru_nvcsw;      /* voluntary context switches */
                 long ru_nivcsw;          /* involuntary context switches */
            };

and you can get the actual stack limit with getrlimit(). The availability of
getrusage/getrlimit is already checked by configure, and there's the
resource module which wraps those functions and structs for Python code.
Note that Linux isn't likely to be a problem, most linux distributions have
liberal limits to start with (still the 'single-user OS' ;)

BSDI, for instance, has very strict default limits -- the standard limits
aren't even enough to start 'pine' on a few MB of mailbox. (But BSDI has
rusage/rlimit, so we can 'autodetect' this.)
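
The resource module Thomas mentions makes this check a few lines of Python; a
minimal sketch (POSIX-only, and the actual values vary per system):

```python
import resource

# Ask the OS for the current stack limits; compare with the
# struct rusage fields quoted above.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("stack rlimit: soft=%s hard=%s" % (soft, hard))  # may be RLIM_INFINITY

usage = resource.getrusage(resource.RUSAGE_SELF)
print("max resident set size:", usage.ru_maxrss)
```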

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug 11 15:13:13 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 11 Aug 2000 17:13:13 +0300 (IDT)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111428.JAA04464@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008111708210.3449-100000@sundial>

On Fri, 11 Aug 2000, Guido van Rossum wrote:

> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

I'm far from an expert, but I might have an idea. The question is: must
this work for the embedded version of Python, or can I fool around with
main()?

Here's the approach:

 - In main(), get the address of some local variable. Call this "min".
 - Call getrlimit, and see the stack size. Call max = min + <stack size>.
 - When checking for "too much recursion", take the address of a local
   variable and compare it against max. If it's higher, stop.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From just@letterror.com  Fri Aug 11 16:14:40 2000
From: just@letterror.com (Just van Rossum)
Date: Fri, 11 Aug 2000 16:14:40 +0100
Subject: [Python-Dev] Preventing recursion core dumps
Message-ID: <l03102808b5b9c74f316e@[193.78.237.168]>

> > Agreement on 5000?
>
> No, the __getattr__ example still dumps core for me.  With 4000 it is
> fine, but this indicates that this is totally the wrong approach: I
> can change the available stack size with ulimit -s and cause a core
> dump anyway.  Or there could be a loger path through the C code where
> more C stack is used per recursion.
>
> We could set the maximum to 1000 and assume a "reasonable" stack size,
> but that doesn't make me feel comfortable either.
>
> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

Gordon, how's that Stackless PEP coming along?

Sorry, I couldn't resist ;-)

Just




From thomas@xs4all.net  Fri Aug 11 15:21:09 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 16:21:09 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <016d01c0039b$bfb99a40$0900a8c0@SPIFF>; from fredrik@pythonware.com on Fri, Aug 11, 2000 at 03:55:01PM +0200
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF>
Message-ID: <20000811162109.I17171@xs4all.nl>

On Fri, Aug 11, 2000 at 03:55:01PM +0200, Fredrik Lundh wrote:
> guido wrote:
> > I just read the man page for send() (Red Hat linux 6.1) and it doesn't
> > mention sending fewer than all bytes at all.  In fact, while it says
> > that the return value is the number of bytes sent, it at least
> > *suggests* that it will return an error whenever not everything can be
> > sent -- even in non-blocking mode.

> > Under what circumstances can send() return a smaller number?

> never, it seems:

[snip manpage]

Indeed. I didn't actually check the story, since Guido was apparently
convinced by its validity. I was just operating under the assumption that
send() did behave like write(). I won't blindly believe Guido anymore ! :)

Someone set the patch to 'rejected' and tell the submittor that 'send'
doesn't return the number of bytes written ;-P

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Vladimir.Marangozov@inrialpes.fr  Fri Aug 11 15:32:45 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 16:32:45 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <Pine.GSO.4.10.10008111708210.3449-100000@sundial> from "Moshe Zadka" at Aug 11, 2000 05:13:13 PM
Message-ID: <200008111432.QAA16648@python.inrialpes.fr>

Moshe Zadka wrote:
> 
> On Fri, 11 Aug 2000, Guido van Rossum wrote:
> 
> > It would be good if there was a way to sense the remaining available
> > stack, even if it wasn't portable.  Any Linux experts out there?
> 
> I'm far from an expert, but I might have an idea. The question is: must
> this works for embedded version of Python, or can I fool around with
> main()?

Probably not main(), but Py_Initialize() for sure.

> 
> Here's the approach:
> 
>  - In main(), get the address of some local variable. Call this "min".
>  - Call getrlimit, and see the stack size. Call max = min + <stack size>.
>  - When checking for "too much recursion", take the address of a local
>    variable and compare it against max. If it's higher, stop.

Sounds good. If getrlimit is not available, we can always fallback to
some (yet to be computed) constant, i.e. the current state.

[Just]
> Gordon, how's that Stackless PEP coming along?
> Sorry, I couldn't resist ;-)

Ah, in this case, we'll get a memory error after filling the whole disk
with frames <wink>

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From akuchlin@mems-exchange.org  Fri Aug 11 15:33:35 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 11 Aug 2000 10:33:35 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <20000811162109.I17171@xs4all.nl>; from thomas@xs4all.net on Fri, Aug 11, 2000 at 04:21:09PM +0200
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF> <20000811162109.I17171@xs4all.nl>
Message-ID: <20000811103335.B20646@kronos.cnri.reston.va.us>

On Fri, Aug 11, 2000 at 04:21:09PM +0200, Thomas Wouters wrote:
>Someone set the patch to 'rejected' and tell the submittor that 'send'
>doesn't return the number of bytes written ;-P

What about reviving the idea of raising an exception, then?

--amk


From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug 11 15:40:10 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 11 Aug 2000 17:40:10 +0300 (IDT)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111432.QAA16648@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008111736300.3449-100000@sundial>

On Fri, 11 Aug 2000, Vladimir Marangozov wrote:

> Moshe Zadka wrote:
> > 
> > On Fri, 11 Aug 2000, Guido van Rossum wrote:
> > 
> > > It would be good if there was a way to sense the remaining available
> > > stack, even if it wasn't portable.  Any Linux experts out there?
> > 
> > I'm far from an expert, but I might have an idea. The question is: must
> > this works for embedded version of Python, or can I fool around with
> > main()?
> 
> Probably not main(), but Py_Initialize() for sure.

Py_Initialize() isn't good enough -- from main() I can put an upper
bound on the difference between "min" and the top of the stack; I can't
do so for the call to Py_Initialize(). Well, I probably can in some *really*
ugly way. I'll have to think about it some more.

> Sounds good. If getrlimit is not available, we can always fallback to
> some (yet to be computed) constant, i.e. the current state.

Well, since Guido asked for a non-portable Linuxish way, I think we
can assume getrusage() is there.

[Vladimir]
> Ah, in this case, we'll get a memory error after filling the whole disk
> with frames <wink>

Which is great! Python promises to always throw an exception....

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From thomas@xs4all.net  Fri Aug 11 15:43:49 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 16:43:49 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <20000811103335.B20646@kronos.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Fri, Aug 11, 2000 at 10:33:35AM -0400
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF> <20000811162109.I17171@xs4all.nl> <20000811103335.B20646@kronos.cnri.reston.va.us>
Message-ID: <20000811164349.J17171@xs4all.nl>

On Fri, Aug 11, 2000 at 10:33:35AM -0400, Andrew Kuchling wrote:
> On Fri, Aug 11, 2000 at 04:21:09PM +0200, Thomas Wouters wrote:
> >Someone set the patch to 'rejected' and tell the submittor that 'send'
> >doesn't return the number of bytes written ;-P

> What about reviving the idea of raising an exception, then?

static PyObject *
PySocketSock_send(PySocketSockObject *s, PyObject *args)
{
        char *buf;
        int len, n, flags = 0;
        if (!PyArg_ParseTuple(args, "s#|i:send", &buf, &len, &flags))
                return NULL;
        Py_BEGIN_ALLOW_THREADS
        n = send(s->sock_fd, buf, len, flags);
        Py_END_ALLOW_THREADS
        if (n < 0)
                return PySocket_Err();
        return PyInt_FromLong((long)n);
}

(PySocket_Err() creates an error.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@beopen.com  Fri Aug 11 16:56:06 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 10:56:06 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: Your message of "Fri, 11 Aug 2000 16:21:09 +0200."
 <20000811162109.I17171@xs4all.nl>
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF>
 <20000811162109.I17171@xs4all.nl>
Message-ID: <200008111556.KAA05068@cj20424-a.reston1.va.home.com>

> On Fri, Aug 11, 2000 at 03:55:01PM +0200, Fredrik Lundh wrote:
> > guido wrote:
> > > I just read the man page for send() (Red Hat linux 6.1) and it doesn't
> > > mention sending fewer than all bytes at all.  In fact, while it says
> > > that the return value is the number of bytes sent, it at least
> > > *suggests* that it will return an error whenever not everything can be
> > > sent -- even in non-blocking mode.
> 
> > > Under what circumstances can send() return a smaller number?
> 
> > never, it seems:
> 
> [snip manpage]
> 
> Indeed. I didn't actually check the story, since Guido was apparently
> convinced by its validity.

I wasn't convinced!  I wrote "is this true?" in my message!!!

> I was just operating under the assumption that
> send() did behave like write(). I won't blindly believe Guido anymore ! :)

I believe they do behave the same: in my mind, write() doesn't write
fewer bytes than you tell it either!  (Except maybe to a tty device
when interrupted by a signal???)

> Someone set the patch to 'rejected' and tell the submittor that 'send'
> doesn't return the number of bytes written ;-P

Done.

Note that send() *does* return the number of bytes written.  It's just
always (supposed to be) the same as the length of the argument string.

Since this is now established, should we change the send() method to
raise an exception when it returns a smaller number?  (The exception
probably should be a subclass of socket.error and should carry the
number of bytes written.)
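
For callers who simply want everything sent, the usual remedy is a loop around
send(); a hypothetical sketch of such a helper (the standard library's
socket.sendall() method does essentially this):

```python
import socket

def send_all(sock, data):
    """Loop until every byte of 'data' has been handed to send().

    Hypothetical helper for illustration; socket.sendall() is the
    stdlib equivalent.
    """
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise OSError("connection broken")
        total += sent
    return total

# Demo on a local socketpair (POSIX).
a, b = socket.socketpair()
send_all(a, b"hello")
print(b.recv(5))
```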

Could there be a signal interrupt issue here too?  E.g. I send() a
megabyte, which takes a while due to TCP buffer limits; before I'm
done a signal handler interrupts the system call.  Will send() now:

(1) return a EINTR error
(2) continue
(3) return the number of bytes already written

???

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Vladimir.Marangozov@inrialpes.fr  Fri Aug 11 16:58:45 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 17:58:45 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111428.JAA04464@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 11, 2000 09:28:09 AM
Message-ID: <200008111558.RAA16953@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> We could set the maximum to 1000 and assume a "reasonable" stack size,
> but that doesn't make me feel comfortable either.

Nor me, but it's more comfortable than a core dump, and is the only
easy solution, solving most problems & probably breaking some code...
After all, a max of 1024 seems to be a good suggestion.

> 
> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

On a second thought, I think this would be a bad idea, even if
we manage to tweak the stack limits on most platforms. We would
lose determinism = lose control -- no good. A depth-first algorithm
may succeed on one machine, and fail on another.

I strongly prefer to know that I'm limited to 1024 recursions ("reasonable"
stack size assumptions included) and change my algorithm if it doesn't fly
with my structure, than stumble subsequently on the fact that my algorithm
works half the time.

Changing this now *may* break such scripts, and there doesn't seem
to be an immediate easy solution. But if I were to choose between
breaking some scripts and preventing core dumps, well...
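
The deterministic-limit behavior argued for here is easy to demonstrate from
Python itself; a small sketch using the sys recursion-limit API (modern Python
syntax, where RecursionError is a subclass of RuntimeError):

```python
import sys

# With a fixed limit, runaway recursion raises a catchable exception
# instead of crashing with a platform-dependent core dump.
sys.setrecursionlimit(1024)

def depth(n=0):
    """Recurse until the interpreter's limit stops us; return the depth."""
    try:
        return depth(n + 1)
    except RuntimeError:        # RecursionError in modern Python
        return n

print("hit the limit after", depth(), "frames")
```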

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug 11 17:12:21 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 11 Aug 2000 19:12:21 +0300 (IDT)
Subject: [Python-Dev] Cookie.py
Message-ID: <Pine.GSO.4.10.10008111909510.5259-100000@sundial>

This is a continuation of a previous server-side cookie support.
There is a liberally licensed (old-Python license) framework called
Webware, which includes Cookie.py, (apparently the same one by Timothy
O'Malley). How about taking that Cookie.py?

Webware can be found at http://webware.sourceforge.net/
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From gmcm@hypernet.com  Fri Aug 11 17:25:18 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 12:25:18 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>
References: Your message of "Fri, 11 Aug 2000 14:31:43 +0200."             <20000811143143.G17171@xs4all.nl>
Message-ID: <1246111375-124272508@hypernet.com>

[Guido]
> I just read the man page for send() (Red Hat linux 6.1) and it
> doesn't mention sending fewer than all bytes at all.  In fact,
> while it says that the return value is the number of bytes sent,
> it at least *suggests* that it will return an error whenever not
> everything can be sent -- even in non-blocking mode.

It says (at least on RH 5.2): "If the message is too long to 
pass atomically through the underlying protocol...". Hey guys, 
TCP/IP is a stream protocol! For TCP/IP this is all completely 
misleading.

Yes, it returns the number of bytes sent. For TCP/IP it is *not* 
an error to send less than the argument. It's only an error if the 
other end dies at the time of actual send.

Python has been behaving properly all along. The bug report is 
correct. It's the usage of send in the std lib that is improper 
(though with a nearly infinitesimal chance of breaking, since 
it's almost all single threaded blocking usage of sockets).
 
> Under what circumstances can send() return a smaller number?

Just open a TCP/IP connection and send huge (64K or so) 
buffers. Current Python behavior is no different than C on 
Linux, HPUX and Windows.

Look it up in Stevens if you don't believe me. Or try it.
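
Trying it is straightforward with a socketpair (AF_UNIX here for convenience;
TCP behaves the same way). This sketch assumes a typical Linux default
send-buffer size well under 8 MiB:

```python
import socket

# A single send() on a stream socket may accept only part of a large
# buffer; the return value tells you how much was taken.
a, b = socket.socketpair()
a.setblocking(False)                 # don't wait for the peer to read
payload = b"x" * (8 * 1024 * 1024)   # 8 MiB, far larger than the send buffer
sent = a.send(payload)
print("send() accepted %d of %d bytes" % (sent, len(payload)))
```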

- Gordon


From akuchlin@mems-exchange.org  Fri Aug 11 17:26:08 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 11 Aug 2000 12:26:08 -0400
Subject: [Python-Dev] Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008111909510.5259-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 11, 2000 at 07:12:21PM +0300
References: <Pine.GSO.4.10.10008111909510.5259-100000@sundial>
Message-ID: <20000811122608.F20646@kronos.cnri.reston.va.us>

On Fri, Aug 11, 2000 at 07:12:21PM +0300, Moshe Zadka wrote:
>This is a continuation of a previous server-side cookie support.
>There is a liberally licensed (old-Python license) framework called
>Webware, which includes Cookie.py, (apparently the same one by Timothy
>O'Malley). How about taking that Cookie.py?

O'Malley got in touch with me and let me know that the license has
been changed to the 1.5.2 license with his departure from BBN.  He
hasn't sent me a URL where the current version can be downloaded,
though.  I don't know if WebWare has the most current version; it
seems not, since O'Malley's was dated 06/21 and WebWare's was checked
in on May 23.

By the way, I'd suggest adding Cookie.py to a new 'web' package, and
taking advantage of the move to break backward compatibility and
remove the automatic usage of pickle (assuming it's still there).

--amk


From nascheme@enme.ucalgary.ca  Fri Aug 11 17:37:01 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 11 Aug 2000 10:37:01 -0600
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111558.RAA16953@python.inrialpes.fr>; from Vladimir Marangozov on Fri, Aug 11, 2000 at 05:58:45PM +0200
References: <200008111428.JAA04464@cj20424-a.reston1.va.home.com> <200008111558.RAA16953@python.inrialpes.fr>
Message-ID: <20000811103701.A25386@keymaster.enme.ucalgary.ca>

On Fri, Aug 11, 2000 at 05:58:45PM +0200, Vladimir Marangozov wrote:
> On a second thought, I think this would be a bad idea, even if
> we manage to tweak the stack limits on most platforms. We would
> loose determinism = loose control -- no good. A depth-first algorithm
> may succeed on one machine, and fail on another.

So what?  We don't limit the amount of memory you can allocate on all
machines just because your program may run out of memory on some
machine.  It seems like the same thing to me.

  Neil


From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug 11 17:40:31 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 11 Aug 2000 19:40:31 +0300 (IDT)
Subject: [Python-Dev] Cookie.py
In-Reply-To: <20000811122608.F20646@kronos.cnri.reston.va.us>
Message-ID: <Pine.GSO.4.10.10008111936060.5259-100000@sundial>

On Fri, 11 Aug 2000, Andrew Kuchling wrote:

> O'Malley got in touch with me and let me know that the license has
> been changed to the 1.5.2 license with his departure from BBN.  He
> hasn't sent me a URL where the current version can be downloaded,
> though.  I don't know if WebWare has the most current version; it
> seems not, since O'Malley's was dated 06/21 and WebWare's was checked
> in on May 23.

Well, as soon as you get a version, let me know: I've started working
on documentation.

> By the way, I'd suggest adding Cookie.py to a new 'web' package, and
> taking advantage of the move to break backward compatibility and
> remove the automatic usage of pickle (assuming it's still there).

Well, depends on what you mean there:

There are now three classes

a) SimpleCookie -- never uses pickle
b) SerialCookie -- always uses pickle
c) SmartCookie -- uses pickle based on the old heuristic.
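
For reference, a quick sketch of how the pickle-free SimpleCookie variant is
used (the class survives today as http.cookies.SimpleCookie; exact output
formatting may vary):

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["session"] = "12345"
c["session"]["path"] = "/"      # standard cookie attribute
header = c.output()             # a ready-to-emit Set-Cookie header line
print(header)
```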

About the web package: I'm +0. Fred has to think about how to document
things in packages (we never had to until now). Well, who cares <wink>

What is more important is working on documentation (which I'm doing),
and on a regression test (for which the May 23 version is probably good 
enough).

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From thomas@xs4all.net  Fri Aug 11 17:44:07 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 18:44:07 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <200008111556.KAA05068@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 11, 2000 at 10:56:06AM -0500
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF> <20000811162109.I17171@xs4all.nl> <200008111556.KAA05068@cj20424-a.reston1.va.home.com>
Message-ID: <20000811184407.A14470@xs4all.nl>

On Fri, Aug 11, 2000 at 10:56:06AM -0500, Guido van Rossum wrote:

> > Indeed. I didn't actually check the story, since Guido was apparently
> > convinced by its validity.

> I wasn't convinced!  I wrote "is this true?" in my message!!!

I apologize... It's been a busy day for me, I guess I wasn't paying enough
attention. I'll try to keep quiet when that happens, next time :P

> > I was just operating under the assumption that
> > send() did behave like write(). I won't blindly believe Guido anymore ! :)

> I bgelieve they do behave the same: in my mind, write() doesn't write
> fewer bytes than you tell it either!  (Except maybe to a tty device
> when interrupted by a signal???)

Hm, I seem to recall write() could return after less than a full write, but
I might be mistaken. I thought I was confusing send with write, but maybe
I'm confusing both with some other function :-) I'm *sure* there is a
function that behaves that way :P

> Note that send() *does* return the number of bytes written.  It's just
> always (supposed to be) the same as the length of the argument string.

> Since this is now established, should we change the send() method to
> raise an exception when it returns a smaller number?  (The exception
> probably should be a subclass of socket.error and should carry the
> number of bytes written

Ahh, now it's starting to get clear to me. I'm not sure if it's worth it...
It would require a different (non-POSIX) socket layer to return on
'incomplete' writes, and that is likely to break a number of other things,
too. (Let's hope it does... a socket layer which has the same API but a
different meaning would be very confusing !)

> Could there be a signal interrupt issue here too?

No, I don't think so.

> E.g. I send() a megabyte, which takes a while due to TCP buffer limits;
> before I'm done a signal handler interrupts the system call.  Will send()
> now:

> (1) return an EINTR error

Yes. From the manpage:

       If  the  message  is  too  long  to pass atomically
       through the underlying protocol,  the  error  EMSGSIZE  is
       returned, and the message is not transmitted.

[..]

ERRORS

       EINTR   A signal occurred.

[..]

Because send() either completely succeeds or completely fails, I didn't see
why you wanted an exception generated :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Fri Aug 11 17:45:13 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 12:45:13 -0400 (EDT)
Subject: [Python-Dev] Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008111936060.5259-100000@sundial>
References: <20000811122608.F20646@kronos.cnri.reston.va.us>
 <Pine.GSO.4.10.10008111936060.5259-100000@sundial>
Message-ID: <14740.11673.869664.837436@cj42289-a.reston1.va.home.com>

Moshe Zadka writes:
 > About web package: I'm +0. Fred has to think about how to document
 > things in packages (we never had to until now). Well, who cares <wink>

  I'm not aware of any issues with documenting packages; the curses
documentation seems to be coming along nicely, and that's a package.
If you think I've missed something, we can (and should) deal with it
in the Doc-SIG.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From akuchlin@mems-exchange.org  Fri Aug 11 17:48:11 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 11 Aug 2000 12:48:11 -0400
Subject: [Python-Dev] Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008111936060.5259-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 11, 2000 at 07:40:31PM +0300
References: <20000811122608.F20646@kronos.cnri.reston.va.us> <Pine.GSO.4.10.10008111936060.5259-100000@sundial>
Message-ID: <20000811124811.G20646@kronos.cnri.reston.va.us>

On Fri, Aug 11, 2000 at 07:40:31PM +0300, Moshe Zadka wrote:
>There are now three classes
>a) SimpleCookie -- never uses pickle
>b) SerialCookie -- always uses pickle
>c) SmartCookie -- uses pickle based on old heuristic.

Ah, good; never mind, then.

>About web package: I'm +0. Fred has to think about how to document
>things in packages (we never had to until now). Well, who cares <wink>

Hmm... the curses.ascii module is already documented, so documenting
packages shouldn't be a problem.

--amk


From esr@thyrsus.com  Fri Aug 11 18:03:01 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Fri, 11 Aug 2000 13:03:01 -0400
Subject: [Python-Dev] Cookie.py
In-Reply-To: <14740.11673.869664.837436@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Fri, Aug 11, 2000 at 12:45:13PM -0400
References: <20000811122608.F20646@kronos.cnri.reston.va.us> <Pine.GSO.4.10.10008111936060.5259-100000@sundial> <14740.11673.869664.837436@cj42289-a.reston1.va.home.com>
Message-ID: <20000811130301.A7354@thyrsus.com>

Fred L. Drake, Jr. <fdrake@beopen.com>:
>   I'm not aware of any issues with documenting packages; the curses
> documentation seems to be coming along nicely, and that's a package.
> If you think I've missed something, we can (and should) deal with it
> in the Doc-SIG.

The curses documentation is basically done.  I've fleshed out the
library reference and overhauled the HOWTO.  I shipped the latter to
amk yesterday because I couldn't beat CVS into checking out py-howtos
for me.

The items left on my to-do list are drafting PEP002 and doing something
constructive about the Berkeley DB mess.  I doubt I'll get to these 
things before LinuxWorld.  Anybody else going?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

It would be thought a hard government that should tax its people one tenth 
part.
	-- Benjamin Franklin


From rushing@nightmare.com  Fri Aug 11 17:59:07 2000
From: rushing@nightmare.com (Sam Rushing)
Date: Fri, 11 Aug 2000 09:59:07 -0700 (PDT)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <284209072@toto.iv>
Message-ID: <14740.12507.587044.121462@seattle.nightmare.com>

Guido van Rossum writes:
 > Really?!?!
 > 
 > I just read the man page for send() (Red Hat linux 6.1) and it doesn't
 > mention sending fewer than all bytes at all.  In fact, while it says
 > that the return value is the number of bytes sent, it at least
 > *suggests* that it will return an error whenever not everything can be
 > sent -- even in non-blocking mode.
 > 
 > Under what circumstances can send() return a smaller number?

It's a feature of Linux... it will send() everything.  Other unixen
act in the classic fashion (it bit me on FreeBSD), and send only what
fits right into the buffer that awaits.

I think this could safely be added to the send method in
socketmodule.c.  Linux users wouldn't even notice.  IMHO this is the
kind of feature that people come to expect from programming in a HLL.
Maybe disable the feature if it's a non-blocking socket?

-Sam



From billtut@microsoft.com  Fri Aug 11 18:01:44 2000
From: billtut@microsoft.com (Bill Tutt)
Date: Fri, 11 Aug 2000 10:01:44 -0700
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka
 !)
Message-ID: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>

This is an alternative approach that we should certainly consider. We could
use ANTLR (www.antlr.org) as our parser generator, and have it generate Java
for JPython, and C++ for CPython.  This would be a good chunk of work, and
it's something I really don't have time to pursue. I don't even have time to
pursue the idea about moving keyword recognition into the lexer.

I'm just not sure if you want to bother introducing C++ into the Python
codebase solely to have one parser for CPython and JPython.

Bill

 -----Original Message-----
From: 	bwarsaw@beopen.com [mailto:bwarsaw@beopen.com] 
Sent:	Thursday, August 10, 2000 8:01 PM
To:	Guido van Rossum
Cc:	Mark Hammond; python-dev@python.org
Subject:	Re: [Python-Dev] Python keywords (was Lockstep iteration -
eureka!)


>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

    GvR> Alas, I'm not sure how easy it will be.  The parser generator
    GvR> will probably have to be changed to allow you to indicate not
    GvR> to do a resword lookup at certain points in the grammar.  I
    GvR> don't know where to start. :-(

Yet another reason why it would be nice to (eventually) merge the
parsing technology in CPython and JPython.

i-don't-wanna-work-i-jes-wanna-bang-on-my-drum-all-day-ly y'rs,
-Barry

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://www.python.org/mailman/listinfo/python-dev


From gmcm@hypernet.com  Fri Aug 11 18:04:26 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 13:04:26 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: <1246111375-124272508@hypernet.com>
References: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>
Message-ID: <1246109027-124413737@hypernet.com>

[I wrote, about send()]
> Yes, it returns the number of bytes sent. For TCP/IP it is *not*
> an error to send less than the argument. It's only an error if
> the other end dies at the time of actual send.

[and...]
> Just open a TCP/IP connection and send huge (64K or so) 
> buffers. Current Python behavior is no different than C on 
> Linux, HPUX and Windows.

And I just demonstrated it. Strangely enough, sending from Windows 
(where the docs say "send returns the total number of bytes sent, 
which can be less than the number indicated by len") it always 
sent the whole buffer, even when that was 1M on a non-
blocking socket. (I select()'ed the socket first, to make sure it 
could send something).

But from Linux, the largest buffer sent was 54,020 and typical 
was 27,740. No errors.
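The experiment is easy to reproduce; below is a minimal sketch in modern Python using a local socket pair as a stand-in for the TCP connection (exact byte counts vary by platform and buffer sizes, so none are claimed):

```python
import socket

a, b = socket.socketpair()   # connected pair of stream sockets
a.setblocking(False)         # non-blocking, as in the test above
data = b"x" * (1 << 20)      # a 1 MB payload

sent = a.send(data)          # may well be a partial write
print("sent %d of %d bytes" % (sent, len(data)))

a.close()
b.close()
```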


- Gordon


From thomas@xs4all.net  Fri Aug 11 18:04:37 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 19:04:37 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <14740.12507.587044.121462@seattle.nightmare.com>; from rushing@nightmare.com on Fri, Aug 11, 2000 at 09:59:07AM -0700
References: <284209072@toto.iv> <14740.12507.587044.121462@seattle.nightmare.com>
Message-ID: <20000811190437.C17176@xs4all.nl>

On Fri, Aug 11, 2000 at 09:59:07AM -0700, Sam Rushing wrote:

> It's a feature of Linux... it will send() everything.  Other unixen
> act in the classic fashion (it bit me on FreeBSD), and send only what
> fits right into the buffer that awaits.

Ahhh, the downsides of working on the Most Perfect OS (writing this while
our Technical Manager, a FreeBSD fan, is looking over my shoulder ;)
Thanx for clearing that up. I was slowly going insane ;-P

> I think this could safely be added to the send method in
> socketmodule.c.  Linux users wouldn't even notice.  IMHO this is the
> kind of feature that people come to expect from programming in a HLL.
> Maybe disable the feature if it's a non-blocking socket?

Hm, I'm not sure if that's the 'right' thing to do, though disabling it for
non-blocking sockets is a nice idea. It shouldn't break anything, but it
doesn't feel too 'right'. The safe option would be to add a function that
resends as long as necessary, and point everyone to that function. But I'm
not sure what the name should be -- send is just so obvious ;-) 

Perhaps you're right, perhaps we should consider this a job for the type of
VHLL that Python is, and provide the opposite function separate instead: a
non-resending send(), for those that really want it. But in the eyes of the
Python programmer, socket.send() would just magically accept and send any
message size you care to give it, so it shouldn't break things. I think ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Fri Aug 11 18:16:43 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 13:16:43 -0400 (EDT)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <20000811190437.C17176@xs4all.nl>
References: <284209072@toto.iv>
 <14740.12507.587044.121462@seattle.nightmare.com>
 <20000811190437.C17176@xs4all.nl>
Message-ID: <14740.13563.466035.477406@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Perhaps you're right, perhaps we should consider this a job for the type of
 > VHLL that Python is, and provide the opposite function separate instead: a
 > non-resending send(), for those that really want it. But in the eyes of the
 > Python programmer, socket.send() would just magically accept and send any
 > message size you care to give it, so it shouldn't break things. I think ;)

  This version receives my +1.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From gmcm@hypernet.com  Fri Aug 11 18:38:01 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 13:38:01 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: <20000811190437.C17176@xs4all.nl>
References: <14740.12507.587044.121462@seattle.nightmare.com>; from rushing@nightmare.com on Fri, Aug 11, 2000 at 09:59:07AM -0700
Message-ID: <1246107013-124534915@hypernet.com>

Thomas Wouters wrote:
> On Fri, Aug 11, 2000 at 09:59:07AM -0700, Sam Rushing wrote:
> 
> > It's a feature of Linux... it will send() everything.  Other
> > unixen act in the classic fashion (it bit me on FreeBSD), and
> > send only what fits right into the buffer that awaits.
...
> > I think this could safely be added to the send method in
> > socketmodule.c.  Linux users wouldn't even notice.  IMHO this
> > is the kind of feature that people come to expect from
> > programming in a HLL. Maybe disable the feature if it's a
> > non-blocking socket?
> 
> Hm, I'm not sure if that's the 'right' thing to do, though
> disabling it for non-blocking sockets is a nice idea. 

It's absolutely vital that it be disabled for non-blocking 
sockets. Otherwise you've just made it into a blocking socket.

With that in place, I would be neutral on the change. I still feel 
that Python is already doing the right thing. The fact that 
everyone misunderstood the man page is not a good reason to 
change Python to match that misreading.

> It
> shouldn't break anything, but it doesn't feel too 'right'. The
> safe option would be to add a function that resends as long as
> necessary, and point everyone to that function. But I'm not sure
> what the name should be -- send is just so obvious ;-) 

I've always thought that was why there was a makefile method.
 


- Gordon


From guido@beopen.com  Fri Aug 11 23:05:32 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 17:05:32 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: Your message of "Fri, 11 Aug 2000 13:04:26 -0400."
 <1246109027-124413737@hypernet.com>
References: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>
 <1246109027-124413737@hypernet.com>
Message-ID: <200008112205.RAA01218@cj20424-a.reston1.va.home.com>

[Gordon]
> [I wrote, about send()]
> > Yes, it returns the number of bytes sent. For TCP/IP it is *not*
> > an error to send less than the argument. It's only an error if
> > the other end dies at the time of actual send.
> 
> [and...]
> > Just open a TCP/IP connection and send huge (64K or so) 
> > buffers. Current Python behavior is no different than C on 
> > Linux, HPUX and Windows.
> 
> And I just demonstrated it. Strangely enough, sending from Windows 
> (where the docs say "send returns the total number of bytes sent, 
> which can be less than the number indicated by len") it always 
> sent the whole buffer, even when that was 1M on a non-
> blocking socket. (I select()'ed the socket first, to make sure it 
> could send something).
> 
> But from Linux, the largest buffer sent was 54,020 and typical 
> was 27,740. No errors.

OK.  So send() can do a partial write, but only on a stream
connection.  And most standard library code doesn't check for that
condition, nor does (probably) much other code that used the standard
library as an example.  Worse, it seems that on some platforms send()
*never* does a partial write (I couldn't reproduce it on Red Hat 6.1
Linux), so even stress testing may not reveal the lurking problem.

Possible solutions:

1. Do nothing.  Pro: least work.  Con: subtle bugs remain.

2. Fix all code that's broken in the standard library, and try to
encourage others to fix their code.  Book authors need to be
encouraged to add a warning.  Pro: most thorough.  Con: hard to fix
every occurrence, especially in 3rd party code.

3. Fix the socket module to raise an exception when fewer bytes are
sent than requested.  Pro: subtle bug exposed when it
happens.  Con: breaks code that did the right thing!

4. Fix the socket module to loop back on a partial send to send the
remaining bytes.  Pro: no more short writes.  Con: what if the first
few send() calls succeed and then an error is returned?  Note: code
that checks for partial writes will be redundant!

I'm personally in favor of (4), despite the problem with errors after
the first call.
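Option 4 amounts to the loop below (a hedged sketch in modern Python; socket objects later grew a sendall() method doing essentially this, with the same ambiguity when an error follows a partial success):

```python
import socket

def send_loop(sock, data):
    """Option 4: keep calling send() until every byte is accepted.
    If an error is raised after some bytes went out, the caller has
    no way to learn how many -- the problem noted above."""
    view = memoryview(data)
    total = 0
    while total < len(view):
        n = sock.send(view[total:])
        if n == 0:
            raise ConnectionError("socket connection broken")
        total += n
    return total

a, b = socket.socketpair()
n = send_loop(a, b"spam" * 1000)
print(n)  # 4000
```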

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Fri Aug 11 23:14:23 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 17:14:23 -0500
Subject: [Python-Dev] missing mail
Message-ID: <200008112214.RAA01257@cj20424-a.reston1.va.home.com>

Just a note to you all.  It seems I'm missing a lot of mail to
python-dev.  I noticed because I got a few mails cc'ed to me and never
saw the copy sent via the list (which normally shows up within a
minute).  I looked in the archives and there were more messages that I
hadn't seen at all (e.g. the entire Cookie thread).

I don't know where the problem is (I *am* getting other mail to
guido@python.org as well as to guido@beopen.com) and I have no time to
research this right now.  I'm going to be mostly off line this weekend
and also all of next week.  (I'll be able to read mail occasionally
but I'll be too busy to keep track of everything.)

So if you need me to reply, please cc me directly -- and please be
patient!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From huaiyu_zhu@yahoo.com  Fri Aug 11 22:13:17 2000
From: huaiyu_zhu@yahoo.com (Huaiyu Zhu)
Date: Fri, 11 Aug 2000 14:13:17 -0700 (PDT)
Subject: [Python-Dev] Re: PEP 0211: Linear Algebra Operators
In-Reply-To: <Pine.LNX.4.10.10008110936390.13482-200000@akbar.nevex.com>
Message-ID: <Pine.LNX.4.10.10008111255560.1058-100000@rocket.knowledgetrack.com>

As the PEP posted by Greg is substantially different from the one floating
around in c.l.py, I'd like to post the latter here, which covers several
weeks of discussions by dozens of discussants.  I'd like to encourage Greg
to post his version to python-list to seek comments.

I'd be grateful to hear any comments.


        Python Extension Proposal: Adding new math operators 
                Huaiyu Zhu <hzhu@users.sourceforge.net>
                         2000-08-11, draft 3


Introduction
------------

This PEP describes a proposal to add new math operators to Python, and
summarises discussions in the news group comp.lang.python on this topic.
Issues discussed here include:

1. Background.
2. Description of proposed operators and implementation issues.
3. Analysis of alternatives to new operators.
4. Analysis of alternative forms.
5. Compatibility issues
6. Description of wider extensions and other related ideas.

A substantial portion of this PEP describes ideas that do not go into the
proposed extension.  They are presented because the extension is essentially
syntactic sugar, so its adoption must be weighed against various possible
alternatives.  While many alternatives may be better in some aspects, the
current proposal appears to be overall advantageous.



Background
----------

Python provides five basic math operators, + - * / **.  (Hereafter
generically represented by "op").  They can be overloaded with new semantics
for user-defined classes.  However, for objects composed of homogeneous
elements, such as arrays, vectors and matrices in numerical computation,
there are two essentially distinct flavors of semantics.  The objectwise
operations treat these objects as points in multidimensional spaces.  The
elementwise operations treat them as collections of individual elements.
These two flavors of operations are often intermixed in the same formulas,
thereby requiring syntactical distinction.

Many numerical computation languages provide two sets of math operators.
For example, in Matlab, the ordinary op is used for objectwise operation
while .op is used for elementwise operation.  In R, op stands for
elementwise operation while %op% stands for objectwise operation.

In Python, there are other methods of representation, some of which are already
used by available numerical packages, such as

1. function:   mul(a,b)
2. method:     a.mul(b)
3. casting:    a.E*b 

In several aspects these are not as adequate as infix operators.  More
details will be shown later, but the key points are

1. Readability: Even for moderately complicated formulas, infix operators
   are much cleaner than alternatives.
2. Familiarity: Users are familiar with ordinary math operators.  
3. Implementation: New infix operators will not unduly clutter python
   syntax.  They will greatly ease the implementation of numerical packages.

While it is possible to assign current math operators to one flavor of
semantics, there are simply not enough infix operators to overload for the
other flavor.  It is also impossible to maintain visual symmetry between
these two flavors if one of them does not contain symbols for ordinary math
operators.  



Proposed extension
------------------

1.  New operators ~+ ~- ~* ~/ ~** ~+= ~-= ~*= ~/= ~**= are added to core
    Python.  They parallel the existing operators + - * / ** and the (soon
    to be added) += -= *= /= **= operators.

2.  Operator ~op retains the syntactical properties of operator op,
    including precedence.

3.  Operator ~op retains the semantical properties of operator op on
    built-in number types.  They raise a syntax error on other types.

4.  These operators are overloadable in classes with names that prepend
    "alt" to names of ordinary math operators.  For example, __altadd__ and
    __raltadd__ work for ~+ just as __add__ and __radd__ work for +.

5.  As with standard math operators, the __r*__() methods are invoked when
    the left operand does not provide the appropriate method.
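Since ~* is not yet valid syntax, the dispatch described in points 4 and 5 can only be sketched with an explicit helper; the method names follow the rule in point 4, and alt_mul plays the role the compiler would:

```python
def alt_mul(a, b):
    """Simulate dispatch for the proposed ~* operator."""
    meth = getattr(type(a), "__altmul__", None)
    if meth is not None:
        result = meth(a, b)
        if result is not NotImplemented:
            return result
    rmeth = getattr(type(b), "__raltmul__", None)
    if rmeth is not None:
        result = rmeth(b, a)
        if result is not NotImplemented:
            return result
    raise TypeError("unsupported operand types for ~*")

class Vec:
    def __init__(self, data):
        self.data = list(data)
    def __mul__(self, other):      # objectwise *: dot product
        return sum(x * y for x, y in zip(self.data, other.data))
    def __altmul__(self, other):   # would back the proposed ~*
        return Vec(x * y for x, y in zip(self.data, other.data))

print(Vec([1, 2, 3]) * Vec([4, 5, 6]))               # 32
print(alt_mul(Vec([1, 2, 3]), Vec([4, 5, 6])).data)  # [4, 10, 18]
```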

The symbol ~ is already used in Python as the unary "bitwise not" operator.
Currently it is not allowed in binary operators.  To allow it as part of
binary operators, the tokenizer would treat ~+ as one token.  This means
that the currently valid expression ~+1 would be tokenized as ~+ 1 instead
of ~ + 1.  The compiler would then treat ~+ as a composite of ~ and +.
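The current tokenization can be checked with the stdlib tokenize module:

```python
import io
import tokenize

# Today "~+1" is three tokens, so it parses as ~(+1) == -2.  Under
# the proposal, "~+" would instead be scanned as a single token.
toks = [tok.string
        for tok in tokenize.generate_tokens(io.StringIO("~+1\n").readline)
        if tok.type in (tokenize.OP, tokenize.NUMBER)]
print(toks)         # ['~', '+', '1']
print(eval("~+1"))  # -2
```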

The proposed implementation is to patch several files relating to the parser
and compiler to duplicate the functionality of existing math operators as
necessary.  All new semantics are to be implemented in the application that
overloads them, but they are recommended to be conceptually similar to
existing math operators.

It is not specified which version of operators stands for elementwise or
objectwise operations, leaving the decision to applications.

A prototype implementation already exists.



Alternatives to adding new operators
------------------------------------

Some of the leading alternatives, using multiplication as an example:

1. Use function mul(a,b).

   Advantage:
   -  No need for new operators.
  
   Disadvantage: 
   - Prefix forms are cumbersome for composite formulas.
   - Unfamiliar to the intended users.
   - Too verbose for the intended users.
   - Unable to use natural precedence rules.
 
2. Use method call a.mul(b)

   Advantage:
   - No need for new operators.
   
   Disadvantage:
   - Asymmetric for both operands.
   - Unfamiliar to the intended users.
   - Too verbose for the intended users.
   - Unable to use natural precedence rules.


3. Use "shadow classes".  For matrix class define a shadow array class
   accessible through a method .E, so that for matrices a and b, a.E*b would
   be a matrix object that is elementwise_mul(a,b). 

   Likewise define a shadow matrix class for arrays accessible through a
   method .M so that for arrays a and b, a.M*b would be an array that is
   matrixwise_mul(a,b).

   Advantage:
   - No need for new operators.
   - Benefits of infix operators with correct precedence rules.
   - Clean formulas in applications.
   
   Disadvantage:
   - Hard to maintain in current Python because ordinary numbers cannot have
     user-defined class methods.  (a.E*b will fail if a is a pure number.)
   - Difficult to implement, as this will interfere with existing method
     calls, like .T for transpose, etc.
   - Runtime overhead of object creation and method lookup.
   - The shadowing class cannot replace a true class, because it does not
     return its own type.  So there needs to be an M class with a shadow E
     class, and an E class with a shadow M class.
   - Unnatural to mathematicians.
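A minimal sketch of the shadow-class idea, using "vectors" whose ordinary * is the objectwise dot product; all names here are illustrative only:

```python
class Mat:
    def __init__(self, data):
        self.data = list(data)
    def __mul__(self, other):        # objectwise *: dot product
        return sum(x * y for x, y in zip(self.data, other.data))
    @property
    def E(self):                     # shadow object for elementwise ops
        return _Elementwise(self)

class _Elementwise:
    def __init__(self, mat):
        self.mat = mat
    def __mul__(self, other):        # a.E * b == elementwise_mul(a, b)
        return Mat(x * y for x, y in zip(self.mat.data, other.data))

a, b = Mat([1, 2]), Mat([3, 4])
print(a * b)           # 11
print((a.E * b).data)  # [3, 8]
```

Note that (2).E * b fails exactly as the first disadvantage predicts, since a plain int has no .E attribute.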

4. Implement matrixwise and elementwise classes with easy casting to the
   other class.  So matrixwise operations for arrays would be like a.M*b.M
   and elementwise operations for matrices would be like a.E*b.E.  For error
   detection a.E*b.M would raise exceptions.

   Advantage:
   - No need for new operators.
   - Similar to infix notation with correct precedence rules.

   Disadvantage:
   - Similar difficulty due to lack of user-methods for pure numbers.
   - Runtime overhead of object creation and method lookup.
   - More cluttered formulas
   - Switching of flavor of objects to facilitate operators becomes
     persistent.  This introduces long range context dependencies in
     application code that would be extremely hard to maintain.

5. Using a mini-parser to parse formulas written in an arbitrary extension
   placed inside quoted strings.

   Advantage:
   - Pure Python, without new operators

   Disadvantage:
   - The actual syntax is within the quoted string, which does not resolve
     the problem itself.
   - Introducing zones of special syntax.
   - Demanding on the mini-parser.

Among these alternatives, the first and second are used in current
applications to some extent, but have been found inadequate.  The third is
the most favored by applications, but it would incur huge implementation
complexity.  The fourth would make application code very context-sensitive
and hard to maintain.  These two alternatives also share significant
implementation difficulties due to the current type/class split.  The fifth
appears to create more problems than it would solve.



Alternative forms of infix operators
------------------------------------

Two major forms and several minor variants of new infix operators were
discussed:

1. Bracketed form

   (op)
   [op]
   {op}
   <op>
   :op:
   ~op~
   %op%

2. Meta character form

   .op
   @op
   ~op
   
   Alternatively the meta character is put after the operator.

3. Less consistent variations of these themes.  These are viewed
   unfavorably.  For completeness, some are listed here:
   - Use @/ and /@ for left and right division
   - Use [*] and (*) for outer and inner products

4. Use __call__ to simulate multiplication.
   a(b)  or (a)(b)


Criteria for choosing among the representations include:

   - No syntactical ambiguities with existing operators.  

   - Higher readability in actual formulas.  This makes the bracketed forms
     unfavorable.  See examples below.

   - Visually similar to existing math operators.

   - Syntactically simple, without blocking possible future extensions.


By these criteria the overall winner among the bracketed forms appears to
be {op}.  The clear winner among the meta character forms is ~op.  Comparing
the two, ~op appears to be the favorite overall.

Some analysis follows:

   - The .op form is ambiguous: 1.+a would be different from 1 .+a.

   - The bracket type operators are most favorable when standing alone, but
     not in formulas, as they interfere with visual parsing of parenthesis
     for precedence and function argument.  This is so for (op) and [op],
     and somewhat less so for {op} and <op>.

   - The <op> form has the potential to be confused with < > and =.

   - The @op form is not favored because @ is visually heavy (dense, more
     like a letter): a@+b is more readily read as a@ + b than a @+ b.

   - For choosing meta-characters: most existing ASCII symbols are already
     used.  The only three unused ones are @ $ ?.



Semantics of new operators
--------------------------

There are convincing arguments for using either set of operators as
objectwise or elementwise.  Some of them are listed here:

1. op for element, ~op for object

   - Consistent with current multiarray interface of Numeric package
   - Consistent with some other languages
   - Perception that elementwise operations are more natural
   - Perception that elementwise operations are used more frequently

2. op for object, ~op for element

   - Consistent with current linear algebra interface of MatPy package
   - Consistent with some other languages
   - Perception that objectwise operations are more natural
   - Perception that objectwise operations are used more frequently
   - Consistent with the current behavior of operators on lists
   - Allow ~ to be a general elementwise meta-character in future extensions.

It is generally agreed that 

   - there is no absolute reason to favor one or the other
   - it is easy to cast from one representation to another in a sizable
     chunk of code, so the other flavor of operators is always in the minority
   - there are other semantic differences that favor existence of
     array-oriented and matrix-oriented packages, even if their operators
     are unified.
   - whatever decision is taken, code using existing interfaces should
     not be broken for a very long time.

Therefore not much is lost, and much flexibility retained, if the semantic
flavors of these two sets of operators are not dictated by the core
language.  The application packages are responsible for making the most
suitable choice.  This is already the case for NumPy and MatPy which use
opposite semantics.  Adding new operators will not break this.  See also
observation after subsection 2 in the Examples below.

The issue of numerical precision was raised, but if the semantics is left to
the applications, the actual precisions should also go there.



Examples
--------

Following are examples of the actual formulas that will appear using various
operators or other representations described above.

1. The matrix inversion formula:

   - Using op for object and ~op for element:
     
     b = a.I - a.I * u / (c.I + v/a*u) * v / a

     b = a.I - a.I * u * (c.I + v*a.I*u).I * v * a.I

   - Using op for element and ~op for object:
   
     b = a.I @- a.I @* u @/ (c.I @+ v@/a@*u) @* v @/ a

     b = a.I ~- a.I ~* u ~/ (c.I ~+ v~/a~*u) ~* v ~/ a

     b = a.I (-) a.I (*) u (/) (c.I (+) v(/)a(*)u) (*) v (/) a

     b = a.I [-] a.I [*] u [/] (c.I [+] v[/]a[*]u) [*] v [/] a

     b = a.I <-> a.I <*> u </> (c.I <+> v</>a<*>u) <*> v </> a

     b = a.I {-} a.I {*} u {/} (c.I {+} v{/}a{*}u) {*} v {/} a

   Observation: For linear algebra using op for object is preferable.

   Observation: The ~op type operators look better than (op) type in
   complicated formulas.

   - using named operators

     b = a.I @sub a.I @mul u @div (c.I @add v @div a @mul u) @mul v @div a

     b = a.I ~sub a.I ~mul u ~div (c.I ~add v ~div a ~mul u) ~mul v ~div a

   Observation: Named operators are not suitable for math formulas.


2. Plotting a 3d graph

   - Using op for object and ~op for element:

     z = sin(x~**2 ~+ y~**2);    plot(x,y,z)

   - Using op for element and ~op for object:

     z = sin(x**2 + y**2);   plot(x,y,z)

    Observation: Elementwise operations with broadcasting allow a much more
    efficient implementation than Matlab's.

    Observation: Swapping the semantics of op and ~op (by casting the
    objects) is often advantageous, as the ~op operators would only appear
    in chunks of code where the other flavor dominates.


3. Using + and - with automatic broadcasting

     a = b - c;  d = a.T*a

   Observation: This would silently produce hard-to-trace bugs if one of b
   or c is a row vector while the other is a column vector.
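   A sketch of the pitfall, using a hypothetical broadcast_sub helper in
   place of the proposed automatic broadcasting (the modulo indexing is a
   shortcut that only matches the size-1 broadcasting case): subtracting a
   1x3 row from a 3x1 column silently yields a 3x3 matrix instead of a
   shape error.

```python
# Hypothetical broadcasting subtraction over nested lists: size-1 axes
# are stretched to match the other operand, as in the proposal.
def broadcast_sub(b, c):
    rows = max(len(b), len(c))
    cols = max(len(b[0]), len(c[0]))
    return [[b[i % len(b)][j % len(b[0])] - c[i % len(c)][j % len(c[0])]
             for j in range(cols)] for i in range(rows)]

row = [[1, 2, 3]]            # 1x3 row vector
col = [[10], [20], [30]]     # 3x1 column vector
a = broadcast_sub(row, col)  # silently a 3x3 matrix, not an error
```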



Miscellaneous issues:
---------------------

1. Need for the ~+ ~- operators.  The objectwise + - are important because
   they provide the sanity checks required by linear algebra.  The
   elementwise + - are important because they allow broadcasting, which is
   very efficient in applications.

2. Left division (solve).  For matrices, a*x is not necessarily equal to
   x*a.  The solution of a*x==b, denoted x=solve(a,b), is therefore
   different from the solution of x*a==b, denoted x=div(b,a).  There are
   discussions about finding a new symbol for solve.  [Background: Matlab
   uses b/a for div(b,a) and a\b for solve(a,b).]

   It is recognized that Python provides a better solution without requiring
   a new symbol: the inverse method .I can be made delayed, so that a.I*b
   and b*a.I are equivalent to Matlab's a\b and b/a.  The implementation is
   quite simple and the resulting application code clean.
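   A minimal sketch of the delayed-.I idea (all names here are illustrative,
   not MatPy's actual API): .I returns a lightweight proxy, and
   multiplication with the proxy dispatches to solve or div instead of ever
   forming an inverse.

```python
# Toy scalar "matrix" demonstrating the dispatch; with real matrices,
# solve() and div() would call a linear solver.
def solve(a, b):               # solution x of a*x == b  (Matlab's a\b)
    return b / a

def div(b, a):                 # solution x of x*a == b  (Matlab's b/a)
    return b / a

class DelayedInverse:
    def __init__(self, a):
        self.a = a
    def __mul__(self, b):      # a.I * b  ->  solve(a, b)
        return solve(self.a, b)
    def __rmul__(self, b):     # b * a.I  ->  div(b, a)
        return div(b, self.a)

class Mat:
    def __init__(self, value):
        self.value = value
    @property
    def I(self):
        return DelayedInverse(self.value)

a = Mat(4.0)
print(a.I * 8.0)   # 2.0, via solve(4.0, 8.0); no inverse is computed
print(8.0 * a.I)   # 2.0, via div(8.0, 4.0)
```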

3. Power operator.  Python's use of a**b as pow(a,b) has two perceived
   disadvantages:
   - Most mathematicians are more familiar with a^b for this purpose.
   - It results in the long augmented assignment operator ~**=.
   However, this issue is distinct from the main issue here.

4. Additional multiplication operators.  Several forms of multiplication
   are used in (multi-)linear algebra.  Most can be seen as variations of
   multiplication in the linear algebra sense (such as the Kronecker
   product).  But two forms appear to be more fundamental: the outer
   product and the inner product.  However, their specification includes
   indices, which can be either

   - associated with the operator, or
   - associated with the objects.

   The latter (the Einstein notation) is used extensively on paper, and is
   also the easier one to implement.  By implementing a tensor-with-indices
   class, a general form of multiplication would cover both outer and inner
   products, and specialize to linear algebra multiplication as well.  The
   index rule can be defined as a class method, e.g.,

     a = b.i(1,2,-1,-2) * c.i(4,-2,3,-1)   # a_ijkl = b_ijmn c_lnkm

   Therefore one objectwise multiplication is sufficient.
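   A toy 2-D sketch of the index rule (the Indexed class and the i() helper
   are made-up stand-ins for the proposed .i() method): positive labels are
   free indices, a negative label shared by both operands is summed over,
   and ordinary matrix multiplication falls out as a special case.

```python
def transpose(rows):
    return [list(col) for col in zip(*rows)]

class Indexed:
    """Toy 2-D model of the proposed b.i(...) index notation."""
    def __init__(self, rows, labels):
        self.rows, self.labels = rows, tuple(labels)

    def __mul__(self, other):
        # The summed (dummy) index is the negative label both share.
        dummy = [l for l in self.labels if l < 0 and l in other.labels][0]
        # Move the summed axis last on the left operand, first on the
        # right, then contract with an ordinary matrix product.
        a = self.rows if self.labels.index(dummy) == 1 else transpose(self.rows)
        b = other.rows if other.labels.index(dummy) == 0 else transpose(other.rows)
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                for row in a]

def i(rows, *labels):   # stand-in for the proposed .i() method
    return Indexed(rows, labels)

# a_ij = sum_m b_im c_mj  -- reduces to ordinary matrix multiplication
b = [[1, 2], [3, 4]]
c = [[5, 6], [7, 8]]
a = i(b, 1, -1) * i(c, -1, 2)
print(a)   # [[19, 22], [43, 50]]
```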

5. Bitwise operators.  Currently Python assigns six operators to bitwise
   operations: and (&), or (|), xor (^), complement (~), left shift (<<) and
   right shift (>>), with their own precedence levels.  This has some
   bearing on the new math operators in several ways:

   - The proposed new math operators use the symbol ~, which is the
     "bitwise not" operator.  This poses no compatibility problem but
     somewhat complicates implementation.

   - The symbol ^ might be better used for pow than bitwise xor.  But this
     depends on the future of the bitwise operators.  It does not
     immediately impact the proposed math operators.

   - The symbol | was suggested to be used for matrix solve.  But the new
     solution of using delayed .I is better in several ways.

   - The bitwise operators assign special syntactical and semantical
     structures to operations which could be more consistently regarded as
     elementwise lattice operators (see below).  Most of their usage could
     be replaced by a bitwise module with named functions.  Removing ~ as a
     standalone operator could also allow notations that link them to the
     logical operators (see below).  However, this issue is separate from
     the currently proposed extension.

6. Lattice operators.  It was suggested that similar operators be combined
   with the bitwise operators to represent lattice operations.  For
   example, ~| and ~& could represent "lattice or" and "lattice and".  But
   these can already be achieved by overloading the existing logical or
   bitwise operators.  On the other hand, these operations may be more
   deserving of infix operators than the built-in bitwise operations are
   (see below).

7. Alternative to special operator names used in definition,

   def "+"(a, b)      in place of       def __add__(a, b)

   This appears to require a greater syntactical change, and would only be
   useful when arbitrary additional operators are allowed.

8. There was a suggestion to provide a copy operator :=, but this can
   already be done with a = b.copy().
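   The effect of a copy operator is already expressible today (a sketch;
   b.copy() assumes an object with a copy method, as dictionaries and the
   matrix classes provide, while for plain lists a full slice does the same
   job):

```python
# Rebinding vs. copying: a copy operator := would add nothing new.
a = [1, 2, 3]
b = a          # rebinding: b and a name the same object
c = a[:]       # copying: c is an independent list
a.append(4)
print(b)       # [1, 2, 3, 4] -- b saw the change
print(c)       # [1, 2, 3]    -- c did not
```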



Impact on possible future extensions:
-------------------------------------

More general extensions could follow from the current proposal.  Although
they would be distinct proposals, they might have syntactical or semantical
implications for each other.  It is prudent to ensure that the current
extension does not restrict any future possibilities.


1. Named operators. 

The newsgroup discussion made it generally clear that infix operators are a
scarce resource in Python, not only in numerical computation but in other
fields as well.  Several proposals and ideas were put forward that would
allow infix operators to be introduced in ways similar to named functions.

The idea of named infix operators is essentially this: Choose a meta
character, say @, so that for any identifier "opname", the combination
"@opname" would be a binary infix operator, and

a @opname b == opname(a,b)

Other representations mentioned include .name, ~name~, :name:, (.name),
%name% and similar variations.  The purely bracket-based operators cannot
be used this way.

This requires a change in the parser to recognize @opname and parse it into
the same structure as a function call.  The precedence of all these
operators would have to be fixed at one level, so the implementation would
differ from that of the additional math operators, which keep the
precedence of the existing math operators.

The currently proposed extension does not limit possible future extensions
of this form in any way.
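For flavor, a well-known pure-Python approximation of named infix operators
needs no parser change at all (the Infix class below is a common recipe,
not part of this proposal; it abuses | rather than introducing @):

```python
# a |op| b  parses as  (a | op) | b ; the wrapper turns that into op(a, b).
class Infix:
    def __init__(self, f):
        self.f = f
    def __ror__(self, left):           # handles  a | op
        return Infix(lambda right: self.f(left, right))
    def __or__(self, right):           # handles  (a | op) | b
        return self.f(right)

dot = Infix(lambda a, b: sum(x * y for x, y in zip(a, b)))
print([1, 2, 3] |dot| [4, 5, 6])       # 32
```

Note that the precedence of such operators is fixed at that of |, which
mirrors the single-precedence-level limitation of named operators in
general.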


2. More general symbolic operators.

One additional form of future extension is to use a meta character together
with operator symbols (symbols that cannot be used in syntactical
structures other than operators).  Suppose @ is the meta character.  Then

      a + b,    a @+ b,    a @@+ b,  a @+- b

would all be operators with a hierarchy of precedence, defined by

   def "+"(a, b)
   def "@+"(a, b)
   def "@@+"(a, b)
   def "@+-"(a, b)

One advantage compared with named operators is greater flexibility for
precedences based on either the meta character or the ordinary operator
symbols.  This also allows operator composition.  The disadvantage is that
they look more like "line noise".  In any case, the current proposal does
not preclude this future possibility.

These kinds of future extensions may not be necessary when Unicode becomes
generally available.


3. Object/element dichotomy for other types of objects.

The distinction between objectwise and elementwise operations is meaningful
in other contexts as well, wherever an object can be conceptually regarded
as a collection of homogeneous elements.  Several examples are listed here:
   
   - List arithmetic
   
      [1, 2] + [3, 4]        # [1, 2, 3, 4]
      [1, 2] ~+ [3, 4]       # [4, 6]
                             
      ['a', 'b'] * 2         # ['a', 'b', 'a', 'b']
      'ab' * 2               # 'abab'
      ['a', 'b'] ~* 2        # ['aa', 'bb']
      [1, 2] ~* 2            # [2, 4]

      It is also consistent with the Cartesian product

      [1,2]*[3,4]            # [(1,3),(1,4),(2,3),(2,4)]
   
   - Tuple generation
   
      [1, 2, 3], [4, 5, 6]   # ([1,2, 3], [4, 5, 6])
      [1, 2, 3]~,[4, 5, 6]   # [(1,4), (2, 5), (3,6)]
   
      This has the same effect as the proposed zip function.
   
   - Bitwise operation (regarding an integer as a collection of bits, and
      removing the dissimilarity between the bitwise and logical operators)
   
      5 and 6       # 6
      5 or 6        # 5
                    
      5 ~and 6      # 4
      5 ~or 6       # 7
   
   - Elementwise format operator (with broadcasting)
   
      a = [1,2,3,4,5]
      print ["%5d "] ~% a     # print ("%5s "*len(a)) % tuple(a)
      a = [[1,2],[3,4]]
      print ["%5d "] ~~% a
   
   - Using ~ as generic elementwise meta-character to replace map
   
      ~f(a, b)      # map(f, a, b)
      ~~f(a, b)     # map(lambda *x:map(f, *x), a, b)
   
      More generally,
   
      def ~f(*x): return map(f, *x)
      def ~~f(*x): return map(~f, *x)
      ...

    - Rich comparison

      [1, 2, 3, 4]  ~< [4, 3, 2, 1]  # [1, 1, 0, 0]
   
   There are probably many other similar situations.  This general approach
   seems well suited to most of them, in place of several separate
   proposals for each (parallel and cross iteration, list comprehension,
   rich comparison, and some others).

   Of course, the semantics of "elementwise" depends on the application.
   For example, an element of a matrix is two levels down from the
   list-of-lists point of view.  In any case, the current proposal will not
   negatively impact future possibilities of this nature.
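   The list-arithmetic rows above can be prototyped today with a wrapper
   class (EList is a made-up name; real ~+ and ~* syntax would need parser
   support):

```python
# Elementwise list semantics via a subclass: + zips, * broadcasts a scalar.
class EList(list):
    def __add__(self, other):          # [1, 2] ~+ [3, 4]  ->  [4, 6]
        return EList(a + b for a, b in zip(self, other))
    def __mul__(self, n):              # ['a', 'b'] ~* 2  ->  ['aa', 'bb']
        return EList(a * n for a in self)

print(EList([1, 2]) + EList([3, 4]))   # [4, 6]
print(EList(['a', 'b']) * 2)           # ['aa', 'bb']
print(EList([1, 2]) * 2)               # [2, 4]
```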

Note that this section discusses the compatibility of the proposed
extension with possible future extensions.  The desirability or
compatibility of these other extensions themselves is specifically not
considered here.




-- 
Huaiyu Zhu                       hzhu@users.sourceforge.net
Matrix for Python Project        http://MatPy.sourceforge.net 




From trentm@ActiveState.com  Fri Aug 11 22:30:31 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Fri, 11 Aug 2000 14:30:31 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
Message-ID: <20000811143031.A13790@ActiveState.com>

These files (PCbuild/*.dsw PCbuild/*.dsp) are just normal text files.  Why,
then, do we treat them as binary files?

Would it not be preferable to have those files handled like normal text
files, i.e. check them out on Unix and they use Unix line terminators, check
them out on Windows and they use DOS line terminators?

This way you are using the native line terminator format, and the text
processing tools you use on them are less likely to screw them up.  (Anyone
see my motivation?)

Does anybody see any problems treating them as text files?  And, if not,
who knows how to get rid of the '-kb' sticky tag on those files?

Thanks,
Trent

-- 
Trent Mick
TrentM@ActiveState.com


From gmcm@hypernet.com  Fri Aug 11 22:34:54 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 17:34:54 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of d
In-Reply-To: <200008112205.RAA01218@cj20424-a.reston1.va.home.com>
References: Your message of "Fri, 11 Aug 2000 13:04:26 -0400."             <1246109027-124413737@hypernet.com>
Message-ID: <1246092799-125389828@hypernet.com>

[Guido]
> OK.  So send() can do a partial write, but only on a stream
> connection.  And most standard library code doesn't check for
> that condition, nor does (probably) much other code that used the
> standard library as an example.  Worse, it seems that on some
> platforms send() *never* does a partial write (I couldn't
> reproduce it on Red Hat 6.1 Linux), so even stress testing may
> not reveal the lurking problem.

I'm quite sure you can force it with a non-blocking socket (on 
RH 5.2  64K blocks did it - but that's across a 10baseT 
ethernet connection).
 
> Possible solutions:
> 
> 1. Do nothing.  Pro: least work.  Con: subtle bugs remain.

Yes, but they're application-level bugs, even if they're in the 
std lib. They're not socket-support level bugs.
 
> 2. Fix all code that's broken in the standard library, and try to
> encourage others to fix their code.  Book authors need to be
> encouraged to add a warning.  Pro: most thorough.  Con: hard to
> fix every occurrence, especially in 3rd party code.

As far as I can tell, Linux and Windows can't fail with the std 
lib code (it's all blocking sockets). Sam says BSDI could fail, 
and as I recall HPUX could too.
 
> 3. Fix the socket module to raise an exception when less than the
> number of bytes sent occurs.  Pro: subtle bug exposed when it
> happens.  Con: breaks code that did the right thing!
> 
> 4. Fix the socket module to loop back on a partial send to send
> the remaining bytes.  Pro: no more short writes.  Con: what if
> the first few send() calls succeed and then an error is returned?
>  Note: code that checks for partial writes will be redundant!

If you can exempt non-blocking sockets, either 3 or 4 
(preferably 4) is OK. But if you can't exempt non-blocking 
sockets, I'll scream bloody murder. It would mean you could 
not write high performance socket code in Python (which it 
currently is very good for). For one thing, you'd break Medusa.
 
> I'm personally in favor of (4), despite the problem with errors
> after the first call.

The sockets HOWTO already documents the problem. Too 
bad I didn't write it before that std lib code got written <wink>.

I still prefer leaving it alone and telling people to use makefile if 
they can't deal with it. I'll vote +0 on 4 if and only if it exempts 
nonblocking sockets.
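(For reference, option 4 amounts to a retry loop like the sketch below; the
helper name is made up, and it assumes a blocking socket, since on a
non-blocking one the loop would spin, which is exactly why they must be
exempt.)

```python
# Option 4 as a loop: keep calling send() until everything is written,
# so callers never observe a short write.  Blocking sockets only.
def send_all(sock, data):
    total = 0
    while total < len(data):
        n = sock.send(data[total:])
        if n == 0:
            raise RuntimeError("connection broken by peer")
        total += n
    return total
```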

- Gordon


From nowonder@nowonder.de  Sat Aug 12 00:48:20 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Fri, 11 Aug 2000 23:48:20 +0000
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <399490C4.F234D68A@nowonder.de>

Bill Tutt wrote:
> 
> This is an alternative approach that we should certainly consider. We could
> use ANTLR (www.antlr.org) as our parser generator, and have it generate Java
> for JPython, and C++ for CPython.  This would be a good chunk of work, and
> it's something I really don't have time to pursue. I don't even have time to
> pursue the idea about moving keyword recognition into the lexer.

<disclaimer val="I have only used ANTLR to generate Java code and not
for
 a parser but for a Java source code checker that tries to catch
possible
 runtime errors.">

ANTLR is a great tool. Unfortunately - although trying hard to change
it this morning in order to suppress keyword lookup in certain places -
I don't know anything about the interface between Python and its
parser. Is there some documentation on that (or can some divine deity
guide me with a few hints where to look in Parser/*)?

> I'm just not sure if you want to bother introducing C++ into the Python
> codebase solely to only have one parser for CPython and JPython.

Which compilers/platforms would this affect? VC++/Windows
won't be a problem, I guess; gcc mostly comes with g++,
but not always as a default. Probably more problematic.

don't-know-about-VMS-and-stuff-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From guido@beopen.com  Fri Aug 11 23:56:23 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 17:56:23 -0500
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:30:31 MST."
 <20000811143031.A13790@ActiveState.com>
References: <20000811143031.A13790@ActiveState.com>
Message-ID: <200008112256.RAA01675@cj20424-a.reston1.va.home.com>

> These files (PCbuild/*.dsw PCbuild/*.dsp) are just normal text files. Why,
> then, do we treat them as binary files.

DevStudio doesn't (or at least 5.x didn't) like it if not all lines
used CRLF terminators.

> Would it not be preferable to have those files be handled like a normal text
> files, i.e. check it out on Unix and it uses Unix line terminators, check it
> out on Windows and it uses DOS line terminators.

I think I made them binary during the period when I was mounting the
Unix source directory on a Windows machine.  I don't do that any more
and I don't know anyone who does, so I think it's okay to change.

> This way you are using the native line terminator format and text processing
> tools you use on them a less likely to screw them up. (Anyone see my
> motivation?).
> 
> Does anybody see any problems treating them as text files? And, if not, who
> knows how to get rid of the '-kb' sticky tag on those files.

cvs admin -kkv file ...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Sat Aug 12 00:00:29 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 18:00:29 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of d
In-Reply-To: Your message of "Fri, 11 Aug 2000 17:34:54 -0400."
 <1246092799-125389828@hypernet.com>
References: Your message of "Fri, 11 Aug 2000 13:04:26 -0400." <1246109027-124413737@hypernet.com>
 <1246092799-125389828@hypernet.com>
Message-ID: <200008112300.SAA01726@cj20424-a.reston1.va.home.com>

> > 4. Fix the socket module to loop back on a partial send to send
> > the remaining bytes.  Pro: no more short writes.  Con: what if
> > the first few send() calls succeed and then an error is returned?
> >  Note: code that checks for partial writes will be redundant!
> 
> If you can exempt non-blocking sockets, either 3 or 4 
> (preferably 4) is OK. But if you can't exempt non-blocking 
> sockets, I'll scream bloody murder. It would mean you could 
> not write high performance socket code in Python (which it 
> currently is very good for). For one thing, you'd break Medusa.

Of course.  Don't worry.

> > I'm personally in favor of (4), despite the problem with errors
> > after the first call.
> 
> The sockets HOWTO already documents the problem. Too 
> bad I didn't write it before that std lib code got written <wink>.
> 
> I still prefer leaving it alone and telling people to use makefile if 
> they can't deal with it. I'll vote +0 on 4 if and only if it exempts 
> nonblocking sockets.

Understood.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From fdrake@beopen.com  Fri Aug 11 23:21:18 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 18:21:18 -0400 (EDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <399490C4.F234D68A@nowonder.de>
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
 <399490C4.F234D68A@nowonder.de>
Message-ID: <14740.31838.336790.710005@cj42289-a.reston1.va.home.com>

Peter Schneider-Kamp writes:
 > parser. Is there some documentation on that (or can some divine deity
 > guide me with a few hints where to look in Parser/*)?

  Not that I'm aware of!  Feel free to write up any overviews you
think appropriate, and it can become part of the standard
documentation or be a README in the Parser/ directory.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From tim_one@email.msn.com  Sat Aug 12 01:59:22 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 11 Aug 2000 20:59:22 -0400
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <20000811143031.A13790@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFKGPAA.tim_one@email.msn.com>

[Trent Mick]
> These files (PCbuild/*.dsw PCbuild/*.dsp) are just normal text files.
> Why, then, do we treat them as binary files.
>
> Would it not be preferable to have those files be handled like
> a normal text files, i.e. check it out on Unix and it uses Unix
> line terminators, check it out on Windows and it uses DOS line
> terminators.
>
> This way you are using the native line terminator format and text
> processing tools you use on them a less likely to screw them up.
> (Anyone see my motivation?).

Not really.  They're not human-editable!  Leave 'em alone.  Keeping them in
binary mode is a clue to people that they aren't *supposed* to go mucking
with them via text processing tools.

> Does anybody see any problems treating them as text files? And,
> if not, who knows how to get rid of the '-kb' sticky tag on those
> files.

Well, whatever you did didn't work.  I'm dead in the water on Windows now --
VC6 refuses to open the new & improved .dsw and .dsp files.  I *imagine*
it's because they've got Unix line-ends now, but haven't yet checked.  Can
you fix it or back it out?




From skip@mojam.com (Skip Montanaro)  Sat Aug 12 02:07:31 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Fri, 11 Aug 2000 20:07:31 -0500 (CDT)
Subject: [Python-Dev] list comprehensions
Message-ID: <14740.41811.590487.13187@beluga.mojam.com>

I believe the latest update to the list comprehensions patch by Ping
resolved the last concern the BDFL(*) had.  As the owner of the patch, is
it my responsibility to check it in, or do I need to assign it to Guido for
final dispensation?

Skip

(*) Took me a week or so to learn what BDFL meant.  :-) I tried a lot of
"somewhat inaccurate" expansions before seeing it expanded in a message from
Barry...


From esr@thyrsus.com  Sat Aug 12 03:50:17 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Fri, 11 Aug 2000 22:50:17 -0400
Subject: [Python-Dev] Stop the presses!
Message-ID: <20000811225016.A18449@thyrsus.com>

The bad news: I've found another reproducible core-dump bug in
Python-2.0 under Linux.  Actually I found it in 1.5.2 while making
some changes to CML2, and just verified that the CVS snapshot of
Python 2.0 bombs identically.

The bad news II: it really seems to be in the Python core, not one of
the extensions like Tkinter.  My curses and Tk front ends both
segfault in the same place, the guard of an innocuous-looking if
statement.

The good news: the patch to go from code-that-runs to code-that-bombs
is pretty small and clean.  I suspect anybody who really knows the ceval
internals will be able to use it to nail this bug fairly quickly.

Damn, seems like I found the core dump in Pickle just yesterday.  This
is getting to be a habit I don't enjoy much :-(.

I'm putting together a demonstration package now.  Stay tuned; I'll 
ship it tonight.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"One of the ordinary modes, by which tyrants accomplish their purposes
without resistance, is, by disarming the people, and making it an
offense to keep arms."
        -- Constitutional scholar and Supreme Court Justice Joseph Story, 1840


From ping@lfw.org  Sat Aug 12 03:56:50 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Fri, 11 Aug 2000 19:56:50 -0700 (PDT)
Subject: [Python-Dev] Stop the presses!
In-Reply-To: <20000811225016.A18449@thyrsus.com>
Message-ID: <Pine.LNX.4.10.10008111956070.2615-100000@localhost>

On Fri, 11 Aug 2000, Eric S. Raymond wrote:
> The good news: the patch to go from code-that-runs to code-that-bombs
> is pretty small and clean.  I suspect anybody who really knows the ceval
> internals will be able to use it to nail this bug fairly quickly.
[...]
> I'm putting together a demonstration package now.  Stay tuned; I'll 
> ship it tonight.

Oooh, i can't wait.  How exciting!  Post it, post it!  :)


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu



From fdrake@beopen.com  Sat Aug 12 04:30:23 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 23:30:23 -0400 (EDT)
Subject: [Python-Dev] list comprehensions
In-Reply-To: <14740.41811.590487.13187@beluga.mojam.com>
References: <14740.41811.590487.13187@beluga.mojam.com>
Message-ID: <14740.50383.386575.806754@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:

 > I believe the latest update to the list comprehensions patch by
 > Ping resolved the last concern the BDFL(*) had.  As the owner of
 > the patch, is it my responsibility to check it in, or do I need to
 > assign it to Guido for final dispensation?

  Given the last comment added to the patch, check it in and close the
patch.  Then finish the PEP so we don't have to explain it over and
over and ...  ;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From esr@thyrsus.com  Sat Aug 12 04:56:33 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Fri, 11 Aug 2000 23:56:33 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
Message-ID: <20000811235632.A19358@thyrsus.com>

Here are the directions to reproduce the core dump.

1. Download and unpack CML2 version 0.7.6 from 
   <http://www.tuxedo.org/~esr/kbuild/>.  Change directory into it.

2. Do `cmlcompile.py kernel-rules.cml' to generate a pickled rulebase.

3. Run `make xconfig'.  Ignore the error message about the arch defconfig.

4. Set NETDEVICES on the main menu to 'y'.
5. Select the "Network Device Configuration" menu that appears below.
6. Set PPP to 'y'.
7. Select the "PPP options" menu that appears below it.
8. Set PPP_BSDCOMP to 'y'.

9. Observe and dismiss the pop-up window.  Quit the configurator using the
   "File" menu on the menu bar.

10. Now apply the attached patch.

11. Repeat steps 2-10.  

12. Observe the core dump.  If you look near cmlsystem.py:770, you'll see
    that the patch inserted two print statements that bracket the apparent
    point of the core dump.

13. To verify that this core dump is neither a Tkinter nor an ncurses problem,
    run `make menuconfig'.

14. Repeat steps 2-8.  To set symbols in the curses interface, use the arrow
    keys to select each one and type "y".  To select a menu, use the arrow
    keys and type a space or Enter when the selection bar is over the entry.

15. Observe the core dump at the same spot.

This bug bites both a stock 1.5.2 and today's CVS snapshot of 2.0.

--- cml.py	2000/08/12 03:21:40	1.97
+++ cml.py	2000/08/12 03:25:45
@@ -111,6 +111,21 @@
         res = res + self.dump()
         return res[:-1] + "}"
 
+class Requirement:
+    "A requirement, together with a message to be shown if it's violated."
+    def __init__(self, wff, message=None):
+        self.predicate = wff
+        self.message = message
+
+    def str(self):
+        return display_expression(self.predicate)
+
+    def __repr__(self):
+        if self.message:
+            return self.message
+        else:
+            return str(self)
+
 # This describes an entire configuration.
 
 class CMLRulebase:
--- cmlcompile.py	2000/08/10 16:22:39	1.131
+++ cmlcompile.py	2000/08/12 03:24:31
@@ -12,7 +12,7 @@
 
 _keywords = ('symbols', 'menus', 'start', 'unless', 'suppress',
 	    'dependent', 'menu', 'choices', 'derive', 'default',
-	    'range', 'require', 'prohibit', 'private', 'debug',
+	    'range', 'require', 'prohibit', 'explanation', 'private', 'debug',
 	    'helpfile', 'prefix', 'banner', 'icon', 'condition',
 	    'trits', 'on', 'warndepend')
 
@@ -432,7 +432,14 @@
             expr = parse_expr(input)
             if leader.type == "prohibit":
                 expr = ('==', expr, cml.n.value)
-	    requirements.append(expr)	    
+            msg = None
+            #next = input.lex_token()
+            #if next.type != 'explanation':
+            #    input.push_token(next)
+            #    continue
+            #else:
+            #    msg = input.demand("word")
+	    requirements.append(cml.Requirement(expr, msg))	    
 	    bool_tests.append((expr, input.infile, input.lineno))
 	elif leader.type == "default":
 	    symbol = input.demand("word")
@@ -746,7 +753,7 @@
             entry.visibility = resolve(entry.visibility)
 	if entry.default:
 	    entry.default = resolve(entry.default)
-    requirements = map(resolve, requirements)
+    requirements = map(lambda x: cml.Requirement(resolve(x.predicate), x.message), requirements)
     if bad_symbols:
 	sys.stderr.write("cmlcompile: %d symbols could not be resolved:\n"%(len(bad_symbols),))
 	sys.stderr.write(`bad_symbols.keys()` + "\n")
@@ -868,7 +875,7 @@
     # rule file are not consistent, it's not likely the user will make
     # a consistent one.
     for wff in requirements:
-	if not cml.evaluate(wff, debug):
+	if not cml.evaluate(wff.predicate, debug):
 	    print "cmlcompile: constraint violation:", wff
 	    errors = 1
 
--- cmlsystem.py	2000/07/25 04:24:53	1.98
+++ cmlsystem.py	2000/08/12 03:29:21
@@ -28,6 +28,7 @@
     "INCAUTOGEN":"/*\n * Automatically generated, don't edit\n */\n",
     "INCDERIVED":"/*\n * Derived symbols\n */\n",
     "ISNOTSET":"# %s is not set\n",
+    "NOTRITS":"Trit values are not currently allowed.",
     "RADIOINVIS":"    Query of choices menu %s elided, button pressed",
     "READING":"Reading configuration from %s",
     "REDUNDANT":"    Redundant assignment forced by %s", 
@@ -100,10 +101,10 @@
         "Assign constraints to their associated symbols."
         for entry in self.dictionary.values():
             entry.constraints = []
-        for wff in self.constraints:
-            for symbol in cml.flatten_expr(wff):
-                if not wff in symbol.constraints:
-                    symbol.constraints.append(wff)
+        for requirement in self.constraints:
+            for symbol in cml.flatten_expr(requirement.predicate):
+                if not requirement.predicate in symbol.constraints:
+                    symbol.constraints.append(requirement)
         if self.debug:
             cc = dc = tc = 0
             for symbol in self.dictionary.values():
@@ -436,8 +437,8 @@
         if symbol.constraints:
             self.set_symbol(symbol, value)
             for constraint in symbol.constraints:
-                if not cml.evaluate(constraint, self.debug):
-                    self.debug_emit(1, self.lang["CONSTRAINT"] % (value, symbol.name, constraint))
+                if not cml.evaluate(constraint.predicate, self.debug):
+                    self.debug_emit(1, self.lang["CONSTRAINT"] % (value, symbol.name, str(constraint)))
                     self.rollback()
                     return 0
             self.rollback()
@@ -544,7 +545,7 @@
         # be unfrozen.  Simplify constraints to remove newly frozen variables.
         # Then rerun optimize_constraint_access.
         if freeze:
-            self.constraints = map(lambda wff, self=self: self.simplify(wff), self.constraints)
+            self.constraints = map(lambda requirement, self=self: cml.Requirement(self.simplify(requirement.predicate), requirement.message), self.constraints)
             self.optimize_constraint_access()
             for entry in self.dictionary.values():
                 if self.bindcheck(entry, self.newbindings) and entry.menu and entry.menu.type=="choices":
@@ -559,7 +560,7 @@
         violations = []
         # First, check the explicit constraints.
         for constraint in self.constraints:
-            if not cml.evaluate(constraint, self.debug):
+            if not cml.evaluate(constraint.predicate, self.debug):
                 violations.append(constraint);
                 self.debug_emit(1, self.lang["FAILREQ"] % (constraint,))
         # If trits are disabled, any variable having a trit value is wrong.
@@ -570,7 +571,7 @@
                     mvalued = ('and', ('!=', entry,cml.m), mvalued)
             if mvalued != cml.y:
                mvalued = self.simplify(mvalued)
-               violations.append(('implies', ('==', self.trit_tie, cml.n), mvalued))
+               violations.append(cml.Requirement(('implies', ('==', self.trit_tie, cml.n), mvalued), self.lang["NOTRITS"]))
         return violations
 
     def set_symbol(self, symbol, value, source=None):
@@ -631,10 +632,10 @@
         dups = {}
         relevant = []
         for csym in touched:
-            for wff in csym.constraints:
-                if not dups.has_key(wff):
-                    relevant.append(wff)
-                    dups[wff] = 1
+            for requirement in csym.constraints:
+                if not dups.has_key(requirement.predicate):
+                    relevant.append(requirement.predicate)
+                    dups[requirement.predicate] = 1
         # Now loop through the constraints, simplifying out assigned
         # variables and trying to freeze more variables each time.
         # The outer loop guarantees that as long as the constraints
@@ -765,7 +766,9 @@
                     else:
                         self.set_symbol(left, cml.n.value, source)
                         return 1
+                print "Just before the core-dump point"
                 if right_mutable and left == cml.n.value:
+                    print "Just after the core-dump point"
                     if rightnonnull == cml.n.value:
                         self.debug_emit(1, self.lang["REDUNDANT"] % (wff,))
                         return 0

End of diffs,

-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders, give
orders, cooperate, act alone, solve equations, analyze a new problem,
pitch manure, program a computer, cook a tasty meal, fight efficiently,
die gallantly. Specialization is for insects.
	-- Robert A. Heinlein, "Time Enough for Love"


From nhodgson@bigpond.net.au  Sat Aug 12 05:16:54 2000
From: nhodgson@bigpond.net.au (Neil Hodgson)
Date: Sat, 12 Aug 2000 14:16:54 +1000
Subject: [Python-Dev] Winreg update
References: <3993FEC7.4E38B4F1@prescod.net>
Message-ID: <045901c00414$27a67010$8119fea9@neil>

> .... It also seems that there are a lot of people (let's
> call them "back seat coders") who have vague ideas of what they want but
> don't want to spend a bunch of time in a long discussion about registry
> arcana. Therefore I am endeavouring to make it as easy and fast to
> contribute to the discussion as possible.

   I think a lot of the registry-using people are unwilling to spend too
much energy on this because, while it looks useless, it's not really going
to be a problem so long as the low-level module is available.

> If you're one of the people who has asked for winreg in the core then
> you should respond. It isn't (IMO) sufficient to put in a hacky API to
> make your life easier. You need to give something to get something. You
> want windows registry support in the core -- fine, let's do it properly.

   Hacky API only please.

   The registry is just not important enough to have this much attention or
work.

> All you need to do is read this email and comment on whether you agree
> with the overall principle and then give your opinion on fifteen
> possibly controversial issues. The "overall principle" is to steal
> shamelessly from Microsoft's new C#/VB/OLE/Active-X/CRL API instead of
> innovating for Python. That allows us to avoid starting the debate from
> scratch. It also eliminates the feature that Mark complained about
> (which was a Python-specific innovation).

   The Microsoft.Win32.Registry* API appears to be a hacky legacy API to me.
It's there for compatibility during the transition to the
System.Configuration API. Read the blurb for ConfigManager to understand the
features of System.Configuration. It's all based on XML files. What a
surprise.

   Neil



From ping@lfw.org  Sat Aug 12 05:52:54 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Fri, 11 Aug 2000 21:52:54 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <20000811235632.A19358@thyrsus.com>
Message-ID: <Pine.LNX.4.10.10008112149280.2615-100000@localhost>

On Fri, 11 Aug 2000, Eric S. Raymond wrote:
> Here are the directions to reproduce the core dump.

I have successfully reproduced the core dump.


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu



From ping@lfw.org  Sat Aug 12 05:57:02 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Fri, 11 Aug 2000 21:57:02 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008112149280.2615-100000@localhost>
Message-ID: <Pine.LNX.4.10.10008112156180.2615-100000@localhost>

On Fri, 11 Aug 2000, Ka-Ping Yee wrote:
> I have successfully reproduced the core dump.

I'm investigating.  Top of the stack looks like:

#0  0x40061e39 in __pthread_lock (lock=0x0, self=0x40067f20) at spinlock.c:41
#1  0x4005f8aa in __pthread_mutex_lock (mutex=0xbfe0277c) at mutex.c:92
#2  0x400618cb in __flockfile (stream=0xbfe02794) at lockfile.c:32
#3  0x400d2955 in _IO_vfprintf (s=0xbfe02794, 
    format=0x80d1500 "'%.50s' instance has no attribute '%.400s'", 
    ap=0xbfe02a54) at vfprintf.c:1041
#4  0x400e00b3 in _IO_vsprintf (string=0xbfe02850 "Ø/à¿", 
    format=0x80d1500 "'%.50s' instance has no attribute '%.400s'", 
    args=0xbfe02a54) at iovsprintf.c:47
#5  0x80602c5 in PyErr_Format (exception=0x819783c, 
    format=0x80d1500 "'%.50s' instance has no attribute '%.400s'")
    at errors.c:377
#6  0x806eac4 in instance_getattr1 (inst=0x84ecdd4, name=0x81960a8)
    at classobject.c:594
#7  0x806eb97 in instance_getattr (inst=0x84ecdd4, name=0x81960a8)
    at classobject.c:639
#8  0x807b445 in PyObject_GetAttrString (v=0x84ecdd4, name=0x80d306b "__str__")
    at object.c:595
#9  0x807adf8 in PyObject_Str (v=0x84ecdd4) at object.c:291
#10 0x8097d1e in builtin_str (self=0x0, args=0x85adc3c) at bltinmodule.c:2034
#11 0x805a490 in call_builtin (func=0x81917e0, arg=0x85adc3c, kw=0x0)
    at ceval.c:2369


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu



From tim_one@email.msn.com  Sat Aug 12 07:29:42 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 02:29:42 -0400
Subject: [Python-Dev] list comprehensions
In-Reply-To: <14740.41811.590487.13187@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFPGPAA.tim_one@email.msn.com>

[Skip Montanaro]
> I believe the latest update to the list comprehensions patch by Ping
> resolved the last concern the BDFL(*) had.  As the owner of the
> patch is it my responsibility to check it in or do I need to assign
> it to Guido for final dispensation?

As the owner of the listcomp PEP, I both admonish you to wait until the PEP
is complete, and secretly encourage you to check it in anyway (unlike most
PEPs, this one is pre-approved no matter what I write <0.5 wink> -- better
to get the code out there now!  if anything changes due to the PEP, should
be easy to twiddle).

acting-responsibly-despite-appearances-ly y'rs  - tim




From tim_one@email.msn.com  Sat Aug 12 08:32:17 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 03:32:17 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <20000811235632.A19358@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>

[Eric S. Raymond, with a lot of code that dies in a lot of pain]

Eric, as I'm running on a Windows laptop right now, there's not much I can
do to try to run this code.  However, something struck me in your patch "by
eyeball", and here's a self-contained program that crashes under Windows:

# This is esr's new class.
class Requirement:
    "A requirement, together with a message to be shown if it's violated."
    def __init__(self, wff, message=None):
        self.predicate = wff
        self.message = message

    def str(self):
        return display_expression(self.predicate)

    def __repr__(self):
        if self.message:
            return self.message
        else:
            return str(self)

# This is my driver.
r = Requirement("trust me, I'm a wff!")
print r


Could that be related to your problem?  I think you really wanted to name
"str" as "__str__" in this class (or if not, comment in extreme detail why
you want to confuse the hell out of the reader <wink>).  As is, my

    print r

attempts to look up r.__str__, which isn't found, so Python falls back
to using r.__repr__.  That *is* found, but r.message is None, so
Requirement.__repr__ executes

    return str(self)

And then we go thru the whole "no __str__" -> "try __repr__" -> "message is
None" -> "return str(self)" business again, and end up with unbounded
recursion.  The top of the stacktrace Ping posted *does* show that the code
is failing to find a "__str__" attr, so that's consistent with the scenario
described here.
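
The rename Tim suggests can be checked in isolation. Here is a minimal
sketch of the repaired class, with display_expression stubbed out (the
stub is my invention; the real one lives in Eric's cml code):

```python
# Hypothetical stub: the real display_expression is in esr's cml module.
def display_expression(wff):
    return "<wff %r>" % (wff,)

class Requirement:
    "A requirement, together with a message to be shown if it's violated."
    def __init__(self, wff, message=None):
        self.predicate = wff
        self.message = message

    def __str__(self):                  # was: def str(self) -- the bug
        return display_expression(self.predicate)

    def __repr__(self):
        if self.message:
            return self.message
        else:
            return str(self)            # now finds __str__ and terminates

r = Requirement("trust me, I'm a wff!")
print(repr(r))                          # no unbounded recursion this time
```

With the method named str instead of __str__, str(self) falls back to
__repr__ and the two chase each other forever; the one-line rename
breaks the cycle.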

If this is the problem, note that ways to detect such kinds of unbounded
recursion have been discussed here within the last week.  You're a clever
enough fellow that I have to suspect you concocted this test case as a way
to support the more extreme of those proposals without just saying "+1"
<wink>.




From ping@lfw.org  Sat Aug 12 09:09:53 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Sat, 12 Aug 2000 01:09:53 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008112156180.2615-100000@localhost>
Message-ID: <Pine.LNX.4.10.10008120107140.2615-100000@localhost>

On Fri, 11 Aug 2000, Ka-Ping Yee wrote:
> On Fri, 11 Aug 2000, Ka-Ping Yee wrote:
> > I have successfully reproduced the core dump.
> 
> I'm investigating.  Top of the stack looks like:

This chunk of stack repeats lots and lots of times.
The problem is due to infinite recursion in your __repr__ routine:

    class Requirement:
        "A requirement, together with a message to be shown if it's violated."
        def __init__(self, wff, message=None):
            self.predicate = wff
            self.message = message

        def str(self):
            return display_expression(self.predicate)

        def __repr__(self):
            if self.message:
                return self.message
            else:
                return str(self)

Notice that Requirement.__repr__ calls str(self), which triggers
Requirement.__repr__ again because there is no __str__ method.

If i change "def str(self)" to "def __str__(self)", the problem goes
away and everything works properly.

With a reasonable stack depth limit in place, this would produce
a run-time error rather than a segmentation fault.
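
For the record, that is exactly what a depth limit buys you; a small
sketch of the same recursion shape, using today's recursion-limit
machinery (names like RecursionError are the modern spelling, not
1.6-era Python):

```python
import sys

class Looper:
    # Same shape as the buggy Requirement: no __str__, and a
    # __repr__ that calls str(self), which falls back to __repr__.
    def __repr__(self):
        return str(self)

sys.setrecursionlimit(2000)        # a "reasonable stack depth limit"
try:
    repr(Looper())
except RecursionError:             # raised instead of dumping core
    print("caught runaway recursion")
```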


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu



From ping@lfw.org  Sat Aug 12 09:22:40 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Sat, 12 Aug 2000 01:22:40 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>
Message-ID: <Pine.LNX.4.10.10008120117500.2615-100000@localhost>

On Sat, 12 Aug 2000, Tim Peters wrote:
> Could that be related to your problem?  I think you really wanted to name
> "str" as "__str__" in this class

Oops.  I guess i should have just read the code before going
through the whole download procedure.

Uh, yeah.  What he said.  :)  That wise man with the moustache over there.


One thing i ran into as a result of trying to run it under the
debugger, though: turning on cursesmodule was slightly nontrivial.
There's no cursesmodule.c; it's _cursesmodule.c instead; but
Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
wasn't sufficient; i had to edit and insert the underscores by hand
to get curses to work.


-- ?!ng



From effbot@telia.com  Sat Aug 12 10:12:19 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 11:12:19 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of data sent.
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF>              <20000811162109.I17171@xs4all.nl>  <200008111556.KAA05068@cj20424-a.reston1.va.home.com>
Message-ID: <00d301c0043d$7eb0b540$f2a6b5d4@hagrid>

guido wrote:
> > Indeed. I didn't actually check the story, since Guido was apparently
> > convinced by its validity.
>
> I wasn't convinced!  I wrote "is this true?" in my message!!!
>
> > I was just operating under the assumption that
> > send() did behave like write(). I won't blindly believe Guido anymore! :)
>
> I believe they do behave the same: in my mind, write() doesn't write
> fewer bytes than you tell it either!  (Except maybe to a tty device
> when interrupted by a signal???)

SUSv2 again:

    If a write() requests that more bytes be written than there
    is room for (for example, the ulimit or the physical end of a
    medium), only as many bytes as there is room for will be
    written. For example, suppose there is space for 20 bytes
    more in a file before reaching a limit. A write of 512 bytes
    will return 20. The next write of a non-zero number of bytes
    will give a failure return (except as noted below)  and the
    implementation will generate a SIGXFSZ signal for the thread.

    If write() is interrupted by a signal before it writes any data,
    it will return -1 with errno set to [EINTR].

    If write() is interrupted by a signal after it successfully writes
    some data, it will return the number of bytes written.

sockets are an exception:

    If fildes refers to a socket, write() is equivalent to send() with
    no flags set.

fwiw, if "send" may send less than the full buffer in blocking
mode on some platforms (despite what the specification implies),
it's quite interesting that *nobody* has ever noticed before...
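
Whichever way the platforms behave, the defensive pattern on the
caller's side is to loop until the whole buffer is accepted; a minimal
sketch (socket objects grew a sendall method for exactly this, so treat
it as illustration, not a recommendation to hand-roll your own):

```python
def send_all(sock, data):
    # send() returns the number of bytes actually queued, which may
    # be less than len(data); keep re-sending the remainder.
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise RuntimeError("socket connection broken")
        total += sent
    return total
```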

</F>



From effbot@telia.com  Sat Aug 12 10:13:45 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 11:13:45 +0200
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
References: <20000811143031.A13790@ActiveState.com>  <200008112256.RAA01675@cj20424-a.reston1.va.home.com>
Message-ID: <00d601c0043d$a2e66c20$f2a6b5d4@hagrid>

guido wrote:
> I think I made them binary during the period when I was mounting the
> Unix source directory on a Windows machine.  I don't do that any more
> and I don't know anyone who does

we do.

trent wrote:
> > Does anybody see any problems treating them as text files?

developer studio 5.0 does:

    "This makefile was not generated by Developer Studio"

    "Continuing will create a new Developer Studio project to
    wrap this makefile. You will be prompted to save after the
    new project has been created".

    "Do you want to continue"

    (click yes)

    "The options file (.opt) for this workspace specified a project
    configuration "... - Win32 Alpha Release" that no longer exists.
    The configuration will be set to "... - Win32 Debug"

    (click OK)

    (click build)

    "MAKE : fatal error U1052: file '....mak' not found"

</F>



From thomas@xs4all.net  Sat Aug 12 10:21:19 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 11:21:19 +0200
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008120117500.2615-100000@localhost>; from ping@lfw.org on Sat, Aug 12, 2000 at 01:22:40AM -0700
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost>
Message-ID: <20000812112119.C14470@xs4all.nl>

On Sat, Aug 12, 2000 at 01:22:40AM -0700, Ka-Ping Yee wrote:

> One thing i ran into as a result of trying to run it under the
> debugger, though: turning on cursesmodule was slightly nontrivial.
> There's no cursesmodule.c; it's _cursesmodule.c instead; but
> Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
> wasn't sufficient; i had to edit and insert the underscores by hand
> to get curses to work.

You should update your Setup file, then ;) Compare it with Setup.in and see
what else changed since the last time you configured Python.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From martin@loewis.home.cs.tu-berlin.de  Sat Aug 12 10:29:25 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 12 Aug 2000 11:29:25 +0200
Subject: [Python-Dev] Processing CVS diffs
Message-ID: <200008120929.LAA01434@loewis.home.cs.tu-berlin.de>

While looking at the comments for Patch #100654, I noticed a complaint
about the patch being a CVS diff, which is not easily processed by
patch.

There is a simple solution to that: process the patch with the script
below. It will change the patch in-place, and it works well for me
even though it is written in the Evil Language :-)

Martin

#! /usr/bin/perl -wi
# Propagate the full pathname from the Index: line in CVS output into
# the diff itself so that patch will use it.
#  Thrown together by Jason Merrill <jason@cygnus.com>

while (<>)
{
  if (/^Index: (.*)/) 
    {
      $full = $1;
      print;
      for (1..7)
	{
	  $_ = <>;
	  s/ [^\t]+\t/ $full\t/;
	  print;
	}
    }
  else
    {
      print;
    }
}




From mal@lemburg.com  Sat Aug 12 10:48:25 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 12 Aug 2000 11:48:25 +0200
Subject: [Python-Dev] Python Announcements ???
Message-ID: <39951D69.45D01703@lemburg.com>

Could someone at BeOpen please check what happened to the
python-announce mailing list ?!

Messages to that list don't seem to show up anywhere and I've
been getting strange reports from the mail manager software in
the past when I've tried to post there.

Also, what happened to the idea of hooking that list onto
the c.l.p.a newsgroup? I don't remember the details of
how this is done (it had something to do with adding an
Approved: header), but this would be very helpful.

The Python community currently has no proper way of
announcing new projects, software or gigs. A post to
c.l.p, which has grown into a >4K posts/month list, does
not have the same momentum that pushing it through
c.l.p.a had in the past.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From just@letterror.com  Sat Aug 12 12:51:31 2000
From: just@letterror.com (Just van Rossum)
Date: Sat, 12 Aug 2000 12:51:31 +0100
Subject: [Python-Dev] Preventing recursion core dumps
Message-ID: <l03102803b5bae5eb2fe1@[193.78.237.125]>

(Sorry for the late reply, that's what you get when you don't Cc me...)

Vladimir Marangozov wrote:
> [Just]
> > Gordon, how's that Stackless PEP coming along?
> > Sorry, I couldn't resist ;-)
>
> Ah, in this case, we'll get a memory error after filling the whole disk
> with frames <wink>

No matter how much we wink to each other, that was a cheap shot; especially
since it isn't true: Stackless has a MAX_RECURSION_DEPTH value. Someone who
has studied Stackless "in detail" (your words ;-) should know that.

Admittedly, that value is set way too high in the last stackless release
(123456 ;-), but that doesn't change the principle that Stackless could
solve the problem discussed in this thread in a reliable and portable
manner.

Of course there'd be work to do:
- MAX_RECURSION_DEPTH should be changeable at runtime
- __str__ (and a bunch of others) isn't yet stackless
- ...

But the hardest task seems to be to get rid of the hostility and prejudices
against Stackless :-(

Just




From esr@thyrsus.com  Sat Aug 12 12:22:55 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:22:55 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 03:32:17AM -0400
References: <20000811235632.A19358@thyrsus.com> <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>
Message-ID: <20000812072255.C20109@thyrsus.com>

Tim Peters <tim_one@email.msn.com>:
> If this is the problem, note that ways to detect such kinds of unbounded
> recursion have been discussed here within the last week.  You're a clever
> enough fellow that I have to suspect you concocted this test case as a way
> to support the more extreme of those proposals without just saying "+1"
> <wink>.

I may be that clever, but I ain't that devious.  I'll try the suggested
fix.  Very likely you're right, though the location of the core dump
is peculiar if this is the case.  It's inside bound_from_constraint(),
whereas in your scenario I'd expect it to be in the Requirement method code.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The day will come when the mystical generation of Jesus by the Supreme
Being as his father, in the womb of a virgin, will be classed with the
fable of the generation of Minerva in the brain of Jupiter.
	-- Thomas Jefferson, 1823


From esr@thyrsus.com  Sat Aug 12 12:34:19 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:34:19 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008120117500.2615-100000@localhost>; from ping@lfw.org on Sat, Aug 12, 2000 at 01:22:40AM -0700
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost>
Message-ID: <20000812073419.D20109@thyrsus.com>

Ka-Ping Yee <ping@lfw.org>:
> One thing i ran into as a result of trying to run it under the
> debugger, though: turning on cursesmodule was slightly nontrivial.
> There's no cursesmodule.c; it's _cursesmodule.c instead; but
> Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
> wasn't sufficient; i had to edit and insert the underscores by hand
> to get curses to work.

Your Setup is out of date.

But this reminds me.  There's way too much hand-hacking in the Setup
mechanism.  It wouldn't be hard to enhance the Setup format to support
#if/#endif so that config.c generation could take advantage of
configure tests.  That way, Setup could have constructs in it like
this:

#if defined(CURSES)
#if defined(linux)
_curses _cursesmodule.c -lncurses
#else
_curses _cursesmodule.c -lcurses -ltermcap
#endif
#endif

I'm willing to do and test this.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The right of the citizens to keep and bear arms has justly been considered as
the palladium of the liberties of a republic; since it offers a strong moral
check against usurpation and arbitrary power of rulers; and will generally,
even if these are successful in the first instance, enable the people to resist
and triumph over them.
        -- Supreme Court Justice Joseph Story of the John Marshall Court


From esr@snark.thyrsus.com  Sat Aug 12 12:44:54 2000
From: esr@snark.thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:44:54 -0400
Subject: [Python-Dev] Core dump is dead, long live the core dump
Message-ID: <200008121144.HAA20230@snark.thyrsus.com>

Tim's diagnosis of fatal recursion was apparently correct; apologies,
all.  This still leaves the question of why the core dump happened so
far from the actual scene of the crime.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

In every country and in every age, the priest has been hostile to
liberty. He is always in alliance with the despot, abetting his abuses
in return for protection to his own.
	-- Thomas Jefferson, 1814


From mal@lemburg.com  Sat Aug 12 12:36:14 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 12 Aug 2000 13:36:14 +0200
Subject: [Python-Dev] Directions for reproducing the coredump
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost> <20000812073419.D20109@thyrsus.com>
Message-ID: <399536AE.309D456C@lemburg.com>

"Eric S. Raymond" wrote:
> 
> Ka-Ping Yee <ping@lfw.org>:
> > One thing i ran into as a result of trying to run it under the
> > debugger, though: turning on cursesmodule was slightly nontrivial.
> > There's no cursesmodule.c; it's _cursesmodule.c instead; but
> > Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
> > wasn't sufficient; i had to edit and insert the underscores by hand
> > to get curses to work.
> 
> Your Setup is out of date.
> 
> But this reminds me.  There's way too much hand-hacking in the Setup
> mechanism.  It wouldn't be hard to enhance the Setup format to support
> #if/#endif so that config.c generation could take advantage of
> configure tests.  That way, Setup could have constructs in it like
> this:
> 
> #if defined(CURSES)
> #if defined(linux)
> _curses _cursesmodule.c -lncurses
> #else
> _curses _cursesmodule.c -lcurses -ltermcap
> #endif
> #endif
> 
> I'm willing to do and test this.

This would be a *cool* thing to have :-) 

Definitely +1 from me if it's done in a portable way.

(Not sure how you would get this to run without the C preprocessor
though -- and Python's Makefile doesn't provide any information
on how to call it in a platform independent way. It's probably
cpp on most platforms, but you never know...)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From esr@thyrsus.com  Sat Aug 12 12:50:57 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:50:57 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <399536AE.309D456C@lemburg.com>; from mal@lemburg.com on Sat, Aug 12, 2000 at 01:36:14PM +0200
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost> <20000812073419.D20109@thyrsus.com> <399536AE.309D456C@lemburg.com>
Message-ID: <20000812075056.A20245@thyrsus.com>

M.-A. Lemburg <mal@lemburg.com>:
> > But this reminds me.  There's way too much hand-hacking in the Setup
> > mechanism.  It wouldn't be hard to enhance the Setup format to support
> > #if/#endif so that config.c generation could take advantage of
> > configure tests.  That way, Setup could have constructs in it like
> > this:
> > 
> > #if defined(CURSES)
> > #if defined(linux)
> > _curses _cursesmodule.c -lncurses
> > #else
> > _curses _cursesmodule.c -lcurses -ltermcap
> > #endif
> > #endif
> > 
> > I'm willing to do and test this.
> 
> This would be a *cool* thing to have :-) 
> 
> Definitely +1 from me if it's done in a portable way.
> 
> (Not sure how you would get this to run without the C preprocessor
> though -- and Python's Makefile doesn't provide any information
> on how to call it in a platform independent way. It's probably
> cpp on most platforms, but you never know...)

Ah.  The Makefile may not provide this information -- but I believe 
configure can be made to!
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Ideology, politics and journalism, which luxuriate in failure, are
impotent in the face of hope and joy.
	-- P. J. O'Rourke


From thomas@xs4all.net  Sat Aug 12 12:53:46 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 13:53:46 +0200
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <20000812073419.D20109@thyrsus.com>; from esr@thyrsus.com on Sat, Aug 12, 2000 at 07:34:19AM -0400
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost> <20000812073419.D20109@thyrsus.com>
Message-ID: <20000812135346.D14470@xs4all.nl>

On Sat, Aug 12, 2000 at 07:34:19AM -0400, Eric S. Raymond wrote:

> But this reminds me.  There's way too much hand-hacking in the Setup
> mechanism.  It wouldn't be hard to enhance the Setup format to support
> #if/#endif so that config.c generation could take advantage of
> configure tests.  That way, Setup could have constructs in it like
> this:

> #if defined(CURSES)
> #if defined(linux)
> _curses _cursesmodule.c -lncurses
> #else
> _curses _cursesmodule.c -lcurses -ltermcap
> #endif
> #endif

Why go through that trouble ? There already is a 'Setup.config' file, which
is used to pass Setup info for the thread and gc modules. It can easily be
extended to include information on all other locatable modules, leaving
'Setup' or 'Setup.local' for people who have their modules in strange
places. What would be a cool idea as well would be a configuration tool. Not
as complex as the linux kernel config tool, but something to help people
select the modules they want. Though it might not be necessary if configure
finds out what modules can be safely built.

I'm willing to write some autoconf tests to locate modules as well, if this
is deemed a good idea.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Sat Aug 12 12:54:31 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 13:54:31 +0200
Subject: [Python-Dev] Core dump is dead, long live the core dump
In-Reply-To: <200008121144.HAA20230@snark.thyrsus.com>; from esr@snark.thyrsus.com on Sat, Aug 12, 2000 at 07:44:54AM -0400
References: <200008121144.HAA20230@snark.thyrsus.com>
Message-ID: <20000812135431.E14470@xs4all.nl>

On Sat, Aug 12, 2000 at 07:44:54AM -0400, Eric S. Raymond wrote:

> Tim's diagnosis of fatal recursion was apparently correct; apologies,
> all.  This still leaves the question of why the core dump happened so
> far from the actual scene of the crime.

Blame it on your stack :-) It could have been that the appropriate error was
generated, and that the stack overflowed *again* during the processing of
that error :-)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gmcm@hypernet.com  Sat Aug 12 14:16:47 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Sat, 12 Aug 2000 09:16:47 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of d
In-Reply-To: <00d301c0043d$7eb0b540$f2a6b5d4@hagrid>
Message-ID: <1246036275-128789882@hypernet.com>

Fredrik wrote:

> fwiw, if "send" may send less than the full buffer in blocking
> mode on some platforms (despite what the specification implies),
> it's quite interesting that *nobody* has ever noticed before...

I noticed, but I expected it, so had no reason to comment. The 
Linux man pages are the only specification of send that I've 
seen that don't make a big deal out it. And clearly I'm not the 
only one, otherwise there would never have been a bug report 
(he didn't experience it, he just noticed sends without checks).

- Gordon


From guido@beopen.com  Sat Aug 12 15:48:11 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sat, 12 Aug 2000 09:48:11 -0500
Subject: [Python-Dev] Re: PEP 0211: Linear Algebra Operators
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:13:17 MST."
 <Pine.LNX.4.10.10008111255560.1058-100000@rocket.knowledgetrack.com>
References: <Pine.LNX.4.10.10008111255560.1058-100000@rocket.knowledgetrack.com>
Message-ID: <200008121448.JAA03545@cj20424-a.reston1.va.home.com>

> As the PEP posted by Greg is substantially different from the one floating
> around in c.l.py, I'd like to post the latter here, which covers several
> weeks of discussions by dozens of discussants.  I'd like to encourage Greg
> to post his version to python-list to seek comments.

A procedural suggestion: let's have *two* PEPs, one for Huaiyu's
proposal, one for Greg's.  Each PEP should in its introduction briefly
mention the other as an alternative.  I don't generally recommend that
alternative proposals develop separate PEPs, but in this case the
potential impact on Python is so large that I think it's the only way
to proceed that doesn't give one group an unfair advantage over the
other.

I haven't had the time to read either proposal yet, so I can't comment
on their (in)compatibility, but I would surmise that at most one can
be accepted -- with the emphasis on *at most* (possibly neither is
ready for prime time), and with the understanding that each proposal
may be able to borrow ideas or code from the other anyway.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Vladimir.Marangozov@inrialpes.fr  Sat Aug 12 15:21:50 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 16:21:50 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <20000811103701.A25386@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Aug 11, 2000 10:37:01 AM
Message-ID: <200008121421.QAA20095@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Aug 11, 2000 at 05:58:45PM +0200, Vladimir Marangozov wrote:
> > On a second thought, I think this would be a bad idea, even if
> > we manage to tweak the stack limits on most platforms. We would
> > lose determinism = lose control -- no good. A depth-first algorithm
> > may succeed on one machine, and fail on another.
> 
> So what?

Well, the point is that people like deterministic behavior and tend to
really dislike unpredictable systems, especially when the lack of
determinism is due to platform heterogeneity.
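
(As a concrete anchor for "deterministic behavior": today's CPython exposes the
recursion guard as a runtime-tunable limit. A sketch, assuming a modern
interpreter where exceeding the limit raises RecursionError rather than
crashing; `depth` is just a hypothetical helper that measures how deep we may go:)

```python
import sys

def depth(n=0):
    # Recurse until the interpreter's guard trips; the except clause
    # turns the RecursionError into a measured depth.
    try:
        return depth(n + 1)
    except RecursionError:
        return n

old = sys.getrecursionlimit()
try:
    sys.setrecursionlimit(100)
    shallow = depth()
    sys.setrecursionlimit(1000)
    deep = depth()
finally:
    sys.setrecursionlimit(old)

# The limit, not the platform's C stack, decides how deep we get --
# the same result on every machine.
assert shallow < deep
```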

> We don't limit the amount of memory you can allocate on all
> machines just because your program may run out of memory on some
> machine.

We don't because we can't do it portably. But if we could, this would
have been a very useful setting -- there has been demand for Python on
embedded systems where memory size is a constraint. And note that after
the malloc cleanup, we *can* do this with a specialized Python malloc
(control how much memory is allocated from Python).

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From effbot@telia.com  Sat Aug 12 15:29:57 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 16:29:57 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
References: <1246036275-128789882@hypernet.com>
Message-ID: <001301c00469$cb380fe0$f2a6b5d4@hagrid>

gordon wrote:
> Fredrik wrote:
>
> > fwiw, if "send" may send less than the full buffer in blocking
> > mode on some platforms (despite what the specification implies),
> > it's quite interesting that *nobody* has ever noticed before...
>
> I noticed, but I expected it, so had no reason to comment. The
> Linux man pages are the only specification of send that I've
> seen that don't make a big deal out of it. And clearly I'm not the
> only one, otherwise there would never have been a bug report
> (he didn't experience it, he just noticed sends without checks).

I meant "I wonder why my script fails" rather than "that piece
of code looks funky".

:::

fwiw, I still haven't found a single reference (SUSv2 spec, man-
pages, Stevens, the original BSD papers) that says that a blocking
socket may do anything but send all the data, or fail.

if that's true, I'm not sure we really need to "fix" anything here...
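
(for the record, the defensive pattern the bug report asks for is just a loop
over send() -- a sketch, with `send_all` as a hypothetical name; the stdlib
ships this behaviour as socket.sendall:)

```python
import socket

def send_all(sock, data):
    # send() may accept fewer bytes than offered, so keep going
    # until everything is on the wire.
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise RuntimeError("connection closed")
        total += sent
    return total

# Demo on a local socket pair: everything sent arrives.
a, b = socket.socketpair()
send_all(a, b"x" * 1000)
received = b""
while len(received) < 1000:
    received += b.recv(4096)
assert received == b"x" * 1000
```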

</F>



From Vladimir.Marangozov@inrialpes.fr  Sat Aug 12 15:46:40 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 16:46:40 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <l03102803b5bae5eb2fe1@[193.78.237.125]> from "Just van Rossum" at Aug 12, 2000 12:51:31 PM
Message-ID: <200008121446.QAA20112@python.inrialpes.fr>

Just van Rossum wrote:
> 
> (Sorry for the late reply, that's what you get when you don't Cc me...)
> 
> Vladimir Marangozov wrote:
> > [Just]
> > > Gordon, how's that Stackless PEP coming along?
> > > Sorry, I couldn't resist ;-)
> >
> > Ah, in this case, we'll get a memory error after filling the whole disk
> > with frames <wink>
> 
> No matter how much we wink to each other, that was a cheap shot;

I can't say that yours was more expensive <wink>.

> especially since it isn't true: Stackless has a MAX_RECURSION_DEPTH value.
> Someone who has studied Stackless "in detail" (your words ;-) should know
> that.

As I said - it has been years ago. Where's that PEP draft?
Please stop dreaming about hostility <wink>. I am all for Stackless, but
the implementation wasn't mature enough at the time when I looked at it.
Now I hear it has evolved and does not allow graph cycles. Okay, good --
tell me more in a PEP and submit a patch.

> 
> Admittedly, that value is set way too high in the last stackless release
> (123456 ;-), but that doesn't change the principle that Stackless could
> solve the problem discussed in this thread in a reliable and portable
> manner.

Indeed, if it didn't reduce the stack dependency in a portable way, it
couldn't have carried the label "Stackless" for years. BTW, I'm more
interested in the stackless aspect than the call/cc aspect of the code.

> 
> Of course there's be work to do:
> - MAX_RECURSION_DEPTH should be changeable at runtime
> - __str__ (and a bunch of others) isn't yet stackless
> - ...

Tell me more in the PEP.

> 
> But the hardest task seems to be to get rid of the hostility and prejudices
> against Stackless :-(

Dream on <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From skip@mojam.com  Sat Aug 12 18:56:23 2000
From: skip@mojam.com (Skip Montanaro)
Date: Sat, 12 Aug 2000 12:56:23 -0500 (CDT)
Subject: [Python-Dev] Include/graminit.h, Python/graminit.c obsolete?
Message-ID: <14741.36807.101870.221890@beluga.mojam.com>

With Thomas's patch to the top-level Makefile that makes Grammar a more
first-class directory, are the generated graminit.h and graminit.c files
needed any longer?

Skip


From guido@beopen.com  Sat Aug 12 20:12:23 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sat, 12 Aug 2000 14:12:23 -0500
Subject: [Python-Dev] Include/graminit.h, Python/graminit.c obsolete?
In-Reply-To: Your message of "Sat, 12 Aug 2000 12:56:23 EST."
 <14741.36807.101870.221890@beluga.mojam.com>
References: <14741.36807.101870.221890@beluga.mojam.com>
Message-ID: <200008121912.OAA00807@cj20424-a.reston1.va.home.com>

> With Thomas's patch to the top-level Makefile that makes Grammar a more
> first-class directory, are the generated graminit.h and graminit.c files
> needed any longer?

I still like to keep them around.  Most people don't hack the grammar.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From trentm@ActiveState.com  Sat Aug 12 19:39:00 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 11:39:00 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <00d601c0043d$a2e66c20$f2a6b5d4@hagrid>; from effbot@telia.com on Sat, Aug 12, 2000 at 11:13:45AM +0200
References: <20000811143031.A13790@ActiveState.com> <200008112256.RAA01675@cj20424-a.reston1.va.home.com> <00d601c0043d$a2e66c20$f2a6b5d4@hagrid>
Message-ID: <20000812113900.D3528@ActiveState.com>

On Sat, Aug 12, 2000 at 11:13:45AM +0200, Fredrik Lundh wrote:
> guido wrote:
> > I think I made them binary during the period when I was mounting the
> > Unix source directory on a Windows machine.  I don't do that any more
> > and I don't know anyone who does
> 
> we do.
> 
> trent wrote:
> > > Does anybody see any problems treating them as text files?
> 
> developer studio 5.0 does:
> 
>     "This makefile was not generated by Developer Studio"
> 
>     "Continuing will create a new Developer Studio project to
>     wrap this makefile. You will be prompted to save after the
>     new project has been created".
> 
>     "Do you want to continue"
> 
>     (click yes)
> 
>     "The options file (.opt) for this workspace specified a project
>     configuration "... - Win32 Alpha Release" that no longer exists.
>     The configuration will be set to "... - Win32 Debug"
> 
>     (click OK)
> 
>     (click build)
> 
>     "MAKE : fatal error U1052: file '....mak' not found"
> 
> </F>

I admit that I have not tried a clean checkout and used DevStudio 5 (I will
try at home later today). However, I *do* think that the problem here is that
you grabbed the tree in the short interim before this patch:

http://www.python.org/pipermail/python-checkins/2000-August/007072.html


I hope, I hope.
If it is still broken for MSVC 5 when I try it in a little bit, I will back out.

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From trentm@ActiveState.com  Sat Aug 12 19:47:19 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 11:47:19 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEFKGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 11, 2000 at 08:59:22PM -0400
References: <20000811143031.A13790@ActiveState.com> <LNBBLJKPBEHFEDALKOLCGEFKGPAA.tim_one@email.msn.com>
Message-ID: <20000812114719.E3528@ActiveState.com>

On Fri, Aug 11, 2000 at 08:59:22PM -0400, Tim Peters wrote:
> Not really.  They're not human-editable!  Leave 'em alone.  Keeping them in
> binary mode is a clue to people that they aren't *supposed* to go mucking
> with them via text processing tools.

I think that putting them in binary mode is a misleading clue that people
should not muck with them. They *are* text files. Editable or not, they are
not binary. I shouldn't go mucking with 'configure' either, because it is a
generated file, but we shouldn't call it binary.

Yes, I agree, people should not muck with .dsp files. I am not suggesting
that we do. The "text-processing" I was referring to are my attempts to keep
a local repository of Python in our local SCM tool (Perforce) in sync with
Python-CVS. When I suck in Python-CVS on linux and then shove it in Perforce:
 - the .dsp's land on my linux box with DOS terminators
 - I check everything into Perforce
 - I check Python out of Perforce on a Windows box and the .dsp's are all
   terminated with \r\n\n. This is because the .dsp were not marked as binary
   in Perforce because I logically didn't think that they *should* be marked
   as binary.
Having them marked as binary is just misleading I think.
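
(the double conversion is easy to guard against when shuttling files between
SCMs; a sketch, with `normalize_eol` as a hypothetical helper:)

```python
def normalize_eol(data: bytes) -> bytes:
    # Collapse DOS (\r\n) and stray bare \r terminators to \n before
    # handing a text file to a line-ending-sensitive SCM, so a later
    # checkout on Windows adds exactly one \r back instead of two.
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

assert normalize_eol(b"a\r\nb\r\n") == b"a\nb\n"
assert normalize_eol(b"a\r\r\nb") == b"a\n\nb"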
 
Anyway, as Guido said, this is not worth arguing over too much and it should
have been fixed for you about an hour after I broke it (sorry).

If it is still broken for you then I will back out.


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From nascheme@enme.ucalgary.ca  Sat Aug 12 19:58:20 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 12 Aug 2000 12:58:20 -0600
Subject: [Python-Dev] Lib/symbol.py needs update after listcomp
Message-ID: <20000812125820.A567@keymaster.enme.ucalgary.ca>

Someone needs to run:

    ./python Lib/symbol.py

and check in the changes.
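
(Lib/symbol.py regenerates its own table of grammar-symbol numbers when run as
a script; its sibling Lib/token.py maintains the same kind of mapping and still
exists in current interpreters, so it can show the shape of the table:)

```python
import token

# tok_name maps numeric codes back to their symbolic names --
# the same shape as the sym_name table in Lib/symbol.py.
print(token.tok_name[token.NAME])     # "NAME"
print(token.tok_name[token.NEWLINE])  # "NEWLINE"
```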

  Neil


From akuchlin@mems-exchange.org  Sat Aug 12 20:09:44 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Sat, 12 Aug 2000 15:09:44 -0400
Subject: [Python-Dev] Lib/symbol.py needs update after listcomp
In-Reply-To: <20000812125820.A567@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Sat, Aug 12, 2000 at 12:58:20PM -0600
References: <20000812125820.A567@keymaster.enme.ucalgary.ca>
Message-ID: <20000812150944.A9653@kronos.cnri.reston.va.us>

On Sat, Aug 12, 2000 at 12:58:20PM -0600, Neil Schemenauer wrote:
>Someone needs to run:
>    ./python Lib/symbol.py
>and check in the changes.

Done.  

--amk


From tim_one@email.msn.com  Sat Aug 12 20:10:30 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 15:10:30 -0400
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <20000812113900.D3528@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEGNGPAA.tim_one@email.msn.com>

Note that an update isn't enough to get you going again on Windows, and
neither is (the moral equivalent of) "rm *" in PCbuild followed by an
update.  But "rm -rf PCbuild" followed by an update was enough for me (I'm
working via phone modem -- a fresh full checkout is too time-consuming for
me).




From Vladimir.Marangozov@inrialpes.fr  Sat Aug 12 20:16:17 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 21:16:17 +0200 (CEST)
Subject: [Python-Dev] minimal stackless
Message-ID: <200008121916.VAA20873@python.inrialpes.fr>

I'd like to clarify my position about the mythical Stackless issue.

I would be okay to evaluate a minimal stackless implementation of the
current VM, and eventually consider it for inclusion if it doesn't slow
down the interpreter (and if it does, I don't know yet how much would be
tolerable).

However, I would be willing to do this only if such implementation is
distilled from the call/cc stuff completely.

That is, a minimal stackless implementation which gives us an equivalent
VM as we have it today with the C stack. This is what I'd like to see
first in the stackless PEP too. No mixtures with continuations & co.

The call/cc issue is "application domain" for me -- it relies on top of
the minimal stackless core and would come only as an exported interface to
the control flow management of the VM. Therefore, it must be completely
optional (leaving open the lazy decision on whether it should be included
someday).

So, if such distilled, minimal stackless implementation hits the
SourceForge shelves by the next week, I, at least, will give it a try
and will report impressions. By that time, it would also be nice to see a
clear summary of the frame management ideas in the 1st draft of the PEP.

If the proponents of Stackless are ready for the challenge, give it a go
(this seems to be a required first step in the right direction anyway).

I can't offer any immediate help though, given the list of Python-related
tasks I'd like to finish (as always, done in my spare minutes) and I'll
be almost, if not completely, unavailable the last week of August.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From trentm@ActiveState.com  Sat Aug 12 20:22:58 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 12:22:58 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEGNGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 03:10:30PM -0400
References: <20000812113900.D3528@ActiveState.com> <LNBBLJKPBEHFEDALKOLCAEGNGPAA.tim_one@email.msn.com>
Message-ID: <20000812122258.A4684@ActiveState.com>

On Sat, Aug 12, 2000 at 03:10:30PM -0400, Tim Peters wrote:
> Note that an update isn't enough to get you going again on Windows, and
> neither is (the moral equivalent of) "rm *" in PCbuild followed by an
> update.  But "rm -rf PCbuild" followed by an update was enough for me (I'm
> working via phone modem -- a fresh full checkout is too time-consuming for
> me).

Oh right. The '-kb' is sticky to your checked-out version. I forgot
about that.

Thanks, Tim.

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From esr@thyrsus.com  Sat Aug 12 20:37:42 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 15:37:42 -0400
Subject: [Python-Dev] minimal stackless
In-Reply-To: <200008121916.VAA20873@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sat, Aug 12, 2000 at 09:16:17PM +0200
References: <200008121916.VAA20873@python.inrialpes.fr>
Message-ID: <20000812153742.A25529@thyrsus.com>

Vladimir Marangozov <Vladimir.Marangozov@inrialpes.fr>:
> That is, a minimal stackless implementation which gives us an equivalent
> VM as we have it today with the C stack. This is what I'd like to see
> first in the stackless PEP too. No mixtures with continuations & co.
> 
> The call/cc issue is "application domain" for me -- it relies on top of
> the minimal stackless and would come only as an exported interface to the
> control flow management of the VM. Therefore, it must be completely
> optional (both in terms of lazy decision on whether it should be included
> someday).

I'm certainly among the call/cc fans, and I guess I'm weakly in the
"Stackless proponent" camp, and I agree.  These issues should be
separated.  If minimal stackless mods to ceval can solve (for example) the
stack overflow problem I just got bitten by, we ought to integrate
them for 2.0 and then give any new features a separate and thorough debate.

I too will be happy to test a minimal-stackless patch.  Come on, Christian,
the ball's in your court.  This is your best chance to get stackless
accepted.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

When only cops have guns, it's called a "police state".
        -- Claire Wolfe, "101 Things To Do Until The Revolution" 


From effbot@telia.com  Sat Aug 12 20:40:15 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 21:40:15 +0200
Subject: [Python-Dev] Include/graminit.h, Python/graminit.c obsolete?
References: <14741.36807.101870.221890@beluga.mojam.com>
Message-ID: <002b01c00495$32df3120$f2a6b5d4@hagrid>

skip wrote:

> With Thomas's patch to the top-level Makefile that makes Grammar a more
> first-class directory, are the generated graminit.h and graminit.c files
> needed any longer?

yes please -- thomas' patch only generates those files on
unix boxes.  as long as we support other platforms too, the
files should be in the repository, and new versions should be
checked in whenever the grammar is changed.

</F>



From tim_one@email.msn.com  Sat Aug 12 20:39:03 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 15:39:03 -0400
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <20000812114719.E3528@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEGOGPAA.tim_one@email.msn.com>

[Trent Mick]
> I think that putting them in binary mode is a misleading clue that
> people should not muck with them. The *are* text files.

But you don't know that.  They're internal Microsoft files in an
undocumented, proprietary format.  You'll find nothing in MS docs
guaranteeing they're text files, but will find the direst warnings against
attempting to edit them.  MS routinely changes *scads* of things about
DevStudio-internal files across releases.

For all the rest, you created your own problems by insisting on telling
Perforce they're text files, despite that they're clearly marked binary
under CVS.

I'm unstuck now, but Fredrik will likely still have new problems
cross-mounting file systems between Windows and Linux (see his msg).  Since
nothing here *was* broken (except for your private and self-created problems
under Perforce), "fixing" it was simply a bad idea.  We're on a very tight
schedule, and the CVS tree isn't a playground.




From thomas@xs4all.net  Sat Aug 12 20:45:24 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 21:45:24 +0200
Subject: [Python-Dev] 're' crashes ?
Message-ID: <20000812214523.H14470@xs4all.nl>

I'm not trying to sound like Eric (though I don't mind if I do ;) but my
Python crashes. Or rather, test_re fails with a coredump, since this
afternoon or so. I'm fairly certain it was working fine yesterday, and it's
an almost-vanilla CVS tree (I was about to check-in the fixes to
Tools/compiler, and tried to use the compiler on the std lib and the test
suite, when I noticed the coredump.)

The coredump says this:

#0  eval_code2 (co=0x824ba50, globals=0x82239b4, locals=0x0, args=0x827e18c, 
    argcount=2, kws=0x827e194, kwcount=0, defs=0x82211c0, defcount=1, 
    owner=0x0) at ceval.c:1474
1474                                    Py_DECREF(w);

Which is part of the FOR_LOOP opcode:

1461                    case FOR_LOOP:
1462                            /* for v in s: ...
1463                               On entry: stack contains s, i.
1464                               On exit: stack contains s, i+1, s[i];
1465                               but if loop exhausted:
1466                                    s, i are popped, and we jump */
1467                            w = POP(); /* Loop index */
1468                            v = POP(); /* Sequence object */
1469                            u = loop_subscript(v, w);
1470                            if (u != NULL) {
1471                                    PUSH(v);
1472                                    x = PyInt_FromLong(PyInt_AsLong(w)+1);
1473                                    PUSH(x);
1474                                    Py_DECREF(w);
1475                                    PUSH(u);
1476                                    if (x != NULL) continue;
1477                            }
1478                            else {
1479                                    Py_DECREF(v);
1480                                    Py_DECREF(w);
1481                                    /* A NULL can mean "s exhausted"
1482                                       but also an error: */
1483                                    if (PyErr_Occurred())
1484                                            why = WHY_EXCEPTION;

I *think* this isn't caused by this code, but rather by a refcounting bug
somewhere. 'w' should be an int, and it's used on line 1472, and doesn't
cause an error there (unless passing a NULL pointer to PyInt_AsLong() isn't
an error ?) But it's NULL at line 1474. Is there an easy way to track an
error like this ? Otherwise I'll play around a bit using breakpoints and
such in gdb.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Sat Aug 12 21:03:20 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 16:03:20 -0400
Subject: [Python-Dev] Feature freeze!
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHAGPAA.tim_one@email.msn.com>

The 2.0 release manager (Jeremy) is on vacation.  In his absence, here's a
reminder from the 2.0 release schedule:

    Aug. 14: All 2.0 PEPs finished / feature freeze

See the rest at:

    http://python.sourceforge.net/peps/pep-0200.html

Note that that's Monday!  Any new "new feature" patches submitted after
Sunday will be mindlessly assigned Postponed status.  New "new feature"
patches submitted after this instant but before Monday will almost certainly
be assigned Postponed status too -- just not *entirely* mindlessly <wink>.
"Sunday" and "Monday" are defined by wherever Guido happens to be.  "This
instant" is defined by me, and probably refers to some time in the past from
your POV; it's negotiable.




From akuchlin@mems-exchange.org  Sat Aug 12 21:06:28 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Sat, 12 Aug 2000 16:06:28 -0400
Subject: [Python-Dev] Location of compiler code
Message-ID: <E13NhY4-00087X-00@kronos.cnri.reston.va.us>

I noticed that Jeremy checked in his compiler code; however, it lives
in Tools/compiler/compiler.  Any reason it isn't in Lib/compiler?

--amk


From tim_one@email.msn.com  Sat Aug 12 21:11:50 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 16:11:50 -0400
Subject: [Python-Dev] Location of compiler code
In-Reply-To: <E13NhY4-00087X-00@kronos.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHBGPAA.tim_one@email.msn.com>

[Andrew Kuchling]
> I noticed that Jeremy checked in his compiler code; however, it lives
> in Tools/compiler/compiler.  Any reason it isn't in Lib/compiler?

Suggest waiting for Jeremy to return from vacation (22 Aug).




From Vladimir.Marangozov@inrialpes.fr  Sat Aug 12 22:08:44 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 23:08:44 +0200 (CEST)
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEHAGPAA.tim_one@email.msn.com> from "Tim Peters" at Aug 12, 2000 04:03:20 PM
Message-ID: <200008122108.XAA21412@python.inrialpes.fr>

Tim Peters wrote:
> 
> The 2.0 release manager (Jeremy) is on vacation.  In his absence, here's a
> reminder from the 2.0 release schedule:
> 
>     Aug. 14: All 2.0 PEPs finished / feature freeze
> 
> See the rest at:
> 
>     http://python.sourceforge.net/peps/pep-0200.html
> 
> Note that that's Monday!  Any new "new feature" patches submitted after
> Sunday will be mindlessly assigned Postponed status.  New "new feature"
> patches submitted after this instant but before Monday will almost certainly
> be assigned Postponed status too -- just not *entirely* mindlessly <wink>.
> "Sunday" and "Monday" are defined by wherever Guido happens to be.  "This
> instant" is defined by me, and probably refers to some time in the past from
> your POV; it's negotiable.

This reminder comes JIT!

Then please make the above dates/instants coincide with the status of
the open patches and take a stance on them: assign them to people,
postpone them, whatever.

I deliberately postponed my object malloc patch.

PS: this is also JIT as per the stackless discussion -- I mentioned
"consider for inclusion" which was interpreted as "inclusion for 2.0"
<frown>. God knows that I tried to be very careful when writing my
position statement... OTOH, there's still a valid deadline for 2.0!

PPS: is the pep-0200.html referenced above up to date? For instance,
I see it mentions SET_LINENO pointing to old references, while a newer
postponed patch is at SourceForge.

A "last modified <date>" stamp would be nice.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From trentm@ActiveState.com  Sat Aug 12 22:51:55 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 14:51:55 -0700
Subject: [Python-Dev] can this overflow (list insertion)?
Message-ID: <20000812145155.A7629@ActiveState.com>

from Objects/listobject.c:

static int
ins1(PyListObject *self, int where, PyObject *v)
{
    int i;
    PyObject **items;
    if (v == NULL) {
        PyErr_BadInternalCall();
        return -1;
    }
    items = self->ob_item;
    NRESIZE(items, PyObject *, self->ob_size+1);
    if (items == NULL) {
        PyErr_NoMemory();
        return -1;
    }
    if (where < 0)
        where = 0;
    if (where > self->ob_size)
        where = self->ob_size;
    for (i = self->ob_size; --i >= where; )
        items[i+1] = items[i];
    Py_INCREF(v);
    items[where] = v;
    self->ob_item = items;
    self->ob_size++;         <-------------- can this overflow?
    return 0;
}


In the case of sizeof(int) < sizeof(void*), can this overflow? I have a small
patch to test self->ob_size against INT_MAX and I was going to submit it, but
I am not so sure that overflow is not checked by some other mechanism for
list insert. Is it, or was this relying on sizeof(ob_size) == sizeof(void*),
hence a list being able to hold as many items as there is addressable memory?

scared-to-patch-ly yours,
Trent


proposed patch:

*** python/dist/src/Objects/listobject.c Fri Aug 11 16:25:08 2000
--- Python/dist/src/Objects/listobject.c Fri Aug 11 16:25:36 2000
***************
*** 149,155 ****
        Py_INCREF(v);
        items[where] = v;
        self->ob_item = items;
!       self->ob_size++;
        return 0;
  }

--- 149,159 ----
        Py_INCREF(v);
        items[where] = v;
        self->ob_item = items;
!       if (self->ob_size++ == INT_MAX) {
!               PyErr_SetString(PyExc_OverflowError,
!                       "cannot add more objects to list");
!               return -1;
!       }
        return 0;
  }




-- 
Trent Mick
TrentM@ActiveState.com


From thomas@xs4all.net  Sat Aug 12 22:52:47 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 23:52:47 +0200
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <200008122108.XAA21412@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sat, Aug 12, 2000 at 11:08:44PM +0200
References: <LNBBLJKPBEHFEDALKOLCCEHAGPAA.tim_one@email.msn.com> <200008122108.XAA21412@python.inrialpes.fr>
Message-ID: <20000812235247.I14470@xs4all.nl>

On Sat, Aug 12, 2000 at 11:08:44PM +0200, Vladimir Marangozov wrote:

> PPS: is the pep-0200.html referenced above up to date? For instance,
> I see it mentions SET_LINENO pointing to old references, while a newer
> postponed patch is at SourceForge.

I asked similar questions about PEP 200, in particular on which new features
were considered for 2.0 and what their status is (PEP 200 doesn't mention
augmented assignment, which as far as I know has been on Guido's "2.0" list
since 2.0 and 1.6 became different releases.) I apparently caught Jeremy
just before he left for his holiday; he directed me towards Guido regarding
those questions, and Guido has apparently been too busy (or he missed that
email as well as some python-dev email).

All my PEPs are in, though, unless I should write a PEP on 'import as',
which I really think should go in 2.0. I'd be surprised if 'import as' needs
a PEP, since the worst vote on 'import as' was Eric's '+0', and there seems
little concern wrt. syntax or implementation. It's more of a fix for
overlooked syntax than it is a new feature<0.6 wink>.

I just assumed the PyLabs team (or at least 4/5th of it) were too busy with
getting 1.6 done and finished to be concerned with non-pressing 2.0 issues,
and didn't want to press them on these issues until 1.6 is truly finished.
Pity 1.6-beta-cycle and 2.0-feature-freeze overlap :P

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From nascheme@enme.ucalgary.ca  Sat Aug 12 23:03:57 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 12 Aug 2000 18:03:57 -0400
Subject: [Python-Dev] parsers and compilers for 2.0
Message-ID: <20000812180357.A18816@acs.ucalgary.ca>

With all the recent proposed and accepted language changes, we
should be careful to keep everything up to date.  The parser
module, Jeremy's compiler, and I suspect JPython have been left
behind by the recent changes.  In the past we have been blessed
by a very stable core language.  Times change. :)

I'm trying to keep Jeremy's compiler up to date.  Modifying the
parser module to understand list comprehensions seems to be non-trivial,
however.  Does anyone else have the time and expertise to
make these changes?  The compiler/transformer.py module will also
have to be modified to understand the new parse tree nodes.  That
part should be somewhat easier.
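
(for reference, the compiler package is long gone from modern interpreters and
its role is played by the ast module, which does give list comprehensions a
dedicated parse-tree node; a quick check, assuming a current interpreter:)

```python
import ast

tree = ast.parse("[x * x for x in range(3)]", mode="eval")
# The comprehension gets its own node type rather than being
# spelled out as nested loops in the tree.
print(type(tree.body).__name__)  # "ListComp"
```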

On a related note, I think the SyntaxError messages generated by
the compile() function and the parser module could be improved.
This is annoying:

    >>> compile("$x", "myfile.py", "eval")
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "<string>", line 1
        $x
        ^
    SyntaxError: invalid syntax

Is there any reason why the error message does not say
"myfile.py" instead of "<string>"?  If not I can probably put
together a patch to fix it.
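
(the plumbing is easy to check mechanically by inspecting the exception object
directly; on interpreters where the fix is in, the filename argument to
compile() propagates into the SyntaxError:)

```python
err = None
try:
    compile("$x", "myfile.py", "eval")
except SyntaxError as e:
    err = e

# With the filename propagated, this reports "myfile.py 1"
# instead of "<string> 1".
print(err.filename, err.lineno)
```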

As far as I can tell, the parser ParserError exceptions are
almost useless.  At least a line number could be given.  I'm not
sure how much work that is to fix though.

  Neil


From nascheme@enme.ucalgary.ca  Sat Aug 12 23:06:07 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 12 Aug 2000 18:06:07 -0400
Subject: [Python-Dev] compiler package in Lib?
Message-ID: <20000812180607.A18938@acs.ucalgary.ca>

Shouldn't the compiler package go in Lib instead of Tools?  The
AST used the by compiler should be very useful to things like
lint checkers, optimizers, and "refactoring" tools.  

  Neil


From Vladimir.Marangozov@inrialpes.fr  Sat Aug 12 23:24:39 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sun, 13 Aug 2000 00:24:39 +0200 (CEST)
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <20000812145155.A7629@ActiveState.com> from "Trent Mick" at Aug 12, 2000 02:51:55 PM
Message-ID: <200008122224.AAA21816@python.inrialpes.fr>

Trent Mick wrote:
>
> [listobject.c/ins1()]
> ...
>     self->ob_item = items;
>     self->ob_size++;         <-------------- can this overflow?
>     return 0;
> }
> 
> 
> In the case of sizeof(int) < sizeof(void*), can this overflow? I have a small
> patch to test self->ob_size against INT_MAX and I was going to submit it but
> I am not so sure that overflow is not checked by some other mechanism for
> list insert.

+0.

It could overflow, but if it does, that is a bad sign about using a list
for such a huge amount of data.

And this is the second time in a week that I see an attempt to introduce
a bogus counter due to post-increments embedded in an if statement!

> Is it or was this relying on sizeof(ob_size) == sizeof(void*),
> hence a list being able to hold as many items as there is addressable memory?
> 
> scared-to-patch-ly yours,
> Trent

And you're right <wink>

> 
> 
> proposed patch:
> 
> *** python/dist/src/Objects/listobject.c Fri Aug 11 16:25:08 2000
> --- Python/dist/src/Objects/listobject.c Fri Aug 11 16:25:36 2000
> ***************
> *** 149,155 ****
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       self->ob_size++;
>         return 0;
>   }
> 
> --- 149,159 ----
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       if (self->ob_size++ == INT_MAX) {
> !               PyErr_SetString(PyExc_OverflowError,
> !                       "cannot add more objects to list");
> !               return -1;
> !       }
>         return 0;
>   }
> 
> 
> 
> 
> -- 
> Trent Mick
> TrentM@ActiveState.com
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev
> 


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From esr@thyrsus.com  Sat Aug 12 23:31:32 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 18:31:32 -0400
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000812180357.A18816@acs.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Sat, Aug 12, 2000 at 06:03:57PM -0400
References: <20000812180357.A18816@acs.ucalgary.ca>
Message-ID: <20000812183131.A26660@thyrsus.com>

Neil Schemenauer <nascheme@enme.ucalgary.ca>:
> I'm trying to keep Jeremy's compiler up to date.  Modifying the
> parser module to understand list comprehensions seems to be
> non-trivial, however.

Last I checked, list comprehensions hadn't been accepted.  I think
there's at least one more debate waiting there...
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

If a thousand men were not to pay their tax-bills this year, that would
... [be] the definition of a peaceable revolution, if any such is possible.
	-- Henry David Thoreau


From trentm@ActiveState.com  Sat Aug 12 23:33:12 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 15:33:12 -0700
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <200008122224.AAA21816@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sun, Aug 13, 2000 at 12:24:39AM +0200
References: <20000812145155.A7629@ActiveState.com> <200008122224.AAA21816@python.inrialpes.fr>
Message-ID: <20000812153312.B7629@ActiveState.com>

On Sun, Aug 13, 2000 at 12:24:39AM +0200, Vladimir Marangozov wrote:
> +0.
> 
> It could overflow, but if it does, this is a bad sign about using a list
> for such a huge amount of data.

Point taken.

> 
> And this is the second time in a week that I see an attempt to introduce
> a bogus counter due to post-increments embedded in an if statement!
>

If I read you correctly then I think that you are mistaking my intention. Do
you mean that I am doing the comparison *before* the increment takes place
here:

> > !       if (self->ob_size++ == INT_MAX) {
> > !               PyErr_SetString(PyExc_OverflowError,
> > !                       "cannot add more objects to list");
> > !               return -1;
> > !       }

That is my intention. You can increment up to INT_MAX but not over.....

... heh heh actually my code *is* wrong. But for a slightly different reason.
I trash the value of self->ob_size on overflow. You are right, I made a
mistake trying to be cute with autoincrement in an 'if' statement. I should
do the check and *then* increment if okay.

Thanks,
Trent

-- 
Trent Mick
TrentM@ActiveState.com


From thomas@xs4all.net  Sat Aug 12 23:34:43 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 13 Aug 2000 00:34:43 +0200
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000812183131.A26660@thyrsus.com>; from esr@thyrsus.com on Sat, Aug 12, 2000 at 06:31:32PM -0400
References: <20000812180357.A18816@acs.ucalgary.ca> <20000812183131.A26660@thyrsus.com>
Message-ID: <20000813003443.J14470@xs4all.nl>

On Sat, Aug 12, 2000 at 06:31:32PM -0400, Eric S. Raymond wrote:
> Neil Schemenauer <nascheme@enme.ucalgary.ca>:
> > I'm trying to keep Jeremy's compiler up to date.  Modifying the
> > parser module to understand list comprehensions seems to be
> > non-trivial, however.

> Last I checked, list comprehensions hadn't been accepted.  I think
> there's at least one more debate waiting there...

Check again, they're already checked in. The implementation may change
later, but the syntax has been decided (by Guido):

[(x, y) for y in something for x in somewhere if y in x]

The parentheses around the leftmost expression are mandatory. It's currently
implemented something like this:

L = []
__x__ = [].append
for y in something:
	for x in somewhere:
		if y in x:
			__x__((x, y))
del __x__

(where 'x' is a number, chosen to *probably* not conflict with any other
local variables or other (nested) list comprehensions, and the result of the
expression is L, which isn't actually stored anywhere during evaluation.)

See the patches list archive and the SF patch info about the patch (#100654)
for more information on how and why.
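For anyone who wants to check the equivalence concretely, here is a small
runnable sketch (the data and variable names are made up for illustration;
the real implementation hides the append binding under a mangled name):

```python
something = ["a", "x"]
somewhere = ["abc", "xyz", "qqq"]

# The new syntax:
comp = [(x, y) for y in something for x in somewhere if y in x]

# The hand-expanded equivalent, mirroring the sketch above:
L = []
append = L.append          # stands in for the hidden __x__ binding
for y in something:
    for x in somewhere:
        if y in x:
            append((x, y))
del append

assert comp == L == [("abc", "a"), ("xyz", "x")]
```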

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From esr@thyrsus.com  Sun Aug 13 00:01:54 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 19:01:54 -0400
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000813003443.J14470@xs4all.nl>; from thomas@xs4all.net on Sun, Aug 13, 2000 at 12:34:43AM +0200
References: <20000812180357.A18816@acs.ucalgary.ca> <20000812183131.A26660@thyrsus.com> <20000813003443.J14470@xs4all.nl>
Message-ID: <20000812190154.B26719@thyrsus.com>

Thomas Wouters <thomas@xs4all.net>:
> > Last I checked, list comprehensions hadn't been accepted.  I think
> > there's at least one more debate waiting there...
> 
> Check again, they're already checked in. The implementation may change
> later, but the syntax has been decided (by Guido):
> 
> [(x, y) for y in something for x in somewhere if y in x]

Damn.  That's unfortunate.  With all due respect to the BDFL, I've come
to believe that having special syntax for this (rather than constructor
functions a la zip()) is a bad mistake.  I predict it's going to come
back to bite us hard in the future.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

I cannot undertake to lay my finger on that article of the
Constitution which grant[s] a right to Congress of expending, on
objects of benevolence, the money of their constituents.
	-- James Madison, 1794


From bckfnn@worldonline.dk  Sun Aug 13 00:29:14 2000
From: bckfnn@worldonline.dk (Finn Bock)
Date: Sat, 12 Aug 2000 23:29:14 GMT
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000812180357.A18816@acs.ucalgary.ca>
References: <20000812180357.A18816@acs.ucalgary.ca>
Message-ID: <3995dd8b.34665776@smtp.worldonline.dk>

[Neil Schemenauer]

>With all the recent proposed and accepted language changes, we
>should be careful to keep everything up to date.  The parser
>module, Jeremy's compiler, and I suspect JPython have been left
>behind by the recent changes. 

WRT JPython, the list comprehensions have not yet been added. Then
again, the feature was only recently checked in.

You raise a good point however. There are many compilers/parsers that
have to be updated before we can claim that a feature is fully
implemented. 


[Thomas Wouters]

>[(x, y) for y in something for x in somewhere if y in x]
>
>The parentheses around the leftmost expression are mandatory. It's currently
>implemented something like this:
>
>L = []
>__x__ = [].append
>for y in something:
>	for x in somewhere:
>		if y in x:
>			__x__((x, y))
>del __x__

Thank you for the fine example. At least I now think that I know what the
feature is about.

regards,
finn


From tim_one@email.msn.com  Sun Aug 13 00:37:14 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 19:37:14 -0400
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <20000812145155.A7629@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEHIGPAA.tim_one@email.msn.com>

[Trent Mick]
> from Objects/listobject.c:
>
> static int
> ins1(PyListObject *self, int where, PyObject *v)
> {
>     ...
>     self->ob_size++;         <-------------- can this overflow?
>     return 0;
> }

> ...
> Is it or was this relying on sizeof(ob_size) == sizeof(void*),
> hence a list being able to hold as many items as there is
> addressable memory?

I think it's more relying on the product of two other assumptions:  (a)
sizeof(int) >= 4, and (b) nobody is going to make a list with 2 billion
elements in Python.  But you're right, sooner or later that's going to bite
us.

> proposed patch:
>
> *** python/dist/src/Objects/listobject.c Fri Aug 11 16:25:08 2000
> --- Python/dist/src/Objects/listobject.c Fri Aug 11 16:25:36 2000
> ***************
> *** 149,155 ****
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       self->ob_size++;
>         return 0;
>   }
>
> --- 149,159 ----
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       if (self->ob_size++ == INT_MAX) {
> !               PyErr_SetString(PyExc_OverflowError,
> !                       "cannot add more objects to list");
> !               return -1;
> !       }
>         return 0;
>   }

+1 on catching it, -1 on this technique.  You noted later that this will
make trash out of ob_size if it triggers, but the list has already grown and
been shifted by this time too, so it's left in an insane state (to the user,
the last element will appear to vanish).

Suggest checking at the *start* of the routine instead:

       if (self->ob_size == INT_MAX) {
              PyErr_SetString(PyExc_OverflowError,
                      "cannot add more objects to list");
              return -1;
      }

Then the list isn't harmed.
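The same check-before-mutate principle is easy to demonstrate in Python (a
made-up cap stands in for INT_MAX here; this only illustrates the ordering
argument, not the actual C code):

```python
MAX_SIZE = 3  # hypothetical cap standing in for INT_MAX

def checked_insert(lst, where, item):
    # Refuse *before* mutating, so a failed insert leaves the list intact.
    if len(lst) == MAX_SIZE:
        raise OverflowError("cannot add more objects to list")
    lst.insert(where, item)

data = []
for i in range(3):
    checked_insert(data, 0, i)

try:
    checked_insert(data, 0, 99)   # refused: the list is full
except OverflowError:
    pass

assert data == [2, 1, 0]  # the failed insert changed nothing
```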




From tim_one@email.msn.com  Sun Aug 13 00:57:29 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 19:57:29 -0400
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <200008122108.XAA21412@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEHJGPAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> This reminder comes JIT!
>
> Then please make coincide the above dates/instants with the status of
> the open patches and take a stance on them: assign them to people,
> postpone, whatever.
>
> I deliberately postponed my object malloc patch.

I don't know why.  It's been there quite a while, and had non-trivial
support for inclusion in 2.0.  A chance to consider the backlog of patches
as a stable whole is why the two weeks between "feature freeze" and 2.0b1
exists!

> PS: this is also JIT as per the stackless discussion -- I mentioned
> "consider for inclusion" which was interpreted as "inclusion for 2.0"
> <frown>. God knows that I tried to be very careful when writing my
> position statement... OTOH, there's still a valid deadline for 2.0!

I doubt any variant of Stackless has a real shot for 2.0 at this point,
although if a patch shows up before Sunday ends I won't Postpone it without
reason (like, say, Guido tells me to).

> PPS: is the pep-0200.html referenced above up to date? For instance,
> I see it mentions SET_LINENO pointing to old references, while a newer
> postponed patch is at SourceForge.
>
> A "last modified <date>" stamp would be nice.

I agree, but yaaaawn <wink>.  CVS says it was last modified before Jeremy
went on vacation.  It's not up to date.  The designated release manager in
Jeremy's absence apparently didn't touch it.  I can't gripe about that,
though, because he's my boss <wink>.  He sent me email today saying "tag,
now you're it!" (Guido will be gone all next week).  My plate is already
full, though, and I won't get around to updating it today.

Yes, this is no way to run a release, and so I don't have any confidence
that the release dates in pep200 will be met.  Still, I was arguing for
feature freeze two months ago, and so as long as "I'm it" I'm not inclined
to slip the schedule on *that* part.  I bet it will be at least 3 weeks
before 2.0b1 hits the streets, though.

in-any-case-feature-freeze-is-on-the-critical-path-so-the-sooner-
    the-better-ly y'rs  - tim




From tim_one@email.msn.com  Sun Aug 13 01:11:30 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 20:11:30 -0400
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <20000812235247.I14470@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEHKGPAA.tim_one@email.msn.com>

[Thomas Wouters]
> I asked similar questions about PEP 200, in particular on which
> new features were considered for 2.0 and what their status is
> (PEP 200 doesn't mention augmented assignment, which as far as I
> know has been on Guido's "2.0" list since 2.0 and 1.6 became
> different releases.)

Yes, augmented assignment is golden for 2.0.

> I apparently caught Jeremy just before he left for his holiday,
> and directed me towards Guido regarding those questions, and
> Guido has apparently been too busy (or he missed that email as
> well as some python-dev email.)

Indeed, we're never going to let Guido be Release Manager again <wink>.

> All my PEPs are in, though, unless I should write a PEP on 'import as',
> which I really think should go in 2.0. I'd be surprised if 'import
> as' needs a PEP, since the worst vote on 'import as' was Eric's '+0',
> and there seems little concern wrt. syntax or implementation. It's
> more of a fix for overlooked syntax than it is a new feature <0.6 wink>.

Why does everyone flee from the idea of writing a PEP?  Think of it as a
chance to let future generations know what a cool idea you had.  I agree
this change is too simple and too widely loved to *need* a PEP, but if you
write one anyway you can add it to your resume under your list of
peer-reviewed publications <wink>.

> I just assumed the PyLabs team (or at least 4/5th of it) were too
> busy with getting 1.6 done and finished to be concerned with non-
> pressing 2.0 issues, and didn't want to press them on these issues
> until 1.6 is truly finished.

Actually, Fred Drake has done almost everything in the 1.6 branch by
himself, while Guido has done all the installer and web-page work for that.
The rest of our time has been eaten away by largely invisible cruft, from
haggling over the license to haggling over where to put python.org next.
Lots of haggling!  You guys get to do the *fun* parts (btw, it's occurred to
me often that I did more work on Python proper when I had a speech
recognition job!).

> Pity 1.6-beta-cycle and 2.0-feature-freeze overlap :P

Ya, except it's too late to stop 1.6 now <wink>.

but-not-too-late-to-stop-2.0-ly y'rs  - tim




From MarkH@ActiveState.com  Sun Aug 13 02:02:36 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Sun, 13 Aug 2000 11:02:36 +1000
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000812190154.B26719@thyrsus.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com>

ESR, responding to 

[(x, y) for y in something for x in somewhere if y in x]

for list comprehension syntax:

> Damn.  That's unfortunate.  With all due respect to the BDFL, I've come
> to believe that having special syntax for this (rather than constructor
> functions a la zip()) is a bad mistake.  I predict it's going to come
> back to bite us hard in the future.

FWIW, these are my thoughts exactly (for this particular issue, anyway).

Wont-bother-voting-cos-nobody-is-counting ly,

Mark.



From trentm@ActiveState.com  Sun Aug 13 02:25:18 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 18:25:18 -0700
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <3995dd8b.34665776@smtp.worldonline.dk>; from bckfnn@worldonline.dk on Sat, Aug 12, 2000 at 11:29:14PM +0000
References: <20000812180357.A18816@acs.ucalgary.ca> <3995dd8b.34665776@smtp.worldonline.dk>
Message-ID: <20000812182518.B10528@ActiveState.com>

On Sat, Aug 12, 2000 at 11:29:14PM +0000, Finn Bock wrote:
> [Thomas Wouters]
> 
> >[(x, y) for y in something for x in somewhere if y in x]
> >
> >The parentheses around the leftmost expression are mandatory. It's currently
> >implemented something like this:
> >
> >L = []
> >__x__ = [].append
> >for y in something:
> >	for x in somewhere:
> >		if y in x:
> >			__x__((x, y))
> >del __x__
> 
> Thank you for the fine example. At least I now think that I know what the
> feature is about.
> 

Maybe that example should get in the docs for list comprehensions.


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From trentm@ActiveState.com  Sun Aug 13 02:30:02 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 18:30:02 -0700
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEHKGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 08:11:30PM -0400
References: <20000812235247.I14470@xs4all.nl> <LNBBLJKPBEHFEDALKOLCAEHKGPAA.tim_one@email.msn.com>
Message-ID: <20000812183002.C10528@ActiveState.com>

On Sat, Aug 12, 2000 at 08:11:30PM -0400, Tim Peters wrote:
> You guys get to do the *fun* parts 

Go give Win64 a whirl for a while. <grumble>

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From tim_one@email.msn.com  Sun Aug 13 02:33:43 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 21:33:43 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com>

[ESR, responding to

  [(x, y) for y in something for x in somewhere if y in x]

 for list comprehension syntax:
]
> Damn.  That's unfortunate.  With all due respect to the BDFL, I've come
> to believe that having special syntax for this (rather than constructor
> functions a la zip()) is a bad mistake.  I predict it's going to come
> back to bite us hard in the future.

[Mark Hammond]
> FWIW, these are my thoughts exactly (for this particular issue,
> anyway).
>
> Wont-bother-voting-cos-nobody-is-counting ly,

Counting, no; listening, yes; persuaded, no.  List comprehensions are one of
the best-loved features of Haskell (really!), and Greg/Skip/Ping's patch
implements as exact a parallel to Haskell's syntax and semantics as is
possible in Python.  Predictions of doom thus need to make a plausible case
for why a rousing success in Haskell is going to be a disaster in Python.
The only basis I can see for such a claim (and I have to make one up myself
because nobody else has <wink>) is that Haskell is lazy, while Python is
eager.  I can't get from there to "disaster", though, or even "plausible
regret".

Beyond that, Guido dislikes the way Lisp spells most things, so it's this or
nothing.  I'm certain I'll use it, and with joy.  Do an update and try it.

C:\Code\python\dist\src\PCbuild>python
Python 2.0b1 (#0, Aug 12 2000, 14:57:27) [MSC 32 bit (Intel)] on win32
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> [x**2 for x in range(10)]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> [x**2 for x in range(10) if x & 1]
[1, 9, 25, 49, 81]
>>> [x**2 if 3]
[81]
>>>

Now even as a fan, I'll admit that last line sucks <wink>.

bug-in-the-grammar-ly y'rs  - tim




From thomas@xs4all.net  Sun Aug 13 08:53:57 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 13 Aug 2000 09:53:57 +0200
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 09:33:43PM -0400
References: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com> <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com>
Message-ID: <20000813095357.K14470@xs4all.nl>

On Sat, Aug 12, 2000 at 09:33:43PM -0400, Tim Peters wrote:

[ ESR and Mark griping about list comprehension syntax, which I can relate
to, so I'll bother to try and explain what bothers *me* wrt list
comprehensions. Needn't be the same as what bothers them, though ]

> List comprehensions are one of the best-loved features of Haskell
> (really!), and Greg/Skip/Ping's patch implements as exact a parallel to
> Haskell's syntax and semantics as is possible in Python.

I don't see "it's cool in language X" as a particularly good reason to include
a feature... We don't add special syntax for regular expressions, support
for continuations or direct access to hardware because of that, do we ?

> Predictions of doom thus need to make a plausible case for why a rousing
> success in Haskell is going to be a disaster in Python. The only basis I
> can see for such a claim (and I have to make one up myself because nobody
> else has <wink>) is that Haskell is lazy, while Python is eager.  I can't
> get from there to "disaster", though, or even "plausible regret".

My main beef with the syntax is that it is, in my eyes, unpythonic. It has
an alien, forced feel to it, much more so than the 'evil' map/filter/reduce.
It doesn't 'fit' into Python the way most of the other features do; it's
simply syntactic sugar for a specific kind of for-loop. It doesn't add any
extra functionality, and for that large a syntactic change, I guess that
scares me.

Those doubts were why I was glad you were going to write the PEP. I was
looking forward to you explaining why I had those doubts and giving sound
arguments against them :-)

> Beyond that, Guido dislikes the way Lisp spells most things, so it's this or
> nothing.  I'm certain I'll use it, and with joy.  Do an update and try it.

Oh, I've tried it. It's not included in the 'heavily patched Python 2.0b1' I
have running on a couple of machines to impress my colleagues, (which
includes the obmalloc patch, augmented assignment, range literals, import
as, indexing-for, and extended-slicing-on-lists) but that's mostly
because I was expecting, like ESR, a huge debate on its syntax. Let's say
that most of my doubts arose after playing with it for a while. I fear people
will start using it in odd constructs, and in odd ways, expecting other
aspects of for-loops to be included in list comprehensions (break, else,
continue, etc.) And then there's the way it's hard to parse because of the
lack of punctuation in it:

[((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in S] for b in
[b for b in B if mean(b)] for b,c in C for a,d in D for e in [Egg(a, b, c,
d, e) for e in E]]

I hope anyone writing something like that (notice the shadowing of some of
the outer variables in the inner loops) will either add some newlines and
indentation by themselves, or will be hunted down and shot (or at least
winged) by the PSU.

I'm not arguing to remove list comprehensions. I think they are cool
features that can replace map/filter, I just don't think they're that much
better than the use of map/filter.
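For comparison, the two styles side by side (toy data, just to show that the
comprehension is sugar for the same computation as map/filter):

```python
nums = range(10)

# map/filter style:
mf = list(map(lambda x: x ** 2, filter(lambda x: x % 2, nums)))

# list comprehension style:
lc = [x ** 2 for x in nums if x % 2]

assert mf == lc == [1, 9, 25, 49, 81]
```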

Write-that-PEP-Tim-it-will-look-good-on-your-resume-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From esr@thyrsus.com  Sun Aug 13 09:13:40 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sun, 13 Aug 2000 04:13:40 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000813095357.K14470@xs4all.nl>; from thomas@xs4all.net on Sun, Aug 13, 2000 at 09:53:57AM +0200
References: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com> <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com> <20000813095357.K14470@xs4all.nl>
Message-ID: <20000813041340.B27949@thyrsus.com>

Thomas Wouters <thomas@xs4all.net>:
> My main beef with the syntax is that it is, in my eyes, unpythonic. It has
> an alien, forced feel to it, much more so than the 'evil' map/filter/reduce.
> It doesn't 'fit' into Python the way most of the other features do; it's
> simply syntactic sugar for a specific kind of for-loop. It doesn't add any
> extra functionality, and for that large a syntactic change, I guess that
> scares me.

I agree 100% with all of this.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"This country, with its institutions, belongs to the people who
inhabit it. Whenever they shall grow weary of the existing government,
they can exercise their constitutional right of amending it or their
revolutionary right to dismember it or overthrow it."
	-- Abraham Lincoln, 4 April 1861


From Moshe Zadka <moshez@math.huji.ac.il>  Sun Aug 13 09:15:15 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Sun, 13 Aug 2000 11:15:15 +0300 (IDT)
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEGOGPAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008131114190.20886-100000@sundial>

On Sat, 12 Aug 2000, Tim Peters wrote:

> [Trent Mick]
> > I think that putting them in binary mode is a misleading clue that
> > people should not muck with them. The *are* text files.
> 
> But you don't know that.  They're internal Microsoft files in an
> undocumented, proprietary format.  You'll find nothing in MS docs
> guaranteeing they're text files, but will find the direst warnings against
> attempting to edit them.  MS routinely changes *scads* of things about
> DevStudio-internal files across releases.

Hey, I parsed those beasts, and edited them by hand. 

of-course-my-co-workers-hated-me-for-that-ly y'rs, Z.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From Vladimir.Marangozov@inrialpes.fr  Sun Aug 13 10:16:50 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sun, 13 Aug 2000 11:16:50 +0200 (CEST)
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEHIGPAA.tim_one@email.msn.com> from "Tim Peters" at Aug 12, 2000 07:37:14 PM
Message-ID: <200008130916.LAA29139@python.inrialpes.fr>

Tim Peters wrote:
> 
> I think it's more relying on the product of two other assumptions:  (a)
> sizeof(int) >= 4, and (b) nobody is going to make a list with 2 billion
> elements in Python.  But you're right, sooner or later that's going to bite
> us.

+1 on your patch, but frankly, if we reach a situation where we're bitten
by this overflow, chances are that we've already dumped core or will
be very soon -- billions of objects = soon-to-be-overflowing
ob_refcnt integer counters. Py_None looks like a fine candidate for this.

Now I'm sure you're going to suggest again making the ob_refcnt a long,
as you did before <wink>.


> Suggest checking at the *start* of the routine instead:
> 
>        if (self->ob_size == INT_MAX) {
>               PyErr_SetString(PyExc_OverflowError,
>                       "cannot add more objects to list");
>               return -1;
>       }
> 
> Then the list isn't harmed.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Vladimir.Marangozov@inrialpes.fr  Sun Aug 13 10:32:25 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sun, 13 Aug 2000 11:32:25 +0200 (CEST)
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEHJGPAA.tim_one@email.msn.com> from "Tim Peters" at Aug 12, 2000 07:57:29 PM
Message-ID: <200008130932.LAA29181@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Vladimir Marangozov]
> > This reminder comes JIT!
> >
> > Then please make coincide the above dates/instants with the status of
> > the open patches and take a stance on them: assign them to people,
> > postpone, whatever.
> >
> > I deliberately postponed my object malloc patch.
> 
> I don't know why.  It's been there quite a while, and had non-trivial
> support for inclusion in 2.0.  A chance to consider the backlog of patches
> as a stable whole is why the two weeks between "feature freeze" and 2.0b1
> exists!

Because the log message says that I'm late with the stat interface,
which shows what the situation is with and without it. If I want to
finish that part, I'll need to block out my Sunday afternoon. Given that
now it's 11am, I have an hour to think what to do about it -- resurrect
or leave postponed.

> I doubt any variant of Stackless has a real shot for 2.0 at this point,
> although if a patch shows up before Sunday ends I won't Postpone it without
> reason (like, say, Guido tells me to).

I'm doubtful too, but if there's a clean & solid minimal implementation
which removes the stack dependency -- I'll have a look.

> 
> > PPS: is the pep-0200.html referenced above up to date? For instance,
> > I see it mentions SET_LINENO pointing to old references, while a newer
> > postponed patch is at SourceForge.
> >
> > A "last modified <date>" stamp would be nice.
> 
> I agree, but yaaaawn <wink>.  CVS says it was last modified before Jeremy
> went on vacation.  It's not up to date.  The designated release manager in
> Jeremy's absence apparently didn't touch it.  I can't gripe about that,
> though, because he's my boss <wink>.  He sent me email today saying "tag,
> now you're it!" (Guido will be gone all next week).  My plate is already
> full, though, and I won't get around to updating it today.

Okay - just wanted to make this point clear, since your reminder reads
"see the details there".

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From trentm@ActiveState.com  Sun Aug 13 19:04:49 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sun, 13 Aug 2000 11:04:49 -0700
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <200008130916.LAA29139@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sun, Aug 13, 2000 at 11:16:50AM +0200
References: <LNBBLJKPBEHFEDALKOLCGEHIGPAA.tim_one@email.msn.com> <200008130916.LAA29139@python.inrialpes.fr>
Message-ID: <20000813110449.A23269@ActiveState.com>

On Sun, Aug 13, 2000 at 11:16:50AM +0200, Vladimir Marangozov wrote:
> Tim Peters wrote:
> > 
> > I think it's more relying on the product of two other assumptions:  (a)
> > sizeof(int) >= 4, and (b) nobody is going to make a list with 2 billion
> > elements in Python.  But you're right, sooner or later that's going to bite
> > us.
> 
> +1 on your patch, but frankly, if we reach a situation to be bitten

I'll check it in later today.


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From trentm@ActiveState.com  Mon Aug 14 00:08:43 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sun, 13 Aug 2000 16:08:43 -0700
Subject: [Python-Dev] you may have some PCbuild hiccups
Message-ID: <20000813160843.A27104@ActiveState.com>

Hello all,

Recently I spearheaded a number of screw-ups in the PCbuild/ directory.
PCbuild/*.dsp and *.dsw went from binary to text to binary again. These are
sticky CVS attributes on files in your checked out Python tree.

If you care about the PCbuild/ content (i.e. you build Python on Windows)
then you may need to completely delete the PCbuild directory and
re-get it from CVS. You can tell if you *need* to by doing a 'cvs status
*.dsw *.dsp'. If any of those files *don't* have the "Sticky Option: -kb",
they should. If they all do and MSDEV loads the project files okay, then you
are fine.

NOTE: You have to delete the *whole* PCbuild\ directory, not just its
contents. The PCbuild\CVS control directory is part of what you have to
re-get.


Sorry,
Trent

-- 
Trent Mick
TrentM@ActiveState.com


From tim_one@email.msn.com  Mon Aug 14 01:08:45 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 13 Aug 2000 20:08:45 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000813095357.K14470@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com>

[Tim]
>> List comprehensions are one of the best-loved features of Haskell
>> (really!), and Greg/Skip/Ping's patch implements as exact a
>> parallel to Haskell's syntax and semantics as is possible in Python.

[Thomas Wouters]
> I don't see "it's cool in language X" as a particular good reason
> to include a feature... We don't add special syntax for regular
> expressions, support for continuations or direct access to hardware
> because of that, do we ?

As Guido (overly!) modestly says, the only language idea he ever invented
was an "else" clause on loops.  He decided listcomps "were Pythonic" before
knowing anything about Haskell (or SETL, from which Haskell took the idea).
Given that he *already* liked them, the value in looking at Haskell is for
its actual experience with them.  It would be pretty stupid *not* to look at
experience with other languages that already have it!  And that's whether
you're pro or con.

So it's not "cool in language X" that drives it at all, it's "cool in
language X" that serves to confirm or refute the prior judgment that "it's
Pythonic, period".  When, e.g., Eric predicts it will bite us hard someday,
I can point to Haskell and legitimately ask "why here and not there?".

There was once a great push for adding some notion of "protected" class
members to Python.  Guido was initially opposed, but tempted to waffle
because proponents kept pointing to C++.  Luckily, Stroustrup had something
to say about this in his "Design and Evolution of C++" book, including that
he thought adding "protected" was a mistake, driven by relentless "good
arguments" that opposed his own initial intuition.  So in that case,
*really* looking at C++ may have saved Guido from making the same mistake.

As another example, few arguments are made more frequently than that Python
should add embedded assignments in conditionals.  *Lots* of other languages
have that -- but they mostly serve to tell us it's a bug factory in
practice!  The languages that avoid the bugs point to ways to get the effect
safely (although none yet Pythonically enough for Guido).

So this is a fact:  language design is very little about wholesale
invention, and that's especially true of Python.  It's a mystically
difficult blending of borrowed ideas, and it's rare as readable Perl <wink>
that an idea will get borrowed if it didn't even work well in its native
home.  listcomps work great where they came from, and that plus "hey, Guido
likes 'em!" makes it 99% a done deal.

> My main beef with the syntax is that it is, in my eyes, unpythonic.
> It has an alien, forced feel to it, much more so than the 'evil'
> map/filter/reduce.  It doesn't 'fit' into Python the way most of
> the other features do;

Guido feels exactly the opposite:  the business about "alien, forced feel,
not fitting" is exactly what he's said about map/filter/reduce/lambda on
many occasions.  listcomps strike him (me too, for that matter) as much more
Pythonic than those.

> it's simply syntactic sugar for a specific kind of for-loop. It
> doesn't add any extra functionality,

All overwhelmingly true of augmented assignments, and range literals, and
three-argument getattr, and list.pop, etc etc etc too.  Python has lots of
syntactic sugar -- making life pleasant is not a bad thing.

> and for that large a syntactic change, I guess that scares me.

The only syntactic change is to add a new form of list constructor.  It's
isolated and self-contained, and so "small" in that sense.

> Those doubts were why I was glad you were going to write the PEP. I
> was looking forward to you explaining why I had those doubts and
> giving sound arguments against them :-)

There is no convincing argument to be made either way on whether "it's
Pythonic", which I think is your primary worry.  People *never* reach
consensus on whether a given feature X "is Pythonic".  That's why it's
always Guido's job.  You've been here long enough to see that -1 and +1 are
about evenly balanced, except on (in recent memory) "import x as y" -- which
I conveniently neglected to mention had been dismissed as unPythonic by Guido
just a couple weeks ago <wink -- but he didn't really mean it then,
according to me>.

> ...
> but that's mostly because I was expecting, like ESR, a huge debate
> on its syntax.

Won't happen, because it already did.  This was debated to death long ago,
and more than once, and Guido likes what he likes now.  Greg Wilson made the
only new point on listcomps I've seen since two weeks after they were first
proposed by Greg Ewing (i.e., that the ";" notation *really* sucked).

> Let's say that most of my doubts arose after playing with it for
> a while. I fear people will start using it in odd constructs, and
> in odd ways,

Unlike, say, filter, map and reduce <wink>?

> expecting other aspects of for-loops to be included
> in list comprehensions (break, else, continue, etc.)

Those ideas were rejected long ago too (and that Haskell and SETL also
rejected them independently shows that, whether we can explain it or not,
they're simply bad ideas).

> And then there's the way it's hard to parse because of the
> lack of punctuation in it:
>
> [((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in
> S] for b in [b for b in B if mean(b)] for b,c in C for a,d in D
> for e in [Egg(a, b, c, d, e) for e in E]]

That isn't a serious argument, to my eyes.  Write that as a Lisp one-liner
and see what it looks like then -- nuts is nuts, and a "scare example" could
just as easily be concocted out of insane nesting of subscripts and/or
function calls and/or parenthesized arithmetic.  Idiotic nesting is a
possibility for any construct that nests!  BTW, you're missing the
possibility to nest listcomps in "the expression" position too, a la

>>> [[1 for i in range(n)] for n in range(10)]
[[],
 [1],
 [1, 1],
 [1, 1, 1],
 [1, 1, 1, 1],
 [1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1, 1, 1, 1]]
>>>

I know you missed that possibility above because, despite your claim that
they're hard to parse, it's dead easy to spot where your listcomps begin:  "["
is easy for the eye to find.

> I hope anyone writing something like that (notice the shadowing of
> some of the outer vrbls in the inner loops)

You can find the same in nested lambdas littering map/reduce/etc today.

> will either add some newlines and indentation by themselves, or
> will be hunted down and shot (or at least winged) by the PSU.

Nope!  We just shun them.  Natural selection will rid the Earth of them
without violence <wink>.

> I'm not arguing to remove list comprehensions. I think they are cool
> features that can replace map/filter, I just don't think they're that
> much better than the use of map/filter.

Haskell programmers have map/filter too, and Haskell programmers routinely
favor using listcomps.  This says something about what people who have both
prefer.  I predict that once you're used to them, you'll find them much more
expressive:  "[" tells you immediately you're getting a list, then the next
thing you see is what the list is built out of, and then there's a bunch of
lower-level detail.  It's great.

> Write-that-PEP-Tim-it-will-look-good-on-your-resume-ly y'rs,

except-i'm-too-old-to-need-a-resume-anymore<wink>-ly y'rs  - tim




From tim_one@email.msn.com  Mon Aug 14 02:31:20 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 13 Aug 2000 21:31:20 -0400
Subject: [Python-Dev] you may have some PCbuild hiccups
In-Reply-To: <20000813160843.A27104@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEJHGPAA.tim_one@email.msn.com>

[Trent Mick]
> Recently I spearheaded a number of screw ups in the PCbuild/
> directory.

Let's say the intent was laudable but the execution a bit off the mark
<wink>.

[binary -> text -> binary again]
> ...
> NOTE: You have to delete the *whole* PCbuild\ directory, not just
> its contents. The PCbuild\CVS control directory is part of what you
> have to re-get.

Actually, I don't think you have to bother this time -- just do a regular
update.  The files *were* marked as text this time around, but there is no
"sticky bit" saying so in the local config, so a plain update replaces them
now.

OK, I made most of that up.  But a plain update did work fine for me ...




From nowonder@nowonder.de  Mon Aug 14 05:27:07 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Mon, 14 Aug 2000 04:27:07 +0000
Subject: [*].items() (was: Re: [Python-Dev] Lockstep iteration - eureka!)
References: Your message of "Wed, 09 Aug 2000 02:37:07 MST."
 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <l03102802b5b71c40f9fc@[193.78.237.121]> <3993FD49.C7E71108@prescod.net>
Message-ID: <3997751B.9BB1D9FA@nowonder.de>

Paul Prescod wrote:
> 
> Just van Rossum wrote:
> >
> >        for <index> indexing <element> in <seq>:
> 
> Let me throw out another idea. What if sequences just had .items()
> methods?
> 
> j=range(0,10)
> 
> for index, element in j.items():

I like the idea and so I've uploaded a patch for this to SF:
https://sourceforge.net/patch/?func=detailpatch&patch_id=101178&group_id=5470

For ease of reading:
This patch adds a .items() method to the list object.
.items() returns a list of (index, value) tuples. E.g.:

  for index, value in ["a", "b", "c"].items(): 
      print index, ":", value 

will print: 

  0: a 
  1: b 
  2: c 

I think this is an easy way to achieve looping over
index AND elements in parallel. Semantically the
following two expressions should be equivalent: 

for index, value in zip(range(len(mylist)), mylist):

for index, value in mylist.items():
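
A rough pure-Python sketch of the proposed method (written here as a
free function, since the list type can't be patched from Python itself):

```python
def items(seq):
    # hypothetical stand-in for the proposed list.items() method:
    # pair each index with the element at that index
    return list(zip(range(len(seq)), seq))

for index, value in items(["a", "b", "c"]):
    print(index, ":", value)
```

This prints the same "0 : a", "1 : b", "2 : c" as the example above.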

Unlike patch #110138, I would call this:
"Adding syntactic sugar without adding syntax (or sugar<wink>):"

this-doesn't-deserve-new-syntax-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From greg@cosc.canterbury.ac.nz  Mon Aug 14 05:01:35 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 16:01:35 +1200 (NZST)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug
 #111620] lots of use of send() without verifyi ng amount of data sent.
In-Reply-To: <20000811184407.A14470@xs4all.nl>
Message-ID: <200008140401.QAA14955@s454.cosc.canterbury.ac.nz>

> ERRORS
>
>       EINTR   A signal occurred.

Different unices seem to have manpages which differ considerably
in these areas. The Solaris manpage says:

     EINTR     The operation was interrupted  by  delivery  of  a
               signal  before  any  data  could be buffered to be
               sent.

which suggests that you won't get EINTR if some data *has* been
sent before the signal arrives. It seems to me the only thing that
could possibly happen in this case is to return with fewer bytes
than requested, whether the socket is non-blocking or not.

So it seems that, in the presence of signals, neither write()
nor send() can be relied upon to either completely succeed
or completely fail. 

Perhaps the reason this hasn't caused anyone a problem is that the
combination of blocking sockets and signals that you want to handle
and then carry on after is fairly rare.
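
Given that, code that must get everything onto the wire has to loop; a
minimal defensive sketch (the helper name is my own, not a stdlib API):

```python
import socket

def send_all(sock, data):
    # keep calling send() until every byte is out, since a single
    # send() on a blocking socket may still report a short count
    # (e.g. if a signal arrived after some data was transferred)
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise socket.error("connection broken")
        total += sent
    return total
```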

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Mon Aug 14 04:51:45 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 15:51:45 +1200 (NZST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <20000811103701.A25386@keymaster.enme.ucalgary.ca>
Message-ID: <200008140351.PAA14951@s454.cosc.canterbury.ac.nz>

> We don't limit the amount of memory you can allocate on all
> machines just because your program may run out of memory on some
> machine.

Legend has it that Steve Jobs tried to do something like that
with the original 128K Macintosh. He was against making the
machine expandable in any way, so that any program which ran
on one Mac would run on all Macs.

Didn't stay that way for very long...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Mon Aug 14 05:17:30 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 16:17:30 +1200 (NZST)
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers
 for 2.0)
In-Reply-To: <20000813095357.K14470@xs4all.nl>
Message-ID: <200008140417.QAA14959@s454.cosc.canterbury.ac.nz>

Two reasons why list comprehensions fit better in Python
than the equivalent map/filter/lambda constructs:

1) Scoping. The expressions in the LC have direct access to the
   enclosing scope, which is not true of lambdas in Python.

2) Efficiency. An LC with if-clauses which weed out many potential
   list elements can be much more efficient than the equivalent
   filter operation, which must build the whole list first and
   then remove unwanted items.
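
A small illustration of point 1; in the Python of this thread a lambda
cannot read an enclosing local directly, hence the default-argument trick:

```python
data = [3, 14, 7, 42, 9, 100]
threshold = 10  # a name in the enclosing scope

# list comprehension: the condition reads threshold directly
big = [x for x in data if x > threshold]

# filter spelling: the lambda needs threshold smuggled in as a default
big2 = list(filter(lambda x, t=threshold: x > t, data))

assert big == big2 == [14, 42, 100]
```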

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Mon Aug 14 05:24:43 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 16:24:43 +1200 (NZST)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug
 #111620] lots of use of send() without verifyi ng amount of d
In-Reply-To: <001301c00469$cb380fe0$f2a6b5d4@hagrid>
Message-ID: <200008140424.QAA14962@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <effbot@telia.com>:

> fwiw, I still haven't found a single reference (SUSv2 spec, man-
> pages, Stevens, the original BSD papers) that says that a blocking
> socket may do anything but sending all the data, or fail.

The Solaris manpage sort of seems to indirectly suggest that
it might conceivably be possible:

     EMSGSIZE  The socket requires that message  be  sent  atomi-
               cally, and the message was too long.

Which suggests that some types of socket may not require the
message to be sent atomically. (TCP/IP, for example.)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From thomas@xs4all.net  Mon Aug 14 06:38:55 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 14 Aug 2000 07:38:55 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/api api.tex,1.76,1.77
In-Reply-To: <200008140250.TAA31549@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Sun, Aug 13, 2000 at 07:50:23PM -0700
References: <200008140250.TAA31549@slayer.i.sourceforge.net>
Message-ID: <20000814073854.O14470@xs4all.nl>

On Sun, Aug 13, 2000 at 07:50:23PM -0700, Fred L. Drake wrote:

> In the section on the "Very High Level Layer", address concerns brought up
> by Edward K. Ream <edream@users.sourceforge.net> about FILE* values and
> incompatible C libraries in dynamically linked extensions.  It is not clear
> (to me) how realistic the issue is, but it is better documented than not.

> + Note also that several of these functions take \ctype{FILE*}
> + parameters.  One particular issue which needs to be handled carefully
> + is that the \ctype{FILE} structure for different C libraries can be
> + different and incompatible.  Under Windows (at least), it is possible
> + for dynamically linked extensions to actually use different libraries,
> + so care should be taken that \ctype{FILE*} parameters are only passed
> + to these functions if it is certain that they were created by the same
> + library that the Python runtime is using.

I saw a Jitterbug 'suggestion' bugthread, where Guido ended up liking the
idea of wrapping fopen() and fclose() in the Python library, so that you got
the right FILE structures when linking with another libc/compiler. Whatever
happened to that idea ? Or does it just await implementation ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Mon Aug 14 06:57:13 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 14 Aug 2000 07:57:13 +0200
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sun, Aug 13, 2000 at 08:08:45PM -0400
References: <20000813095357.K14470@xs4all.nl> <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com>
Message-ID: <20000814075713.P14470@xs4all.nl>

Well, Tim, thanx for that mini-PEP, if I can call your recap of years of
discussion that ;-) It did clear up my mind, though I have a few comments to
make. This is the last I have to say about it, though, I didn't intend to
drag you into a new long discussion ;)

On Sun, Aug 13, 2000 at 08:08:45PM -0400, Tim Peters wrote:

> Guido feels exactly the opposite:  the business about "alien, forced feel,
> not fitting" is exactly what he's said about map/filter/reduce/lambda on
> many occasions. 

Note that I didn't mention lambda, and did so purposely ;) Yes, listcomps
are much better than lambda. And I'll grant that the special case of 'None'
as the function in map/filter/reduce is unpythonic. Other than that, they're
just functions, which I hope aren't too unpythonic<wink>

> > [((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in
> > S] for b in [b for b in B if mean(b)] for b,c in C for a,d in D
> > for e in [Egg(a, b, c, d, e) for e in E]]

> That isn't a serious argument, to my eyes.

Well, it's at the core of my doubts :) 'for' and 'if' start out of thin air.
I don't think any other python statement or expression can be repeated and
glued together without any kind of separator, except string literals (which
I can see the point of, but scared me a little nonetheless.)

I don't know enough lisp to write this expression in that, but I assume you
could still match the parentheses to find out how they are grouped.

> I know you missed that possibility above because, despite your claim of
> being hard to parse, it's dead easy to spot where your listcomps begin:  "["
> is easy for the eye to find.

That's the start of a listcomp, but not of a specific listcomp-for or -if.

> > I hope anyone writing something like that (notice the shadowing of
> > some of the outer vrbls in the inner loops)

> You can find the same in nested lambdas littering map/reduce/etc today.

Yes, and wasn't the point to remove those ? <wink>

Like I said, I'm not arguing against listcomprehensions, I'm just saying I'm
sorry we didn't get yet another debate on syntax ;) Having said that, I'll
step back and let Eric's predicted doom fall over Python; hopefully we are
wrong and you all are right :-)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From jack@oratrix.nl  Mon Aug 14 10:44:39 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Mon, 14 Aug 2000 11:44:39 +0200
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: Message by Guido van Rossum <guido@beopen.com> ,
 Fri, 11 Aug 2000 09:28:09 -0500 , <200008111428.JAA04464@cj20424-a.reston1.va.home.com>
Message-ID: <20000814094440.0BC7F303181@snelboot.oratrix.nl>

Isn't the solution to this problem to just implement PyOS_CheckStack() for 
unix?

I assume you can implement it fairly cheaply by having the first call compute 
a stack warning address, and having subsequent calls simply check that the 
stack hasn't extended below the limit yet.

It might also be needed to add a few more PyOS_CheckStack() calls here and 
there, but I think most of them are in place.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From mal@lemburg.com  Mon Aug 14 12:27:27 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 14 Aug 2000 13:27:27 +0200
Subject: [Python-Dev] Doc-strings for class attributes ?!
Message-ID: <3997D79F.CC4A5A0E@lemburg.com>

I've been doing a lot of auto-doc style documenation lately
and have wondered how to include documentation for class attributes
in a nice and usable way.

Right now, we already have doc-strings for modules, classes,
functions and methods. Yet there is no way to assign doc-strings
to arbitrary class attributes.

I figured that it would be nice to have the doc-strings for
attributes use the same notation as for the other objects, e.g.

class C:
    " My class C "

    a = 1
    " This is the attribute a of class C, used for ..."

    b = 0
    " Setting b to 1 causes..."

The idea is to create an implicit second attribute, with a
special name, for every documented attribute, e.g. for
attribute b:

    __doc__b__ = " Setting b to 1 causes..."

That way doc-strings would be able to use class inheritance
just like the attributes themselves. The extra attributes can
be created by the compiler. In -OO mode, these attributes would
not be created.
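
The inheritance property can already be demonstrated by writing the
special names out by hand (here attr_doc is a hypothetical lookup
helper; in the proposal the __doc__b__ attributes would be generated
by the compiler):

```python
class C:
    "My class C"
    a = 1
    __doc__a__ = "This is the attribute a of class C, used for ..."
    b = 0
    __doc__b__ = "Setting b to 1 causes..."

class D(C):
    "A subclass redocumenting only b"
    __doc__b__ = "In D, setting b to 1 causes something else..."

def attr_doc(cls, name):
    # ordinary attribute lookup gives the doc-strings inheritance for free
    return getattr(cls, "__doc__%s__" % name, None)

assert attr_doc(D, "a") == "This is the attribute a of class C, used for ..."
assert attr_doc(D, "b") == "In D, setting b to 1 causes something else..."
```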

What do you think about this idea ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From bwarsaw@beopen.com  Mon Aug 14 15:13:21 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 10:13:21 -0400 (EDT)
Subject: [Python-Dev] 2nd thought: fully qualified host names
References: <3992DF9E.BF5A080C@nowonder.de>
 <200008101614.LAA28785@cj20424-a.reston1.va.home.com>
 <20000810174026.D17171@xs4all.nl>
 <39933AD8.B8EF5D59@nowonder.de>
 <20000811005013.F17171@xs4all.nl>
Message-ID: <14743.65153.264194.444209@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    TW> Fine, the patch addresses that. When the hostname passed to
    TW> smtplib is "" (which is the default), it should be turned into
    TW> a FQDN. I agree. However, if someone passed in a name, we do
    TW> not know if they even *want* the name turned into a FQDN. In
    TW> the face of ambiguity, refuse the temptation to guess.

Just to weigh in after the fact, I agree with Thomas.  All this stuff
is primarily there to generate something sane for the default empty
string argument.  If the library client passes in their own name,
smtplib.py should use that as given.

-Barry


From fdrake@beopen.com  Mon Aug 14 15:46:17 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 14 Aug 2000 10:46:17 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test test_ntpath.py,1.2,1.3
In-Reply-To: <200008140621.XAA12890@slayer.i.sourceforge.net>
References: <200008140621.XAA12890@slayer.i.sourceforge.net>
Message-ID: <14744.1593.850598.411098@cj42289-a.reston1.va.home.com>

Mark Hammond writes:
 > Test for fix to bug #110673: os.path.abspath() now always returns
 > os.getcwd() on Windows, if an empty path is specified.  It
 > previously did not if an empty path was delegated to
 > win32api.GetFullPathName()
...
 > + tester('ntpath.abspath("")', os.getcwd())

  This doesn't work.  The test should pass on non-Windows platforms as
well; on Linux I get this:

cj42289-a(.../python/linux-beowolf); ./python ../Lib/test/test_ntpath.py
error!
evaluated: ntpath.abspath("")
should be: /home/fdrake/projects/python/linux-beowolf
 returned: \home\fdrake\projects\python\linux-beowolf\

1 errors.
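
One way to make the test portable is to guard the Windows-only
expectation, roughly:

```python
import os
import sys
import ntpath

result = ntpath.abspath("")
if sys.platform == "win32":
    # the behavior the bugfix establishes: empty path means cwd
    assert result == os.getcwd()
else:
    # elsewhere there is no Win32 API to consult, but the result
    # should still at least be an absolute (backslashed) path
    assert ntpath.isabs(result)
```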


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From skip@mojam.com (Skip Montanaro)  Mon Aug 14 15:56:39 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Mon, 14 Aug 2000 09:56:39 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0001.txt,1.4,1.5
In-Reply-To: <200008141448.HAA18067@slayer.i.sourceforge.net>
References: <200008141448.HAA18067@slayer.i.sourceforge.net>
Message-ID: <14744.2215.11395.695253@beluga.mojam.com>

    Barry> There are now three basic types of PEPs: informational, standards
    Barry> track, and technical.

Looking more like RFCs all the time... ;-)

Skip


From jim@interet.com  Mon Aug 14 16:25:59 2000
From: jim@interet.com (James C. Ahlstrom)
Date: Mon, 14 Aug 2000 11:25:59 -0400
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <39980F87.85641FD2@interet.com>

Bill Tutt wrote:
> 
> This is an alternative approach that we should certainly consider. We could
> use ANTLR (www.antlr.org) as our parser generator, and have it generate Java

What about using Bison/Yacc?  I have been playing with a
lint tool for Python, and have been using it.

JimA


From trentm@ActiveState.com  Mon Aug 14 16:41:28 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Mon, 14 Aug 2000 08:41:28 -0700
Subject: Python lint tool (was: Re: [Python-Dev] Python keywords (was Lockstep iteration - eureka!))
In-Reply-To: <39980F87.85641FD2@interet.com>; from jim@interet.com on Mon, Aug 14, 2000 at 11:25:59AM -0400
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com> <39980F87.85641FD2@interet.com>
Message-ID: <20000814084128.A7537@ActiveState.com>

On Mon, Aug 14, 2000 at 11:25:59AM -0400, James C. Ahlstrom wrote:
> What about using Bison/Yacc?  I have been playing with a
> lint tool for Python, and have been using it.
> 
Oh yeah? What does the linter check? I would be interested in seeing that.

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From bwarsaw@beopen.com  Mon Aug 14 16:46:50 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 11:46:50 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
References: <3992DF9E.BF5A080C@nowonder.de>
 <200008101614.LAA28785@cj20424-a.reston1.va.home.com>
 <20000810174026.D17171@xs4all.nl>
 <3993D570.7578FE71@nowonder.de>
Message-ID: <14744.5229.470633.973850@anthem.concentric.net>

>>>>> "PS" == Peter Schneider-Kamp <nowonder@nowonder.de> writes:

    PS> After sleeping over it, I noticed that at least
    PS> BaseHTTPServer and ftplib also use a similar
    PS> algorithm to get a fully qualified domain name.

    PS> Together with smtplib there are four occurences
    PS> of the algorithm (2 in BaseHTTPServer). I think
    PS> it would be good not to have four, but one
    PS> implementation.

    PS> First I thought it could be socket.get_fqdn(),
    PS> but it seems a bit troublesome to write it in C.

    PS> Should this go somewhere? If yes, where should
    PS> it go?

    PS> I'll happily prepare a patch as soon as I know
    PS> where to put it.

I wonder if we should move socket to _socket and write a Python
wrapper which would basically import * from _socket and add
make_fqdn().

-Barry


From thomas@xs4all.net  Mon Aug 14 16:48:37 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 14 Aug 2000 17:48:37 +0200
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <14744.5229.470633.973850@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 14, 2000 at 11:46:50AM -0400
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl> <3993D570.7578FE71@nowonder.de> <14744.5229.470633.973850@anthem.concentric.net>
Message-ID: <20000814174837.S14470@xs4all.nl>

On Mon, Aug 14, 2000 at 11:46:50AM -0400, Barry A. Warsaw wrote:

> >>>>> "PS" == Peter Schneider-Kamp <nowonder@nowonder.de> writes:

>     PS> After sleeping over it, I noticed that at least
>     PS> BaseHTTPServer and ftplib also use a similar
>     PS> algorithm to get a fully qualified domain name.
> 
>     PS> Together with smtplib there are four occurences
>     PS> of the algorithm (2 in BaseHTTPServer). I think
>     PS> it would be good not to have four, but one
>     PS> implementation.
> 
>     PS> First I thought it could be socket.get_fqdn(),
>     PS> but it seems a bit troublesome to write it in C.
> 
>     PS> Should this go somewhere? If yes, where should
>     PS> it go?
> 
>     PS> I'll happily prepare a patch as soon as I know
>     PS> where to put it.
> 
> I wonder if we should move socket to _socket and write a Python
> wrapper which would basically import * from _socket and add
> make_fqdn().

+1 on that idea, especially since BeOS and Windows (I think ?) already have
that construction. If we are going to place this make_fqdn() function
anywhere, it should be the socket module or a 'dns' module. (And I mean a
real DNS module, not the low-level wrapper around raw DNS packets that Guido
wrote ;)
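
A sketch of what such a wrapper-level make_fqdn() might look like (the
name comes from this thread; the resolution strategy is my guess at a
reasonable heuristic, not an agreed design):

```python
import socket

def make_fqdn(name=''):
    # hypothetical helper: turn a possibly-empty host name into a FQDN
    if not name:
        name = socket.gethostname()
    if '.' in name:
        # already looks qualified; leave it alone, per the
        # "refuse the temptation to guess" argument above
        return name
    try:
        hostname, aliases, _ = socket.gethostbyaddr(name)
    except socket.error:
        return name
    # prefer the first name that actually contains a domain part
    for candidate in [hostname] + aliases:
        if '.' in candidate:
            return candidate
    return name
```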

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From bwarsaw@beopen.com  Mon Aug 14 16:56:15 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 11:56:15 -0400 (EDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
 <l03102802b5b71c40f9fc@[193.78.237.121]>
 <3993FD49.C7E71108@prescod.net>
Message-ID: <14744.5791.895030.893545@anthem.concentric.net>

>>>>> "PP" == Paul Prescod <paul@prescod.net> writes:

    PP> Let me throw out another idea. What if sequences just had
    PP> .items() methods?

Funny, I remember talking with Guido about this on a lunch trip
several years ago.  Tim will probably chime in that /he/ proposed it
in the Python 0.9.3 time frame.  :)

-Barry


From fdrake@beopen.com  Mon Aug 14 16:59:53 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 14 Aug 2000 11:59:53 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <14744.5229.470633.973850@anthem.concentric.net>
References: <3992DF9E.BF5A080C@nowonder.de>
 <200008101614.LAA28785@cj20424-a.reston1.va.home.com>
 <20000810174026.D17171@xs4all.nl>
 <3993D570.7578FE71@nowonder.de>
 <14744.5229.470633.973850@anthem.concentric.net>
Message-ID: <14744.6009.66009.888078@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > I wonder if we should move socket to _socket and write a Python
 > wrapper which would basically import * from _socket and add
 > make_fqdn().

  I think we could either do this or use PyRun_String() from
initsocket().


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From bwarsaw@beopen.com  Mon Aug 14 17:09:11 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 12:09:11 -0400 (EDT)
Subject: [Python-Dev] Cookie.py
References: <20000811122608.F20646@kronos.cnri.reston.va.us>
 <Pine.GSO.4.10.10008111936060.5259-100000@sundial>
Message-ID: <14744.6567.225562.458943@anthem.concentric.net>

>>>>> "MZ" == Moshe Zadka <moshez@math.huji.ac.il> writes:

    | a) SimpleCookie -- never uses pickle
    | b) SerializeCookie -- always uses pickle
    | c) SmartCookie -- uses pickle based on old heuristic.

Very cool.  The autopicklification really bugged me too (literally) in
Mailman.
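
The never-pickle variant is the one that survives today; a quick
illustration (using http.cookies, the modern spelling of the old
Cookie module):

```python
from http.cookies import SimpleCookie  # the "never uses pickle" variant

c = SimpleCookie()
c["session"] = "12345"
c["session"]["path"] = "/"

# values go in and come out as plain strings, never pickled
assert c["session"].value == "12345"

parsed = SimpleCookie()
parsed.load("session=12345")
assert parsed["session"].value == "12345"
```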

-Barry


From bwarsaw@beopen.com  Mon Aug 14 17:12:45 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 12:12:45 -0400 (EDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka
 !)
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <14744.6781.535265.161119@anthem.concentric.net>

>>>>> "BT" == Bill Tutt <billtut@microsoft.com> writes:

    BT> This is an alternative approach that we should certainly
    BT> consider. We could use ANTLR (www.antlr.org) as our parser
    BT> generator, and have it generate Java for JPython, and C++ for
    BT> CPython.  This would be a good chunk of work, and it's
    BT> something I really don't have time to pursue. I don't even
    BT> have time to pursue the idea about moving keyword recognition
    BT> into the lexer.

    BT> I'm just not sure if you want to bother introducing C++ into
    BT> the Python codebase solely to only have one parser for CPython
    BT> and JPython.

We've talked about exactly those issues internally a while back, but
never came to a conclusion (IIRC) about the C++ issue for CPython.

-Barry


From jim@interet.com  Mon Aug 14 17:29:08 2000
From: jim@interet.com (James C. Ahlstrom)
Date: Mon, 14 Aug 2000 12:29:08 -0400
Subject: Python lint tool (was: Re: [Python-Dev] Python keywords (was
 Lockstep iteration - eureka!))
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com> <39980F87.85641FD2@interet.com> <20000814084128.A7537@ActiveState.com>
Message-ID: <39981E54.D50BD0B4@interet.com>

Trent Mick wrote:
> 
> On Mon, Aug 14, 2000 at 11:25:59AM -0400, James C. Ahlstrom wrote:
> > What about using Bison/Yacc?  I have been playing with a
> > lint tool for Python, and have been using it.
> >
> Oh yeah? What does the linter check? I would be interested in seeing that.

Actually I have better luck parsing Python than linting it.  My
initial naive approach, using C-language wisdom such as checking the
line numbers where variables are set and used, failed.  I now feel that
a Python lint tool must either use complete data flow analysis
(hard) or must actually interpret the code as Python does (hard).
All I can really do so far is get and check function signatures.
I can supply more details if you want, but remember it doesn't
work yet, and I may not have time to complete it.  I learned a
lot though.

To parse Python I first use Parser/tokenizer.c to return tokens,
then a Yacc grammar file.  This parses all of Lib/*.py in less
than two seconds on a modest machine.  The tokens returned by
tokenizer.c must be massaged a bit to be suitable for Yacc, but
nothing major.

All the Yacc actions are calls to Python methods, so the real
work is written in Python.  Yacc just represents the grammar.

The problem I have with the current grammar is the large number
of confusing shifts required.  The grammar can't specify operator
precedence, so it uses shift/reduce conflicts instead.  Yacc
eliminates this problem.

JimA


From tim_one@email.msn.com  Mon Aug 14 17:42:14 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 14 Aug 2000 12:42:14 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <14744.5791.895030.893545@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>

[Paul Prescod]
> Let me throw out another idea. What if sequences just had
> .items() methods?

[Barry A. Warsaw]
> Funny, I remember talking with Guido about this on a lunch trip
> several years ago.  Tim will probably chime in that /he/ proposed it
> in the Python 0.9.3 time frame.  :)

Not me, although *someone* proposed it at least that early, perhaps at 0.9.1
already.  IIRC, that was the very first time Guido used the term
"hypergeneralization" in a cluck-cluck kind of public way.  That is,
sequences and mappings are different concepts in Python, and intentionally
so.  Don't know how he feels now.

But if you add seq.items(), you had better add seq.keys() too, and
seq.values() as a synonym for seq[:].  I guess the perceived advantage of
adding seq.items() is that it supplies yet another incredibly slow and
convoluted way to get at the for-loop index?  "Ah, that's the ticket!  Let's
allocate gazillabytes of storage and compute all the indexes into a massive
data structure up front, and then we can use the loop index that's already
sitting there for free anyway to index into that and get back a redundant
copy of itself!" <wink>.
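The cost Tim describes can be spelled out with a sketch of what such a hypothetical seq.items() would have to materialize (the method never existed for sequences; the helper name is made up):

```python
def seq_items(seq):
    # hypothetical seq.items() for sequences: builds every (index, element)
    # pair up front, duplicating the index the for-loop already has for free
    return [(i, seq[i]) for i in range(len(seq))]

print(seq_items(["a", "b", "c"]))  # [(0, 'a'), (1, 'b'), (2, 'c')]
```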

not-a-good-sign-when-common-sense-is-offended-ly y'rs  - tim




From bwarsaw@beopen.com  Mon Aug 14 17:48:59 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 12:48:59 -0400 (EDT)
Subject: [Python-Dev] Python Announcements ???
References: <39951D69.45D01703@lemburg.com>
Message-ID: <14744.8955.35531.757406@anthem.concentric.net>

>>>>> "M" == M  <mal@lemburg.com> writes:

    M> Could someone at BeOpen please check what happened to the
    M> python-announce mailing list ?!

This is on my task list, but I was on vacation last week and have been
swamped with various other things.  My plan is to feed the
announcements to a Mailman list, where approval can happen using the
same moderator interface.  But I need to make a few changes to Mailman
to support this.

-Barry


From mal@lemburg.com  Mon Aug 14 17:54:05 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 14 Aug 2000 18:54:05 +0200
Subject: [Python-Dev] Python Announcements ???
References: <39951D69.45D01703@lemburg.com> <14744.8955.35531.757406@anthem.concentric.net>
Message-ID: <3998242D.A61010FB@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal@lemburg.com> writes:
> 
>     M> Could someone at BeOpen please check what happened to the
>     M> python-announce mailing list ?!
> 
> This is on my task list, but I was on vacation last week and have been
> swamped with various other things.  My plan is to feed the
> announcements to a Mailman list, where approval can happen using the
> same moderator interface.  But I need to make a few changes to Mailman
> to support this.

Great :-)

BTW, doesn't SourceForge have some News channel for Python
as well (I have seen these for other projects)? It would be
cool to channel the announcements there too.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From ping@lfw.org  Mon Aug 14 19:58:11 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Mon, 14 Aug 2000 13:58:11 -0500 (CDT)
Subject: Python lint tool (was: Re: [Python-Dev] Python keywords (was
 Lockstep iteration - eureka!))
In-Reply-To: <39981E54.D50BD0B4@interet.com>
Message-ID: <Pine.LNX.4.10.10008141345220.3988-100000@server1.lfw.org>

Trent Mick wrote:
> Oh yeah? What does the linter check? I would be interested in seeing that.

James C. Ahlstrom wrote:
> Actually I have better luck parsing Python than linting it.  [...]
> All I can really do so far is get and check function signatures.

Python is hard to lint-check because types and objects are so
dynamic.  Last time i remember visiting this issue, Tim Peters
came up with a lint program that was based on warning you if
you used a particular spelling of an identifier only once (thus
likely to indicate a typing mistake).

I enhanced this a bit to follow imports and the result is at

    http://www.lfw.org/python/

(look for "pylint").

The rule is pretty simplistic, but i've tweaked it a bit and it
has actually worked pretty well for me.
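The spelled-only-once heuristic is simple enough to sketch in a few lines with the standard tokenize module (this is an assumption-laden toy, not the real pylint from lfw.org):

```python
import builtins, collections, io, keyword, tokenize

def lonely_names(source):
    """Names spelled exactly once in source -- the simple heuristic
    described above (keywords and builtins are ignored)."""
    skip = set(dir(builtins))
    counts = collections.Counter(
        tok.string
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type == tokenize.NAME
        and not keyword.iskeyword(tok.string)
        and tok.string not in skip
    )
    return sorted(name for name, n in counts.items() if n == 1)

code = "total = 0\nfor x in range(10):\n    total = total + x\nprint(totel)\n"
print(lonely_names(code))  # 'totel' occurs once -- probably a typo for 'total'
```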

Anyway, feel free to give it a whirl.



-- ?!ng



From bwarsaw@beopen.com  Mon Aug 14 20:12:04 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 15:12:04 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
References: <3992DF9E.BF5A080C@nowonder.de>
 <200008101614.LAA28785@cj20424-a.reston1.va.home.com>
 <20000810174026.D17171@xs4all.nl>
 <3993D570.7578FE71@nowonder.de>
 <14744.5229.470633.973850@anthem.concentric.net>
 <14744.6009.66009.888078@cj42289-a.reston1.va.home.com>
Message-ID: <14744.17540.586064.729048@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake@beopen.com> writes:

    |   I think we could either do this or use PyRun_String() from
    | initsocket().

Ug.  -1 on using PyRun_String().  Doing the socket->_socket shuffle is
better for the long term.

-Barry


From nowonder@nowonder.de  Mon Aug 14 22:12:03 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Mon, 14 Aug 2000 21:12:03 +0000
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>
Message-ID: <399860A3.4E9A340E@nowonder.de>

Tim Peters wrote:
> 
> But if you add seq.items(), you had better add seq.keys() too, and
> seq.values() as a synonym for seq[:].  I guess the perceived advantage of
> adding seq.items() is that it supplies yet another incredibly slow and
> convoluted way to get at the for-loop index?  "Ah, that's the ticket!  Let's
> allocate gazillabytes of storage and compute all the indexes into a massive
> data structure up front, and then we can use the loop index that's already
> sitting there for free anyway to index into that and get back a redundant
> copy of itself!" <wink>.

That's a -1, right? <0.1 wink>

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From fdrake@beopen.com  Mon Aug 14 20:13:29 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 14 Aug 2000 15:13:29 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <14744.17540.586064.729048@anthem.concentric.net>
References: <3992DF9E.BF5A080C@nowonder.de>
 <200008101614.LAA28785@cj20424-a.reston1.va.home.com>
 <20000810174026.D17171@xs4all.nl>
 <3993D570.7578FE71@nowonder.de>
 <14744.5229.470633.973850@anthem.concentric.net>
 <14744.6009.66009.888078@cj42289-a.reston1.va.home.com>
 <14744.17540.586064.729048@anthem.concentric.net>
Message-ID: <14744.17625.935969.667720@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > Ug.  -1 on using PyRun_String().  Doing the socket->_socket shuffle is
 > better for the long term.

  I'm inclined to agree, simply because it allows at least a slight
simplification in socketmodule.c since the conditional naming of the
module init function can be removed.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From bwarsaw@beopen.com  Mon Aug 14 20:24:10 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 15:24:10 -0400 (EDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <14744.5791.895030.893545@anthem.concentric.net>
 <LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>
Message-ID: <14744.18266.840173.466719@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one@email.msn.com> writes:

    TP> But if you add seq.items(), you had better add seq.keys() too,
    TP> and seq.values() as a synonym for seq[:].  I guess the
    TP> perceived advantage of adding seq.items() is that it supplies
    TP> yet another incredibly slow and convoluted way to get at the
    TP> for-loop index?  "Ah, that's the ticket!  Let's allocate
    TP> gazillabytes of storage and compute all the indexes into a
    TP> massive data structure up front, and then we can use the loop
    TP> index that's already sitting there for free anyway to index
    TP> into that and get back a redundant copy of itself!" <wink>.

Or create a generator.  <oops, slap>

-Barry


From bwarsaw@beopen.com  Mon Aug 14 20:25:07 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 15:25:07 -0400 (EDT)
Subject: [Python-Dev] Python Announcements ???
References: <39951D69.45D01703@lemburg.com>
 <14744.8955.35531.757406@anthem.concentric.net>
 <3998242D.A61010FB@lemburg.com>
Message-ID: <14744.18323.499501.115700@anthem.concentric.net>

>>>>> "M" == M  <mal@lemburg.com> writes:

    M> BTW, doesn't SourceForge have some News channel for Python
    M> as well (I have seen these for other projects)? It would be
    M> cool to channel the announcements there too.

Yes, but it's a bit clunky.

-Barry


From esr@thyrsus.com  Mon Aug 14 23:57:18 2000
From: esr@thyrsus.com (esr@thyrsus.com)
Date: Mon, 14 Aug 2000 18:57:18 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <200008140417.QAA14959@s454.cosc.canterbury.ac.nz>
References: <20000813095357.K14470@xs4all.nl> <200008140417.QAA14959@s454.cosc.canterbury.ac.nz>
Message-ID: <20000814185718.A2509@thyrsus.com>

Greg Ewing <greg@cosc.canterbury.ac.nz>:
> Two reasons why list comprehensions fit better in Python
> than the equivalent map/filter/lambda constructs:
> 
> 1) Scoping. The expressions in the LC have direct access to the
>    enclosing scope, which is not true of lambdas in Python.

This is a bug in lambdas, not a feature of syntax.
 
> 2) Efficiency. An LC with if-clauses which weed out many potential
>    list elements can be much more efficient than the equivalent
>    filter operation, which must build the whole list first and
>    then remove unwanted items.

A better argument.  To refute it, I'd need to open a big can of worms
labeled "lazy evaluation".
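Greg's two spellings, side by side. In the Python of 2000, filter() built the complete result list in one go, which is the overhead he refers to (today's filter() is lazy, so the gap has narrowed):

```python
# keep only multiples of 5: once via filter()+lambda, once via an LC
data = range(20)

via_filter = list(filter(lambda n: n % 5 == 0, data))
via_lc = [n for n in data if n % 5 == 0]

print(via_filter == via_lc, via_lc)  # True [0, 5, 10, 15]
```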
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Freedom, morality, and the human dignity of the individual consists
precisely in this; that he does good not because he is forced to do
so, but because he freely conceives it, wants it, and loves it.
	-- Mikhail Bakunin 


From esr@thyrsus.com  Mon Aug 14 23:59:08 2000
From: esr@thyrsus.com (esr@thyrsus.com)
Date: Mon, 14 Aug 2000 18:59:08 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000814075713.P14470@xs4all.nl>
References: <20000813095357.K14470@xs4all.nl> <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com> <20000814075713.P14470@xs4all.nl>
Message-ID: <20000814185908.B2509@thyrsus.com>

Thomas Wouters <thomas@xs4all.net>:
> Like I said, I'm not arguing against listcomprehensions, I'm just saying I'm
> sorry we didn't get yet another debate on syntax ;) Having said that, I'll
> step back and let Eric's predicted doom fall over Python; hopefully we are
> wrong and you all are right :-)

Now, now.  I'm not predicting the doom of Python as a whole, just that 
listcomp syntax will turn out to have been a bad, limiting mistake.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

It is proper to take alarm at the first experiment on our
liberties. We hold this prudent jealousy to be the first duty of
citizens and one of the noblest characteristics of the late
Revolution. The freemen of America did not wait till usurped power had
strengthened itself by exercise and entangled the question in
precedents. They saw all the consequences in the principle, and they
avoided the consequences by denying the principle. We revere this
lesson too much ... to forget it
	-- James Madison.


From MarkH@ActiveState.com  Tue Aug 15 01:46:56 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Tue, 15 Aug 2000 10:46:56 +1000
Subject: [Python-Dev] WindowsError repr
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEMDDEAA.MarkH@ActiveState.com>

I have just checked in a fix for: [ Bug #110670 ] Win32 os.listdir raises
confusing errors
http://sourceforge.net/bugs/?group_id=5470&func=detailbug&bug_id=110670

In a nutshell:
>>> os.listdir('/cow')
...
OSError: [Errno 3] No such process: '/cow'
>>>

The solution here was to use the new WindowsError object that was defined
back in February
(http://www.python.org/pipermail/python-dev/2000-February/008803.html)  As
this is a sub-class of OSError, nothing will break.
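The subclass relationship is what carries the compatibility guarantee; a sketch with stand-in classes (WindowsError itself only exists on Windows builds, so the names here are fakes):

```python
# stand-in exception hierarchy mirroring OSError <- WindowsError
class FakeOSError(Exception):
    pass

class FakeWindowsError(FakeOSError):
    pass

def fake_listdir(path):
    # simulate os.listdir failing on Windows after the fix
    raise FakeWindowsError(
        "[Errno 3] The system cannot find the path specified: %r" % path)

try:
    fake_listdir('/cow')
except FakeOSError as err:      # catching the base class still works
    caught = err

print(type(caught).__name__)    # FakeWindowsError
```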

However, the _look_ of the error does change.  After my fix, it now looks
like:

>>> os.listdir('/cow')
...
WindowsError: [Errno 3] The system cannot find the path specified: '/cow'
>>>

AGAIN - I stress - catching "OSError" or "os.error" _will_ continue to
work, as WindowsError derives from OSError.  It just worries me that people
will start explicitly catching "WindowsError", regardless of whatever
documentation we might write on the subject.

Does anyone see this as a problem?  Should a WindowsError masquerade as
"OSError", or maybe just look a little more like it - eg, "OSError
(windows)" ??

Thoughts,

Mark.



From tim_one@email.msn.com  Tue Aug 15 02:01:55 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 14 Aug 2000 21:01:55 -0400
Subject: [Python-Dev] WindowsError repr
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEMDDEAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEMIGPAA.tim_one@email.msn.com>

[Mark Hammond]
> ...
> However, the _look_ of the error does change.  After my fix, it now looks
> like:
>
> >>> os.listdir('/cow')
> ...
> WindowsError: [Errno 3] The system cannot find the path specified: '/cow'
> >>>

Thank you!

> AGAIN - I stress - catching "OSError" or "os.error" _will_ continue to
> work, as WindowsError derives from OSError.  It just worries me
> that people will start explicitly catching "WindowsError", regardless
> of whatever documentation we might write on the subject.
>
> Does anyone see this as a problem?  Should a WindowsError masquerade as
> "OSError", or maybe just look a little more like it - eg, "OSError
> (windows)" ??

I can assure you that nobody running on a Unix(tm) derivative is going to
catch WindowsError as such on purpose, so the question is how stupid are
Windows users?  I say leave it alone and let them tell us <wink>.




From greg@cosc.canterbury.ac.nz  Tue Aug 15 02:08:00 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 15 Aug 2000 13:08:00 +1200 (NZST)
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers
 for 2.0)
In-Reply-To: <20000814075713.P14470@xs4all.nl>
Message-ID: <200008150108.NAA15067@s454.cosc.canterbury.ac.nz>

> > [((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in
> > S] for b in [b for b in B if mean(b)] for b,c in C for a,d in D
> > for e in [Egg(a, b, c, d, e) for e in E]]

Note that shadowing of the local variables like that in
an LC is NOT allowed, because, like the variables in a
normal for loop, they're all at the same scope level.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim_one@email.msn.com  Tue Aug 15 05:43:44 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 00:43:44 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <399860A3.4E9A340E@nowonder.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCOENAGPAA.tim_one@email.msn.com>

[Tim]
> But if you add seq.items(), you had better add seq.keys() too, and
> seq.values() as a synonym for seq[:].  I guess the perceived
> advantage of adding seq.items() is that it supplies yet another
> incredibly slow and convoluted way to get at the for-loop index?
> "Ah, that's the ticket!  Let's allocate gazillabytes of storage and
> compute all the indexes into a massive data structure up front, and
> then we can use the loop index that's already sitting there for
> free anyway to index into that and get back a redundant copy of
> itself!" <wink>.

[Peter Schneider-Kamp]
> That's a -1, right? <0.1 wink>

-0 if you also add .keys() and .values() (if you're going to
hypergeneralize, don't go partway nuts -- then it's both more general than
it should be yet still not as general as people will expect).

-1 if it's *just* seq.items().

+1 on an "indexing" clause (the BDFL liked that enough to implement it a few
years ago, but it didn't go in then because he found some random putz who
had used "indexing" as a vrbl name; but if it doesn't need to be a keyword,
even that lame (ask Just <wink>) objection goes away).

sqrt(-1) on Barry's generator tease, because that's an imaginary proposal at
this stage of the game.




From effbot@telia.com  Tue Aug 15 06:33:03 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 07:33:03 +0200
Subject: [Python-Dev] WindowsError repr
References: <ECEPKNMJLHAPFFJHDOJBEEMDDEAA.MarkH@ActiveState.com>
Message-ID: <003d01c0067a$4aa6dc40$f2a6b5d4@hagrid>

mark wrote:
> AGAIN - I stress - catching "OSError" or "os.error" _will_ continue to
> work, as WindowsError derives from OSError.  It just worries me that people
> will start explicitly catching "WindowsError", regardless of whatever
> documentation we might write on the subject.
> 
> Does anyone see this as a problem?

I've seen bigger problems -- but I think it's a problem.

any reason you cannot just use a plain OSError?  is the extra
"this is not a generic OSError" information bit actually used by
anyone?

</F>



From effbot@telia.com  Tue Aug 15 07:14:42 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 08:14:42 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.7,1.8
References: <200008150558.WAA26703@slayer.i.sourceforge.net>
Message-ID: <006d01c00680$1c4469c0$f2a6b5d4@hagrid>

tim wrote:
> !     test_popen2       Win32      X X    26-Jul-2000
>           [believe this was fix by /F]
> !         [still fails 15-Aug-2000 for me, on Win98 - tim
> !          test test_popen2 crashed -- exceptions.WindowsError :
> !          [Errno 2] The system cannot find the file specified
> !         ]

do you have w9xpopen.exe in place?

(iirc, mark just added the build files)

</F>



From tim_one@email.msn.com  Tue Aug 15 07:30:40 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 02:30:40 -0400
Subject: [Python-Dev] [PEP 200] Help!
Message-ID: <LNBBLJKPBEHFEDALKOLCEENEGPAA.tim_one@email.msn.com>

I took a stab at updating PEP200 (the 2.0 release plan), but if you know
more about any of it that should be recorded or changed, please just do so!
There's no reason to funnel updates thru me.  Jeremy may feel differently
when he gets back, but in the meantime this is just more time-consuming
stuff I hadn't planned on needing to do.

Windows geeks:  what's going on with test_winreg2 and test_popen2?  Those
tests have been failing forever (at least on Win98 for me), and the grace
period has more than expired.  Fredrik, if you're still waiting for me to do
something with popen2 (rings a vague bell), please remind me -- I've
forgotten what it was!

thrashingly y'rs  - tim




From tim_one@email.msn.com  Tue Aug 15 07:43:06 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 02:43:06 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.7,1.8
In-Reply-To: <006d01c00680$1c4469c0$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIENFGPAA.tim_one@email.msn.com>

> tim wrote:
> > !     test_popen2       Win32      X X    26-Jul-2000
> >           [believe this was fix by /F]
> > !         [still fails 15-Aug-2000 for me, on Win98 - tim
> > !          test test_popen2 crashed -- exceptions.WindowsError :
> > !          [Errno 2] The system cannot find the file specified
> > !         ]

[/F]
> do you have w9xpopen.exe in place?
>
> (iirc, mark just added the build files)

Ah, thanks!  This is coming back to me now ... kinda ... will pursue.




From tim_one@email.msn.com  Tue Aug 15 08:07:49 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 03:07:49 -0400
Subject: [Python-Dev] test_popen2 on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIENFGPAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENGGPAA.tim_one@email.msn.com>

[/F]
> do you have w9xpopen.exe in place?
>
> (iirc, mark just added the build files)

Heh -- yes, and I wasn't building them.

Now test_popen2 fails for a different reason:

def _test():
    teststr = "abc\n"
    print "testing popen2..."
    r, w = popen2('cat')
    ...

Ain't no "cat" on Win98!  The test is specific to Unix derivatives.  Other
than that, popen2 is working for me now.

Mumble.




From MarkH@ActiveState.com  Tue Aug 15 09:08:33 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Tue, 15 Aug 2000 18:08:33 +1000
Subject: [Python-Dev] test_popen2 on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKENGGPAA.tim_one@email.msn.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCENBDEAA.MarkH@ActiveState.com>

> Ain't no "cat" on Win98!  The test is specific to Unix
> derivatives.  Other than that, popen2 is working for me

heh - I noticed that yesterday, then lumped it in the too hard basket.

What I really wanted was for test_popen2 to use python itself for the
sub-process.  This way, commands like 'python -c "import sys;sys.exit(2)"'
could test the handle close semantics, for example.  I gave up when I
realized I would probably need to create temp files with the mini-programs.
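Mark's idea can be sketched with today's subprocess module: drive Python itself as the child, so the test needs no platform-specific tool like 'cat' and no temp files (the popen2 module of the time would phrase this differently):

```python
import subprocess, sys

# exercise exit-status handling, the example Mark gives
proc = subprocess.run([sys.executable, "-c", "import sys; sys.exit(2)"])
print(proc.returncode)  # 2

# and a portable stand-in for popen2('cat'): echo stdin back to stdout
echo = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
    input="abc\n", capture_output=True, text=True)
print(repr(echo.stdout))  # 'abc\n'
```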

I was quite confident that if I attempted this, I would surely break the
test suite on a few platforms.  I wasn't brave enough to risk those
testicles of wrath at this stage in the game <wink>

Mark.




From thomas@xs4all.net  Tue Aug 15 12:15:42 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 13:15:42 +0200
Subject: [Python-Dev] New PEP for import-as
Message-ID: <20000815131542.B14470@xs4all.nl>

--QKdGvSO+nmPlgiQ/
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline


I wrote a quick PEP describing the 'import as' proposal I posted a patch for
last week. Mostly since I was bored in the train to work (too many kids running
around to play Diablo II or any other game -- I hate it when those brats go
'oh cool' and keep hanging around looking over my shoulder ;-) but also a
bit because Tim keeps insisting it should be easy to write a PEP. Perhaps
lowering the standard by providing a few *small* PEPs helps with that ;)
Just's 'indexing-for' PEP would be a good one, too, in that case.

Anyway, the proto-PEP is attached. It's in draft status as far as I'm
concerned, but the PEP isn't really necessary if the feature is accepted by
acclamation.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!

--QKdGvSO+nmPlgiQ/
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="pep-02XX.txt"

PEP: 2XX
Title: Import As
Version: $Revision: 1.0 $
Owner: thomas@xs4all.net (Thomas Wouters)
Python-Version: 2.0
Status: Draft


Introduction

    This PEP describes the `import as' proposal for Python 2.0. This
    PEP tracks the status and ownership of this feature. It contains a
    description of the feature and outlines changes necessary to
    support the feature. The CVS revision history of this file
    contains the definitive historical record.


Rationale

    This PEP proposes a small extension of current Python syntax
    regarding the `import' and `from <module> import' statements.
    These statements load a module, and either bind that module to
    a local name, or bind objects from that module to local names.
    However, it is sometimes desirable to bind those objects to a
    different name, for instance to avoid name clashes.  Currently,
    a roundabout way has to be used to achieve this:

    import os
    real_os = os
    del os
    
    And similar for the `from ... import' statement:
    
    from os import fdopen, exit, stat
    os_fdopen = fdopen
    os_stat = stat
    del fdopen, stat
    
    The proposed syntax change would add an optional `as' clause to
    both these statements, as follows:

    import os as real_os
    from os import fdopen as os_fdopen, exit, stat as os_stat
    
    The `as' name is not intended to be a keyword, and some trickery
    has to be used to convince the CPython parser it isn't one. For
    more advanced parsers/tokenizers, however, this should not be a
    problem.
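Since the proposal was ultimately accepted, both spellings are legal in modern Python, and the equivalence the rationale claims can be checked directly (a sketch; real_os and os_stat are just illustrative names):

```python
# the roundabout workaround from the rationale ...
import os
real_os = os
del os

# ... and the proposed `as' forms; both must bind the same objects
import os as real_os2
from os import stat as os_stat

print(real_os is real_os2, os_stat is real_os2.stat)  # True True
```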


Implementation details

    A proposed implementation of this new clause can be found in the
    SourceForge patch manager[1]. The patch uses a NAME field in the
    grammar rather than a bare string, to avoid the keyword issue. It
    also introduces a new bytecode, IMPORT_FROM_AS, which loads an
    object from a module and pushes it onto the stack, so it can be
    stored by a normal STORE_NAME opcode.
    
    The special case of `from module import *' remains a special case,
    in that it cannot accommodate an `as' clause. Also, the current
    implementation does not use IMPORT_FROM_AS for the old form of
    from-import, even though it would make sense to do so. The reason
    for this is that the current IMPORT_FROM bytecode loads objects
    directly from a module into the local namespace, in one bytecode
    operation, and also handles the special case of `*'. As a result
    of moving to the IMPORT_FROM_AS bytecode, two things would happen:
    
    - Two bytecode operations would have to be performed, per symbol,
      rather than one.
      
    - The names imported through `from-import' would be susceptible to
      the `global' keyword, which they currently are not. This means
      that `from-import' outside of the `*' special case behaves more
      like the normal `import' statement, which already follows the
      `global' keyword. It also means, however, that the `*' special
      case is even more special, compared to the ordinary form of
      `from-import'.

    However, for consistency and for simplicity of implementation, it
    is probably best to split off the special case entirely, making a
    separate bytecode `IMPORT_ALL' that handles the special case of
    `*', and handle all other forms of `from-import' the way the
    proposed `IMPORT_FROM_AS' bytecode does.

    This dilemma does not apply to the normal `import' statement,
    because this is already split into two opcodes, a `LOAD_MODULE' and a
    `STORE_NAME' opcode. Supporting the `import as' syntax is a slight
    change to the compiler only.


Copyright

    This document has been placed in the Public Domain.


References

    [1]
http://sourceforge.net/patch/?func=detailpatch&patch_id=101135&group_id=5470



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

--QKdGvSO+nmPlgiQ/--


From nowonder@nowonder.de  Tue Aug 15 16:32:50 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Tue, 15 Aug 2000 15:32:50 +0000
Subject: [Python-Dev] IDLE development - Call for participation
Message-ID: <399962A2.D53A048F@nowonder.de>

To (hopefully) speed up the development of IDLE, a temporary
fork has been created as a separate project at SourceForge:

  http://idlefork.sourceforge.net
  http://sourceforge.net/projects/idlefork

The CVS version represents the enhanced IDLE version
used by David Scherer in his VPython.  Besides other
improvements, this version executes threads in a
separate process.

The Spanish inquisition invites everybody interested in
IDLE (and not keen to participate in any witch trials)
to contribute to the project.

Any kind of contribution (discussion of new features,
bug reports, patches) will be appreciated.

If we can get the new IDLE version stable and Python's
benevolent dictator for life blesses our lines of code,
the improved IDLE may go back into Python's source
tree proper.

at-least-it'll-be-part-of-Py3K-<wink>-ly y'rs
Peter

P.S.: You do not have to be a member of the Flying Circus.
P.P.S.: There is no Spanish inquisition <0.5 wink>!
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From thomas@xs4all.net  Tue Aug 15 16:56:46 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 17:56:46 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python pythonrun.c,2.105,2.106
In-Reply-To: <200008151549.IAA25722@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Tue, Aug 15, 2000 at 08:49:06AM -0700
References: <200008151549.IAA25722@slayer.i.sourceforge.net>
Message-ID: <20000815175646.A376@xs4all.nl>

On Tue, Aug 15, 2000 at 08:49:06AM -0700, Fred L. Drake wrote:

> + #include "osdefs.h"			/* SEP */

This comment is kind of cryptic... I know of only one SEP, and that's in "a
SEP field", a construct we use quite often at work ;-) Does this comment
mean the same ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Tue Aug 15 17:09:34 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 12:09:34 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python pythonrun.c,2.105,2.106
In-Reply-To: <20000815175646.A376@xs4all.nl>
References: <200008151549.IAA25722@slayer.i.sourceforge.net>
 <20000815175646.A376@xs4all.nl>
Message-ID: <14745.27454.815489.456310@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > On Tue, Aug 15, 2000 at 08:49:06AM -0700, Fred L. Drake wrote:
 > 
 > > + #include "osdefs.h"			/* SEP */
 > 
 > This comment is kind of cryptic... I know of only one SEP, and that's in "a
 > SEP field", a construct we use quite often at work ;-) Does this comment
 > mean the same ?

  Very cryptic indeed!  It meant I was including osdefs.h to get the
SEP #define from there, but then I didn't need it in the final version
of the code, so the #include can be removed.
  I'll remove those pronto!  Thanks for pointing out my sloppiness!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From trentm@ActiveState.com  Tue Aug 15 18:47:23 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Tue, 15 Aug 2000 10:47:23 -0700
Subject: [Python-Dev] segfault in sre on 64-bit plats
Message-ID: <20000815104723.A27306@ActiveState.com>

Fredrik,

The sre module currently segfaults on one of the test suite tests on both
Win64 and 64-bit Linux:

    [trentm@nickel src]$ ./python -c "import sre; sre.match('(x)*', 50000*'x')" > srefail.out
    Segmentation fault (core dumped)

I know that I can't expect you to debug this completely, as you don't have
the hardware, but I was hoping you might be able to shed some light on the
subject for me.

This test on Win32 and Linux32 hits the recursion limit check of 10000 in
SRE_MATCH(). However, on Linux64 the segfault occurs at a recursion depth of
7500. I don't want to just willy-nilly drop the recursion limit down to make
the problem go away.

Do you have any idea why the segfault may be occurring on 64-bit platforms?

Mark (Favas), have you been having any problems with sre on your 64-bit plats?


In the example above I turned VERBOSE on in _sre.c. Would the trace help you?
Here is the last of it (the whole thing is 2MB so I am not sending it all):

    copy 0:1 to 15026 (2)
    |0x600000000020b90c|0x6000000000200d72|ENTER 7517
    |0x600000000020b90e|0x6000000000200d72|MARK 0
    |0x600000000020b912|0x6000000000200d72|LITERAL 120
    |0x600000000020b916|0x6000000000200d73|MARK 1
    |0x600000000020b91a|0x6000000000200d73|MAX_UNTIL 7515
    copy 0:1 to 15028 (2)
    |0x600000000020b90c|0x6000000000200d73|ENTER 7518
    |0x600000000020b90e|0x6000000000200d73|MARK 0
    |0x600000000020b912|0x6000000000200d73|LITERAL 120
    |0x600000000020b916|0x6000000000200d74|MARK 1
    |0x600000000020b91a|0x6000000000200d74|MAX_UNTIL 7516
    copy 0:1 to 15030 (2)
    |0x600000000020b90c|0x6000000000200d74|ENTER 7519
    |0x600000000020b90e|0x6000000000200d74|MARK 0
    |0x600000000020b912|0x6000000000200d74|LITERAL 120
    |0x600000000020b916|0x6000000000200d75|MARK 1
    |0x600000000020b91a|0x6000000000200d75|MAX_UNTIL 7517
    copy 0:1 to 15032 (2)
    |0x600000000020b90c|0x6000000000200d75|ENTER 7520
    |0x600000000020b90e|0x6000000000200d75|MARK 0
    |0x600000000020b912|0x6000000000200d75|LITERAL 120
    |0x600000000020b916|0x6000000000200d76|MARK 1
    |0x600000000020b91a|0x6000000000200d76|MAX_UNTIL 7518
    copy 0:1 to 15034 (2)
    |0x600000000020b90c|0x6000000000200d76|ENTER 7521
    |0x600000000020b90e|0x6000000000200d76|MARK 0
    |0x600000000020b912|0x6000000000200d76|LITERAL 120
    |0x600000000020b916|0x6000000000200d77|MARK 1
    |0x600000000020b91a|0x6000000000200d77|MAX_UNTIL 7519
    copy 0:1 to 15036 (2)
    |0x600000000020b90c|0x600



Thanks,
Trent

-- 
Trent Mick
TrentM@ActiveState.com


From thomas@xs4all.net  Tue Aug 15 19:24:14 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 20:24:14 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008151746.KAA06454@bush.i.sourceforge.net>; from noreply@sourceforge.net on Tue, Aug 15, 2000 at 10:46:39AM -0700
References: <200008151746.KAA06454@bush.i.sourceforge.net>
Message-ID: <20000815202414.B376@xs4all.nl>

On Tue, Aug 15, 2000 at 10:46:39AM -0700, noreply@sourceforge.net wrote:

[ About my slight fix to ref5.tex, on list comprehensions syntax ]

> Comment by tim_one:

> Reassigned to Fred, because it's a simple doc change.  Fred, accept this
> <wink> and check it in.  Note that the grammar has a bug, though, so this
> will need to be changed again (and so will the implementation).  That is,
> [x if 6] should not be a legal expression but the grammar allows it today.

A comment by someone (?!ng ?) who forgot to login, at the original
list-comprehensions patch suggests that Skip forgot to include the
documentation patch to listcomps he provided. Ping, Skip, can you sort this
out and check in the rest of that documentation (which supposedly includes a
tutorial section as well) ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Tue Aug 15 19:27:38 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 20:27:38 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/ref ref5.tex,1.32,1.33
In-Reply-To: <200008151754.KAA19233@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Tue, Aug 15, 2000 at 10:54:51AM -0700
References: <200008151754.KAA19233@slayer.i.sourceforge.net>
Message-ID: <20000815202737.C376@xs4all.nl>

On Tue, Aug 15, 2000 at 10:54:51AM -0700, Fred L. Drake wrote:

> Index: ref5.tex
> diff -C2 -r1.32 -r1.33
> *** ref5.tex	2000/08/12 18:09:50	1.32
> --- ref5.tex	2000/08/15 17:54:49	1.33
> ***************
> *** 153,157 ****
>   
>   \begin{verbatim}
> ! list_display:   "[" [expression_list [list_iter]] "]"
>   list_iter:   list_for | list_if
>   list_for:    "for" expression_list "in" testlist [list_iter]
> --- 153,158 ----
>   
>   \begin{verbatim}
> ! list_display:   "[" [listmaker] "]"
> ! listmaker:   expression_list ( list_iter | ( "," expression)* [","] )

Uhm, this is wrong, and I don't think it was what I submitted either
(though, if I did, I apologize :) The first element of listmaker is an
expression, not an expression_list. I'll change that, unless Ping and Skip
wake up and fix it in a much better way instead.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Tue Aug 15 19:32:07 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 14:32:07 -0400 (EDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <20000815202414.B376@xs4all.nl>
References: <200008151746.KAA06454@bush.i.sourceforge.net>
 <20000815202414.B376@xs4all.nl>
Message-ID: <14745.36007.423378.87635@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > A comment by someone (?!ng ?) who forgot to login, at the original
 > list-comprehensions patch suggests that Skip forgot to include the
 > documentation patch to listcomps he provided. Ping, Skip, can you sort this
 > out and check in the rest of that documentation (which supposedly includes a
 > tutorial section as well) ?

  I've not been tracking the list comprehensions discussion, but there
is a (minimal) entry in the tutorial.  It could use some fleshing out.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From fdrake@beopen.com  Tue Aug 15 19:34:43 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 14:34:43 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/ref ref5.tex,1.32,1.33
In-Reply-To: <20000815202737.C376@xs4all.nl>
References: <200008151754.KAA19233@slayer.i.sourceforge.net>
 <20000815202737.C376@xs4all.nl>
Message-ID: <14745.36163.362268.388275@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Uhm, this is wrong, and I don't think it was what I submitted either
 > (though, if I did, I apologize :) The first element of listmaker is an
 > expression, not an expression_list. I'll change that, unless Ping and Skip
 > wake up and fix it in a much better way instead.

  You're right; that's what I get for applying it manually (trying to
avoid all the machinery of saving/patching from SF...).
  Fixed in a sec!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From Fredrik Lundh" <effbot@telia.com  Tue Aug 15 20:11:11 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 21:11:11 +0200
Subject: [Python-Dev] PyOS_CheckStack for windows
References: <20000815104723.A27306@ActiveState.com>
Message-ID: <005401c006ec$a95a74a0$f2a6b5d4@hagrid>

trent wrote:
> This test on Win32 and Linux32 hits the recursion limit check of 10000 in
> SRE_MATCH(). However, on Linux64 the segfault occurs at a recursion depth of
> 7500. I don't want to just willy-nilly drop the recursion limit down to make
> the problem go away.

SRE is overflowing the stack, of course :-(

:::

I spent a little time surfing around on the MSDN site, and came
up with the following little PyOS_CheckStack implementation for
Visual C (and compatibles):

#include <malloc.h>

int __inline
PyOS_CheckStack()
{
    __try {
        /* alloca throws a stack overflow exception if there's less
           than 2k left on the stack */
        alloca(2000);
        return 0;
    } __except (1) {
        /* just ignore the error */
    }
    return 1;
}

a quick look at the resulting assembler code indicates that this
should be pretty efficient (some exception-related stuff, and a
call to an internal stack probe function), but I haven't added it
to the interpreter (and I don't have time to dig deeper into this
before the weekend).

maybe someone else has a little time to spare?

it shouldn't be that hard to figure out 1) where to put this, 2) what
ifdef's to use around it, and 3) what "2000" should be changed to...

(and don't forget to set USE_STACKCHECK)
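[Editor's note: for readers without Visual C handy, the spirit of this check can be sketched at the Python level. This is a hypothetical analogue, not the actual interpreter mechanism: probe for recursion headroom and report imminent overflow instead of crashing.]

```python
def check_stack(headroom=50):
    """Rough Python-level analogue of PyOS_CheckStack: try to recurse
    `headroom` frames deeper and report whether that would overflow,
    instead of letting the interpreter blow the stack."""
    def probe(n):
        if n:
            probe(n - 1)
    try:
        probe(headroom)
    except RecursionError:
        return True   # overflow imminent; caller should back off
    return False
```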

</F>



From Fredrik Lundh" <effbot@telia.com  Tue Aug 15 20:17:49 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 21:17:49 +0200
Subject: [Python-Dev] PyOS_CheckStack for windows
References: <20000815104723.A27306@ActiveState.com> <005401c006ec$a95a74a0$f2a6b5d4@hagrid>
Message-ID: <008601c006ed$8100c120$f2a6b5d4@hagrid>

I wrote:
>     } __except (1) {

should probably be:

    } __except (EXCEPTION_EXECUTE_HANDLER) {

</F>



From Fredrik Lundh" <effbot@telia.com  Tue Aug 15 20:19:32 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 21:19:32 +0200
Subject: [Python-Dev] PyOS_CheckStack for windows
Message-ID: <009501c006ed$be40afa0$f2a6b5d4@hagrid>

I wrote:
>     } __except (EXCEPTION_EXECUTE_HANDLER) {

which is defined in "excpt.h"...

</F>



From tim_one@email.msn.com  Tue Aug 15 20:19:23 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 15:19:23 -0400
Subject: [Python-Dev] Call for reviewer!
Message-ID: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>

There are 5 open & related patches to getopt.py:  101106 thru 101110
inclusive.  Who wants to review these?  Fair warning in advance that Guido
usually hates adding stuff to getopt, and the patch comment

    I examined the entire 1.6b1 tarball for incompatibilities,
    and found only 2 in 90+ modules using getopt.py.

probably means it's dead on arrival (2+% is infinitely more than 0% <0.1
wink>).

On that basis alone, my natural inclination is to reject them for lack of
backward compatibility.  So let's get some votes and see whether there's
sufficient support to overcome that.




From trentm@ActiveState.com  Tue Aug 15 20:53:46 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Tue, 15 Aug 2000 12:53:46 -0700
Subject: [Python-Dev] Call for reviewer!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Tue, Aug 15, 2000 at 03:19:23PM -0400
References: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>
Message-ID: <20000815125346.I30086@ActiveState.com>

On Tue, Aug 15, 2000 at 03:19:23PM -0400, Tim Peters wrote:
> There are 5 open & related patches to getopt.py:  101106 thru 101110
> inclusive.  Who wants to review these?  Fair warning in advance that Guido
> usually hates adding stuff to getopt, and the patch comment
> 
>     I examined the entire 1.6b1 tarball for incompatibilities,
>     and found only 2 in 90+ modules using getopt.py.
> 
> probably means it's dead on arrival (2+% is infinitely more than 0% <0.1
> wink>).
> 
> On that basis alone, my natural inclination is to reject them for lack of
> backward compatibility.  So let's get some votes and see whether there's
> sufficient support to overcome that.
> 

-0 (too timid to use -1)

getopt is a nice, simple, quick, useful module. Rather than extending it I
would rather see a separate getopt-like module for those who need some more
heavy-duty option processing. One that supports Windows '/' switch markers.
One where each option is maybe a class instance with methods that do the
processing and record state for that option, and with attributes for help
strings, the number of arguments accepted, and argument validation
methods. One that supports abstraction of options to capabilities (e.g. two
compiler interfaces, same capability, different option to specify it, shared
option processing). One that supports different algorithms for parsing the
command line (some current apps like to run through and grab *all* the
options, some like to stop option processing at the first non-option).
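[Editor's note: as a rough illustration of the class-per-option design described above — all names here are hypothetical, not a proposal for a real API:]

```python
class Option:
    """One command-line option: knows its own processing and state."""
    def __init__(self, name, nargs=0, help=""):
        self.name = name
        self.nargs = nargs      # number of arguments this option consumes
        self.help = help        # usage string for a generated summary
        self.value = None
    def process(self, args):
        """Record this option's value, consuming arguments as needed."""
        if self.nargs:
            self.value, args = args[0], args[1:]
        else:
            self.value = True
        return args

def parse(argv, options):
    """Tiny driver: dispatch each argv entry to its Option instance."""
    table = dict((opt.name, opt) for opt in options)
    positional = []
    while argv:
        arg, argv = argv[0], argv[1:]
        if arg in table:
            argv = table[arg].process(argv)
        else:
            positional.append(arg)
    return positional
```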

Call it 'supergetopt' and whoever wants it can 'import supergetopt as getopt'.

Keep getopt the way it is. Mind you, I haven't looked at the proposed patches
so my opinion might be unfair.

Trent


-- 
Trent Mick
TrentM@ActiveState.com


From akuchlin@mems-exchange.org  Tue Aug 15 21:01:56 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Tue, 15 Aug 2000 16:01:56 -0400
Subject: [Python-Dev] Call for reviewer!
In-Reply-To: <20000815125346.I30086@ActiveState.com>; from trentm@ActiveState.com on Tue, Aug 15, 2000 at 12:53:46PM -0700
References: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com> <20000815125346.I30086@ActiveState.com>
Message-ID: <20000815160156.D16506@kronos.cnri.reston.va.us>

On Tue, Aug 15, 2000 at 12:53:46PM -0700, Trent Mick wrote:
>Call it 'supergetopt' and whoever wants it can 'import supergetopt as getopt'.

Note that there's Lib/distutils/fancy_getopt.py.  The docstring reads:

Wrapper around the standard getopt module that provides the following
additional features:
  * short and long options are tied together
  * options have help strings, so fancy_getopt could potentially
    create a complete usage summary
  * options set attributes of a passed-in object

--amk


From bwarsaw@beopen.com  Tue Aug 15 21:30:59 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Tue, 15 Aug 2000 16:30:59 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
Message-ID: <14745.43139.834290.323136@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <twouters@users.sourceforge.net> writes:

    TW> Apply SF patch #101151, by Peter S-K, which fixes smtplib's
    TW> passing of the 'helo' and 'ehlo' message, and exports the
    TW> 'make_fqdn' function. This function should be moved to
    TW> socket.py, if that module ever gets a Python wrapper.

Should I work on this for 2.0?  Specifically 1) moving socketmodule to
_socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
socket.py instead of smtplib.

It makes no sense for make_fqdn to live in smtplib.

I'd be willing to do this.
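[Editor's note: the socket-level home argued for here can be sketched with the plain socket module — later Pythons expose a getfqdn() there, which is the shape make_fqdn would take. The name equivalence is an assumption of this sketch.]

```python
import socket

# socket.getfqdn(): with no argument it fully qualifies the local
# hostname; given a name, it tries to qualify that name instead.
print(socket.getfqdn())             # local host's fully qualified name
print(socket.getfqdn('localhost'))  # qualified form of a given name
```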

-Barry


From gstein@lyra.org  Tue Aug 15 21:42:02 2000
From: gstein@lyra.org (Greg Stein)
Date: Tue, 15 Aug 2000 13:42:02 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <14745.43139.834290.323136@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 15, 2000 at 04:30:59PM -0400
References: <200008151930.MAA10234@slayer.i.sourceforge.net> <14745.43139.834290.323136@anthem.concentric.net>
Message-ID: <20000815134202.K19525@lyra.org>

On Tue, Aug 15, 2000 at 04:30:59PM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TW" == Thomas Wouters <twouters@users.sourceforge.net> writes:
> 
>     TW> Apply SF patch #101151, by Peter S-K, which fixes smtplib's
>     TW> passing of the 'helo' and 'ehlo' message, and exports the
>     TW> 'make_fqdn' function. This function should be moved to
>     TW> socket.py, if that module ever gets a Python wrapper.
> 
> Should I work on this for 2.0?  Specifically 1) moving socketmodule to
> _socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
> socket.py instead of smtplib.
> 
> It makes no sense for make_fqdn to live in smtplib.
> 
> I'd be willing to do this.

Note that Windows already has a socket.py module (under plat-win or
somesuch). You will want to integrate that with any new socket.py that you
implement.

Also note that Windows does some funny stuff in socketmodule.c to export
itself as _socket. (the *.dsp files already build it as _socket.dll)


+1

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From bwarsaw@beopen.com  Tue Aug 15 21:46:15 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Tue, 15 Aug 2000 16:46:15 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
 <14745.43139.834290.323136@anthem.concentric.net>
 <20000815134202.K19525@lyra.org>
Message-ID: <14745.44055.15573.283903@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein@lyra.org> writes:

    GS> Note that Windows already has a socket.py module (under
    GS> plat-win or somesuch). You will want to integrate that with
    GS> any new socket.py that you implement.

    GS> Also note that Windows does some funny stuff in socketmodule.c
    GS> to export itself as _socket. (the *.dsp files already build it
    GS> as _socket.dll)

    GS> +1

Should we have separate plat-*/socket.py files, or does it make more
sense to try to integrate them into one shared socket.py?  From a quick
glance it certainly looks like there's Windows-specific stuff in
plat-win/socket.py (big surprise, huh?)

-Barry


From nowonder@nowonder.de  Tue Aug 15 23:47:24 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Tue, 15 Aug 2000 22:47:24 +0000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib
 libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net> <14745.43139.834290.323136@anthem.concentric.net>
Message-ID: <3999C87C.24A0DF82@nowonder.de>

"Barry A. Warsaw" wrote:
> 
> Should I work on this for 2.0?  Specifically 1) moving socketmodule to
> _socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
> socket.py instead of smtplib.

+1 on you doing that. I'd volunteer, but I am afraid ...

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From thomas@xs4all.net  Tue Aug 15 22:04:11 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 23:04:11 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <14745.44055.15573.283903@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 15, 2000 at 04:46:15PM -0400
References: <200008151930.MAA10234@slayer.i.sourceforge.net> <14745.43139.834290.323136@anthem.concentric.net> <20000815134202.K19525@lyra.org> <14745.44055.15573.283903@anthem.concentric.net>
Message-ID: <20000815230411.D376@xs4all.nl>

On Tue, Aug 15, 2000 at 04:46:15PM -0400, Barry A. Warsaw wrote:

>     GS> Note that Windows already has a socket.py module (under
>     GS> plat-win or somesuch). You will want to integrate that with
>     GS> any new socket.py that you implement.

BeOS also has its own socket.py wrapper, to provide some functions BeOS
itself is missing (dup, makefile, fromfd, ...) I'm not sure if that's still
necessary, though, perhaps BeOS decided to implement those functions in a
later version ?

> Should we have separate plat-*/socket.py files or does it make more
> sense to try to integrate them into one shared socket.py?  From quick
> glance it certainly looks like there's Windows specific stuff in
> plat-win/socket.py (big surprise, huh?)

And don't forget the BeOS stuff ;P This is the biggest reason I didn't do it
myself: it takes some effort and a lot of grokking to fix this up properly,
without spreading socket.py out in every plat-dir. Perhaps it needs to be
split up like the os module ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gmcm@hypernet.com  Tue Aug 15 23:25:33 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Tue, 15 Aug 2000 18:25:33 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <20000815230411.D376@xs4all.nl>
References: <14745.44055.15573.283903@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 15, 2000 at 04:46:15PM -0400
Message-ID: <1245744161-146360088@hypernet.com>

Thomas Wouters wrote:
> On Tue, Aug 15, 2000 at 04:46:15PM -0400, Barry A. Warsaw wrote:
> 
> >     GS> Note that Windows already has a socket.py module (under
> >     GS> plat-win or somesuch). You will want to integrate that
> >     with GS> any new socket.py that you implement.
> 
> BeOS also has its own socket.py wrapper, to provide some
> functions BeOS itself is missing (dup, makefile, fromfd, ...) I'm
> not sure if that's still necessary, though, perhaps BeOS decided
> to implement those functions in a later version ?

Sounds very close to what Windows left out. As for *nixen, 
there are some differences between BSD and SysV sockets, 
but they're really, really arcane.
 


- Gordon


From fdrake@beopen.com  Wed Aug 16 00:06:15 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 19:06:15 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <14745.43139.834290.323136@anthem.concentric.net>
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
 <14745.43139.834290.323136@anthem.concentric.net>
Message-ID: <14745.52455.487734.450253@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > Should I work on this for 2.0?  Specifically 1) moving socketmodule to
 > _socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
 > socket.py instead of smtplib.
 > 
 > It makes no sense for make_fqdn to live in smtplib.

  I've started, but am momentarily interrupted.  Watch for it late
tonight.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From bwarsaw@beopen.com  Wed Aug 16 00:19:42 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Tue, 15 Aug 2000 19:19:42 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
 <14745.43139.834290.323136@anthem.concentric.net>
 <14745.52455.487734.450253@cj42289-a.reston1.va.home.com>
Message-ID: <14745.53262.605601.806635@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake@beopen.com> writes:

    Fred>   I've started, but am momentarily interrupted.  Watch for it
    Fred> late tonight.  ;)

Okay fine.  I'll hold off on socket module then, and will take a look
at whatever you come up with.

-Barry


From gward@mems-exchange.org  Wed Aug 16 00:57:51 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Tue, 15 Aug 2000 19:57:51 -0400
Subject: [Python-Dev] Winreg update
In-Reply-To: <3993FEC7.4E38B4F1@prescod.net>; from paul@prescod.net on Fri, Aug 11, 2000 at 08:25:27AM -0500
References: <3993FEC7.4E38B4F1@prescod.net>
Message-ID: <20000815195751.A16100@ludwig.cnri.reston.va.us>

On 11 August 2000, Paul Prescod said:
> This is really easy so I want
> some real feedback this time. Distutils people, this means you! Mark! I
> would love to hear Bill Tutt, Greg Stein and anyone else who claims some
> knowledge of Windows!

All I know is that the Distutils only use the registry for one thing:
finding the MSVC binaries (in distutils/msvccompiler.py).  The registry
access is coded in such a way that we can use either the
win32api/win32con modules ("old way") or _winreg ("new way", but still
the low-level interface).

I'm all in favour of high-level interfaces, and I'm also in favour of
speaking the local tongue -- when in Windows, follow the Windows API (at
least for features that are totally Windows-specific, like the
registry).  But I know nothing about all this stuff, and as far as I
know the registry access in distutils/msvccompiler.py works just fine as
is.

        Greg


From tim_one@email.msn.com  Wed Aug 16 01:28:10 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 20:28:10 -0400
Subject: [Python-Dev] Release Satii
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAAHAAA.tim_one@email.msn.com>

1.6:  Is done, but being held back (by us -- two can play at this game
<wink>) pending resolution of license issues.  Since 2.0 will be a
derivative work of 1.6, the license that goes out with 1.6 affects us
forever after.  Can't say more about that because I don't know more; and
Guido is out of town this week.

2.0:  Full steam ahead!  Just finished going thru every patch on
SourceForge.  What's Open at this instant is *it* for new 2.0 features.
More accurately, they're the only new features that will still be
*considered* for 2.0 (not everything in Open now will necessarily be
accepted).  The only new patches that won't be instantly Postponed from now
until 2.0 final ships are bugfixes.  Some oddities:

+ 8 patches remain unassigned.  7 of those are part of a single getopt
crusade (well, two getopt crusades, since as always happens when people go
extending getopt, they can't agree about what to do), and if nobody speaks
in their favor they'll probably get gently rejected.  The eighth is a CGI
patch from Ping that looks benign to me but is incomplete (missing doc
changes).

+ /F's Py_ErrFormat patch got moved back from Rejected to Open so we can
find all the putative 2.0 patches in one SF view (i.e., Open).

I've said before that I have no faith in the 2.0 release schedule.  Here's
your chance to make a fool of me -- and in public too <wink>!

nothing-would-make-me-happier-ly y'rs  - tim




From greg@cosc.canterbury.ac.nz  Wed Aug 16 01:57:18 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 16 Aug 2000 12:57:18 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <20000815202414.B376@xs4all.nl>
Message-ID: <200008160057.MAA15191@s454.cosc.canterbury.ac.nz>

Thomas Wouters <thomas@xs4all.net>:

> Comment by tim_one:

> [x if 6] should not be a legal expression but the grammar allows it today.

Why shouldn't it be legal?

The meaning is quite clear (either a one-element list or an empty
list). It's something of a degenerate case, but I don't think
degenerate cases should be excluded simply because they're
degenerate.

Excluding it will make both the implementation and documentation
more complicated, with no benefit that I can see.
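[Editor's note: the semantics Greg describes can be pinned down with the accepted list-comprehension syntax; a hypothetical [x if c] would behave like this helper:]

```python
def cond_list(value, cond):
    # What [value if cond] would plausibly mean: a one-element list
    # when cond is true, an empty list otherwise.
    return [value for _ in range(1) if cond]
```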

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim_one@email.msn.com  Wed Aug 16 02:26:36 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 21:26:36 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160057.MAA15191@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEADHAAA.tim_one@email.msn.com>

[Tim]
> [x if 6] should not be a legal expression but the grammar
> allows it today.

[Greg Ewing]
> Why shouldn't it be legal?

Because Guido hates it.  It's almost certainly an error on the part of the
user; really the same reason that zip() without arguments raises an
exception.

> ...
> Excluding it will make both the implementation and documentation
> more complicated,

Of course, but marginally so.  "The first clause must be an iterator"; end
of doc changes.

> with no benefit that I can see.

Catching likely errors is a benefit for the user.  I realize that Haskell
does allow it -- although that would be a surprise to most Haskell users
<wink>.




From dgoodger@bigfoot.com  Wed Aug 16 03:36:02 2000
From: dgoodger@bigfoot.com (David Goodger)
Date: Tue, 15 Aug 2000 22:36:02 -0400
Subject: [Python-Dev] Re: Call for reviewer!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>
Message-ID: <B5BF7652.7B39%dgoodger@bigfoot.com>

I thought the "backwards compatibility" issue might be a sticking point ;>
And I *can* see why.

So, if I were to rework the patch to remove the incompatibility, would it
fly or still be shot down? Here's the change, in a nutshell:

Added a function getoptdict(), which returns the same data as getopt(), but
instead of a list of [(option, optarg)], it returns a dictionary of
{option:optarg}, enabling random/direct access.

getoptdict() turns this:

    if __name__ == '__main__':
        import getopt
        opts, args = getopt.getopt(sys.argv[1:], 'a:b')
        if len(args) <> 2:
            raise getopt.error, 'Exactly two arguments required.'
        options = {'a': [], 'b': 0}  # default option values
        for opt, optarg in opts:
            if opt == '-a':
                options['a'].append(optarg)
            elif opt == '-b':
                options['b'] = 1
        main(args, options)

into this:

    if __name__ == '__main__':
        import getopt
        opts, args = getopt.getoptdict(sys.argv[1:], 'a:b',
                                       repeatedopts=APPEND)
        if len(args) <> 2:
            raise getopt.error, 'Exactly two arguments required.'
        options = {'a': opts.get('-a', [])}
        options['b'] = opts.has_key('-b')
        main(args, options)

(Notice how the defaults get subsumed into the option processing, which goes
from 6 lines to 2 for this short example. A much higher-level interface,
IMHO.)
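[Editor's note: for reference, the proposed function can be approximated on top of the existing getopt. The names and the APPEND marker are taken from the patch description and are assumptions here, not the patch's actual code; in particular this sketch keeps getopt's '' for argumentless options rather than the patch's None.]

```python
import getopt

APPEND = "append"   # stand-in for the patch's repeatedopts=APPEND marker

def getoptdict(argv, shortopts, longopts=(), repeatedopts=None):
    """Sketch: parse exactly as getopt.getopt() does, but return the
    options as a dictionary for random/direct access.  With
    repeatedopts=APPEND, repeated options accumulate a list of values."""
    pairs, args = getopt.getopt(argv, shortopts, longopts)
    opts = {}
    for opt, optarg in pairs:
        if repeatedopts == APPEND:
            opts.setdefault(opt, []).append(optarg)
        else:
            opts[opt] = optarg   # last occurrence wins
    return opts, args
```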

BUT WAIT, THERE'S MORE! As part of the deal, you get a free test_getopt.py
regression test module! Act now; vote +1! (Actually, you'll get that no
matter what you vote. I'll remove the getoptdict-specific stuff and resubmit
it if this patch is rejected.)

The incompatibility was introduced because the current getopt() returns an
empty string as the optarg (second element of the tuple) for an argumentless
option. I changed it to return None. Otherwise, it's impossible to
differentiate between an argumentless option '-a' and an empty string
argument '-a ""'. But I could rework it to remove the incompatibility.

Again: If the patch were to become 100% backwards-compatible, with just the
addition of getoptdict(), would it still be rejected, or does it have a
chance?

Eagerly awaiting your judgement...

-- 
David Goodger    dgoodger@bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)



From akuchlin@mems-exchange.org  Wed Aug 16 04:13:08 2000
From: akuchlin@mems-exchange.org (A.M. Kuchling)
Date: Tue, 15 Aug 2000 23:13:08 -0400
Subject: [Python-Dev] Fate of Include/my*.h
Message-ID: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>

The now-redundant Include/my*.h files in Include should either be
deleted, or at least replaced with empty files containing only a "This
file is obsolete" comment.  I don't think they were ever part of the
public API (Python.h always included them), so deleting them shouldn't
break anything.   

--amk


From greg@cosc.canterbury.ac.nz  Wed Aug 16 04:04:33 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 16 Aug 2000 15:04:33 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEADHAAA.tim_one@email.msn.com>
Message-ID: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>

Tim Peters:

> Because Guido hates it.  It's almost certainly an error on the part
> of the user

Guido doesn't like it, therefore it must be an error. Great
piece of logic there.

> Catching likely errors is a benefit for the user.

What evidence is there that this particular "likely error" is
going to be prevalent enough to justify outlawing a potentially
useful construct? Where are the hordes of Haskell users falling
into this trap and clamouring for it to be disallowed?

> really the same reason that zip() without arguments raises an
> exception.

No, I don't think it's the same reason. It's not clear what
zip() without arguments should return. There's no such difficulty
in this case.

For the most part, Python is free of artificial restrictions, and I
like it that way. Imposing a restriction of this sort seems
un-Pythonic.

This is the second gratuitous change that's been made to my
LC syntax without any apparent debate. While I acknowledge the
right of the BDFL to do this, I'm starting to feel a bit
left out...

> I realize that Haskell does allow it -- although that would be a
> surprise to most Haskell users

Which suggests that they don't trip over this feature very
often, otherwise they'd soon find out about it!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From gstein@lyra.org  Wed Aug 16 04:20:13 2000
From: gstein@lyra.org (Greg Stein)
Date: Tue, 15 Aug 2000 20:20:13 -0700
Subject: [Python-Dev] Fate of Include/my*.h
In-Reply-To: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>; from amk@s205.tnt6.ann.va.dialup.rcn.com on Tue, Aug 15, 2000 at 11:13:08PM -0400
References: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>
Message-ID: <20000815202013.H17689@lyra.org>

On Tue, Aug 15, 2000 at 11:13:08PM -0400, A.M. Kuchling wrote:
> The now-redundant Include/my*.h files in Include should either be
> deleted, or at least replaced with empty files containing only a "This
> file is obsolete" comment.  I don't think they were ever part of the
> public API (Python.h always included them), so deleting them shouldn't
> break anything.   

+1 on deleting them.

-- 
Greg Stein, http://www.lyra.org/


From tim_one@email.msn.com  Wed Aug 16 04:23:44 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 23:23:44 -0400
Subject: [Python-Dev] Nasty new bug in test_longexp
Message-ID: <LNBBLJKPBEHFEDALKOLCEEAGHAAA.tim_one@email.msn.com>

Fred, I vaguely recall you touched something here recently, so you're top o'
the list.  Smells like an uninitialized variable.

1 of 4:  test_longexp fails in release build:

C:\PCbuild>python ..\lib\test\regrtest.py test_longexp
test_longexp
test test_longexp failed -- Writing: '\012', expected: ' '
1 test failed: test_longexp

2 of 4:  but passes in verbose mode, despite that the output doesn't appear
to match what's expected (missing " (line 1)"):

C:\PCbuild>python ..\lib\test\regrtest.py -v test_longexp
test_longexp
test_longexp
Caught SyntaxError for long expression: expression too long
1 test OK.

3 of 4:  but passes in debug build:

C:\PCbuild>python_d ..\lib\test\regrtest.py test_longexp
Adding parser accelerators ...
Done.
test_longexp
1 test OK.
[3962 refs]

4 of 4: and verbose debug output does appear to match what's expected:

C:\PCbuild>python_d ..\lib\test\regrtest.py -v test_longexp

Adding parser accelerators ...
Done.
test_longexp
test_longexp
Caught SyntaxError for long expression: expression too long (line 1)
1 test OK.
[3956 refs]

C:\PCbuild>




From tim_one@email.msn.com  Wed Aug 16 04:24:44 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 23:24:44 -0400
Subject: [Python-Dev] Fate of Include/my*.h
In-Reply-To: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAHHAAA.tim_one@email.msn.com>

[A.M. Kuchling]
> The now-redundant Include/my*.h files in Include should either be
> deleted, or at least replaced with empty files containing only a "This
> file is obsolete" comment.  I don't think they were ever part of the
> public API (Python.h always included them), so deleting them shouldn't
> break anything.   

+1




From tim_one@email.msn.com  Wed Aug 16 05:13:00 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 00:13:00 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>

[Tim]
>> Because Guido hates it.  It's almost certainly an error on the part
>> of the user

[Greg Ewing]
> Guido doesn't like it, therefore it must be an error. Great
> piece of logic there.

Perhaps I should have used a colon:  Guido hates it *because* it's almost
certainly an error.  I expect the meaning was plain enough without that,
though.

>> Catching likely errors is a benefit for the user.

> What evidence is there that this particular "likely error" is

Nobody said it was likely.  Scare quotes don't work unless you quote
something that was actually said <wink>.  Likeliness has nothing to do with
whether Python calls something an error anyway, here or anywhere else.

> going to be prevalent enough to justify outlawing a potentially
> useful construct?

Making a list that's either empty or a singleton is useful?  Fine, here you
go:

   (boolean and [x] or [])

We don't need listcomps for that.  listcomps are a concrete implementation
of mathematical set-builder notation, and without iterators to supply a
universe of elements to build *on*, it may make *accidental* sense thanks to
this particular implementation -- but about as much *logical* sense as
map(None, seq1, seq2, ...) makes now.  SETL is the original computer
language home for comprehensions (both set and list), and got this part
right (IMO; Guido just hates it for his own inscrutable reasons <wink>).
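The and/or idiom above can be checked directly (a minimal sketch; the function name is illustrative). It relies on a one-element list always being true, so the `or []` branch is taken only when the condition is false:

```python
def maybe_singleton(cond, x):
    # Truthy cond selects [x]; falsy cond falls through to [].
    # Safe here because a one-element list can never be false.
    return cond and [x] or []

print(maybe_singleton(True, 42))   # [42]
print(maybe_singleton(False, 42))  # []
```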

> Where are the hordes of Haskell users falling into this trap and
> clamouring for it to be disallowed?

I'd look over on comp.lang.haskell -- provided anyone is still hanging out
there.

>> really the same reason that zip() without arguments raises an
>> exception.

> No, I don't think it's the same reason. It's not clear what
> zip() without arguments should return. There's no such difficulty
> in this case.

A zip with no arguments has no universe to zip over; a listcomp without
iterators has no universe to build on.  I personally don't want syntax
that's both a floor wax and a dessert topping.  The *intent* here is to
supply a flexible and highly expressive way to build lists out of other
sequences; no other sequences, use something else.

> For the most part, Python is free of artificial restrictions, and I
> like it that way. Imposing a restriction of this sort seems
> un-Pythonic.
>
> This is the second gratuitous change that's been made to my
> LC syntax without any apparent debate.

The syntax hasn't been changed yet -- this *is* the debate.  I won't say any
more about it, let's hear what others think.

As to being upset over changes to your syntax, I offered you ownership of
the PEP the instant it was dumped on me (26-Jul), but didn't hear back.
Perhaps you didn't get the email.  BTW, what was the other gratuitous
change?  Requiring parens around tuple targets?  That was discussed here
too, but the debate was brief as consensus seemed clearly to favor requiring
them.  That, plus Guido suggested it at a PythonLabs mtg, and agreement was
unanimous on that point.  Or are you talking about some other change (I
can't recall any other one)?

> While I acknowledge the right of the BDFL to do this, I'm starting
> to feel a bit left out...

Well, Jeez, Greg -- Skip took over the patch, Ping made changes to it after,
I got stuck with the PEP and the Python-Dev rah-rah stuff, and you just sit
back and snipe.  That's fine, you're entitled, but if you choose not to do
the work anymore, you took yourself out of the loop.

>> I realize that Haskell does allow it -- although that would be a
>> surprise to most Haskell users

> Which suggests that they don't trip over this feature very
> often, otherwise they'd soon find out about it!

While also suggesting it's useless to allow it.




From paul@prescod.net  Wed Aug 16 05:30:06 2000
From: paul@prescod.net (Paul Prescod)
Date: Wed, 16 Aug 2000 00:30:06 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>
Message-ID: <399A18CE.6CFFCAB9@prescod.net>

Tim Peters wrote:
> 
> ...
> 
> But if you add seq.items(), you had better add seq.keys() too, and
> seq.values() as a synonym for seq[:].  I guess the perceived advantage of
> adding seq.items() is that it supplies yet another incredibly slow and
> convoluted way to get at the for-loop index?  "Ah, that's the ticket!  Let's
> allocate gazillabytes of storage and compute all the indexes into a massive
> data structure up front, and then we can use the loop index that's already
> sitting there for free anyway to index into that and get back a redundant
> copy of itself!" <wink>.
> 
> not-a-good-sign-when-common-sense-is-offended-ly y'rs  - tim

.items(), .keys(), .values() and range() all offended my common sense
when I started using Python in the first place. I got over it. 

I really don't see this "indexing" issue as common enough either for
special syntax OR to worry a lot about efficiency. Nobody is forcing
anyone to use .items(). If you want a more efficient way to do it, it's
available (just not as syntactically beautiful -- same as range/xrange).

That isn't the case for dictionary .items(), .keys() and .values().

Also, if .keys() returns a range object then theoretically the
interpreter could recognize that it is looping over a range and optimize
it at runtime. That's an alternate approach to optimizing range literals
through new byte-codes. I don't have time to think about what that would
entail right now.... :(

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html


From fdrake@beopen.com  Wed Aug 16 05:51:34 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 00:51:34 -0400 (EDT)
Subject: [Python-Dev] socket module changes
Message-ID: <14746.7638.870650.747281@cj42289-a.reston1.va.home.com>

  This is a brief description of what I plan to check in to make the
changes we've discussed regarding the socket module.  I'll make the
checkins tomorrow morning, allowing you all night to scream if you
think that'll help.  ;)
  Windows and BeOS both use a wrapper module, but these are
essentially identical; the Windows variant has evolved a bit more, but
that evolution is useful for BeOS as well, aside from the errorTab
table (which gives messages for Windows-specific error numbers).  I
will be moving the sharable portions to a new module, _dupless_socket,
which the new socket module will import on Windows and BeOS.  (That
name indicates why they use a wrapper in the first place!)  The
errorTab definition will be moved to the new socket module and will
only be defined on Windows.  The existing wrappers, plat-beos/socket.py
and plat-win/socket.py, will be removed.
  socketmodule.c will only build as _socket, allowing much
simplification of the conditional compilation at the top of the
initialization function.
  The socket module will include the make_fqdn() implementation,
adjusted to make local references to the socket module facilities it
requires and to use string methods instead of using the string
module.  It is documented.
  The docstring in _socket will be moved to socket.py.
  If the screaming doesn't wake me, I'll check this in in the
morning.  The regression test isn't complaining!  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From tim_one@email.msn.com  Wed Aug 16 06:12:21 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 01:12:21 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <399A18CE.6CFFCAB9@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com>

[Paul Prescod]
> ...
> I really don't see this "indexing" issue to be common enough

A simple grep (well, findstr under Windows) finds over 300 instances of

    for ... in range(len(...

in the .py files on my laptop.  I don't recall exactly what the percentages
were when I looked over a very large base of Python code several years ago,
but I believe it was about 1 in 7 for loops.

> for special syntax OR to worry alot about efficiency.

1 in 7 is plenty.  range(len(seq)) is a puzzler to newbies, too -- it's
*such* an indirect way of saying what they say directly in other languages.
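For reference, the idiom that grep was counting spells out like this (a minimal sketch; `seq` and `pairs` are illustrative names):

```python
seq = ['a', 'b', 'c']

# Index purely to pair each element with its position --
# the pattern "for ... in range(len(..." being counted.
pairs = []
for i in range(len(seq)):
    pairs.append((i, seq[i]))

print(pairs)  # [(0, 'a'), (1, 'b'), (2, 'c')]
```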

> Nobody is forcing anyone to use .items().

Agreed, but only because seq.items() doesn't exist now <wink>.

> If you want a more efficient way to do it, it's available (just not as
> syntactically beautiful -- same as range/xrange).

Which way would that be?  I don't know of one, "efficient" either in the
sense of runtime speed or of directness of expression.  xrange is at least a
storage-efficient way, and isn't it grand that we index the xrange object
with the very integer we're (usually) trying to get it to return <wink>?
The "loop index" isn't an accident of the way Python happens to implement
"for" today, it's the very basis of Python's thing.__getitem__(i)/IndexError
iteration protocol.  Exposing it is natural, because *it* is natural.
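That protocol can be demonstrated directly (a minimal sketch; the class name is illustrative): any object whose __getitem__ accepts 0, 1, 2, ... and raises IndexError at the end is iterable by "for".

```python
class Squares:
    # Iterable solely via the __getitem__/IndexError protocol.
    def __init__(self, n):
        self.n = n
    def __getitem__(self, i):
        if i >= self.n:
            raise IndexError(i)  # tells 'for' to stop
        return i * i

# 'for' feeds 0, 1, 2, ... to __getitem__ until IndexError.
print(list(Squares(4)))  # [0, 1, 4, 9]
```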

> ...
> Also, if .keys() returns a range object then theoretically the
> interpreter could recognize that it is looping over a range and optimize
> it at runtime.

Sorry, but seq.keys() just makes me squirm.  It's a little step down the
Lispish path of making everything look the same.  I don't want to see
float.write() either <wink>.

although-that-would-surely-be-more-orthogonal-ly y'rs  - tim




From thomas@xs4all.net  Wed Aug 16 06:34:29 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 16 Aug 2000 07:34:29 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 16, 2000 at 12:13:00AM -0400
References: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz> <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
Message-ID: <20000816073429.E376@xs4all.nl>

On Wed, Aug 16, 2000 at 12:13:00AM -0400, Tim Peters wrote:

> > This is the second gratuitous change that's been made to my
> > LC syntax without any apparent debate.
> 
> The syntax hasn't been changed yet -- this *is* the debate.  I won't say any
> more about it, let's hear what others think.

It'd be nice to hear *what* the exact syntax issue is. At first I thought
you meant forcing parentheses around all forms of iterator expressions, but
apparently you mean requiring at least a single 'for' statement in a
listcomp ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Wed Aug 16 06:36:24 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 01:36:24 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAMHAAA.tim_one@email.msn.com>

Clarification:

[Tim]
>>> Catching likely errors is a benefit for the user.

[Greg Ewing]
>> What evidence is there that this particular "likely error" is ..

[Tim]
> Nobody said it was likely. ...

Ha!  I did!  But not in Greg's sense.  It was originally in the sense of "if
we see it, it's almost certainly an error on the part of the user", not that
"it's likely we'll see this".  This is in the same sense that Python
considers

    x = float(i,,)
or
    x = for i [1,2,3]

to be likely errors -- you don't see 'em often, but they're most likely
errors on the part of the user when you do.

back-to-the-more-mundane-confusions-ly y'rs  - tim




From greg@cosc.canterbury.ac.nz  Wed Aug 16 07:02:23 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 16 Aug 2000 18:02:23 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
Message-ID: <200008160602.SAA15239@s454.cosc.canterbury.ac.nz>

> Guido hates it *because* it's almost certainly an error.

Yes, I know what you meant. I was just trying to point out
that, as far as I can see, it's only Guido's *opinion* that
it's almost certainly an error.

Let n1 be the number of times that [x if y] appears in some
program and the programmer actually meant to write something
else. Let n2 be the number of times [x if y] appears and
the programmer really meant it.

Now, I agree that n1+n2 will probably be a very small number.
But from that alone it doesn't follow that a given instance
of [x if y] is probably an error. That is only true if
n1 is much greater than n2, and in the absence of any
experience, there's no reason to believe that.

> A zip with no arguments has no universe to zip over; a listcomp without
> iterators has no universe to build on... The *intent* here is to
> supply a flexible and highly expressive way to build lists out of other
> sequences; no other sequences, use something else.

That's a reasonable argument. It might even convince me if
I think about it some more. I'll think about it some more.

> if you choose not to do the work anymore, you took yourself out of the
> loop.

You're absolutely right. I'll shut up now.

(By the way, I think your mail must have gone astray, Tim --
I don't recall ever being offered ownership of a PEP, whatever
that might entail.)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim_one@email.msn.com  Wed Aug 16 07:18:30 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 02:18:30 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <20000816073429.E376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEAOHAAA.tim_one@email.msn.com>

[Thomas Wouters]
> It'd be nice to hear *what* the exact syntax issue is. At first I thought
> you meant forcing parentheses around all forms of iterator
> expressions,

No, that's an old one, and was requiring parens around a target expression
iff it's a tuple.  So

    [x, y for x in s for y in t]  # BAD
    [(x, y) for x in s for y in t]  # good
    [(((x, y))) for x in s for y in t]  # good, though silly
    [x+y for x in s for y in t] # good
    [(x+y) for x in s for y in t] # good
    [x , for x in s] # BAD
    [(x ,) for x in s] # good

That much is already implemented in the patch currently on SourceForge.

> but apparently you mean requiring at least a single 'for' statement
> in a listcomp ?

No too <wink>, but closer:  it's that the leftmost ("first") clause must be
a "for".  So, yes, at least one for, but also that an "if" can't precede
*all* the "for"s:

   [x for x in s if x & 1] # good
   [x if x & 1 for x in s] # BAD
   [x for x in s]  # good
   [x if y & 1] # BAD

Since the leftmost clause can't refer to any bindings established "to its
right", an "if" as the leftmost clause can't act to filter the elements
generated by the iterators, and so Guido (me too) feels it's almost
certainly an error on the user's part if Python sees an "if" in the leftmost
position.  The current patch allows all of these, though.
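For what it's worth, the leftmost-for rule can be probed mechanically against a released grammar (a sketch; compile() is used purely as a syntax check, and the rule shown is the one ultimately adopted):

```python
def is_valid(src):
    # compile() raises SyntaxError for forms the grammar rejects;
    # the names inside src never need to be bound.
    try:
        compile(src, '<probe>', 'eval')
        return True
    except SyntaxError:
        return False

print(is_valid('[x for x in s if x & 1]'))   # True
print(is_valid('[x for x in s]'))            # True
print(is_valid('[x if y]'))                  # False
print(is_valid('[x if x & 1 for x in s]'))   # False
```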

In (mathematical) set-builder notation, you certainly see things like

    odds = {x | mod(x, 2) = 1}

That is, with "just a condition".  But in such cases the universe over which
x ranges is implicit from context (else the expression is not
well-defined!), and can always be made explicit; e.g., perhaps the above is
in a text where it's assumed everything is a natural number, and then it can
be made explicit via

    odds = {x in Natural | mod(x, 2) = 1}

In the concrete implementation afforded by listcomps, there is no notion of
an implicit universal set, so (as in SETL too, where this all came from
originally) explicit naming of the universe is required.

The way listcomps are implemented *can* make

   [x if y]

"mean something" (namely one of [x] or [], depending on y's value), but that
has nothing to do with its set-builder heritage.  Looks to me like the user
is just confused!  To Guido too.  Hence the desire not to allow this form at
all.






From ping@lfw.org  Wed Aug 16 07:23:57 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Tue, 15 Aug 2000 23:23:57 -0700 (PDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in
 the Ref manual docs on listcomprehensions
In-Reply-To: <200008160057.MAA15191@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008152212170.416-100000@skuld.lfw.org>

On Wed, 16 Aug 2000, Greg Ewing wrote:
> > [x if 6] should not be a legal expression but the grammar allows it today.
> 
> Why shouldn't it be legal?
[...]
> Excluding it will make both the implementation and documentation
> more complicated, with no benefit that I can see.

I don't have a strong opinion on this either way, but i can state
pretty confidently that the change would be tiny and simple: just
replace "list_iter" in the listmaker production with "list_for",
and you are done.


-- ?!ng

"I'm not trying not to answer the question; i'm just not answering it."
    -- Lenore Snell




From tim_one@email.msn.com  Wed Aug 16 07:59:06 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 02:59:06 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160602.SAA15239@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEAPHAAA.tim_one@email.msn.com>

[Tim]
>> Guido hates it *because* it's almost certainly an error.

[Greg Ewing]
> Yes, I know what you meant. I was just trying to point out
> that, as far as I can see, it's only Guido's *opinion* that
> it's almost certainly an error.

Well, it's mine too, but I always yield to him on stuff like that anyway;
and I guess I *have* to now, because he's my boss <wink>.

> Let n1 be the number of times that [x if y] appears in some
> program and the programmer actually meant to write something
> else. Let n2 be the number of times [x if y] appears and
> the programmer really meant it.
>
> Now, I agree that n1+n2 will probably be a very small number.
> But from that alone it doesn't follow that a given instance
> of [x if y] is probably an error. That is only true if
> n1 is much greater than n2, and in the absence of any
> experience, there's no reason to believe that.

I argued that one all I'm going to -- I think there is.

>> ... The *intent* here is to supply a flexible and highly expressive
> way to build lists out of other sequences; no other sequences, use
> something else.

> That's a reasonable argument. It might even convince me if
> I think about it some more. I'll think about it some more.

Please do, because ...

>> if you choose not to do the work anymore, you took yourself out
>> of the loop.

> You're absolutely right. I'll shut up now.

Please don't!  This patch is not without opposition, and while consensus is
rarely reached on Python-Dev, I think that's partly because "the BDFL ploy"
is overused to avoid the pain of principled compromise.  If this ends in a
stalemate among the strongest proponents, it may not be such a good idea
after all.

> (By the way, I think your mail must have gone astray, Tim --
> I don't recall ever being offered ownership of a PEP, whatever
> that might entail.)

All explained at

    http://python.sourceforge.net/peps/

Although in this particular case, I haven't done anything with the PEP
except argue in favor of what I haven't yet written!  Somebody else filled
in the skeletal text that's there now.  If you still want it, it's yours;
I'll attach the email in question.

ok-that's-16-hours-of-python-today-in-just-a-few-more-i'll-
    have-to-take-a-pee<wink>-ly y'rs  - tim


-----Original Message-----

From: Tim Peters [mailto:tim_one@email.msn.com]
Sent: Wednesday, July 26, 2000 1:25 AM
To: Greg Ewing <greg@cosc.canterbury.ac.nz>
Subject: RE: [Python-Dev] PEP202


Greg, nice to see you on Python-Dev!  I became the PEP202 shepherd because
nobody else volunteered, and I want to see the patch get into 2.0.  That's
all there was to it, though:  if you'd like to be its shepherd, happy to
yield to you.  You've done the most to make this happen!  Hmm -- but maybe
that also means you don't *want* to do more.  That's OK too.




From bwarsaw@beopen.com  Wed Aug 16 14:21:59 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 09:21:59 -0400 (EDT)
Subject: [Python-Dev] Re: Call for reviewer!
References: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>
 <B5BF7652.7B39%dgoodger@bigfoot.com>
Message-ID: <14746.38263.433927.239480@anthem.concentric.net>

I used to think getopt needed a lot of changes, but I'm not so sure
anymore.  getopt's current API works fine for me and I use it in all
my scripts.  However,

>>>>> "DG" == David Goodger <dgoodger@bigfoot.com> writes:

    DG> The incompatibility was introduced because the current
    DG> getopt() returns an empty string as the optarg (second element
    DG> of the tuple) for an argumentless option. I changed it to
    DG> return None. Otherwise, it's impossible to differentiate
    DG> between an argumentless option '-a' and an empty string
    DG> argument '-a ""'. But I could rework it to remove the
    DG> incompatibility.

I don't think that's necessary.  In my own use, if I /know/ -a doesn't
have an argument (because I didn't specify as "a:"), then I never
check the optarg.  And it's bad form for a flag to take an optional
argument; it either does or it doesn't and you know that in advance.

-Barry


From bwarsaw@beopen.com  Wed Aug 16 14:23:45 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 09:23:45 -0400 (EDT)
Subject: [Python-Dev] Fate of Include/my*.h
References: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>
Message-ID: <14746.38369.116212.875999@anthem.concentric.net>

>>>>> "AMK" == A M Kuchling <amk@s205.tnt6.ann.va.dialup.rcn.com> writes:

    AMK> The now-redundant Include/my*.h files in Include should
    AMK> either be deleted

+1

-Barry


From fdrake@beopen.com  Wed Aug 16 15:26:29 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 10:26:29 -0400 (EDT)
Subject: [Python-Dev] socket module changes
Message-ID: <14746.42133.355087.417895@cj42289-a.reston1.va.home.com>

  The changes to the socket module are now complete.  Note two changes
to yesterday's plan:
  - there is no _dupless_socket; I just merged that into socket.py
  - make_fqdn() got renamed to getfqdn() for consistency with the rest
of the module.
  I also remembered to update smtplib.  ;)
  I'll be away from email during the day; Windows & BeOS users, please
test this!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From bwarsaw@beopen.com  Wed Aug 16 15:46:26 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 10:46:26 -0400 (EDT)
Subject: [Python-Dev] socket module changes
References: <14746.42133.355087.417895@cj42289-a.reston1.va.home.com>
Message-ID: <14746.43330.134066.238781@anthem.concentric.net>

    >> - there is no _dupless_socket; I just merged that into socket.py -

Thanks, that's the one thing I was going to complain about. :)

-Barry


From bwarsaw@beopen.com  Wed Aug 16 16:11:57 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 11:11:57 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
Message-ID: <14746.44861.78992.343012@anthem.concentric.net>

After channeling and encouragement by Tim Peters, I've updated PEP
214, the extended print statement.  Text is included below, but is
also available at

    http://python.sourceforge.net/peps/pep-0214.html

SourceForge patch #100970 contains the patch to apply against the
current CVS tree if you want to play with it

    http://sourceforge.net/patch/download.php?id=100970

-Barry

-------------------- snip snip --------------------
PEP: 214
Title: Extended Print Statement
Version: $Revision: 1.3 $
Author: bwarsaw@beopen.com (Barry A. Warsaw)
Python-Version: 2.0
Status: Draft
Created: 24-Jul-2000
Post-History: 16-Aug-2000


Introduction

    This PEP describes a syntax to extend the standard `print'
    statement so that it can be used to print to any file-like object,
    instead of the default sys.stdout.  This PEP tracks the status and
    ownership of this feature.  It contains a description of the
    feature and outlines changes necessary to support the feature.
    This PEP summarizes discussions held in mailing list forums, and
    provides URLs for further information, where appropriate.  The CVS
    revision history of this file contains the definitive historical
    record.


Proposal

    This proposal introduces a syntax extension to the print
    statement, which allows the programmer to optionally specify the
    output file target.  An example usage is as follows:

        print >> mylogfile, 'this message goes to my log file'

    Formally, the syntax of the extended print statement is
    
        print_stmt: ... | '>>' test [ (',' test)+ [','] ]

    where the ellipsis indicates the original print_stmt syntax
    unchanged.  In the extended form, the expression just after >>
    must yield an object with a write() method (i.e. a file-like
    object).  Thus these two statements are equivalent:

        print 'hello world'
        print >> sys.stdout, 'hello world'

    As are these two statements:

        print
        print >> sys.stdout

    These two statements are syntax errors:

        print ,
        print >> sys.stdout,


Justification

    `print' is a Python keyword and introduces the print statement as
    described in section 6.6 of the language reference manual[1].
    The print statement has a number of features:

    - it auto-converts the items to strings
    - it inserts spaces between items automatically
    - it appends a newline unless the statement ends in a comma

    The formatting that the print statement performs is limited; for
    more control over the output, a combination of sys.stdout.write(),
    and string interpolation can be used.

    The print statement by definition outputs to sys.stdout.  More
    specifically, sys.stdout must be a file-like object with a write()
    method, but it can be rebound to redirect output to files other
    than specifically standard output.  A typical idiom is

        sys.stdout = mylogfile
        try:
            print 'this message goes to my log file'
        finally:
            sys.stdout = sys.__stdout__

    The problem with this approach is that the binding is global, and
    so affects every statement inside the try: clause.  For example,
    if we added a call to a function that actually did want to print
    to stdout, this output too would get redirected to the logfile.

    This approach is also very inconvenient for interleaving prints to
    various output streams, and complicates coding in the face of
    legitimate try/except or try/finally clauses.
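
    The pitfall can be demonstrated in a few lines.  The helper()
    function and the StringIO target below are hypothetical stand-ins,
    and the modern io.StringIO spelling is used so the sketch stays
    runnable; single-argument print(...) reads the same under the
    print statement and the print function.

```python
import sys
from io import StringIO

def helper():
    # A library routine that legitimately wants the real stdout.
    print('helper: this was meant for the terminal')

log = StringIO()
saved = sys.stdout
sys.stdout = log          # the rebinding idiom shown above
try:
    print('this message goes to my log file')
    helper()              # oops: redirected too, because the binding is global
finally:
    sys.stdout = saved    # sys.__stdout__ in the original idiom
```

    After the try/finally block, helper()'s output has landed in the
    log rather than on the terminal, which is exactly the problem the
    proposal sets out to avoid.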


Reference Implementation

    A reference implementation, in the form of a patch against the
    Python 2.0 source tree, is available on SourceForge's patch
    manager[2].  This approach adds two new opcodes, PRINT_ITEM_TO and
    PRINT_NEWLINE_TO, which simply pop the file-like object off the
    top of the stack and use it instead of sys.stdout as the output
    stream.


Alternative Approaches

    An alternative to this syntax change has been proposed (originally
    by Moshe Zadka) which requires no syntax changes to Python.  A
    writeln() function could be provided (possibly as a builtin), that
    would act much like extended print, with a few additional
    features.

	def writeln(*args, **kws):
	    import sys
	    file = sys.stdout
	    sep = ' '
	    end = '\n'
	    if kws.has_key('file'):
		file = kws['file']
		del kws['file']
	    if kws.has_key('nl'):
		if not kws['nl']:
		    end = ' '
		del kws['nl']
	    if kws.has_key('sep'):
		sep = kws['sep']
		del kws['sep']
	    if kws:
		raise TypeError('unexpected keywords')
	    file.write(sep.join(map(str, args)) + end)

    writeln() takes three optional keyword arguments.  In the
    context of this proposal, the relevant argument is `file' which
    can be set to a file-like object with a write() method.  Thus

        print >> mylogfile, 'this goes to my log file'

    would be written as

        writeln('this goes to my log file', file=mylogfile)

    writeln() has the additional functionality that the keyword
    argument `nl' is a flag specifying whether to append a newline or
    not, and an argument `sep' which specifies the separator to output
    in between each item.
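
    For illustration, the writeln() above can be transcribed into
    modern Python (dict.pop replaces the has_key/del dance; the names
    and defaults are kept as in the proposal) and exercised briefly:

```python
import sys
from io import StringIO

def writeln(*args, **kws):
    # Transcription of the proposed helper: `file`, `nl` and `sep`
    # keywords, with any other keyword rejected.
    file = kws.pop('file', sys.stdout)
    end = '\n' if kws.pop('nl', 1) else ' '
    sep = kws.pop('sep', ' ')
    if kws:
        raise TypeError('unexpected keywords')
    file.write(sep.join(map(str, args)) + end)

mylogfile = StringIO()
writeln('this goes to my log file', file=mylogfile)
writeln(1, 2, 3, sep=', ', file=mylogfile)
```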


References

    [1] http://www.python.org/doc/current/ref/print.html
    [2] http://sourceforge.net/patch/download.php?id=100970


From gvwilson@nevex.com  Wed Aug 16 16:49:06 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Wed, 16 Aug 2000 11:49:06 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
Message-ID: <Pine.LNX.4.10.10008161146170.25725-100000@akbar.nevex.com>

> Barry Warsaw wrote:
> [extended print PEP]

+1 --- it'll come in handy when teaching newbies on Windows and Unix
simultaneously.

Greg



From skip@mojam.com (Skip Montanaro)  Wed Aug 16 17:33:30 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Wed, 16 Aug 2000 11:33:30 -0500 (CDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <20000815202414.B376@xs4all.nl>
References: <200008151746.KAA06454@bush.i.sourceforge.net>
 <20000815202414.B376@xs4all.nl>
Message-ID: <14746.49754.697401.684106@beluga.mojam.com>

    Thomas> A comment by someone (?!ng ?) who forgot to login, at the
    Thomas> original list-comprehensions patch suggests that Skip forgot to
    Thomas> include the documentation patch to listcomps he provided. Ping,
    Thomas> Skip, can you sort this out and check in the rest of that
    Thomas> documentation (which supposedly includes a tutorial section as
    Thomas> well) ?

Ping & I have already taken care of this off-list.  His examples should be
checked in shortly, if not already.

Skip


From skip@mojam.com (Skip Montanaro)  Wed Aug 16 17:43:44 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Wed, 16 Aug 2000 11:43:44 -0500 (CDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
References: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>
 <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
Message-ID: <14746.50368.982239.813435@beluga.mojam.com>

    Tim> Well, Jeez, Greg -- Skip took over the patch, Ping made changes to
    Tim> it after, I got stuck with the PEP and the Python-Dev rah-rah
    Tim> stuff, and you just sit back and snipe.  That's fine, you're
    Tim> entitled, but if you choose not to do the work anymore, you took
    Tim> yourself out of the loop.

Tim,

I think that's a bit unfair to Greg.  Ages ago Greg offered up a prototype
implementation of list comprehensions based upon a small amount of
discussion on c.l.py.  I took over the patch earlier because I wanted to see
it added to Python (originally 1.7, which is now 2.0).  I knew it would
languish or die if someone on python-dev didn't shepherd it.  I was just
getting the thing out there for discussion, and I knew that Greg wasn't on
python-dev to do it himself, which is where most of the discussion about
list comprehensions has taken place.  When I've remembered to, I've tried to
at least CC him on threads I've started so he could participate.  My
apologies to Greg for not being more consistent in that regard.  I don't
think we can fault him for not having been privy to all the discussion.

Skip



From gward@mems-exchange.org  Wed Aug 16 18:34:02 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Wed, 16 Aug 2000 13:34:02 -0400
Subject: [Python-Dev] Python 1.6 & Distutils 0.9.1: success
Message-ID: <20000816133401.F16672@ludwig.cnri.reston.va.us>

[oops, screwed up the cc: python-dev when I sent this to Fred.  let's
 try again, shall we?]

Hi Fred --

I went ahead and tried out the current cnri-16-start branch on Solaris
2.6.  (I figured you guys are all using Linux by now, so you might want
to hear back how it works on Solaris.)

In short: no problem!  It built, tested, and installed just fine.

Oops, just noticed that my configure.in fix from late May didn't make
the cut:

  revision 1.124
  date: 2000/05/26 12:22:54;  author: gward;  state: Exp;  lines: +6 -2
  When building on Solaris and the compiler is GCC, use '$(CC) -G' to
  create shared extensions rather than 'ld -G'.  This ensures that shared
  extensions link against libgcc.a, in case there are any functions in the
  GCC runtime not already in the Python core.

Oh well.  This means that Robin Dunn's bsddb extension won't work with
Python 1.6 under Solaris.

So then I tried Distutils 0.9.1 with the new build: again, it worked
just fine.  I was able to build and install the Distutils proper, and
then NumPy.  And I made a NumPy source dist.  Looks like it works just
fine, although this is hardly a rigorous test (sigh).

I'd say go ahead and release Distutils 0.9.1 with Python 1.6...

        Greg
-- 
Greg Ward - software developer                gward@mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367


From thomas@xs4all.net  Wed Aug 16 21:55:53 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 16 Aug 2000 22:55:53 +0200
Subject: [Python-Dev] Pending patches for 2.0
Message-ID: <20000816225552.H376@xs4all.nl>

I have a small problem with the number of pending patches that I wrote, and
haven't fully finished yet: I'm going to be away on vacation from about
September 1st until about October 1st or so ;P I'll try to finish them as
much as possible before then (they mostly need just documentation anyway)
but if Guido decides to go for a different approach for one or more of them
(like allowing floats and/or longs in range literals) someone will have to
take them over to finish them in time for 2.0.

I'm not sure when I'll be leaving my internet connection behind, where we'll
be going or when I'll be back, but I won't be able to do too much rewriting
in the next two weeks either -- work is killing me. (Which is one of the
reasons I'm going to try to be as far away from work as possible, on
September 2nd ;) However, if a couple of patches are rejected/postponed and
others don't require substantial changes, and if those decisions are made
before, say, August 30th, I think I can move them into the CVS tree before
leaving and just shove the responsibility for them on the entire dev team ;)

This isn't a push to get them accepted ! Just a warning that if they aren't
accepted before then, someone will have to take over the breastfeeding ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Wed Aug 16 22:07:35 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 17:07:35 -0400 (EDT)
Subject: [Python-Dev] Pending patches for 2.0
In-Reply-To: <20000816225552.H376@xs4all.nl>
References: <20000816225552.H376@xs4all.nl>
Message-ID: <14747.663.260950.537440@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > much as possible before then (they mostly need just documentation anyway)
                                                  ^^^^^^^^^^^^^^^^^^

  Don't underestimate that requirement!

 > This isn't a push to get them accepted ! Just a warning that if they aren't
 > accepted before then, someone will have to take over the breastfeeding ;)

  I don't think I want to know too much about your development tools!
;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From fdrake@beopen.com  Wed Aug 16 22:24:19 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 17:24:19 -0400 (EDT)
Subject: [Python-Dev] Python 1.6 & Distutils 0.9.1: success
In-Reply-To: <20000816133401.F16672@ludwig.cnri.reston.va.us>
References: <20000816133401.F16672@ludwig.cnri.reston.va.us>
Message-ID: <14747.1667.252426.489530@cj42289-a.reston1.va.home.com>

Greg Ward writes:
 > I went ahead and tried out the current cnri-16-start branch on Solaris
 > 2.6.  (I figured you guys are all using Linux by now, so you might want
 > to hear back how it works on Solaris.)

  Great!  I've just updated 1.6 to include the Distutils-0_9_1 tagged
version of the distutils package and the documentation.  I'm
rebuilding our release candidates now.

 > In short: no problem!  It built, tested, and installed just fine.

  Great!  Thanks!

 > Oops, just noticed that my configure.in fix from late May didn't make
 > the cut:
...
 > Oh well.  This means that Robin Dunn's bsddb extension won't work with
 > Python 1.6 under Solaris.

  That's unfortunate.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From thomas@xs4all.net  Wed Aug 16 23:22:05 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 00:22:05 +0200
Subject: [Python-Dev] Pending patches for 2.0
In-Reply-To: <14747.663.260950.537440@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Wed, Aug 16, 2000 at 05:07:35PM -0400
References: <20000816225552.H376@xs4all.nl> <14747.663.260950.537440@cj42289-a.reston1.va.home.com>
Message-ID: <20000817002205.I376@xs4all.nl>

On Wed, Aug 16, 2000 at 05:07:35PM -0400, Fred L. Drake, Jr. wrote:

> Thomas Wouters writes:
>  > much as possible before then (they mostly need just documentation anyway)
>                                                   ^^^^^^^^^^^^^^^^^^
>   Don't underestimate that requirement!

I'm not, especially since the things that need documentation (if they are in
principle acceptable to Guido) are range literals (tutorials and existing
code examples), 'import as' (ref, tut), augmented assignment (ref, tut, lib,
api, ext, existing examples), the getslice->getitem change (tut, lib, all
other references to getslice/extended slices and existing example code) and
possibly the 'indexing for' patch (ref, tut, a large selection of existing
example code.)

Oh, and I forgot, some patches would benefit from more library changes, too,
like augmented assignment and getslice-to-getitem. That can always be done
after the patches are in, by other people (if they can't, the patch
shouldn't go in in the first place!)

I guess I'll be doing one large, intimate pass over all documentation, do
everything at once, and later split it up. I also think I'm going to post
them separately, to allow for easier proofreading. I also think I'm in need
of some sleep, and will think about this more tomorrow, after I get
LaTeX2HTML working on my laptop, so I can at least review my own changes ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From trentm@ActiveState.com  Thu Aug 17 00:55:42 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Wed, 16 Aug 2000 16:55:42 -0700
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
Message-ID: <20000816165542.D29260@ActiveState.com>

Hello autoconf-masters,

I am currently trying to port Python to Monterey (64-bit AIX) and I need to
add a couple of Monterey specific options to CFLAGS and LDFLAGS (or to
whatever appropriate variables for all 'cc' and 'ld' invocations) but it is
not obvious *at all* how to do that in configure.in. Can anybody help me with
that?

Another issue I am having: this is how the python executable is linked
on Linux with gcc:

gcc  -Xlinker -export-dynamic python.o ../libpython2.0.a -lpthread -ldl  -lutil -lm  -o python
          
It, of course, works fine, but shouldn't the proper (read "portable")
invocation to include the python2.0 library be

gcc  -Xlinker -export-dynamic python.o -L.. -lpython2.0 -lpthread -ldl  -lutil -lm  -o python

That invocation form (i.e. with the '-L.. -lpython2.0') works on Linux, and
is *required* on Monterey. Does this problem not show up with other Unix
compilers? My hunch is that simply listing library (*.a) arguments on the gcc
command line is a GNU gcc/ld shortcut to the more portable usage of -L and
-l. I would either like to change the form to the latter or I'll have to
special-case the invocation for Monterey. Any opinions on which is worse?


Thanks,
Trent

-- 
Trent Mick
TrentM@ActiveState.com


From trentm@ActiveState.com  Thu Aug 17 01:24:25 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Wed, 16 Aug 2000 17:24:25 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
Message-ID: <20000816172425.A32338@ActiveState.com>

I am porting Python to Monterey (64-bit AIX) and have a small (hopefully)
question about POSIX threads. I have Monterey building and passing the
threads test suite using Python/thread_pthread.h with just one issue:


-------------- snipped from current thread_pthread.h ---------------
long
PyThread_get_thread_ident(void)
{
    volatile pthread_t threadid;
    if (!initialized)
        PyThread_init_thread();
    /* Jump through some hoops for Alpha OSF/1 */
    threadid = pthread_self();
    return (long) *(long *) &threadid;
}
-------------------------------------------------------------------

Does the POSIX threads spec specify a C type or minimum size for
pthread_t? Or can someone point me to the appropriate resource to look
this up. On Linux (mine at least):
  /usr/include/bits/pthreadtypes.h:120:typedef unsigned long int pthread_t;

On Monterey:
  typedef unsigned int pthread_t;
 
That is fine; they are both 32 bits. However, Monterey is an LP64 platform
(sizeof(long)==8, sizeof(int)==4), which brings up the question:

WHAT IS UP WITH THAT return STATEMENT?
  return (long) *(long *) &threadid;

My *guess* is that this is an attempt to just cast 'threadid' (a pthread_t)
to a long and go through hoops to avoid compiler warnings. I don't know what
else it could be. Is that what the "Alpha OSF/1" comment is about? Anybody
have an Alpha OSF/1 hanging around? The problem is that when
sizeof(pthread_t) != sizeof(long) this line is just broken.

Could this be changed to
  return threadid;
safely?


Thanks,
Trent

-- 
Trent Mick
TrentM@ActiveState.com


From greg@cosc.canterbury.ac.nz  Thu Aug 17 01:33:40 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 12:33:40 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEAPHAAA.tim_one@email.msn.com>
Message-ID: <200008170033.MAA15351@s454.cosc.canterbury.ac.nz>

> If this ends in a stalemate among the strongest proponents, it may not
> be such a good idea after all.

Don't worry, I really don't have any strong objection to
either of these changes. They're only cosmetic, after all.
It's still a good idea.

Just one comment: even if the first clause *is* a 'for',
there's no guarantee that the rest of the clauses have
to have anything to do with what it produces. E.g.

   [x for y in [1] if z]

The [x if y] case is only one of an infinite number of
possible abuses. Do you still think it's worth taking
special steps to catch that particular one?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Thu Aug 17 02:17:53 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 13:17:53 +1200 (NZST)
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <Pine.LNX.4.10.10008161146170.25725-100000@akbar.nevex.com>
Message-ID: <200008170117.NAA15360@s454.cosc.canterbury.ac.nz>

Looks reasonably good. Not entirely sure I like the look
of >> though -- a bit too reminiscent of C++.

How about

   print to myfile, x, y, z

with 'to' as a non-reserved keyword. Or even

   print to myfile: x, y, z

but that might be a bit too radical!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From m.favas@per.dem.csiro.au  Thu Aug 17 02:17:42 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Thu, 17 Aug 2000 09:17:42 +0800
Subject: [Python-Dev] [Fwd: segfault in sre on 64-bit plats]
Message-ID: <399B3D36.6921271@per.dem.csiro.au>


Message-ID: <399B3CAA.C8815E61@per.dem.csiro.au>
Date: Thu, 17 Aug 2000 09:15:22 +0800
From: Mark Favas <m.favas@per.dem.csiro.au>
Organization: CSIRO Exploration & Mining
X-Mailer: Mozilla 4.73 [en] (X11; U; OSF1 V4.0 alpha)
X-Accept-Language: en
MIME-Version: 1.0
To: Trent Mick <trentm@ActiveState.com>
Subject: Re: segfault in sre on 64-bit plats
References: <20000815104723.A27306@ActiveState.com>

Trent Mick wrote:
> 
> Fredrik,
> 
> The sre module currently segfaults on one of the tests suite tests on both
> Win64 and 64-bit linux:
> 
>     [trentm@nickel src]$ ./python -c "import sre; sre.match('(x)*', 50000*'x')" > srefail.out
>     Segmentation fault (core dumped)
> 
> I know that I can't expect you to debug this completely, as you don't have to
> hardware, but I was hoping you might be able to shed some light on the
> subject for me.
> 
> This test on Win32 and Linux32 hits the recursion limit check of 10000 in
> SRE_MATCH(). However, on Linux64 the segfault occurs at a recursion depth of
> 7500. I don't want to just willy-nilly drop the recursion limit down to make
> the problem go away.
> 
> Do you have any idea why the segfault may be occurring on 64-bit platforms?
> 
> Mark (Favas), have you been having any problems with sre on your 64-bit plats?
> 

Sorry for the delay - yes, I had these segfaults due to exceeding the
stack size on Tru64 Unix (which, by default, is 2048 kbytes) before
Fredrik introduced the recursion limit of 10000 in _sre.c. You'd expect
a 64-bit OS to use a few more bytes of the stack when handling recursive
calls, but your 7500 down from 10000 sounds a bit much - unless the
stack size limit you're using on Linux64 is smaller than that for
Linux32 - what are they? I certainly agree that it'd be better to solve
this in a way other than selecting a sufficiently low recursion limit (I
won't mention stackless here... <grin>.) - getrlimit(2), getrusage(2),
or other???
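
The limits in question can be read from within Python itself; a minimal
sketch, assuming a POSIX platform where the resource module is available:

```python
# Query the stack-size limit that the paragraph above suggests
# consulting (getrlimit), rather than hard-coding a recursion cap.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
if soft == resource.RLIM_INFINITY:
    print('stack size: unlimited')
else:
    print('stack size: %d kbytes' % (soft // 1024))
```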
-- 
Mark




From greg@cosc.canterbury.ac.nz  Thu Aug 17 02:26:59 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 13:26:59 +1200 (NZST)
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000816165542.D29260@ActiveState.com>
Message-ID: <200008170126.NAA15363@s454.cosc.canterbury.ac.nz>

> My hunch is that simply listing library (*.a) arguments on the gcc
> command line is a GNU gcc/ld shortcut to the more portable usage of -L
> and -l.

I've never encountered a Unix that wouldn't let you explicitly
give .a files to cc or ld. It's certainly not a GNU invention.

Sounds like Monterey is the odd one out here. ("Broken" is
another word that comes to mind.)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+



From Vladimir.Marangozov@inrialpes.fr  Thu Aug 17 02:41:48 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 03:41:48 +0200 (CEST)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000816172425.A32338@ActiveState.com> from "Trent Mick" at Aug 16, 2000 05:24:25 PM
Message-ID: <200008170141.DAA17229@python.inrialpes.fr>

Trent Mick wrote:
> 
> I am porting Python to Monterey (64-bit AIX) and have a small (hopefully)
> question about POSIX threads. I have Monterey building and passing the
> threads test suite using Python/thread_pthread.h with just one issue:
> 
> -------------- snipped from current thread_pthread.h ---------------
> long
> PyThread_get_thread_ident(void)
> {
>     volatile pthread_t threadid;
>     if (!initialized)
>         PyThread_init_thread();
>     /* Jump through some hoops for Alpha OSF/1 */
>     threadid = pthread_self();
>     return (long) *(long *) &threadid;
> }
> -------------------------------------------------------------------
> 
> ...
> 
> WHAT IS UP WITH THAT return STATEMENT?
>   return (long) *(long *) &threadid;

I don't know and I had the same question at the time when there was some
obscure bug on my AIX combo at this location. I remember that I had played
with the debugger and the only workaround at the time which solved the
mystery was to add the 'volatile' qualifier. So if you're asking yourself
what that 'volatile' is for, you have one question less...

> 
> My *guess* is that this is an attempt to just cast 'threadid' (a pthread_t)
> to a long and go through hoops to avoid compiler warnings. I don't know what
> else it could be. Is that what the "Alpha OSF/1" comment is about? Anybody
> have an Alpha OSF/1 hanging around? The problem is that when
> sizeof(pthread_t) != sizeof(long) this line is just broken.
> 
> Could this be changed to
>   return threadid;
> safely?

I have the same question. If Guido can't answer this straight, we need
to dig the CVS logs.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From Vladimir.Marangozov@inrialpes.fr  Thu Aug 17 02:43:33 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 03:43:33 +0200 (CEST)
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000816165542.D29260@ActiveState.com> from "Trent Mick" at Aug 16, 2000 04:55:42 PM
Message-ID: <200008170143.DAA17238@python.inrialpes.fr>

Trent Mick wrote:
> 
> Hello autoconf-masters,
> 
> I am currently trying to port Python to Monterey (64-bit AIX) and I need to
> add a couple of Monterey specific options to CFLAGS and LDFLAGS (or to
> whatever appropriate variables for all 'cc' and 'ld' invocations) but it is
> not obvious *at all* how to do that in configure.in. Can anybody helpme on
> that?

How can we help? What do you want to do, exactly?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From fdrake@beopen.com  Thu Aug 17 02:40:32 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 21:40:32 -0400 (EDT)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <200008170141.DAA17229@python.inrialpes.fr>
References: <20000816172425.A32338@ActiveState.com>
 <200008170141.DAA17229@python.inrialpes.fr>
Message-ID: <14747.17040.968927.914435@cj42289-a.reston1.va.home.com>

Vladimir Marangozov writes:
 > I have the same question. If Guido can't answer this straight, we need
 > to dig the CVS logs.

  Guido is out of town right now, and doesn't have his usual email
tools with him, so he may not respond this week.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From Vladimir.Marangozov@inrialpes.fr  Thu Aug 17 03:12:18 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 04:12:18 +0200 (CEST)
Subject: [Python-Dev] shm + Win32 + docs (was: Adding library modules to the core)
In-Reply-To: <20000808114655.C29686@thyrsus.com> from "Eric S. Raymond" at Aug 08, 2000 11:46:55 AM
Message-ID: <200008170212.EAA17523@python.inrialpes.fr>

Eric S. Raymond wrote:
> 
> Vladimir, I suggest that the most useful thing you could do to advance
> the process at this point would be to document shm in core-library style.

Eric, I'm presently suffering from chronic lack of time (as you probably
do too) so if you could write the docs for me and take all associated
credits for them, please do so (shouldn't be that hard, after all -- the
web page and the comments are self-explanatory :-). I'm willing to "unblock"
you on this, but I can hardly make the time for it -- it's low-priority on
my dynamic task schedule. :(

I'd also love to assemble the win32 bits on the matter (what's in win32event
for the semaphore interface + my Windows book) to add shm and sem for
Windows and rewrite the interface, but I have no idea on when this could
happen.

I will disappear from the face of the World sometime soon and it's
unclear when I'll be able to reappear (nor how soon I'll disappear, btw).
So, be aware of that. I hope to be back again before 2.1, so if we can
wrap up a Unix + win32 shm, that would be much appreciated!

> 
> At the moment, core Python has nothing (with the weak and nonportable 
> exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> shm would address a real gap in the language.

Indeed. This is currently being discussed on the French Python list,
where Richard Gruet (rgruet@ina.fr) posted the following code for
inter-process locks: glock.py

I don't have the time to look at it in detail, just relaying here
for food and meditation :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252

------------------------------[ glock.py ]----------------------------
#!/usr/bin/env python
#----------------------------------------------------------------------------
# glock.py: 				Global mutex
#
# Prerequisites:
#    - Python 1.5.2 or newer (www.python.org)
#    - On windows: win32 extensions installed
#			(http://www.python.org/windows/win32all/win32all.exe)
#    - OS: Unix, Windows.
#
# History:
#	-22 Jan 2000 (R.Gruet): creation
#
# Limitations:
# TODO:
#----------------------------------------------------------------------------
'''	This module defines the class GlobalLock that implements a global
	(inter-process) mutex that works on Windows and Unix, using
	file-locking on Unix (I also tried this approach on Windows but got
	some tricky problems, so I ended up using a Win32 mutex).
	See class GlobalLock for more details.
'''
__version__ = 1,0,2
__author__ = ('Richard Gruet', 'rgruet@ina.fr')

# Imports:
import sys, string, os

# System-dependent imports for locking implementation:
_windows = (sys.platform == 'win32')

if _windows:
	try:
		import win32event, win32api, pywintypes
	except ImportError:
		sys.stderr.write('The win32 extensions need to be installed!\n')
		raise
else:	# assume Unix
	try:
		import fcntl
	except ImportError:
		sys.stderr.write("On what kind of OS am I ? (Mac?) I should be on "
						 "Unix but can't import fcntl.\n")
		raise
	import threading

# Exceptions :
# ----------
class GlobalLockError(Exception):
	''' Error raised by the glock module.
	'''
	pass

class NotOwner(GlobalLockError):
	''' Attempt to release somebody else's lock.
	'''
	pass


# Constants:
# ---------
true=-1
false=0

#----------------------------------------------------------------------------
class GlobalLock:
#----------------------------------------------------------------------------
	''' A global mutex.

		*Specification:
		 -------------
		 The lock must act as a global mutex, i.e. block between different
		 candidate processes, but ALSO between different candidate
		 threads of the same process.
		 It must NOT block in case of recursive lock request issued by
		 the SAME thread.
		 Extraneous unlocks should be ideally harmless.

		*Implementation:
		 --------------
		 In Python there is no portable global lock AFAIK.
		 There is only a LOCAL/ in-process Lock mechanism
		 (threading.RLock), so we have to implement our own solution.

		Unix: use fcntl.flock(). Recursive calls OK. Different processes OK.
			  But different threads of the same process don't block, so we
			  have to use an extra threading.RLock to fix that point.
		Win: We use WIN32 mutex from Python Win32 extensions. Can't use
			 std module msvcrt.locking(), because global lock is OK, but
			 blocks also for 2 calls from the same thread!
	'''
	def __init__(self, fpath, lockInitially=false):
		'''	Creates (or opens) a global lock.

			@param fpath Path of the file used as lock target. This is also
						 the global id of the lock. The file will be created
						 if it does not exist.
			@param lockInitially if true locks initially.
		'''
		if _windows:
			self.name = string.replace(fpath, '\\', '_')
			self.mutex = win32event.CreateMutex(None, lockInitially, self.name)
		else: # Unix
			self.name = fpath
			self.flock = open(fpath, 'w')
			self.fdlock = self.flock.fileno()
			self.threadLock = threading.RLock()
		if lockInitially:
			self.acquire()

	def __del__(self):
		#print '__del__ called' ##
		try: self.release()
		except: pass
		if _windows:
			win32api.CloseHandle(self.mutex)
		else:
			try: self.flock.close()
			except: pass

	def __repr__(self):
		return '<Global lock @ %s>' % self.name

	def acquire(self):
		''' Locks. Suspends caller until done.

			On windows an IOError is raised after ~10 sec if the lock
			can't be acquired.
			@exception GlobalLockError if lock can't be acquired (timeout)
		'''
		if _windows:
			r = win32event.WaitForSingleObject(self.mutex, win32event.INFINITE)
			if r == win32event.WAIT_FAILED:
				raise GlobalLockError("Can't acquire mutex.")
		else:
			# Acquire 1st the global (inter-process) lock:
			try:
				fcntl.flock(self.fdlock, fcntl.LOCK_EX)	# blocking
			except IOError:	#(errno 13: perm. denied,
							#		36: Resource deadlock avoided)
				raise GlobalLockError('Cannot acquire lock on "file" %s\n' %
										self.name)
			#print 'got file lock.' ##
			# Then acquire the local (inter-thread) lock:
			self.threadLock.acquire()
			#print 'got thread lock.' ##

	def release(self):
		''' Unlocks. (caller must own the lock!)

			@return The lock count.
			@exception IOError if file lock can't be released
			@exception NotOwner Attempt to release somebody else's lock.
		'''
		if _windows:
			try:
				win32event.ReleaseMutex(self.mutex)
			except pywintypes.error, e:
				errCode, fctName, errMsg =  e.args
				if errCode == 288:
					raise NotOwner("Attempt to release somebody else's lock")
				else:
					raise GlobalLockError('%s: err#%d: %s' % (fctName, errCode,
															  errMsg))
		else:
			# Acquire 1st the local (inter-thread) lock:
			try:
				self.threadLock.release()
			except AssertionError:
				raise NotOwner("Attempt to release somebody else's lock")

			# Then release the global (inter-process) lock:
			try:
				fcntl.flock(self.fdlock, fcntl.LOCK_UN)
			except IOError:	# (errno 13: permission denied)
				raise GlobalLockError('Unlock of file "%s" failed\n' %
															self.name)

#----------------------------------------------------------------------------
#		M A I N
#----------------------------------------------------------------------------
def main():
	# unfortunately can't test inter-process lock here!
	lockName = 'myFirstLock'
	l = GlobalLock(lockName)
	if not _windows:
		assert os.path.exists(lockName)
	l.acquire()
	l.acquire()	# re-entrant lock, must not block
	l.release()
	l.release()
	if _windows:
		try: l.release()
		except NotOwner: pass
		else: raise Exception('should have raised a NotOwner exception')

	# Check that different threads of the same process do block:
	import threading, time
	thread = threading.Thread(target=threadMain, args=(l,))
	print 'main: locking...',
	l.acquire()
	print ' done.'
	thread.start()
	time.sleep(3)
	print '\nmain: unlocking...',
	l.release()
	print ' done.'
	time.sleep(0.1)
	del l	# to close file
	print 'tests OK.'

def threadMain(lock):
	print 'thread started(%s).' % lock
	print 'thread: locking (should stay blocked for ~ 3 sec)...',
	lock.acquire()
	print 'thread: locking done.'
	print 'thread: unlocking...',
	lock.release()
	print ' done.'
	print 'thread ended.'

if __name__ == "__main__":
	main()
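The in-process half of the scheme above (the threading.RLock pairing) can be exercised portably without any platform lock; a minimal sketch in modern Python, independent of the module above:

```python
import threading
import time

lock = threading.RLock()

# Same thread: recursive acquires must not block.
lock.acquire()
lock.acquire()
lock.release()
lock.release()

# Different threads of the same process: the second acquire must block
# until the first thread releases.
results = []

def worker():
    lock.acquire()               # blocks while the main thread holds the lock
    results.append("worker got lock")
    lock.release()

lock.acquire()
t = threading.Thread(target=worker)
t.start()
time.sleep(0.2)                  # give the worker time to reach acquire()
assert results == []             # the worker is still blocked
lock.release()
t.join()
print(results[0])                # -> worker got lock
```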


From bwarsaw@beopen.com  Thu Aug 17 04:17:23 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 23:17:23 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
References: <Pine.LNX.4.10.10008161146170.25725-100000@akbar.nevex.com>
 <200008170117.NAA15360@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.22851.266303.28877@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg@cosc.canterbury.ac.nz> writes:

    GE> Looks reasonably good. Not entirely sure I like the look
    GE> of >> though -- a bit too reminiscent of C++.

    GE> How about

    GE>    print to myfile, x, y, z

Not bad at all.  Seems quite Pythonic to me.

    GE> with 'to' as a non-reserved keyword. Or even

    GE>    print to myfile: x, y, z

    GE> but that might be a bit too radical!

Definitely so.

-Barry


From bwarsaw@beopen.com  Thu Aug 17 04:19:25 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 23:19:25 -0400 (EDT)
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
References: <20000816165542.D29260@ActiveState.com>
 <200008170126.NAA15363@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.22973.502494.739270@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg@cosc.canterbury.ac.nz> writes:

    >> My hunch is that simply listing library (*.a) arguments on the
    >> gcc command line is a GNU gcc/ld shortcut to the more portable
    >> usage of -L and -l.

    GE> I've never encountered a Unix that wouldn't let you explicitly
    GE> give .a files to cc or ld. It's certainly not a GNU invention.

That certainly jibes with my experience.  All the other non-gcc C
compilers I've used (admittedly only on *nix) have always accepted
explicit .a files on the command line.

-Barry


From MarkH@ActiveState.com  Thu Aug 17 04:32:25 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Thu, 17 Aug 2000 13:32:25 +1000
Subject: [Python-Dev] os.path.commonprefix breakage
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>

Hi,
	I believe that Skip recently made a patch to os.path.commonprefix to only
return the portion of the common prefix that corresponds to a directory.

I have just discovered some code breakage from this change.  On 1.5.2, the
behaviour was:

>>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
'../foo/'

While since the change we have:
'../foo'

Note that the trailing slash has been dropped.

The code this broke did similar to:

prefix = os.path.commonprefix(files)
for file in files:
  tail_portion = file[len(prefix):]

In 1.6, the "tail_portion" result looks like an absolute path "/bar" and
"/spam", respectively.  The intent was obviously to get relative path names
back ("bar" and "spam").

The code that broke is not mine, so you can safely be horrified at how
broken it is :-)  The point, however, is that code like this does exist out
there.

I'm obviously going to change the code that broke, and don't have time to
look into the posixpath.py code - but is this level of possible breakage
acceptable?

Thanks,

Mark.




From tim_one@email.msn.com  Thu Aug 17 04:34:12 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 23:34:12 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000816172425.A32338@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>

[Trent Mick]
> I am porting Python to Monterey (64-bit AIX) and have a small
> (hopefully) question about POSIX threads.

POSIX threads. "small question".  HAHAHAHAHAHA.  Thanks, that felt good
<wink>.

> I have Monterey building and passing the threads test suite using
> Python/thread_pthread.h with just one issue:
>
>
> -------------- snipped from current thread_pthread.h ---------------
> long
> PyThread_get_thread_ident(void)
> {
>     volatile pthread_t threadid;
>     if (!initialized)
>         PyThread_init_thread();
>     /* Jump through some hoops for Alpha OSF/1 */
>     threadid = pthread_self();
>     return (long) *(long *) &threadid;
> }
> -------------------------------------------------------------------
>
> Does the POSIX threads spec specify a C type or minimum size for
> pthread_t?

Which POSIX threads spec?  There are so very many (it went thru many
incompatible changes).  But, to answer your question, I don't know but doubt
it.  In practice, some implementations return pointers into kernel space,
others pointers into user space, others small integer indices into kernel-
or user-space arrays of structs.  So I think it's *safe* to assume it will
always fit in an integral type large enough to hold a pointer, but not
guaranteed.  Plain "long" certainly isn't safe in theory.

> Or can someone point me to the appropriate resource to look
> this up. On Linux (mine at least):
>   /usr/include/bits/pthreadtypes.h:120:typedef unsigned long int
> pthread_t;

And this is a 32- or 64-bit Linux?

> On Monterey:
>   typedef unsigned int pthread_t;
>
> That is fine, they are both 32-bits, however Monterey is an LP64 platform
> (sizeof(long)==8, sizeof(int)=4), which brings up the question:
>
> WHAT IS UP WITH THAT return STATEMENT?
>   return (long) *(long *) &threadid;

Heh heh.  Thanks for the excuse!  I contributed the pthreads implementation
originally, and that eyesore sure as hell wasn't in it when I passed it on.
That's easy for me to be sure of, because that entire function was added by
somebody after me <wink>.  I've been meaning to track down where that crap
line came from for *years*, but never had a good reason before.

So, here's the scoop:

+ The function was added in revision 2.3, more than 6 years ago.  At that
time, the return had a direct cast to long.

+ The "Alpha OSF/1" horror was the sole change made to get revision 2.5.

Back in those days, the "patches list" was Guido's mailbox, and *all* CVS
commits were done by him.  So he checked in everything anyone could
convince him they needed, and sometimes without knowing exactly why.  So I
strongly doubt he'll even remember this change, and am certain it's not his
code.

> My *guess* is that this is an attempt to just cast 'threadid' (a
> pthread_t) to a long and go through hoops to avoid compiler warnings. I
> don't know what else it could be.

Me neither.

> Is that what the "Alpha OSF/1" comment is about?

That comment was introduced by the commit that added the convoluted casting,
so yes, that's what the comment is talking about.

> Anybody have an Alpha OSF/1 hanging around. The problem is that when
> sizeof(pthread_t) != sizeof(long) this line is just broken.
>
> Could this be changed to
>   return threadid;
> safely?

Well, that would return it to exactly the state it was in at revision 2.3,
except with the cast to long left implicit.  Apparently that "didn't work"!

Something else is broken here, too, and has been forever:  the thread docs
claim that thread.get_ident() returns "a nonzero integer".  But across all
the thread implementations, there's nothing that guarantees that!  It's a
goof, based on the first thread implementation in which it just happened to
be true for that platform.

So thread.get_ident() is plain braindead:  if Python wants to return a
unique non-zero long across platforms, the current code doesn't guarantee
any of that.

So one of two things can be done:

1. Bite the bullet and do it correctly.  For example, maintain a static
   dict mapping the native pthread_self() return value to Python ints,
   and return the latter as Python's thread.get_ident() value.  Much
   better would be to implement an x-platform thread-local storage
   abstraction, and use that to hold a Python-int ident value.

2. Continue in the tradition already established <wink>, and #ifdef the
   snot out of it for Monterey.

In favor of #2, the code is already so hosed that making it hosier won't be
a significant relative increase in its inherent hosiness.
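For what option 1 would look like, here is the dict idea sketched in modern-Python terms (the function name and the string "native ids" are invented for illustration; the real fix would live in C inside thread_pthread.h):

```python
import itertools
import threading

# Sketch of option 1: map whatever opaque value the native
# pthread_self() hands back to a small, stable, nonzero integer.
_ident_map = {}
_counter = itertools.count(1)      # start at 1, so idents are never zero
_map_lock = threading.Lock()

def stable_ident(native_id):
    """Return a process-unique nonzero int for an opaque native thread id."""
    with _map_lock:
        if native_id not in _ident_map:
            _ident_map[native_id] = next(_counter)
        return _ident_map[native_id]

print(stable_ident("0xdeadbeef"))  # -> 1
print(stable_ident("0xcafebabe"))  # -> 2
print(stable_ident("0xdeadbeef"))  # -> 1 (stable across calls)
```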

spoken-like-a-true-hoser-ly y'rs  - tim




From tim_one@email.msn.com  Thu Aug 17 04:47:04 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 23:47:04 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.22851.266303.28877@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDGHAAA.tim_one@email.msn.com>

[Greg Ewing]
> Looks reasonably good. Not entirely sure I like the look
> of >> though -- a bit too reminiscent of C++.
>
> How about
>
>    print to myfile, x, y, z

[Barry Warsaw]
> Not bad at all.  Seems quite Pythonic to me.

Me too!  +1 on changing ">>" to "to" here.  Then we can introduce

   x = print from myfile, 3

as a synonym for

   x = myfile.read(3)

too <wink>.

People should know that Guido doesn't seem to like the idea of letting print
specify the output target at all.  "Why not?"  "Because people say print is
pretty useless anyway, for example, when they want to write to something
other than stdout."  "But that's the whole point of this change!  To make
print more useful!"  "Well, but then ...".  After years of channeling, you
get a feel for when to change the subject and bring it up again later as if
it were brand new <wink>.

half-of-channeling-is-devious-persuasion-ly y'rs  - tim




From skip@mojam.com  Thu Aug 17 05:04:54 2000
From: skip@mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 23:04:54 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
Message-ID: <14747.25702.435148.549678@beluga.mojam.com>

>>>>> "Mark" == Mark Hammond <MarkH@ActiveState.com> writes:

    Mark> I believe that Skip recently made a patch to os.path.commonprefix
    Mark> to only return the portion of the common prefix that corresponds
    Mark> to a directory.

    Mark> I have just discovered some code breakage from this change.  On
    Mark> 1.5.2, the behaviour was:

    >>>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
    Mark> '../foo/'

    Mark> While since the change we have:
    Mark> '../foo'

I'm sure it can be argued that the slash should be there.  The previous
behavior was clearly broken, however, because it was advancing
character-by-character instead of directory-by-directory.  Consequently,
calling 

    os.path.commonprefix(["/home/swen", "/home/swenson"])

would yield the most likely invalid path "/home/sw" as the common prefix.
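A directory-by-directory variant is easy to sketch (a hypothetical helper, not what os.path.commonprefix did; it assumes "/"-separated paths):

```python
def dir_commonprefix(paths):
    """Longest common prefix measured in whole path components."""
    split = [p.split("/") for p in paths]
    common = []
    for parts in zip(*split):
        if any(part != parts[0] for part in parts):
            break
        common.append(parts[0])
    return "/".join(common)

print(dir_commonprefix(["/home/swen", "/home/swenson"]))   # -> /home
print(dir_commonprefix(["../foo/bar", "../foo/spam"]))     # -> ../foo
```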

It would be easy enough to append the appropriate path separator to the
result before returning.  I have no problem with that.  Others with more
knowledge of path semantics should chime in.  Also, should the behavior be
consistent across platforms or should it do what is correct for each
platform on which it's implemented (dospath, ntpath, macpath)?

Skip



From tim_one@email.msn.com  Thu Aug 17 05:05:12 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 00:05:12 -0400
Subject: [Python-Dev] os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEDIHAAA.tim_one@email.msn.com>

I agree this is Bad Damage, and should be fixed before 2.0b1 goes out.  Can
you enter a bug report?

> -----Original Message-----
> From: python-dev-admin@python.org [mailto:python-dev-admin@python.org]On
> Behalf Of Mark Hammond
> Sent: Wednesday, August 16, 2000 11:32 PM
> To: python-dev@python.org
> Subject: [Python-Dev] os.path.commonprefix breakage
>
>
> Hi,
> 	I believe that Skip recently made a patch to
> os.path.commonprefix to only
> return the portion of the common prefix that corresponds to a directory.
>
> I have just discovered some code breakage from this change.  On 1.5.2, the
> behaviour was:
>
> >>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
> '../foo/'
>
> While since the change we have:
> '../foo'
>
> Note that the trailing slash has been dropped.
>
> The code this broke did similar to:
>
> prefix = os.path.commonprefix(files)
> for file in files:
>   tail_portion = file[len(prefix):]
>
> In 1.6, the "tail_portion" result looks like an absolute path "/bar" and
> "/spam", respectively.  The intent was obviously to get relative
> path names
> back ("bar" and "spam").
>
> The code that broke is not mine, so you can safely be horrified at how
> broken it is :-)  The point, however, is that code like this does
> exist out
> there.
>
> I'm obviously going to change the code that broke, and don't have time to
> look into the posixpath.py code - but is this level of possible breakage
> acceptable?
>
> Thanks,
>
> Mark.




From greg@cosc.canterbury.ac.nz  Thu Aug 17 05:11:51 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:11:51 +1200 (NZST)
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEDGHAAA.tim_one@email.msn.com>
Message-ID: <200008170411.QAA15381@s454.cosc.canterbury.ac.nz>

tim_one:

> +1 on changing ">>" to "to" here.

Your +1 might be a bit hasty. I've just realised that
a non-reserved word in that position would be ambiguous,
as can be seen by considering

   print to(myfile), x, y, z

> Then we can introduce
>
>   x = print from myfile, 3

Actually, for the sake of symmetry, I was going to suggest

    input from myfile, x, y, z

except that the word 'input' is already taken. Bummer.

But wait a moment, we could have

    from myfile input x, y, z

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From fdrake@beopen.com  Thu Aug 17 05:11:44 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 00:11:44 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
 <14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <14747.26112.609255.338170@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:
 > I'm sure it can be argued that the slash should be there.  The previous
 > behavior was clearly broken, however, because it was advancing
 > character-by-character instead of directory-by-directory.  Consequently,
 > calling 
 > 
 >     os.path.commonprefix(["/home/swen", "/home/swenson"])
 > 
 > would yield the most likely invalid path "/home/sw" as the common prefix.

  You have a typo in there... ;)

 > It would be easy enough to append the appropriate path separator to the the
 > result before returning.  I have no problem with that.  Others with more
 > knowledge of path semantics should chime in.  Also, should the behavior be

  I'd guess that the path separator should only be appended if it's
part of the passed-in strings; that would make it a legitimate part of
the prefix.  If it isn't present for all of them, it shouldn't be part
of the result:

>>> os.path.commonprefix(["foo", "foo/bar"])
'foo'


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From skip@mojam.com  Thu Aug 17 05:23:37 2000
From: skip@mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 23:23:37 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
 <14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <14747.26825.977663.599413@beluga.mojam.com>

    Skip> os.path.commonprefix(["/home/swen", "/home/swenson"])

    Skip> would yield the most likely invalid path "/home/sw" as the common
    Skip> prefix.

Ack!  I meant to use this example:

    os.path.commonprefix(["/home/swen", "/home/swanson"])

which would yield "/home/sw"...

S


From m.favas@per.dem.csiro.au  Thu Aug 17 05:27:20 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Thu, 17 Aug 2000 12:27:20 +0800
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
Message-ID: <399B69A8.4A94337C@per.dem.csiro.au>

[Trent]
-------------- snipped from current thread_pthread.h ---------------
long
PyThread_get_thread_ident(void)
{
    volatile pthread_t threadid;
    if (!initialized)
        PyThread_init_thread();
    /* Jump through some hoops for Alpha OSF/1 */
    threadid = pthread_self();
    return (long) *(long *) &threadid;
}
-------------------------------------------------------------------
WHAT IS UP WITH THAT return STATEMENT?
  return (long) *(long *) &threadid;

My *guess* is that this is an attempt to just cast 'threadid' (a pthread_t)
to a long and go through hoops to avoid compiler warnings. I don't know
what else it could be. Is that what the "Alpha OSF/1" comment is about?
Anybody have an Alpha OSF/1 hanging around? The problem is that when
sizeof(pthread_t) != sizeof(long) this line is just broken.

Could this be changed to
  return threadid;
safely?

This is a DEC-threads thing... (and I'm not a DEC-threads savant). 
Making the suggested change gives the compiler warning:

cc -O -Olimit 1500 -I./../Include -I.. -DHAVE_CONFIG_H -c thread.c -o thread.o
cc: Warning: thread_pthread.h, line 182: In this statement, "threadid" of type
"pointer to struct __pthread_t", is being converted to "long". (cvtdiftypes)
        return threadid;
---------------^

The threads test still passes with this change.


From the pthread.h file,

typedef struct __pthreadTeb_t {
    __pthreadLongAddr_p _Pfield(reserved1);     /* Reserved to DECthreads */
    __pthreadLongAddr_p _Pfield(reserved2);     /* Reserved to DECthreads */
    unsigned short      _Pfield(size);          /* V1: Size of TEB */
    unsigned char       _Pfield(version);       /* TEB version */
    unsigned char       _Pfield(reserved3);     /* Reserved to DECthreads */
    unsigned char       _Pfield(external);      /* V1: PTHREAD_TEB_EFLG_ flgs */
    unsigned char       _Pfield(reserved4)[2];  /* RESERVED */
    unsigned char       _Pfield(creator);       /* V1: PTHREAD_TEB_CREATOR_* */
    __pthreadLongUint_t _Pfield(sequence);      /* V0: Thread sequence */
    __pthreadLongUint_t _Pfield(reserved5)[2];  /* Reserved to DECthreads */
    __pthreadLongAddr_t _Pfield(per_kt_area);   /* V0: Reserved */
    __pthreadLongAddr_t _Pfield(stack_base);    /* V0: Initial SP */
    __pthreadLongAddr_t _Pfield(stack_reserve); /* V0: reserved stack */
    __pthreadLongAddr_t _Pfield(stack_yellow);  /* V0: yellow zone */
    __pthreadLongAddr_t _Pfield(stack_guard);   /* V0: guard (red) zone */
    __pthreadLongUint_t _Pfield(stack_size);    /* V0: total stack size */
    __pthreadTsd_t      _Pfield(tsd_values);    /* V0: TSD array (void *) */
    unsigned long       _Pfield(tsd_count);     /* V0: TSD array size */
    unsigned int        _Pfield(reserved6);     /* Reserved to DECthreads */
    unsigned int        _Pfield(reserved7);     /* Reserved to DECthreads */
    unsigned int        _Pfield(thread_flags);  /* Reserved to DECthreads */
    int                 _Pfield(thd_errno);     /* V1: thread's errno */
    __pthreadLongAddr_t _Pfield(stack_hiwater); /* V1: lowest known SP */
    } pthreadTeb_t, *pthreadTeb_p;
# if defined (_PTHREAD_ALLOW_MIXED_PROTOS_) && defined (__INITIAL_POINTER_SIZE)
typedef pthreadTeb_p    pthread_t;      /* Long pointer if possible */
#  pragma __required_pointer_size __restore
# elif defined (_PTHREAD_ENV_ALPHA) && defined (_PTHREAD_ENV_VMS)
typedef unsigned __int64        pthread_t;      /* Force 64 bits anyway */
# else
typedef pthreadTeb_p            pthread_t;      /* Pointers is pointers */
# endif
#endif

--
Mark


From greg@cosc.canterbury.ac.nz  Thu Aug 17 05:28:19 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:28:19 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>

Skip:

> Also, should the behavior be
> consistent across platforms or should it do what is correct for each
> platform on which it's implemented (dospath, ntpath, macpath)?

Obviously it should do what's correct for each platform,
although more than one thing can be correct for a
given platform -- e.g Unix doesn't care whether there's a
trailing slash on a pathname.

In the Unix case it's probably less surprising if the trailing
slash is removed, because it's redundant.

The "broken" code referred to in the original message highlights
another problem, however: there is no platform-independent way
provided to remove a prefix from a pathname, given the prefix
as returned by one of the other platform-independent path
munging functions.

So maybe there should be an os.path.removeprefix(prefix, path)
function.

While we're on the subject, another thing that's missing is
a platform-independent way of dealing with the notion of
"up one directory".

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+



From greg@cosc.canterbury.ac.nz  Thu Aug 17 05:34:01 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:34:01 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <200008170434.QAA15389@s454.cosc.canterbury.ac.nz>

Skip:

> The previous behavior was clearly broken, however, because it was
> advancing character-by-character instead of directory-by-directory.

I've just looked at the 1.5.2 docs and realised that this is
what it *says* it does! So it's right according to the docs,
although it's obviously useless as a pathname manipulating
function.

The question now is, do we change both the specification and the
behaviour, which could break existing code, or leave it be and
add a new function which does the right thing?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From skip@mojam.com  Thu Aug 17 05:41:59 2000
From: skip@mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 23:41:59 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.26112.609255.338170@cj42289-a.reston1.va.home.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
 <14747.25702.435148.549678@beluga.mojam.com>
 <14747.26112.609255.338170@cj42289-a.reston1.va.home.com>
Message-ID: <14747.27927.170223.873328@beluga.mojam.com>

    Fred> I'd guess that the path separator should only be appended if it's
    Fred> part of the passed-in strings; that would make it a legitimate
    Fred> part of the prefix.  If it isn't present for all of them, it
    Fred> shouldn't be part of the result:

    >>> os.path.commonprefix(["foo", "foo/bar"])
    'foo'

Hmmm... I think you're looking at it character-by-character again.  I see
three possibilities:

    * it's invalid to have a path with a trailing separator

    * it's okay to have a path with a trailing separator

    * it's required to have a path with a trailing separator

In the first and third cases, you have no choice.  In the second you have to
decide which would be best.

On Unix my preference would be to not include the trailing "/" for aesthetic
reasons.  The shell's pwd command, the os.getcwd function and the
os.path.normpath function all return directories without the trailing slash.
Also, while Python may not have this problem (and os.path.join seems to
normalize things), some external tools will interpret a doubled "/" as a
single separator, while others (most notably Emacs) will treat the
second slash as "erase the prefix and start from /".

In fact, the more I think of it, the more I think that Mark's reliance on
the trailing slash is a bug waiting to happen (in fact, it just happened
;-).  There's certainly nothing wrong (on Unix anyway) with paths that don't
contain a trailing slash, so if you're going to join paths together, you
ought to be using os.path.join.  To whack off prefixes, perhaps we need
something more general than os.path.split, so instead of

    prefix = os.path.commonprefix(files)
    for file in files:
       tail_portion = file[len(prefix):]

Mark would have used

    prefix = os.path.commonprefix(files)
    for file in files:
       tail_portion = os.path.splitprefix(prefix, file)[1]

The assumption being that

    os.path.splitprefix("/home", "/home/beluga/skip")

would return

    ["/home", "beluga/skip"]

Alternatively, how about os.path.suffixes?  It would work similar to
os.path.commonprefix, but instead of returning the prefix of a group of
files, return a list of the suffixes resulting from the application of the
common prefix:

    >>> files = ["/home/swen", "/home/swanson", "/home/jules"]
    >>> prefix = os.path.commonprefix(files)
    >>> print prefix
    "/home"
    >>> suffixes = os.path.suffixes(prefix, files)
    >>> print suffixes
    ["swen", "swanson", "jules"]

Skip



From fdrake@beopen.com  Thu Aug 17 05:49:24 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 00:49:24 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170434.QAA15389@s454.cosc.canterbury.ac.nz>
References: <14747.25702.435148.549678@beluga.mojam.com>
 <200008170434.QAA15389@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.28372.771170.783868@cj42289-a.reston1.va.home.com>

Greg Ewing writes:
 > I've just looked at the 1.5.2 docs and realised that this is
 > what it *says* it does! So it's right according to the docs,
 > although it's obviously useless as a pathname manipulating
 > function.

  I think we should now fix the docs; Skip's right about the desired
functionality.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From greg@cosc.canterbury.ac.nz  Thu Aug 17 05:53:05 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:53:05 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.27927.170223.873328@beluga.mojam.com>
Message-ID: <200008170453.QAA15394@s454.cosc.canterbury.ac.nz>

Skip:

> Alternatively, how about os.path.suffixes?  It would work similar to
> os.path.commonprefix, but instead of returning the prefix of a group of
> files, return a list of the suffixes resulting in the application of the
> common prefix:

To avoid duplication of effort, how about a single function
that does both:

   >>> files = ["/home/swen", "/home/swanson", "/home/jules"]
   >>> os.path.factorize(files)
   ("/home", ["swen", "swanson", "jules"])

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From nowonder@nowonder.de  Thu Aug 17 08:13:08 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Thu, 17 Aug 2000 07:13:08 +0000
Subject: [Python-Dev] timeout support for socket.py? (was: [ANN] TCP socket timeout module -->
 timeoutsocket.py)
References: <300720002054234840%timo@alum.mit.edu>
Message-ID: <399B9084.C068DCE3@nowonder.de>

As the socketmodule is now exported as _socket and a separate socket.py
file wrapping _socket is now available in the standard library,
wouldn't it be possible to include timeout capabilities like in
  http://www.timo-tasi.org/python/timeoutsocket.py

If the default behaviour were "no timeout", I would think this
would not break any code. But it would give an easy (and documentable)
solution to people who e.g. have their
  urllib.urlopen("http://spam.org").read()
hang on them. (Actually the approach should work for all streaming
socket connections, as far as I understand it.)

Are there any efficiency concerns? If so, would it be possible to
include a second socket class timeoutsocket in socket.py, so that
this could be used instead of the normal socket class? [In this case
a different default timeout than "None" could be chosen.]
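Later Python releases did grow exactly this knob in the socket module itself; a sketch of that eventual spelling (no network traffic involved, just inspecting the configured timeout):

```python
import socket

# A process-wide default: every socket created afterwards times out
# instead of potentially blocking forever on connect()/recv()/send().
socket.setdefaulttimeout(20.0)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(s.gettimeout())            # -> 20.0

# Per-socket overrides are also possible...
s.settimeout(5.0)
print(s.gettimeout())            # -> 5.0
s.close()

# ...and None restores the old "block forever" behaviour.
socket.setdefaulttimeout(None)
```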

Peter

P.S.: For your convenience a quote of the announcement on c.l.py,
      for module documentation (== endlessly long doc string) look in
        http://www.timo-tasi.org/python/timeoutsocket.py

Timothy O'Malley wrote:
> 
> Numerous times I have seen people request a solution for TCP socket
> timeouts in conjunction with urllib.  Recently, I found myself in the
> same boat.  I wrote a server that would connect to skytel.com and send
> automated pages.  Periodically, the send thread in the server would
> hang for a long(!) time.  Yup -- I was bit by a two hour timeout built
> into tcp sockets.
> 
> Thus, I wrote timeoutsocket.py
> 
> With timeoutsocket.py, you can force *all* TCP sockets to have a
> timeout.  And, this is all accomplished without interfering with the
> standard python library!
> 
> Here's how to put a 20 second timeout on all TCP sockets for urllib:
> 
>    import timeoutsocket
>    import urllib
>    timeoutsocket.setDefaultSocketTimeout(20)
> 
> Just like that, any TCP connection made by urllib will have a 20 second
> timeout.  If a connect(), read(), or write() blocks for more than 20
> seconds, then a socket.Timeout error will be raised.
> 
> Want to see how to use this in ftplib?
> 
>    import ftplib
>    import timeoutsocket
>    timeoutsocket.setDefaultSocketTimeout(20)
> 
> Wasn't that easy!
> The timeoutsocket.py module acts as a shim on top of the standard
> socket module.  Whenever a TCP socket is requested, an instance of
> TimeoutSocket is returned.  This wrapper class adds timeout support to
> the standard TCP socket.
> 
> Where can you get this marvel of modern engineering?
> 
>    http://www.timo-tasi.org/python/timeoutsocket.py
> 
> And it will very soon be found on the Vaults of Parnassus.
> 
> Good Luck!
> 
> --
> --
> happy daze
>   -tim O
> --
> http://www.python.org/mailman/listinfo/python-list

-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug 17 07:16:29 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 17 Aug 2000 09:16:29 +0300 (IDT)
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.22851.266303.28877@anthem.concentric.net>
Message-ID: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>

On Wed, 16 Aug 2000, Barry A. Warsaw wrote:

> 
> >>>>> "GE" == Greg Ewing <greg@cosc.canterbury.ac.nz> writes:
> 
>     GE> Looks reasonably good. Not entirely sure I like the look
>     GE> of >> though -- a bit too reminiscent of C++.
> 
>     GE> How about
> 
>     GE>    print to myfile, x, y, z
> 
> Not bad at all.  Seems quite Pythonic to me.

Ummmmm......

print to myfile  (print a newline on myfile)
print to, myfile (print to+" "+myfile to stdout)

Perl has similar syntax, and I always found it horrible.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From thomas@xs4all.net  Thu Aug 17 07:30:23 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 08:30:23 +0200
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 17, 2000 at 09:16:29AM +0300
References: <14747.22851.266303.28877@anthem.concentric.net> <Pine.GSO.4.10.10008170915050.24783-100000@sundial>
Message-ID: <20000817083023.J376@xs4all.nl>

On Thu, Aug 17, 2000 at 09:16:29AM +0300, Moshe Zadka wrote:
> On Wed, 16 Aug 2000, Barry A. Warsaw wrote:

> > >>>>> "GE" == Greg Ewing <greg@cosc.canterbury.ac.nz> writes:

> >     GE> How about
> >     GE>    print to myfile, x, y, z

> > Not bad at all.  Seems quite Pythonic to me.

> print to myfile  (print a newline on myfile)
> print to, myfile (print to+" "+myfile to stdout)

> Perl has similar syntax, and I always found it horrible.

Agreed. It might be technically unambiguous, but I think it's too hard for a
*human* to parse this correctly. The '>>' version might seem more C++ish and
less pythonic, but it also stands out a lot more. The 'print from' statement
could easily (and more consistently, IMHO ;) be written as 'print <<' (not
that I like the 'print from' idea, though.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Thu Aug 17 07:41:29 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 02:41:29 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.28372.771170.783868@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>

[Greg Ewing]
> I've just looked at the 1.5.2 docs and realised that this is
> what it *says* it does! So it's right according to the docs,
> although it's obviously useless as a pathname manipulating
> function.

[Fred Drake]
>   I think we should now fix the docs; Skip's right about the desired
> functionality.

Oddly enough, I don't:  commonprefix worked exactly as documented for at
least 6 years and 5 months (which is when CVS shows Guido checking in
ntpath.py with the character-based functionality), and got out of synch with
the docs about 5 weeks ago when Skip changed to this other algorithm.  Since
the docs *did* match the code, there's no reason to believe the original
author was confused, and no reason to believe users aren't relying on it
(they've had over 6 years to gripe <wink>).

I think it's wrong to change what released code or docs do or say in
non-trivial ways when they weren't ever in conflict.  We have no idea who
may be relying on the old behavior!  Bitch all you like about MarkH's test
case, it used to work, it doesn't now, and that sucks for the user.

I appreciate that some other behavior may be more useful more often, but if
you can ever agree on what that is across platforms, it should be spelled
via a new function name ("commonpathprefix" comes to mind), or optional flag
(defaulting to "old behavior") on commonprefix (yuck!).  BTW, the presence
or absence of a trailing path separator makes a *big* difference to many
cmds on Windows, and you can't tell me nobody isn't currently doing e.g.

    commonprefix(["blah.o", "blah", "blah.cpp"])

on Unix either.
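For reference, the character-based behaviour being defended here is easy to
demonstrate with the character-wise commonprefix:

```python
import os.path

# commonprefix finds the longest common *string* prefix, not the longest
# common path component:
print(os.path.commonprefix(["blah.o", "blah", "blah.cpp"]))    # blah
print(os.path.commonprefix(["/home/swen", "/home/swanson"]))   # /home/sw
```

The second result shows why people keep wanting a path-aware variant:
"/home/sw" is not a directory that actually exists in either path.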




From thomas@xs4all.net  Thu Aug 17 07:55:41 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 08:55:41 +0200
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000816165542.D29260@ActiveState.com>; from trentm@ActiveState.com on Wed, Aug 16, 2000 at 04:55:42PM -0700
References: <20000816165542.D29260@ActiveState.com>
Message-ID: <20000817085541.K376@xs4all.nl>

On Wed, Aug 16, 2000 at 04:55:42PM -0700, Trent Mick wrote:

> I am currently trying to port Python to Monterey (64-bit AIX) and I need
> to add a couple of Monterey specific options to CFLAGS and LDFLAGS (or to
> whatever appropriate variables for all 'cc' and 'ld' invocations) but it
> is not obvious *at all* how to do that in configure.in. Can anybody help me
> on that?

You'll have to write a shell 'case' for AIX Monterey, checking to make sure
it is monterey, and setting LDFLAGS accordingly. If you look around in
configure.in, you'll see a few other 'special cases', all to tune the
way the compiler is called. Depending on what you need to do to detect
monterey, you could fit it in one of those. Just search for 'Linux' or
'bsdos' to find a couple of those cases.

> ANother issue that I am having. This is how the python executable is linked
> on Linux with gcc:

> gcc  -Xlinker -export-dynamic python.o ../libpython2.0.a -lpthread -ldl  -lutil -lm  -o python

> It, of course, works fine, but shouldn't the proper (read "portable")
> invocation to include the python2.0 library be

> gcc  -Xlinker -export-dynamic python.o -L.. -lpython2.0 -lpthread -ldl  -lutil -lm  -o python

> That invocation form (i.e. with the '-L.. -lpython2.0') works on Linux, and
> is *required* on Monterey. Does this problem not show up with other Unix
> compilers. My hunch is that simply listing library (*.a) arguments on the gcc
> command line is a GNU gcc/ld shortcut to the more portable usage of -L and
> -l. Any opinions. I would either like to change the form to the latter or
> I'll have to special case the invocation for Monterey. ANy opinions on which
> is worse.

Well, as far as I know, '-L.. -lpython2.0' does something *different* than
just '../libpython2.0.a' ! When supplying the static library on the command
line, the library is always statically linked. When using -L/-l, it is
usually dynamically linked, unless a dynamic library doesn't exist. We
currently don't have a libpython2.0.so, but a patch to add it is on Barry's
plate. Also, I'm not entirely sure about the search order in such a case:
gcc's docs seem to suggest that the systemwide library directories are
searched before the -L directories. I'm not sure on that, though.

Also, listing the library on the command line is not a gcc shortcut, but
other people already said that :) I'd be surprised if AIX removed it (but not
especially so; my girlfriend works with AIX machines a lot, and she already
showed me some surprising things ;) but perhaps there is another workaround ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gstein@lyra.org  Thu Aug 17 08:01:22 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 00:01:22 -0700
Subject: [Python-Dev] os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Thu, Aug 17, 2000 at 01:32:25PM +1000
References: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>
Message-ID: <20000817000122.L17689@lyra.org>

>>> os.path.split('/foo/bar/')
('/foo/bar', '')
>>> 

Jamming a trailing slash on the end is a bit wonky. I'm with Skip on saying
that the slash should probably *not* be appended. It gives funny behavior
with the split. Users should use .join() to combine the result with
something else.

The removal of a prefix is an interesting issue. No opinions there.

Cheers,
-g

On Thu, Aug 17, 2000 at 01:32:25PM +1000, Mark Hammond wrote:
> Hi,
> 	I believe that Skip recently made a patch to os.path.commonprefix to only
> return the portion of the common prefix that corresponds to a directory.
> 
> I have just discovered some code breakage from this change.  On 1.5.2, the
> behaviour was:
> 
> >>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
> '../foo/'
> 
> While since the change we have:
> '../foo'
> 
> Note that the trailing slash has been dropped.
> 
> The code this broke did similar to:
> 
> prefix = os.path.commonprefix(files)
> for file in files:
>   tail_portion = file[len(prefix):]
> 
> In 1.6, the "tail_portion" result looks like an absolute path "/bar" and
> "/spam", respectively.  The intent was obviously to get relative path names
> back ("bar" and "spam").
> 
> The code that broke is not mine, so you can safely be horrified at how
> broken it is :-)  The point, however, is that code like this does exist out
> there.
> 
> I'm obviously going to change the code that broke, and don't have time to
> look into the posixpath.py code - but is this level of possible breakage
> acceptable?
> 
> Thanks,
> 
> Mark.
> 
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/


From thomas@xs4all.net  Thu Aug 17 08:09:42 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 09:09:42 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>; from greg@cosc.canterbury.ac.nz on Thu, Aug 17, 2000 at 04:28:19PM +1200
References: <14747.25702.435148.549678@beluga.mojam.com> <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>
Message-ID: <20000817090942.L376@xs4all.nl>

On Thu, Aug 17, 2000 at 04:28:19PM +1200, Greg Ewing wrote:

> given platform -- e.g Unix doesn't care whether there's a
> trailing slash on a pathname.

Bzzzt. This is unfortunately not true. Observe:

daemon2:~/python > mkdir perl
daemon2:~/python > rm perl/
rm: perl/: is a directory
daemon2:~/python > rmdir perl/
rmdir: perl/: Is a directory
daemon2:~/python > rm -rf perl/
rm: perl/: Is a directory
daemon2:~/python > su
# rmdir perl/
rmdir: perl/: Is a directory
# rm -rf perl/
rm: perl/: Is a directory
# ^D
daemon2:~/python > rmdir perl
daemon2:~/python >

Note that the trailing slash is added by all tab-completing shells that I
know. And the problem *really* is that trailing slash, I shit you not.
Needless to say, every one of us ran into this at one time or another, and
spent an hour figuring out *why* the rmdir wouldn't remove a directory.

Consequently, I'm all for removing trailing slashes, but not enough to break
existing code. I wonder how much breakage there really is, though.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Thu Aug 17 08:49:33 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 03:49:33 -0400
Subject: [Python-Dev] Pending patches for 2.0
In-Reply-To: <20000816225552.H376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEBHAAA.tim_one@email.msn.com>

[Thomas Wouters, needs a well-earned vacation!]
> ...
> and if those decisions are made before, say, August 30th, I think
> I can move them into the CVS tree before leaving and just shove
> the responsibility for them on the entire dev team ;)
>
> This isn't a push to get them accepted ! Just a warning that if
> they aren't accepted before then, someone will have to take over
> the breastfeeding ;)

Guido will be back from his travels next week, and PythonLabs will have an
intense 2.0 release meeting on Tuesday or Wednesday (depending also on
exactly when Jeremy gets back).  I expect all go/nogo decisions will be made
then.  Part of deciding on a patch that isn't fully complete is deciding
whether others can take up the slack in time.  That's just normal release
business as usual -- nothing to worry about.  Well, not for *you*, anyway.

BTW, there's a trick few have learned:  get the doc patches in *first*, and
then we look like idiots if we release without code to implement it.  And
since this trick has hardly ever been tried, I bet you can sneak it by Fred
Drake for at least a year before anyone at PythonLabs catches on to it <0.9
wink>.

my-mouth-is-sealed-ly y'rs  - tim




From tim_one@email.msn.com  Thu Aug 17 08:29:05 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 03:29:05 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>

[Moshe Zadka]
> Ummmmm......
>
> print to myfile  (print a newline on myfile)
> print to, myfile (print to+" "+myfile to stdout)

Like I (and Greg too) clearly said all along, -1 on changing ">>" to "to"!




From mal@lemburg.com  Thu Aug 17 08:31:55 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 17 Aug 2000 09:31:55 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
 <14747.25702.435148.549678@beluga.mojam.com>
 <14747.26112.609255.338170@cj42289-a.reston1.va.home.com> <14747.27927.170223.873328@beluga.mojam.com>
Message-ID: <399B94EB.E95260EE@lemburg.com>

Skip Montanaro wrote:
> 
>     Fred> I'd guess that the path separator should only be appended if it's
>     Fred> part of the passed-in strings; that would make it a legitimate
>     Fred> part of the prefix.  If it isn't present for all of them, it
>     Fred> shouldn't be part of the result:
> 
>     >>> os.path.commonprefix(["foo", "foo/bar"])
>     'foo'
> 
> Hmmm... I think you're looking at it character-by-character again.  I see
> three possibilities:
> 
>     * it's invalid to have a path with a trailing separator
> 
>     * it's okay to have a path with a trailing separator
> 
>     * it's required to have a path with a trailing separator
> 
> In the first and third cases, you have no choice.  In the second you have to
> decide which would be best.
> 
> On Unix my preference would be to not include the trailing "/" for aesthetic
> reasons.

Wait, Skip :-) By dropping the trailing slash from the path
you are removing important information from the path information.

This information can only be regained by performing an .isdir()
check, and then only if the directory exists somewhere. If it
doesn't, you are losing valid information here.

Another aspect: 
Since posixpath is also used by URL handling code,
I would suspect that this results in some nasty problems too.
You'd have to actually ask the web server to give you back the
information you already had.

Note that most web-servers send back a redirect in case a
directory is queried without ending slash in the URL. They
do this for exactly the reason stated above: to add the
.isdir() information to the path itself.
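The information-loss argument can be seen directly with posixpath.split:

```python
import posixpath

# With the trailing slash kept, split() still treats the whole thing as a
# directory; without it, "lib" looks like a leaf name:
print(posixpath.split("/usr/lib/"))   # ('/usr/lib', '')
print(posixpath.split("/usr/lib"))    # ('/usr', 'lib')
```

Once the slash is gone, nothing in the string itself says whether "lib" was
a directory or a file.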

Conclusion:
Please don't remove the slash -- at least not in posixpath.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Thu Aug 17 10:54:16 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 11:54:16 +0200
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 17, 2000 at 03:29:05AM -0400
References: <Pine.GSO.4.10.10008170915050.24783-100000@sundial> <LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>
Message-ID: <20000817115416.M376@xs4all.nl>

On Thu, Aug 17, 2000 at 03:29:05AM -0400, Tim Peters wrote:
> [Moshe Zadka]
> > Ummmmm......
> >
> > print to myfile  (print a newline on myfile)
> > print to, myfile (print to+" "+myfile to stdout)
> 
> Like I (and Greg too) clearly said all along, -1 on changing ">>" to "to"!

Really ? Hmmmm...

[Greg Ewing]
> Looks reasonably good. Not entirely sure I like the look
> of >> though -- a bit too reminiscent of C++.
>
> How about
>
>    print to myfile, x, y, z

[Barry Warsaw]
> Not bad at all.  Seems quite Pythonic to me.

[Tim Peters]
> Me too!  +1 on changing ">>" to "to" here.  Then we can introduce
[print from etc]

I guess I missed the sarcasm ;-P

Gullib-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gmcm@hypernet.com  Thu Aug 17 12:58:26 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Thu, 17 Aug 2000 07:58:26 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>
References: <14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <1245608987-154490918@hypernet.com>

Greg Ewing wrote:
[snip]
> While we're on the subject, another thing that's missing is
> a platform-independent way of dealing with the notion of
> "up one directory".

os.chdir(os.pardir)

- Gordon


From paul@prescod.net  Thu Aug 17 13:56:23 2000
From: paul@prescod.net (Paul Prescod)
Date: Thu, 17 Aug 2000 08:56:23 -0400
Subject: [Python-Dev] Winreg update
References: <3993FEC7.4E38B4F1@prescod.net> <20000815195751.A16100@ludwig.cnri.reston.va.us>
Message-ID: <399BE0F7.F00765DA@prescod.net>

Greg Ward wrote:
> 
> ...
> I'm all in favour of high-level interfaces, and I'm also in favour of
> speaking the local tongue -- when in Windows, follow the Windows API (at
> least for features that are totally Windows-specific, like the
> registry).

At this point, the question is not whether to follow the Microsoft API
or not (any more). It is whether to follow the early 1990s Microsoft API
for C programmers or the new Microsoft API for Visual Basic, C#, Eiffel
and Javascript programmers.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html




From paul@prescod.net  Thu Aug 17 13:57:08 2000
From: paul@prescod.net (Paul Prescod)
Date: Thu, 17 Aug 2000 08:57:08 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com>
Message-ID: <399BE124.9920B0B6@prescod.net>

Tim Peters wrote:
> 
> ...
> > If you want a more efficient way to do it, it's available (just not as
> > syntactically beautiful -- same as range/xrange).
> 
> Which way would that be?  I don't know of one, "efficient" either in the
> sense of runtime speed or of directness of expression.  

One of the reasons for adding range literals was for efficiency.

So

for x in [:len(seq)]:
  ...

should be efficient.

> The "loop index" isn't an accident of the way Python happens to implement
> "for" today, it's the very basis of Python's thing.__getitem__(i)/IndexError
> iteration protocol.  Exposing it is natural, because *it* is natural.

I don't think of iterators as indexing in terms of numbers. Otherwise I
could do this:

>>> a={0:"zero",1:"one",2:"two",3:"three"}
>>> for i in a:
...     print i
...

So from a Python user's point of view, for-looping has nothing to do
with integers. From a Python class/module creator's point of view it
does have to do with integers. I wouldn't be either surprised or
disappointed if that changed one day.

> Sorry, but seq.keys() just makes me squirm.  It's a little step down the
> Lispish path of making everything look the same.  I don't want to see
> float.write() either <wink>.

You'll have to explain your squeamishness better if you expect us to
channel you in the future. Why do I use the same syntax for indexing
sequences and dictionaries and for deleting sequence and dictionary
items? Is the rule: "syntax can work across types but method names
should never be shared"?

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html




From paul@prescod.net  Thu Aug 17 13:58:00 2000
From: paul@prescod.net (Paul Prescod)
Date: Thu, 17 Aug 2000 08:58:00 -0400
Subject: [Python-Dev] Winreg update
References: <3993FEC7.4E38B4F1@prescod.net> <045901c00414$27a67010$8119fea9@neil>
Message-ID: <399BE158.C2216D34@prescod.net>

Neil Hodgson wrote:
> 
> ...
> 
>    The registry is just not important enough to have this much attention or
> work.

I remain unreconstructed. My logic is as follows:

 * The registry is important enough to be in the standard library ...
unlike, let's say, functions to operate the Remote Access Service.

 * The registry is important enough that the interface to it is
documented (partially)

 * Therefore, the registry is important enough to have a decent API with
complete documentation.

You know the old adage: "anything worth doing..."

If the registry is just supposed to expose one or two functions for
distutils then it could expose one or two functions for distutils, be
called _distreg and be undocumented and formally unsupported.

>    The Microsoft.Win32.Registry* API appears to be a hacky legacy API to me.
> Its there for compatibility during the transition to the
> System.Configuration API. Read the blurb for ConfigManager to understand the
> features of System.Configuration. Its all based on XML files. What a
> surprise.

Nobody on Windows is going to migrate to XML configuration files this
year or next year. The change-over is going to be too difficult.
Predicting Microsoft configuration ideology in 2002 is highly risky. If
we need to do the registry today then we can do the registry right
today.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html




From skip@mojam.com (Skip Montanaro)  Thu Aug 17 13:50:28 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Thu, 17 Aug 2000 07:50:28 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>
References: <14747.28372.771170.783868@cj42289-a.reston1.va.home.com>
 <LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>
Message-ID: <14747.57236.264324.165612@beluga.mojam.com>

    Tim> Oddly enough, I don't: commonprefix worked exactly as documented
    Tim> for at least 6 years and 5 months (which is when CVS shows Guido
    Tim> checking in ntpath.py with the character-based functionality), and
    Tim> got out of synch with the docs about 5 weeks ago when Skip changed
    Tim> to this other algorithm.  Since the docs *did* match the code,
    Tim> there's no reason to believe the original author was confused, and
    Tim> no reason to believe users aren't relying on it (they've had over 6
    Tim> years to gripe <wink>).

I don't see that the fact that a bug wasn't noticed for a long time is any
reason not to fix it.  Guido was also involved in the repair of the bug, and
had no objections to the fix I eventually arrived at.  Also, when I
announced my original patch the subject of the message was

    patch for os.path.commonprefix (changes semantics - pls review)

In the body of the message I said

    Since my patch changes the semantics of the function, I submitted a
    patch via SF that implements what I believe to be the correct behavior
    instead of just checking in the change, so people could comment on it.

I don't think I could have done much more to alert people to the change than
I did.  I didn't expect the patch to go into 1.6.  (Did it?  It shouldn't
have.)  I see nothing wrong with correcting the semantics of a function that
is broken when we increment the major version number of the code.

    Tim> I appreciate that some other behavior may be more useful more
    Tim> often, but if you can ever agree on what that is across platforms,
    Tim> it should be spelled via a new function name ("commonpathprefix"
    Tim> comes to mind), or optional flag (defaulting to "old behavior") on
    Tim> commonprefix (yuck!).  BTW, the presence or absence of a trailing
    Tim> path separator makes a *big* difference to many cmds on Windows,
    Tim> and you can't tell me nobody isn't currently doing e.g.

    Tim>     commonprefix(["blah.o", "blah", "blah.cpp"])

    Tim> on Unix either.

Fine.  Let's preserve the broken implementation and not break any broken
usage.  Switch it back then.

Taking a look at the copious documentation for posixpath.commonprefix:

    Return the longest string that is a prefix of all strings in
    list.  If list is empty, return the empty string ''.

I see no mention of anything in this short bit of documentation taken
completely out of context that suggests that posixpath.commonprefix has
anything to do with paths, so maybe we should move it to some other module
that has no directory path implications.  That way nobody can make the
mistake of trying to assume it operates on paths.  Perhaps string?  Oh,
that's deprecated.  Maybe we should undeprecate it or make commonprefix a
string method.  Maybe I'll just reopen the patch and assign it to Barry
since he's the string methods guru.

On a more realistic note, perhaps I should submit a patch that corrects the
documentation.

Skip


From skip@mojam.com (Skip Montanaro)  Thu Aug 17 13:19:46 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Thu, 17 Aug 2000 07:19:46 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170453.QAA15394@s454.cosc.canterbury.ac.nz>
References: <14747.27927.170223.873328@beluga.mojam.com>
 <200008170453.QAA15394@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.55394.783997.167234@beluga.mojam.com>

    Greg> To avoid duplication of effort, how about a single function that
    Greg> does both:

    >>> files = ["/home/swen", "/home/swanson", "/home/jules"]
    >>> os.path.factorize(files)
    ("/home", ["swen", "swanson", "jules"])

Since we already have os.path.commonprefix and it's not going away, it
seemed to me that just adding a complementary function to return the
suffixes made sense.  Also, there's nothing in the name factorize that
suggests that it would split the paths at the common prefix.

It could easily be written in terms of the two:

    def factorize(files):
	pfx = os.path.commonprefix(files)
	suffixes = os.path.suffixes(pfx, files)
	return (pfx, suffixes)
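Since os.path.suffixes doesn't actually exist, here is a self-contained
sketch (my own, using only commonprefix) of what such a function could look
like; the truncation back to a separator is needed because commonprefix
works character by character:

```python
import posixpath

def factorize(files):
    """Sketch: split POSIX paths into (common directory prefix, remainders)."""
    prefix = posixpath.commonprefix(files)
    # Cut the character-wise prefix back to a directory boundary, so that
    # ["/home/swen", "/home/swanson"] yields "/home/" rather than "/home/sw".
    prefix = prefix[:prefix.rfind("/") + 1]
    return prefix, [f[len(prefix):] for f in files]

# factorize(["/home/swen", "/home/swanson", "/home/jules"])
# → ("/home/", ["swen", "swanson", "jules"])
```

Keeping the trailing slash on the prefix also makes the remainders come out
as plain relative names, which is what the broken code above expected.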

Skip



From bwarsaw@beopen.com  Thu Aug 17 15:35:03 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 10:35:03 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
References: <14747.22851.266303.28877@anthem.concentric.net>
 <Pine.GSO.4.10.10008170915050.24783-100000@sundial>
 <20000817083023.J376@xs4all.nl>
Message-ID: <14747.63511.725610.771162@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    TW> Agreed. It might be technically unambiguous, but I think it's
    TW> too hard for a *human* to parse this correctly. The '>>'
    TW> version might seem more C++ish and less pythonic, but it also
    TW> stands out a lot more. The 'print from' statement could easily
    TW> (and more consistently, IMHO ;) be written as 'print <<' (not
    TW> that I like the 'print from' idea, though.)

I also played around with trying to get the grammar and parser to
recognize 'print to' and variants, and it seemed difficult and
complicated.  So I'm back to -0 on 'print to' and +1 on 'print >>'.

-Barry


From bwarsaw@beopen.com  Thu Aug 17 15:43:02 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 10:43:02 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
References: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>
 <LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>
 <20000817115416.M376@xs4all.nl>
Message-ID: <14747.63990.296049.566791@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    TW> Really ? Hmmmm...

    TW> [Tim Peters]
    >> Me too!  +1 on changing ">>" to "to" here.  Then we can
    >> introduce

    TW> I guess I missed the sarcasm ;-P

No, Tim just forgot to twist the blue knob while he was pressing the
shiny pedal on Guido's time machine.  I've made the same mistake
myself before -- the VRTM can be as inscrutable as the BDFL himself at
times.  Sadly, changing those opinions now would cause an irreparable
time paradox, the outcome of which would force Python to be called
Bacon and require you to type `albatross' instead of colons to start
every block.

good-thing-tim-had-the-nose-plugs-in-or-Python-would-only-work-on-
19-bit-architectures-ly y'rs,
-Barry


From Vladimir.Marangozov@inrialpes.fr  Thu Aug 17 16:09:44 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 17:09:44 +0200 (CEST)
Subject: [Python-Dev] PyErr_NoMemory
Message-ID: <200008171509.RAA20891@python.inrialpes.fr>

The current PyErr_NoMemory() function reads:

PyObject *
PyErr_NoMemory(void)
{
        /* raise the pre-allocated instance if it still exists */
        if (PyExc_MemoryErrorInst)
                PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
        else
                /* this will probably fail since there's no memory and hee,
                   hee, we have to instantiate this class
                */
                PyErr_SetNone(PyExc_MemoryError);

        return NULL;
}

thus overriding any previous exceptions unconditionally. This is a
problem when the current exception already *is* PyExc_MemoryError,
notably when we have a chain (cascade) of memory errors. It is a
problem because the original memory error and eventually its error
message is lost.

I suggest to make this code look like:

PyObject *
PyErr_NoMemory(void)
{
	if (PyErr_ExceptionMatches(PyExc_MemoryError))
		/* already current */
		return NULL;

        /* raise the pre-allocated instance if it still exists */
        if (PyExc_MemoryErrorInst)
                PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
...


If nobody sees a problem with this, I'm very tempted to check it in.
Any objections?
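
The intent of the guard, restated as a Python-level analogy (the helper
name and shape here are hypothetical; the real change is in C):

```python
# Hypothetical Python-level analogy of the proposed guard: if the
# current exception is already a MemoryError, keep it -- together with
# its message -- instead of replacing it with a fresh, bare instance.
def no_memory(current_exc):
    if isinstance(current_exc, MemoryError):
        return current_exc          # preserve the original error
    return MemoryError()            # otherwise raise a new one
```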

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From gmcm@hypernet.com  Thu Aug 17 16:22:27 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Thu, 17 Aug 2000 11:22:27 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.63990.296049.566791@anthem.concentric.net>
Message-ID: <1245596748-155226852@hypernet.com>

> No, Tim just forgot to twist the blue knob while he was pressing
> the shiny pedal on Guido's time machine.  I've made the same
> mistake myself before -- the VRTM can be as inscrutable as the
> BDFL himself at times.  Sadly, changing those opinions now would
> cause an irreparable time paradox, the outcome of which would
> force Python to be called Bacon and require you to type
> `albatross' instead of colons to start every block.

That accounts for the strange python.ba (mtime 1/1/70) I 
stumbled across this morning:

#!/usr/bin/env bacon
# released to the public domain at least one Tim Peters
import os, sys, string, tempfile
txt = string.replace(open(sys.argv[1]).read(), ':', ' albatross')
fnm = tempfile.mktemp() + '.ba'
open(fnm, 'w').write(txt)
os.system('bacon %s %s' % (fnm, string.join(sys.argv[2:])))



- Gordon


From nowonder@nowonder.de  Thu Aug 17 20:30:13 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Thu, 17 Aug 2000 19:30:13 +0000
Subject: [Python-Dev] PyErr_NoMemory
References: <200008171509.RAA20891@python.inrialpes.fr>
Message-ID: <399C3D45.95ED79D8@nowonder.de>

Vladimir Marangozov wrote:
> 
> If nobody sees a problem with this, I'm very tempted to check it in.
> Any objections?

This change makes sense to me. I can't see any harm in checking
it in.

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From tim_one@email.msn.com  Thu Aug 17 18:58:25 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 13:58:25 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.63990.296049.566791@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFDHAAA.tim_one@email.msn.com>

> >>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:
>
>     TW> Really ? Hmmmm...
>
>     TW> [Tim Peters]
>     >> Me too!  +1 on changing ">>" to "to" here.  Then we can
>     >> introduce
>     TW> I guessed I missed the sarcasm ;-P

[Barry A. Warsaw]
> No, Tim just forgot to twist the blue knob while he was pressing the
> shiny pedal on Guido's time machine.  I've made the same mistake
> myself before -- the VRTM can be as inscrutable as the BDFL himself at
> times.  Sadly, changing those opinions now would cause an irreparable
> time paradox, the outcome of which would force Python to be called
> Bacon and require you to type `albatross' instead of colons to start
> every block.
>
> good-thing-tim-had-the-nose-plugs-in-or-Python-would-only-work-on-
> 19-bit-architectures-ly y'rs,

I have no idea what this is about.  I see an old msg from Barry voting "-1"
on changing ">>" to "to", but don't believe any such suggestion was ever
made.  And I'm sure that had such a suggestion ever been made, it would have
been voted down at once by everyone.

OTOH, there is *some* evidence that an amateur went mucking with the time
machine! No 19-bit architectures, but somewhere in a reality distortion
field around Vancouver, it appears that AIX actually survived long enough to
see the 64-bit world, and that some yahoo vendor decided to make a version
of C where sizeof(void*) > sizeof(long).  There's no way either of those
could have happened naturally.

even-worse-i-woke-up-today-*old*!-ly y'rs  - tim




From trentm@ActiveState.com  Thu Aug 17 19:21:22 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 11:21:22 -0700
Subject: screwin' with the time machine in Canada, eh (was: Re: [Python-Dev] PEP 214, extended print statement)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEFDHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 17, 2000 at 01:58:25PM -0400
References: <14747.63990.296049.566791@anthem.concentric.net> <LNBBLJKPBEHFEDALKOLCGEFDHAAA.tim_one@email.msn.com>
Message-ID: <20000817112122.A27284@ActiveState.com>

On Thu, Aug 17, 2000 at 01:58:25PM -0400, Tim Peters wrote:
> 
> OTOH, there is *some* evidence that an amateur went mucking with the time
> machine! No 19-bit architectures, but somewhere in a reality distortion
> field around Vancouver, it appears that AIX actually survived long enough to
> see the 64-bit world, and that some yahoo vendor decided to make a version
> of C where sizeof(void*) > sizeof(long).  There's no way either of those
> could have happened naturally.
> 

And though this place is supposed to be one of the more successful pot havens
on the planet, I just can't seem to compete with the stuff those "vendors" in
Austin (AIX) and Seattle must have been smokin'.

<puff>-<inhale>-if-i-wasn't-seeing-flying-bunnies-i-would-swear-that-compiler
is-from-SCO-ly-y'rs - trent


> even-worse-i-woke-up-today-*old*!-ly y'rs  - tim

Come on up for a visit and we'll make you feel young again. :)


Trent

-- 
Trent Mick


From akuchlin@mems-exchange.org  Thu Aug 17 21:40:35 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 17 Aug 2000 16:40:35 -0400
Subject: [Python-Dev] Cookie.py module, and Web PEP
Message-ID: <E13PWSp-0006w9-00@kronos.cnri.reston.va.us>

Tim O'Malley finally mailed me the correct URL for the latest version
of the cookie module: http://www.timo-tasi.org/python/Cookie.py 

*However*...  I think the Web support in Python needs more work in
general, and certainly more than can be done for 2.0.  One of my
plans for the not-too-distant future is to start writing a Python/CGI
guide, and the process of writing it is likely to shake out more
ugliness that should be fixed.

I'd like to propose a 'Web Library Enhancement PEP', and offer to
champion and write it.  Its goal would be to identify missing features
and specify them, and list other changes to improve Python as a
Web/CGI language.  Possibly the PEP would also drop
backward-compatibility cruft.

(Times like this I wish the Web-SIG hadn't been killed...)

--amk


From bwarsaw@beopen.com  Thu Aug 17 22:05:10 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 17:05:10 -0400 (EDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us>
Message-ID: <14748.21382.305979.784637@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin@mems-exchange.org> writes:

    AK> Tim O'Malley finally mailed me the correct URL for the latest
    AK> version of the cookie module:
    AK> http://www.timo-tasi.org/python/Cookie.py

    AK> *However*...  I think the Web support in Python needs more
    AK> work in general, and certainly more than can be done for
    AK> 2.0.

I agree, but I still think Cookie.py should go in the stdlib for 2.0.

-Barry


From akuchlin@mems-exchange.org  Thu Aug 17 22:13:52 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 17 Aug 2000 17:13:52 -0400
Subject: [Python-Dev] Cookie.py module, and Web PEP
In-Reply-To: <14748.21382.305979.784637@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 05:05:10PM -0400
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us> <14748.21382.305979.784637@anthem.concentric.net>
Message-ID: <20000817171352.B26730@kronos.cnri.reston.va.us>

On Thu, Aug 17, 2000 at 05:05:10PM -0400, Barry A. Warsaw wrote:
>I agree, but I still think Cookie.py should go in the stdlib for 2.0.

Fine.  Shall I just add it as-is?  (Opinion was generally positive as
I recall, unless the BDFL wants to exercise his veto for some reason.)

--amk


From thomas@xs4all.net  Thu Aug 17 22:19:42 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 23:19:42 +0200
Subject: [Python-Dev] 'import as'
Message-ID: <20000817231942.O376@xs4all.nl>

I have two remaining issues regarding the 'import as' statement, which I'm
just about ready to commit. The first one is documentation: I have
documentation patches, to the ref and the libdis sections, but I can't
really test them :P I *think* they are fine, though, and they aren't really
complicated. Should I upload a patch for them, so Fred or someone else can
look at them, or just check them in ?

The other issue is the change in semantics for 'from-import'. Currently,
'IMPORT_FROM' is a single operation that retrieves a name (possibly '*')
from the module object at TOS, and stores it directly in the local
namespace. This is contrary to 'import <module>', which pushes it onto the
stack and uses a normal STORE operation to store it. It's also necessary for
'from ... import *', which can load any number of objects.

After the patch, 'IMPORT_FROM' is only used to load normal names, and a new
opcode, 'IMPORT_STAR' (with no argument) is used for 'from module import *'.
'IMPORT_FROM' pushes the result on the stack, instead of modifying the local
namespace directly, so that it's possible to store it to a different name.
This also means that a 'global' statement now has effect on objects
'imported from' a module, *except* those imported by '*'.
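
These semantics stuck, so the split is easy to see in later Pythons: the
compiled code for a from-import is IMPORT_NAME / IMPORT_FROM followed by
an ordinary store of the target name (example uses the modern dis module):

```python
import dis
import io

# 'from os import path as p' compiles to IMPORT_NAME / IMPORT_FROM
# followed by an ordinary STORE_NAME of the target name 'p'.
code = compile("from os import path as p", "<example>", "exec")
buf = io.StringIO()
dis.dis(code, file=buf)
listing = buf.getvalue()
print(listing)
```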

I don't think that's a big issue. 'global' is not that heavily used, and old
code mixing 'import from' and 'global' statements on the same identifier
would not have been doing what the programmer intended. However, if it *is*
a big issue, I can revert to an older version of the patch, that added a new
bytecode to handle 'from x import y as z', and leave the bytecode for the
currently valid cases unchanged. That would mean that only the '... as z'
would be affected by 'global' statements.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From trentm@ActiveState.com  Thu Aug 17 22:22:07 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 14:22:07 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 16, 2000 at 11:34:12PM -0400
References: <20000816172425.A32338@ActiveState.com> <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>
Message-ID: <20000817142207.A5592@ActiveState.com>

On Wed, Aug 16, 2000 at 11:34:12PM -0400, Tim Peters wrote:
> [Trent Mick]
> > I am porting Python to Monterey (64-bit AIX) and have a small
> > (hopefully) question about POSIX threads.
> 
> POSIX threads. "small question".  HAHAHAHAHAHA.  Thanks, that felt good
> <wink>.

Happy to provide you with cheer. <grumble>



> > Does the POSIX threads spec specify a C type or minimum size for
> > pthread_t?
> 
> or user-space arrays of structs.  So I think it's *safe* to assume it will
> always fit in an integral type large enough to hold a pointer, but not
> guaranteed.  Plain "long" certainly isn't safe in theory.

Not for pthread ports to Win64 anyway. But that is not my concern right now.
I'll let the pthreads-on-Windows fans worry about that when the time comes.


> > this up. On Linux (mine at least):
> >   /usr/include/bits/pthreadtypes.h:120:typedef unsigned long int
> > pthread_t;
> 
> And this is a 32- or 64-bit Linux?

That was 32-bit Linux. My 64-bit Linux box is down right now, I can tell
later if you really want to know.


> > WHAT IS UP WITH THAT return STATEMENT?
> >   return (long) *(long *) &threadid;
> 
<snip>
> 
> So, here's the scoop:
> 
<snip>

Thanks for trolling the cvs logs, Tim!

> 
> So one of two things can be done:
> 
> 1. Bite the bullet and do it correctly.  For example, maintain a static
>    dict mapping the native pthread_self() return value to Python ints,
>    and return the latter as Python's thread.get_ident() value.  Much
>    better would to implement a x-platform thread-local storage
>    abstraction, and use that to hold a Python-int ident value.
> 
> 2. Continue in the tradition already established <wink>, and #ifdef the
>    snot out of it for Monterey.
> 
> In favor of #2, the code is already so hosed that making it hosier won't be
> a significant relative increase in its inherent hosiness.
> 
> spoken-like-a-true-hoser-ly y'rs  - tim
> 
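
Option 1, sketched in Python for concreteness (everything here is
hypothetical; the real fix would live in the C thread support code):
keep a table mapping the opaque native id to a small int.

```python
# Hypothetical sketch of Tim's option 1: map opaque native thread ids
# (which may be wider than a C long) to small sequential ints, so the
# value handed back by get_ident() always fits a Python int.
_ident_map = {}

def get_ident(native_id):
    """Return a stable small-int ident for an opaque native thread id."""
    if native_id not in _ident_map:
        _ident_map[native_id] = len(_ident_map) + 1
    return _ident_map[native_id]
```

(A real implementation would also need locking around the table, and
ideally per-thread storage so dead threads can be forgotten.)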

I'm all for being a hoser then. #ifdef's a-comin' down the pipe. One thing,
the only #define that I know I have a handle on for Monterey is '_LP64'. Do
you have an objection to that (seeing as it is kind of misleading)? I will
accompany it with an explanatory comment, of course.


take-off-you-hoser-ly y'rs - wannabe Bob & Doug fan

-- 
Trent Mick
TrentM@ActiveState.com


From fdrake@beopen.com  Thu Aug 17 22:35:14 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 17:35:14 -0400 (EDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817231942.O376@xs4all.nl>
References: <20000817231942.O376@xs4all.nl>
Message-ID: <14748.23186.372772.48426@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > really test them :P I *think* they are fine, though, and they aren't really
 > complicated. Should I upload a patch for them, so Fred or someone else can
 > look at them, or just check them in ?

  Just check them in; I'll catch problems before anyone else tries to
format the stuff at any rate.
  With regard to your semantics question, I think your proposed
solution is fine.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From thomas@xs4all.net  Thu Aug 17 22:38:21 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 23:38:21 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817231942.O376@xs4all.nl>; from thomas@xs4all.net on Thu, Aug 17, 2000 at 11:19:42PM +0200
References: <20000817231942.O376@xs4all.nl>
Message-ID: <20000817233821.P376@xs4all.nl>

On Thu, Aug 17, 2000 at 11:19:42PM +0200, Thomas Wouters wrote:

> This also means that a 'global' statement now has effect on objects
> 'imported from' a module, *except* those imported by '*'.

And while I was checking my documentation patches, I found this:

Names bound by \keyword{import} statements may not occur in
\keyword{global} statements in the same scope.
\stindex{global}

But there doesn't seem to be anything to prevent it ! On my RedHat supplied
Python 1.5.2:

>>> def test():
...     global sys
...     import sys
... 
>>> test()
>>> sys
<module 'sys' (built-in)>

And on a few weeks old CVS Python:

>>> def test():
...     global sys
...     import sys
...
>>> test()
>>> sys
<module 'sys' (built-in)>

Also, mixing 'global' and 'from-import' wasn't illegal, it was just
ineffective. (That is, it didn't make the variable 'global', but it didn't
raise an exception either!)
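
(As it happens, later Pythons settled on exactly these semantics:
'global' does affect a from-import binding. A quick check, runnable
today:)

```python
import os

def load():
    # The 'global' declaration makes the from-import bind the name at
    # module level instead of in the function's local namespace.
    global imported_path
    from os import path as imported_path

load()
print(imported_path is os.path)  # True
```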

How about making 'from module import *' a special case in this regard, and
letting 'global' operate fine on normal 'import' and 'from-import'
statements ? I can definately see a use for it, anyway. Is this workable
(and relevant) for JPython / #Py ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From trentm@ActiveState.com  Thu Aug 17 22:41:04 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 14:41:04 -0700
Subject: [Python-Dev] [Fwd: segfault in sre on 64-bit plats]
In-Reply-To: <399B3D36.6921271@per.dem.csiro.au>; from m.favas@per.dem.csiro.au on Thu, Aug 17, 2000 at 09:17:42AM +0800
References: <399B3D36.6921271@per.dem.csiro.au>
Message-ID: <20000817144104.B7658@ActiveState.com>

On Thu, Aug 17, 2000 at 09:17:42AM +0800, Mark Favas wrote:
> [Trent]
> > This test on Win32 and Linux32 hits the recursion limit check of 10000 in
> > SRE_MATCH(). However, on Linux64 the segfault occurs at a recursion depth of
> > 7500. I don't want to just willy-nilly drop the recursion limit down to make
> > the problem go away.
> > 
> 
> Sorry for the delay - yes, I had these segfaults due to exceeding the
> stack size on Tru64 Unix (which, by default, is 2048 kbytes) before
> Fredrik introduced the recursion limit of 10000 in _sre.c. You'd expect
> a 64-bit OS to use a few more bytes of the stack when handling recursive
> calls, but your 7500 down from 10000 sounds a bit much - unless the

Actually, with pointers being twice the size, the stack will presumably get
consumed more quickly (right?), so all other things being equal the earlier
stack overflow is expected.

> stack size limit you're using on Linux64 is smaller than that for
> Linux32 - what are they?

------------------- snip --------- snip ----------------------
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    struct rlimit lims;
    if (getrlimit(RLIMIT_STACK, &lims) != 0) {
        printf("error in getrlimit\n");
        exit(1);
    }
    /* rlim_cur/rlim_max are rlim_t, not int; cast for a portable format */
    printf("cur stack limit = %ld, max stack limit = %ld\n",
        (long)lims.rlim_cur, (long)lims.rlim_max);
    return 0;
}
------------------- snip --------- snip ----------------------

On Linux32:

    cur stack limit = 8388608, max stack limit = 2147483647

On Linux64:

    cur stack limit = 8388608, max stack limit = -1
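
The same numbers are reachable from Python via the resource module
(Unix only), which sidesteps the C format-string issue entirely:

```python
import resource

# RLIMIT_STACK yields (soft, hard); resource.RLIM_INFINITY (-1 on
# Linux) is what shows up as "max stack limit = -1" above.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("cur stack limit = %d, max stack limit = %d" % (soft, hard))
```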


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From cgw@fnal.gov  Thu Aug 17 22:43:38 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 16:43:38 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <20000817125903.2C29E1D0F5@dinsdale.python.org>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
Message-ID: <14748.23690.632808.944375@buffalo.fnal.gov>

This has probably been noted by somebody else already - somehow a
config.h showed up in the Include directory when I did a cvs update
today.  I assume this is an error.  It certainly keeps Python from
building on my system!


From thomas@xs4all.net  Thu Aug 17 22:46:07 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 23:46:07 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817233821.P376@xs4all.nl>; from thomas@xs4all.net on Thu, Aug 17, 2000 at 11:38:21PM +0200
References: <20000817231942.O376@xs4all.nl> <20000817233821.P376@xs4all.nl>
Message-ID: <20000817234607.Q376@xs4all.nl>

On Thu, Aug 17, 2000 at 11:38:21PM +0200, Thomas Wouters wrote:
> On Thu, Aug 17, 2000 at 11:19:42PM +0200, Thomas Wouters wrote:
> 
> > This also means that a 'global' statement now has effect on objects
> > 'imported from' a module, *except* those imported by '*'.
> 
> And while I was checking my documentation patches, I found this:

> Names bound by \keyword{import} statements may not occur in
> \keyword{global} statements in the same scope.
> \stindex{global}

And about five lines lower, I saw this:

(The current implementation does not enforce the latter two
restrictions, but programs should not abuse this freedom, as future
implementations may enforce them or silently change the meaning of the
program.)

My only excuse is that all that TeX stuff confuzzles my eyes ;) In any case,
my point still stands: 1) can we change this behaviour even if it's
documented to be impossible, and 2) should it be documented differently,
allowing mixing of 'global' and 'import' ?

Multip-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Thu Aug 17 22:52:28 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 17:52:28 -0400 (EDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.23690.632808.944375@buffalo.fnal.gov>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
 <14748.23690.632808.944375@buffalo.fnal.gov>
Message-ID: <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>

Charles G Waldman writes:
 > This has probably been noted by somebody else already - somehow a
 > config.h showed up in the Include directory when I did a cvs update
 > today.  I assume this is an error.  It certainly keeps Python from
 > building on my system!

  This doesn't appear to be in CVS.  If you delete the file and then do
a CVS update, does it reappear?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From cgw@fnal.gov  Thu Aug 17 22:56:55 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 16:56:55 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
 <14748.23690.632808.944375@buffalo.fnal.gov>
 <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
Message-ID: <14748.24487.903334.663705@buffalo.fnal.gov>


And it's not that sticky date, either (no idea how that got set!)

buffalo:Include$  cvs update -A
cvs server: Updating .
U config.h

buffalo:Include$ cvs status config.h 
===================================================================
File: config.h          Status: Up-to-date

   Working revision:    2.1
   Repository revision: 2.1     /cvsroot/python/python/dist/src/Include/Attic/config.h,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)



From cgw@fnal.gov  Thu Aug 17 22:58:40 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 16:58:40 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
 <14748.23690.632808.944375@buffalo.fnal.gov>
 <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
Message-ID: <14748.24592.448009.515511@buffalo.fnal.gov>


Fred L. Drake, Jr. writes:
 > 
 >   This doesn't appear to be in CVS.  If you delete the file and then do
 > a CVS update, does it reappear?
 > 

Yes.

buffalo:src$ pwd
/usr/local/src/Python-CVS/python/dist/src

buffalo:src$ cd Include/

buffalo:Include$ cvs update
cvs server: Updating .
U config.h

buffalo:Include$ cvs status config.h
===================================================================
File: config.h          Status: Up-to-date

   Working revision:    2.1
   Repository revision: 2.1     /cvsroot/python/python/dist/src/Include/Attic/config.h,v
   Sticky Tag:          (none)
   Sticky Date:         2000.08.17.05.00.00
   Sticky Options:      (none)



From fdrake@beopen.com  Thu Aug 17 23:02:39 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 18:02:39 -0400 (EDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24487.903334.663705@buffalo.fnal.gov>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
 <14748.23690.632808.944375@buffalo.fnal.gov>
 <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
 <14748.24487.903334.663705@buffalo.fnal.gov>
Message-ID: <14748.24831.313742.340896@cj42289-a.reston1.va.home.com>

Charles G Waldman writes:
 > And it's not that sticky date, either (no idea how that got set!)

  -sigh-  Is there an entry for config.h in the CVS/entries file?  If
so, surgically remove it, then delete the config.h, then try the
update again.
  *This* is getting mysterious.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From cgw@fnal.gov  Thu Aug 17 23:07:28 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 17:07:28 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24831.313742.340896@cj42289-a.reston1.va.home.com>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
 <14748.23690.632808.944375@buffalo.fnal.gov>
 <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
 <14748.24487.903334.663705@buffalo.fnal.gov>
 <14748.24831.313742.340896@cj42289-a.reston1.va.home.com>
Message-ID: <14748.25120.807735.628798@buffalo.fnal.gov>

Fred L. Drake, Jr. writes:

 >   -sigh-  Is there an entry for config.h in the CVS/entries file?  If
 > so, surgically remove it, then delete the config.h, then try the
 > update again.

Yes, this entry was present, I removed it as you suggested.

Now, when I do cvs update the config.h doesn't reappear, but I still
see "needs checkout" if I ask for cvs status:


buffalo:Include$ cvs status config.h
===================================================================
File: no file config.h          Status: Needs Checkout

   Working revision:    No entry for config.h
   Repository revision: 2.1     /cvsroot/python/python/dist/src/Include/Attic/config.h,v

I keep my local CVS tree updated daily, I never use any kind of sticky
tags, and haven't seen this sort of problem at all, up until today.
Today I also noticed the CVS server responding very slowly, so I
suspect that something may be wrong with the server.




From fdrake@beopen.com  Thu Aug 17 23:13:28 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 18:13:28 -0400 (EDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.25120.807735.628798@buffalo.fnal.gov>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
 <14748.23690.632808.944375@buffalo.fnal.gov>
 <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
 <14748.24487.903334.663705@buffalo.fnal.gov>
 <14748.24831.313742.340896@cj42289-a.reston1.va.home.com>
 <14748.25120.807735.628798@buffalo.fnal.gov>
Message-ID: <14748.25480.976849.825016@cj42289-a.reston1.va.home.com>

Charles G Waldman writes:
 > Now, when I do cvs update the config.h doesn't reappear, but I still
 > see "needs checkout" if I ask for cvs status:
[...output elided...]

  I get exactly the same output from "cvs status", and "cvs update"
doesn't produce the file.
  Now, if I say "cvs update config.h", it shows up and doesn't get
deleted by "cvs update", but after removing the line from CVS/Entries
and removing the file, it doesn't reappear.  So you're probably set
for now.

 > I keep my local CVS tree updated daily, I never use any kind of sticky
 > tags, and haven't seen this sort of problem at all, up until today.
 > Today I also noticed the CVS server responding very slowly, so I
 > suspect that something may be wrong with the server.

  This is weird, but that doesn't sound like the problem; the SF
servers can be very slow some days, but we suspect it's just load.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From trentm@ActiveState.com  Thu Aug 17 23:15:08 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 15:15:08 -0700
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000817085541.K376@xs4all.nl>; from thomas@xs4all.net on Thu, Aug 17, 2000 at 08:55:41AM +0200
References: <20000816165542.D29260@ActiveState.com> <20000817085541.K376@xs4all.nl>
Message-ID: <20000817151508.C7658@ActiveState.com>

On Thu, Aug 17, 2000 at 08:55:41AM +0200, Thomas Wouters wrote:
> On Wed, Aug 16, 2000 at 04:55:42PM -0700, Trent Mick wrote:
> 
> > I am currently trying to port Python to Monterey (64-bit AIX) and I need
> > to add a couple of Monterey specific options to CFLAGS and LDFLAGS (or to
> > whatever appropriate variables for all 'cc' and 'ld' invocations) but it
> > is not obvious *at all* how to do that in configure.in. Can anybody helpme
> > on that?
> 
> You'll have to write a shell 'case' for AIX Monterey, checking to make sure
> it is monterey, and setting LDFLAGS accordingly. If you look around in
> configure.in, you'll see a few other 'special cases', all to tune the
> way the compiler is called. Depending on what you need to do to detect
> monterey, you could fit it in one of those. Just search for 'Linux' or
> 'bsdos' to find a couple of those cases.

Right, thanks. I was looking at first to modify CFLAGS and LDFLAGS (as I
thought would be cleaner) but I have got it working by just modifying CC and
LINKCC instead (following the crowd on that one).



[Trent blames placing *.a on the cc command line for his problems and Thomas
and Barry, etc. tell Trent that that cannot be]

Okay, I don't know what I was on. I think I was flailing for things to blame.
I have got it working with simply listing the .a on the command line.



Thanks,
Trent

-- 
Trent Mick
TrentM@ActiveState.com


From bwarsaw@beopen.com  Thu Aug 17 23:26:42 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 18:26:42 -0400 (EDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us>
 <14748.21382.305979.784637@anthem.concentric.net>
 <20000817171352.B26730@kronos.cnri.reston.va.us>
Message-ID: <14748.26274.949428.733639@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin@mems-exchange.org> writes:

    AK> Fine.  Shall I just add it as-is?  (Opinion was generally
    AK> positive as I recall, unless the BDFL wants to exercise his
    AK> veto for some reason.)

Could you check and see if there are any substantial differences
between the version you've got and the version in the Mailman tree?
If there are none, then I'm +1.

Let me know if you want me to email it to you.
-Barry


From MarkH@ActiveState.com  Fri Aug 18 00:07:38 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 18 Aug 2000 09:07:38 +1000
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.57236.264324.165612@beluga.mojam.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEFODFAA.MarkH@ActiveState.com>

> I don't see that because a bug wasn't noticed for a long time there was
> any reason not to fix it.  Guido was also involved in the repair of
> the bug, and

I think most people agreed that the new semantics were preferable to the
old.  I believe Tim was just having a dig at the fact the documentation was
not changed, and also wearing his grumpy-conservative hat (well, it is
election fever in the US!)

But remember - the original question was if the new semantics should return
the trailing "\\" as part of the common prefix, due to the demonstrated
fact that at least _some_ code out there depends on it.
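
For concreteness, the purely string-based behaviour under discussion,
which is also what later Pythons kept for os.path.commonprefix:

```python
import os.path

# commonprefix compares character by character, so it can stop in the
# middle of a path component rather than on a separator boundary:
print(os.path.commonprefix(["/usr/lib", "/usr/local"]))   # '/usr/l'
print(os.path.commonprefix([r"c:\foo\bar", r"c:\foo\bax"]))
```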

Tim wanted a bug filed, but a few other people have chimed in saying
nothing needs fixing.

So what is it?  Do I file the bug as Tim requested?   Maybe I should just
do it, and assign the bug to Guido - at least that way he can make a quick
decision?

At-least-my-code-works-again ly,

Mark.



From akuchlin@mems-exchange.org  Fri Aug 18 00:27:06 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 17 Aug 2000 19:27:06 -0400
Subject: [Python-Dev] Cookie.py module, and Web PEP
In-Reply-To: <14748.26274.949428.733639@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 06:26:42PM -0400
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us> <14748.21382.305979.784637@anthem.concentric.net> <20000817171352.B26730@kronos.cnri.reston.va.us> <14748.26274.949428.733639@anthem.concentric.net>
Message-ID: <20000817192706.A28225@newcnri.cnri.reston.va.us>

On Thu, Aug 17, 2000 at 06:26:42PM -0400, Barry A. Warsaw wrote:
>Could you check and see if there are any substantial differences
>between the version you've got and the version in the Mailman tree?
>If there are none, then I'm +1.

If you're referring to misc/Cookie.py in Mailman, the two files are
vastly different (though not necessarily incompatible).  The Mailman
version derives from a version of Cookie.py dating from 1998,
according to the CVS tree.  Timo's current version has three different
flavours of cookie, the Mailman version doesn't, so you wind up with a
1000-line long diff between the two.

--amk



From tim_one@email.msn.com  Fri Aug 18 00:29:16 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 19:29:16 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEFODFAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEGGHAAA.tim_one@email.msn.com>

[Skip, as quoted by MarkH]
> I don't see why a bug going unnoticed for a long
> time is any reason not to fix it.  Guido was also involved in the
> repair of the bug, and

[MarkH]
> I think most people agreed that the new semantics were preferable to the
> old.  I believe Tim was just having a dig at the fact the  documentation
> was not changed, and also wearing his grumpy-conservative hat (well, it is
> election fever in the US!)

Not at all, I meant it.  When the code and the docs have matched for more
than 6 years, there is no bug by any rational definition of the term, and
you can be certain that changing the library semantics then will break
existing code.  Presuming to change it anyway is developer arrogance of the
worst kind, no matter how many developers cheer it on.  The docs are a
contract, and if they were telling the truth, we have a responsibility to
stand by them -- and whether we like it or not (granted, I am overly
sensitive to contractual matters these days <0.3 wink>).

The principled solution is to put the new functionality in a new function.
Then nobody's code breaks, no user feels abused, and everyone gets what they
want.  If you despise what the old function did, that's fine too, deprecate
it -- but don't screw people who were using it happily for what it was
documented to do.

> But remember - the original question was if the new semantics
> should return the trailing "\\" as part of the common prefix, due
> to the demonstrated fact that at least _some_ code out there
> depends on it.
>
> Tim wanted a bug filed, but a few other people have chimed in saying
> nothing needs fixing.
>
> So what is it?  Do I file the bug as Tim requested?   Maybe I should just
> do it, and assign the bug to Guido - at least that way he can make a quick
> decision?

By my count, Unix and Windows people have each voted for both answers, and
the Mac contingent is silently laughing <wink>.

hell-stick-in-fifty-new-functions-if-that's-what-it-takes-but-leave-
    the-old-one-alone-ly y'rs  - tim




From gstein@lyra.org  Fri Aug 18 00:41:37 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 16:41:37 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 16, 2000 at 11:34:12PM -0400
References: <20000816172425.A32338@ActiveState.com> <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>
Message-ID: <20000817164137.U17689@lyra.org>

On Wed, Aug 16, 2000 at 11:34:12PM -0400, Tim Peters wrote:
>...
> So one of two things can be done:
> 
> 1. Bite the bullet and do it correctly.  For example, maintain a static
>    dict mapping the native pthread_self() return value to Python ints,
>    and return the latter as Python's thread.get_ident() value.  Much
>    better would be to implement an x-platform thread-local storage
>    abstraction, and use that to hold a Python-int ident value.
> 
> 2. Continue in the tradition already established <wink>, and #ifdef the
>    snot out of it for Monterey.
> 
> In favor of #2, the code is already so hosed that making it hosier won't be
> a significant relative increase in its inherent hosiness.

The x-plat thread-local storage idea is the best thing to do. That will be
needed for some of the free-threading work in Python.

IOW, an x-plat TLS is going to be done at some point. If you need it now,
then please do it now. That will help us immeasurably in the long run.
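[Editor's note: a sketch of Tim's option 1 above -- a table mapping the
native pthread_self() value to small, stable Python ints.  Modeled here in
Python rather than the C it would actually need; the names are invented,
and real code would also need locking and cleanup when threads exit.]

```python
# Model of option 1: map arbitrary-width native thread ids to small,
# stable ints, so get_ident() always fits in a C long.
_ident_map = {}      # native pthread_self() value -> small int
_next_ident = [0]

def get_ident(native_id):
    """Return a small, stable int identifying the thread with native_id."""
    if native_id not in _ident_map:
        _ident_map[native_id] = _next_ident[0]
        _next_ident[0] += 1
    return _ident_map[native_id]
```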

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From MarkH@ActiveState.com  Fri Aug 18 00:59:18 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 18 Aug 2000 09:59:18 +1000
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817164137.U17689@lyra.org>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>

> IOW, an x-plat TLS is going to be done at some point. If you need it now,
> then please do it now. That will help us immeasurably in the long run.

I just discovered the TLS code in the Mozilla source tree.  This could be a
good place to start.

The definitions are in mozilla/nsprpub/pr/include/prthread.h, and I include
some of this file below...  I can confirm this code works _with_ Python -
but I have no idea how hard it would be to distill it _into_ Python!

Then-we-just-need-Tim-to-check-the-license-for-us<wink> ly,

Mark.

/*
 * The contents of this file are subject to the Netscape Public License
 * Version 1.1 (the "NPL"); you may not use this file except in
 * compliance with the NPL.  You may obtain a copy of the NPL at
 * http://www.mozilla.org/NPL/
 *

[MarkH - it looks good to me - very open license]
...

/*
** This routine returns a new index for the per-thread-private data table.
** The index is visible to all threads within a process. This index can
** be used with the PR_SetThreadPrivate() and PR_GetThreadPrivate()
** routines to save and retrieve data associated with the index for a
** thread.
**
** Each index is associated with a destructor function ('dtor'). The
** function may be specified as NULL when the index is created. If it is
** not NULL, the function will be called when:
**      - the thread exits and the private data for the associated index
**        is not NULL,
**      - new thread private data is set and the current private data is
**        not NULL.
**
** The index independently maintains specific values for each binding
** thread. A thread can only get access to its own thread-specific-data.
**
** Upon a new index return, the value associated with the index for all
** threads is NULL, and upon thread creation the value associated with
** all indices for that thread is NULL.
**
** Returns PR_FAILURE if the total number of indices will exceed the
** maximum allowed.
*/
typedef void (PR_CALLBACK *PRThreadPrivateDTOR)(void *priv);

NSPR_API(PRStatus) PR_NewThreadPrivateIndex(
    PRUintn *newIndex, PRThreadPrivateDTOR destructor);

/*
** Define some per-thread-private data.
**     "tpdIndex" is an index into the per-thread private data table
**     "priv" is the per-thread-private data
**
** If the per-thread private data table has a previously registered
** destructor function and a non-NULL per-thread-private data value,
** the destructor function is invoked.
**
** This can return PR_FAILURE if the index is invalid.
*/
NSPR_API(PRStatus) PR_SetThreadPrivate(PRUintn tpdIndex, void *priv);

/*
** Recover the per-thread-private data for the current thread. "tpdIndex"
** is the index into the per-thread private data table.
**
** The returned value may be NULL which is indistinguishable from an error
** condition.
**
** A thread can only get access to its own thread-specific-data.
*/
NSPR_API(void*) PR_GetThreadPrivate(PRUintn tpdIndex);



From gstein@lyra.org  Fri Aug 18 01:19:17 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 17:19:17 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 18, 2000 at 09:59:18AM +1000
References: <20000817164137.U17689@lyra.org> <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
Message-ID: <20000817171917.V17689@lyra.org>

On Fri, Aug 18, 2000 at 09:59:18AM +1000, Mark Hammond wrote:
> > IOW, an x-plat TLS is going to be done at some point. If you need it now,
> > then please do it now. That will help us immeasurably in the long run.
> 
> I just discovered the TLS code in the Mozilla source tree.  This could be a
> good place to start.
> 
> The definitions are in mozilla/nsprpub/pr/include/prthread.h, and I include
> some of this file below...  I can confirm this code works _with_ Python -
> but I have no idea how hard it would be to distill it _into_ Python!
> 
> Then-we-just-need-Tim-to-check-the-license-for-us<wink> ly,

The NPL is not compatible with the Python license. While we could use their
API as a guide for our own code, we cannot use their code.


The real question is whether somebody has the time/inclination to sit down
now and write an x-plat TLS for Python. Always the problem :-)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From tim_one@email.msn.com  Fri Aug 18 01:18:08 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 20:18:08 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGIHAAA.tim_one@email.msn.com>

[MarkH]
> I just discovered the TLS code in the Mozilla source tree.  This
> could be a good place to start.
> ...
> Then-we-just-need-Tim-to-check-the-license-for-us<wink> ly,

Jesus, Mark, I haven't even been able to figure what the license means by
"you" yet:

    1. Definitions
    ...
    1.12. "You'' (or "Your") means an individual or a legal entity
    exercising rights under, and complying with all of the terms of,
    this License or a future version of this License issued under
    Section 6.1. For legal entities, "You'' includes any entity which
    controls, is controlled by, or is under common control with You.
    For purposes of this definition, "control'' means (a) the power,
    direct or indirect, to cause the direction or management of such
    entity, whether by contract or otherwise, or (b) ownership of more
    than fifty percent (50%) of the outstanding shares or beneficial
    ownership of such entity.

at-least-they-left-little-doubt-about-the-meaning-of-"fifty-percent"-ly
    y'rs  - tim (tee eye em, and neither you nor You.  I think.)




From bwarsaw@beopen.com  Fri Aug 18 01:18:34 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 20:18:34 -0400 (EDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us>
 <14748.21382.305979.784637@anthem.concentric.net>
 <20000817171352.B26730@kronos.cnri.reston.va.us>
 <14748.26274.949428.733639@anthem.concentric.net>
 <20000817192706.A28225@newcnri.cnri.reston.va.us>
Message-ID: <14748.32986.835733.255687@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin@cnri.reston.va.us> writes:

    >> Could you check and see if there are any substantial
    >> differences between the version you've got and the version in
    >> the Mailman tree?  If there are none, then I'm +1.

    AK> If you're referring to misc/Cookie.py in Mailman,

That's the one.
    
    AK> the two files are vastly different (though not necessarily
    AK> incompatible).  The Mailman version derives from a version of
    AK> Cookie.py dating from 1998, according to the CVS tree.  Timo's
    AK> current version has three different flavours of cookie, the
    AK> Mailman version doesn't, so you wind up with a 1000-line long
    AK> diff between the two.

Okay, don't sweat it.  If the new version makes sense to you, I'll
just be sure to make any Mailman updates that are necessary.  I'll
take a look once it's been checked in.

-Barry


From tim_one@email.msn.com  Fri Aug 18 01:24:04 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 20:24:04 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817171917.V17689@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEGIHAAA.tim_one@email.msn.com>

[Greg Stein]
> The NPL is not compatible with the Python license.

Or human comprehensibility either, far as I can tell.

> While we could use their API as a guide for our own code, we cannot
> use their code.
>
> The real question is whether somebody has the time/inclination to sit
> down now and write an x-plat TLS for Python. Always the problem :-)

The answer to Trent's original question is determined by whether he wants to
get a Monterey hack in as a bugfix for 2.0, or can wait a few years <0.9
wink> (the 2.0 feature set is frozen now).

If somebody wants to *buy* the time/inclination to get x-plat TLS, I'm sure
BeOpen or ActiveState would be keen to cash the check.  Otherwise ... don't
know.

all-it-takes-is-50-people-to-write-50-one-platform-packages-and-
    then-50-years-to-iron-out-their-differences-ly y'rs  - tim




From bwarsaw@beopen.com  Fri Aug 18 01:26:10 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 20:26:10 -0400 (EDT)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
References: <20000817164137.U17689@lyra.org>
 <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
 <20000817171917.V17689@lyra.org>
Message-ID: <14748.33442.7609.588513@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein@lyra.org> writes:

    GS> The NPL is not compatible with the Python license. While we
    GS> could use their API as a guide for our own code, we cannot use
    GS> their code.

>>>>> "TP" == Tim Peters <tim_one@email.msn.com> writes:

    TP> Jesus, Mark, I haven't even been able to figure what the
    TP> license means by "you" yet:

Is the NPL compatible with /anything/? :)

-Barry


From trentm@ActiveState.com  Fri Aug 18 01:41:37 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 17:41:37 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <14748.33442.7609.588513@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 08:26:10PM -0400
References: <20000817164137.U17689@lyra.org> <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com> <20000817171917.V17689@lyra.org> <14748.33442.7609.588513@anthem.concentric.net>
Message-ID: <20000817174137.B18811@ActiveState.com>

On Thu, Aug 17, 2000 at 08:26:10PM -0400, Barry A. Warsaw wrote:
> 
> Is the NPL compatible with /anything/? :)
> 


Mozilla will be dual-licensed with the GPL. But you already read that.

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From gstein@lyra.org  Fri Aug 18 01:55:56 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 17:55:56 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <14748.33442.7609.588513@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 08:26:10PM -0400
References: <20000817164137.U17689@lyra.org> <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com> <20000817171917.V17689@lyra.org> <14748.33442.7609.588513@anthem.concentric.net>
Message-ID: <20000817175556.Y17689@lyra.org>

On Thu, Aug 17, 2000 at 08:26:10PM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "GS" == Greg Stein <gstein@lyra.org> writes:
> 
>     GS> The NPL is not compatible with the Python license. While we
>     GS> could use their API as a guide for our own code, we cannot use
>     GS> their code.
> 
> >>>>> "TP" == Tim Peters <tim_one@email.msn.com> writes:
> 
>     TP> Jesus, Mark, I haven't even been able to figure what the
>     TP> license means by "you" yet:
> 
> Is the NPL compatible with /anything/? :)

All kinds of stuff. It is effectively a non-viral GPL. Any changes to the
NPL/MPL licensed stuff must be released. It does not affect the stuff that
it is linked/dist'd with.

However, I was talking about the Python source code base. The Python license
and the NPL/MPL are definitely compatible. I mean that we don't want both
licenses in the Python code base.

Hmm. Should have phrased that differently.

And one nit: the NPL is very different from the MPL. NPL x.x is nasty, while
MPL 1.1 is very nice.

Note the whole MPL/GPL dual-license stuff that you see (Expat and now
Mozilla) is not because they are trying to be nice, but because they are
trying to compensate for the GPL's nasty viral attitude. You cannot use MPL
code in a GPL product because the *GPL* says so. The MPL would be perfectly
happy, but no... Therefore, people dual-license so that you can choose the
GPL when linking with GPL code.

Ooops. I'll shut up now. :-)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From bwarsaw@beopen.com  Fri Aug 18 01:49:17 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 20:49:17 -0400 (EDT)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
References: <20000817164137.U17689@lyra.org>
 <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
 <20000817171917.V17689@lyra.org>
 <14748.33442.7609.588513@anthem.concentric.net>
 <20000817174137.B18811@ActiveState.com>
Message-ID: <14748.34829.130052.124407@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm@ActiveState.com> writes:

    TM> Mozilla will be dual-licensed with the GPL. But you already
    TM> read that.

Yup, but it'll still be a big hurdle to include any GPL'd code in
Python.

-Barry


From MarkH@ActiveState.com  Fri Aug 18 01:55:02 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 18 Aug 2000 10:55:02 +1000
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817175556.Y17689@lyra.org>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEGDDFAA.MarkH@ActiveState.com>

[Greg]
> However, I was talking about the Python source code base. The
> Python license
> and the NPL/MPL are definitely compatible.

Phew.  Obviously IANAL, but I thought I was going senile.  I didn't seek
clarification for fear of further demonstrating my ignorance :-)

> I mean that we don't want both licenses in the Python code base.

That makes much more sense to me!

Thanks for the clarification.

Mark.



From greg@cosc.canterbury.ac.nz  Fri Aug 18 02:01:17 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 13:01:17 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <20000817090942.L376@xs4all.nl>
Message-ID: <200008180101.NAA15496@s454.cosc.canterbury.ac.nz>

Thomas Wouters:

> Bzzzt. This is unfortunately not true. Observe:
>
> daemon2:~/python > rmdir perl/
> rmdir: perl/: Is a directory

I'd say that's a bug in rmdir in whatever Unix you're using.
Solaris doesn't have the problem:

s454% cd ~/tmp
s454% mkdir foo
s454% rmdir foo/
s454% 

There's always room for a particular program to screw up.  However,
the usual principle in Unices is that trailing slashes are optional.

> Note that the trailing slash is added by all tab-completing shells that I
> know.

This is for the convenience of the user, who is probably going to type
another pathname component, and also to indicate that the object found
is a directory. It makes sense in an interactive tool, but not
necessarily in other places.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Fri Aug 18 02:27:33 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 13:27:33 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <1245608987-154490918@hypernet.com>
Message-ID: <200008180127.NAA15502@s454.cosc.canterbury.ac.nz>

Gordon:

> os.chdir(os.pardir)

Ah, I missed that somehow. Probably I was looking in os.path
instead of os.

Shouldn't everything to do with pathname semantics be in os.path?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Fri Aug 18 02:52:32 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 13:52:32 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.55394.783997.167234@beluga.mojam.com>
Message-ID: <200008180152.NAA15507@s454.cosc.canterbury.ac.nz>

Skip:

> Since we already have os.path.commonprefix and it's not going away,

If it's to stay the way it is, we need another function to
do what it should have been designed to do in the first place.
That means two new functions, one to find a common prefix,
and one to remove a given prefix.

But it's not clear exactly what a function such as

   removeprefix(prefix, path)

should do. What happens, for instance, if 'prefix' is not actually a
prefix of 'path', or only part of it is a prefix?

A reasonable definition might be that however much of 'prefix' is
a prefix of 'path' is removed. But that requires finding the common
prefix of the prefix and the path, which is intruding on commonprefix's 
territory!

This is what led me to think of combining the two operations
into one, which would have a clear, unambiguous definition
covering all cases.

> there's nothing in the name factorize that suggests that it would
> split the paths at the common prefix.

I'm not particularly attached to that name. Call it
'splitcommonprefix' or something if you like.
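[Editor's note: a sketch of the combined operation being discussed.  The
name follows the suggestion above; the signature and the simplistic
separator handling are only illustrative.]

```python
def splitcommonprefix(paths, sep="/"):
    """Split 'paths' at their common prefix: return (common, rests), where
    'common' is the longest run of leading components shared by every path
    and 'rests' holds what remains of each path."""
    split = [p.split(sep) for p in paths]
    common = []
    for parts in zip(*split):
        if any(part != parts[0] for part in parts[1:]):
            break
        common.append(parts[0])
    n = len(common)
    return sep.join(common), [sep.join(s[n:]) for s in split]
```

For example, splitcommonprefix(["/usr/local/bin", "/usr/local/lib/python"])
yields ("/usr/local", ["bin", "lib/python"]).  A prefix that only partly
matches simply splits at the point where the paths diverge, which sidesteps
the removeprefix ambiguity described above.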

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Fri Aug 18 03:02:09 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 14:02:09 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.57236.264324.165612@beluga.mojam.com>
Message-ID: <200008180202.OAA15511@s454.cosc.canterbury.ac.nz>

Skip:

> maybe we should move it to some other module
> that has no directory path implications.

I agree!

> Perhaps string?  Oh, that's deprecated.

Is the whole string module deprecated, or only those parts
which are now available as string methods? I think trying to
eliminate the string module altogether would be a mistake,
since it would leave nowhere for string operations that don't
make sense as methods of a string.

The current version of commonprefix is a case in point,
since it operates symmetrically on a collection of strings.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+




From gmcm@hypernet.com  Fri Aug 18 03:07:04 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Thu, 17 Aug 2000 22:07:04 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817231942.O376@xs4all.nl>
Message-ID: <1245558070-157553278@hypernet.com>

Thomas Wouters wrote:

> The other issue is the change in semantics for 'from-import'.

Um, maybe I'm not seeing something, but isn't the effect of 
"import goom.bah as snarf" the same as "from goom import 
bah as snarf"? Both forms mean that we don't end up looking 
for (the aliased) bah in another namespace, (thus both forms 
fall prey to the circular import problem).

Why not just disallow "from ... import ... as ..."?



- Gordon


From fdrake@beopen.com  Fri Aug 18 03:13:25 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 22:13:25 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180202.OAA15511@s454.cosc.canterbury.ac.nz>
References: <14747.57236.264324.165612@beluga.mojam.com>
 <200008180202.OAA15511@s454.cosc.canterbury.ac.nz>
Message-ID: <14748.39877.3411.744665@cj42289-a.reston1.va.home.com>

Skip:
 > Perhaps string?  Oh, that's deprecated.

Greg Ewing writes:
 > Is the whole string module deprecated, or only those parts
 > which are now available as string methods? I think trying to

  I wasn't aware of any actual deprecation, just a shift of
preference.  There's not a notice of the deprecation in the docs.  ;)
In fact, there are things that are in the module that are not
available as string methods.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From fdrake@beopen.com  Fri Aug 18 03:38:06 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 22:38:06 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180127.NAA15502@s454.cosc.canterbury.ac.nz>
References: <1245608987-154490918@hypernet.com>
 <200008180127.NAA15502@s454.cosc.canterbury.ac.nz>
Message-ID: <14748.41358.61606.202184@cj42289-a.reston1.va.home.com>

Greg Ewing writes:
 > Gordon:
 > 
 > > os.chdir(os.pardir)
 > 
 > Ah, I missed that somehow. Probably I was looking in os.path
 > instead of os.
 > 
 > Shouldn't everything to do with pathname semantics be in os.path?

  Should be, yes.  I'd vote that curdir, pardir, sep, altsep, and
pathsep be added to the *path modules, and os could pick them up from
there.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From akuchlin@mems-exchange.org  Fri Aug 18 03:46:32 2000
From: akuchlin@mems-exchange.org (A.M. Kuchling)
Date: Thu, 17 Aug 2000 22:46:32 -0400
Subject: [Python-Dev] Request for help w/ bsddb module
Message-ID: <20000817224632.A525@207-172-146-154.s154.tnt3.ann.va.dialup.rcn.com>

[CC'ed to python-dev, python-list]

I've started writing a straight C version of Greg Smith's BSDDB3
module (http://electricrain.com/greg/python/bsddb3/), which currently
uses SWIG.  The code isn't complete enough to do anything yet, though
it does compile.  

Now I'm confronted with writing around 60 different methods for 3
different types; the job doesn't look difficult, but it does look
tedious and lengthy.  Since the task will parallelize well, I'm asking
if anyone wants to help out by writing the code for one of the types.

If you want to help, grab Greg's code from the above URL, and my
incomplete module from
ftp://starship.python.net/pub/crew/amk/new/_bsddb.c.  Send me an
e-mail telling me which set of methods (those for the DB, DBC, DB_Env
types) you want to implement before starting to avoid duplicating
work.  I'll coordinate, and will debug the final product.

(Can this get done in time for Python 2.0?  Probably.  Can it get
tested in time for 2.0?  Ummm....)

--amk








From greg@cosc.canterbury.ac.nz  Fri Aug 18 03:45:46 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 14:45:46 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEGGHAAA.tim_one@email.msn.com>
Message-ID: <200008180245.OAA15517@s454.cosc.canterbury.ac.nz>

Tim Peters:

> The principled solution is to put the new functionality in a new
> function.

I agree with that.

> By my count, Unix and Windows people have each voted for both answers, and
> the Mac contingent is silently laughing <wink>.

The Mac situation is somewhat complicated. Most of the time
a single trailing colon makes no difference, but occasionally
it does. For example, "abc" is a relative pathname, but
"abc:" is an absolute pathname!

The best way to resolve this, I think, is to decree that it
should do the same as what os.path.split does, on all
platforms. That function seems to know how to deal with 
all the tricky cases correctly.
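[Editor's note: for reference, this is how the standard posixpath.split
handles the trailing-slash cases under discussion, shown for Unix paths
only.]

```python
import posixpath

# split() breaks at the final slash, so a trailing slash just leaves
# an empty tail rather than changing the head:
assert posixpath.split("/usr/local") == ("/usr", "local")
assert posixpath.split("/usr/local/") == ("/usr/local", "")
assert posixpath.split("abc") == ("", "abc")
```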

Don't-even-think-of-asking-about-VMS-ly,

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From fdrake@beopen.com  Fri Aug 18 03:55:59 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 22:55:59 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180245.OAA15517@s454.cosc.canterbury.ac.nz>
References: <LNBBLJKPBEHFEDALKOLCAEGGHAAA.tim_one@email.msn.com>
 <200008180245.OAA15517@s454.cosc.canterbury.ac.nz>
Message-ID: <14748.42431.165537.946022@cj42289-a.reston1.va.home.com>

Greg Ewing writes:
 > Don't-even-think-of-asking-about-VMS-ly,

  Really!  I looked at some docs for the path names on that system,
and didn't come away so much as convinced DEC/Compaq knew what they
looked like.  Or where they stopped.  Or started.
  I think a fully general path algebra will be *really* hard to do,
but it's something I've thought about a little.  Don't know when I'll
have time to dig back into it.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From greg@cosc.canterbury.ac.nz  Fri Aug 18 03:57:34 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 14:57:34 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <399B94EB.E95260EE@lemburg.com>
Message-ID: <200008180257.OAA15523@s454.cosc.canterbury.ac.nz>

M.-A. Lemburg:

> By dropping the trailing slash from the path
> you are removing important information from the path information.

No, you're not. A trailing slash on a Unix pathname doesn't
tell you anything about whether it refers to a directory.
Actually, it doesn't tell you anything at all. Slashes
simply delimit pathname components, nothing more.

A demonstration of this:

s454% cat > foo/
asdf
s454% cat foo/
asdf
s454% 

A few utilities display pathnames with trailing slashes in
order to indicate that they refer to directories, but that's
a special convention confined to those tools. It doesn't
apply in general.

The only sure way to find out whether a given pathname refers 
to a directory or not is to ask the filesystem. And if the 
object referred to doesn't exist, the question of whether it's 
a directory is meaningless.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Fri Aug 18 04:34:57 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 15:34:57 +1200 (NZST)
Subject: [Python-Dev] 'import as'
In-Reply-To: <1245558070-157553278@hypernet.com>
Message-ID: <200008180334.PAA15543@s454.cosc.canterbury.ac.nz>

Gordon McMillan <gmcm@hypernet.com>:

> isn't the effect of "import goom.bah as snarf" the same as "from goom
> import bah as snarf"?

Only if goom.bah is a submodule or subpackage, I think.
Otherwise "import goom.bah" doesn't work in the first place.

I'm not sure that "import goom.bah as snarf" should
be allowed, even if goom.bah is a module. Should the
resulting object be referred to as snarf, snarf.bah
or goom.snarf?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim_one@email.msn.com  Fri Aug 18 04:39:29 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 23:39:29 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817142207.A5592@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHCHAAA.tim_one@email.msn.com>

[Trent Mick]
> ...
> I'm all for being a hoser then.

Canadian <wink>.

> #ifdef's a-comin' down the pipe.
> One thing, the only #define that I know I have a handle on for
> Monterey is '_LP64'. Do you have an objection to that (seeing as
> it is kind of misleading)? I will accompany it with an explanatory
> comment of course.

Hmm!  I hate "mystery #defines", even when they do make sense.  In my last
commercial project, we had a large set of #defines in its equivalent of
pyport.h, along the lines of Py_COMPILER_MSVC, Py_COMPILER_GCC, Py_ARCH_X86,
Py_ARCH_KATMAI, etc etc.  Over time, *nobody* can remember what goofy
combinations of mystery preprocessor symbols vendors define, and vendors
come and go, and you're left with piles of code you can't make head or tail
of.  "#ifdef __SC__" -- what?

So there was A Rule that vendor-supplied #defines could *only* appear in
(that project's version of) pyport.h, used there to #define symbols whose
purpose was clear from extensive comments and naming conventions.  That
proved to be an excellent idea over years of practice!

So I'll force Python to do that someday too.  In the meantime, _LP64 is a
terrible name for this one, because its true *meaning* (the way you want to
use it) appears to be "sizeof(pthread_t) < sizeof(long)", and that's
certainly not a property of all LP64 platforms.  So how about a runtime test
for what's actually important (and it's not Monterey!)?

	if (sizeof(threadid) <= sizeof(long))
		return (long)threadid;

End of problem, right?  It's a cheap runtime test in a function whose speed
isn't critical anyway.  And it will leave the God-awful casting to the one
platform where it appears to be needed -- while also (I hope) making it
clearer that that's absolutely the wrong thing to be doing on that platform
(throwing away half the bits in the threadid value is certain to make
get_ident return the same value for two distinct threads sooner or later
...).
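Tim's truncation worry is easy to demonstrate with made-up id values: once the high bits are discarded, distinct ids can collapse to the same result.

```python
# Two made-up 64-bit thread ids that differ only in their high bits.
tid_a = 0x1_0000_0001
tid_b = 0x2_0000_0001

mask = (1 << 32) - 1  # what a narrowing cast to a 32-bit long keeps

print(tid_a == tid_b)                    # False: distinct ids
print((tid_a & mask) == (tid_b & mask))  # True: truncated copies collide
```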

less-preprocessor-more-sense-ly y'rs  - tim




From tim_one@email.msn.com  Fri Aug 18 04:58:13 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 23:58:13 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180257.OAA15523@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHFHAAA.tim_one@email.msn.com>

[Greg Ewing]
> ...
> A trailing slash on a Unix pathname doesn't tell you anything
> about whether it refers to a directory.

It does if it's also the only character in the pathname <0.5 wink>.  The
same thing bites people on Windows, except even worse, because in UNC
pathnames the leading

   \\machine\volume

"acts like a root", and the presence or absence of a trailing backslash
there makes a world of difference too.

> ...
> The only sure way to find out whether a given pathname refers
> to a directory or not is to ask the filesystem.

On Windows again,

>>> from os import path
>>> path.exists("/python16")
1
>>> path.exists("/python16/")
0
>>>

This insane behavior is displayed by the MS native APIs too, but isn't
documented (at least not last time I peed away hours looking for it).

just-more-evidence-that-windows-weenies-shouldn't-get-a-vote!-ly
    y'rs  - tim




From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug 18 05:39:18 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 18 Aug 2000 07:39:18 +0300 (IDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
In-Reply-To: <E13PWSp-0006w9-00@kronos.cnri.reston.va.us>
Message-ID: <Pine.GSO.4.10.10008180738100.23483-100000@sundial>

On Thu, 17 Aug 2000, Andrew Kuchling wrote:

> Tim O'Malley finally mailed me the correct URL for the latest version
> of the cookie module: http://www.timo-tasi.org/python/Cookie.py 
> 
> *However*...  I think the Web support in Python needs more work
> generally, and certainly more than can be done for 2.0.

This is certainly true, but is that reason enough to keep Cookie.py 
out of 2.0?

(+1 on enhancing the Python standard library, of course)
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From tim_one@email.msn.com  Fri Aug 18 06:26:51 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 01:26:51 -0400
Subject: indexing, indices(), irange(), list.items() (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <399BE124.9920B0B6@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>

Note that Guido rejected all the loop-gimmick proposals ("indexing",
indices(), irange(), and list.items()) on Thursday, so let's stifle this
debate until after 2.0 (or, even better, until after I'm dead <wink>).

hope-springs-eternal-ly y'rs  - tim




From tim_one@email.msn.com  Fri Aug 18 06:43:14 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 01:43:14 -0400
Subject: [Python-Dev] PyErr_NoMemory
In-Reply-To: <200008171509.RAA20891@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHJHAAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> The current PyErr_NoMemory() function reads:
>
> PyObject *
> PyErr_NoMemory(void)
> {
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
>         else
>                 /* this will probably fail since there's no
> memory and hee,
>                    hee, we have to instantiate this class
>                 */
>                 PyErr_SetNone(PyExc_MemoryError);
>
>         return NULL;
> }
>
> thus overriding any previous exceptions unconditionally. This is a
> problem when the current exception already *is* PyExc_MemoryError,
> notably when we have a chain (cascade) of memory errors. It is a
> problem because the original memory error and eventually its error
> message is lost.
>
> I suggest to make this code look like:
>
> PyObject *
> PyErr_NoMemory(void)
> {
> 	if (PyErr_ExceptionMatches(PyExc_MemoryError))
> 		/* already current */
> 		return NULL;
>
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
> ...
>
>
> If nobody sees a problem with this, I'm very tempted to check it in.
> Any objections?

Looks good to me.  And if it breaks something, it will be darned hard to
tell <wink>.




From nowonder@nowonder.de  Fri Aug 18 09:06:23 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Fri, 18 Aug 2000 08:06:23 +0000
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>
Message-ID: <399CEE7F.F2B865D2@nowonder.de>

Tim Peters wrote:
> 
> Note that Guido rejected all the loop-gimmick proposals ("indexing",
> indices(), irange(), and list.items()) on Thursday, so let's stifle this
> debate until after 2.0 (or, even better, until after I'm dead <wink>).

That's sad. :-/

One of the reasons I implemented .items() is that I wanted
to increase the probability that at least *something* is
available instead of:

  for i in range(len(list)):
      e = list[i]
      ...

or

  for i, e in zip(range(len(list)), list):
      ...

I'm going to teach Python to a lot of newbies (ca. 30) in
October. From my experience (I already tried my luck on two
individuals from that group) 'range(len(list))' is one
of the harder concepts to get across. Even indices(list)
would help here.
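(With hindsight: lockstep iteration did eventually land as the builtin enumerate(), via PEP 279 in Python 2.3. A minimal example:)

```python
items = ["spam", "eggs", "ham"]
for i, e in enumerate(items):  # index and element, in lockstep
    print(i, e)
```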

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug 18 07:12:39 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 18 Aug 2000 09:12:39 +0300 (IDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180127.NAA15502@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.GSO.4.10.10008180911430.23483-100000@sundial>

On Fri, 18 Aug 2000, Greg Ewing wrote:

> Gordon:
> 
> > os.chdir(os.pardir)
> 
> Ah, I missed that somehow. Probably I was looking in os.path
> instead of os.
> 
> Shouldn't everything to do with pathname semantics be in os.path?

Nope, os.path is just for "abstract pathname algebra". Anything
dealing with real paths on a real machine belongs elsewhere (os, probably).
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From tim_one@email.msn.com  Fri Aug 18 07:30:40 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 02:30:40 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817164137.U17689@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEHNHAAA.tim_one@email.msn.com>

[Greg Stein]
> ...
> IOW, an x-plat TLS is going to be done at some point. If you need it now,
> then please do it now. That will help us immeasurably in the long run.

It appears that a correct thread.get_ident() for DEC threads needed it 6
years ago (or at least that would have been-- and remains --the most elegant
way to solve it).  Trent doesn't need it to fix Monterey, though -- his only
problem there is that the Alpha hack doesn't work on his platform, due to
the former's utter bogosity.  From Trent's POV, I bet the one-liner
workaround sounds more appealing.




From cgw@fnal.gov  Fri Aug 18 08:01:59 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Fri, 18 Aug 2000 02:01:59 -0500 (CDT)
Subject: [Python-Dev] [Bug #111620] lots of use of send() without verifying amount
 of data sent
Message-ID: <14748.57191.25642.168078@buffalo.fnal.gov>

I'm jumping in late to this discussion to mention that even
for sockets in blocking mode, you can do sends with the MSG_DONTWAIT
flag:

sock.send(msg, socket.MSG_DONTWAIT)

and this will send only as much data as can be written immediately.
I.e., a per-message non-blocking write, without putting the socket
into non-blocking mode.

So if somebody decides to raise an exception on short TCP writes, they
need to be aware of this.  Personally I think it's a bad idea to be
raising an exception at all for short writes.
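A sketch of the usual alternative on a blocking socket: loop on the count send() returns until everything is written, which is what sendall() does for you, rather than treating a short write as an error.

```python
import socket

def send_all(sock, data):
    # send() may accept fewer bytes than asked; loop until done rather
    # than raising on a short write.
    view = memoryview(data)
    while view:
        n = sock.send(view)
        view = view[n:]

a, b = socket.socketpair()
send_all(a, b"hello")
print(b.recv(5))  # b'hello'
```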



From thomas@xs4all.net  Fri Aug 18 08:07:43 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 09:07:43 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <1245558070-157553278@hypernet.com>; from gmcm@hypernet.com on Thu, Aug 17, 2000 at 10:07:04PM -0400
References: <20000817231942.O376@xs4all.nl> <1245558070-157553278@hypernet.com>
Message-ID: <20000818090743.S376@xs4all.nl>

On Thu, Aug 17, 2000 at 10:07:04PM -0400, Gordon McMillan wrote:
> Thomas Wouters wrote:

> > The other issue is the change in semantics for 'from-import'.

> Um, maybe I'm not seeing something, but isn't the effect of "import
> goom.bah as snarf" the same as "from goom import bah as snarf"?

I don't understand what you're saying here. 'import goom.bah' imports goom,
then bah, and the resulting module in the local namespace is 'goom'. That's
existing behaviour (which I find perplexing, but I've never run into before
;) which has changed in a reliable way: the local name being stored,
whatever it would have been in a normal import, is changed into the
"as-name" by "as <name>".

If you're saying that 'import goom.bah.baz as b' won't do what people
expect, I agree. (But neither does 'import goom.bah.baz', I think :-)

> Both forms mean that we don't end up looking for (the aliased) bah in
> another namespace, (thus both forms fall prey to the circular import
> problem).

Maybe it's the early hour, but I really don't understand the problem here.
Of course we end up looking up 'bah' in the other namespace, we have to import
it. And I don't know what it has to do with circular import either ;P

> Why not just disallow "from ... import ... as ..."?

That would kind of defeat the point of this change. I don't see any
unexpected behaviour with 'from .. import .. as ..'; the object mentioned
after 'import' and before 'as' is the object stored with the local name
which follows 'as'. Why would we want to disallow that ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Fri Aug 18 08:17:03 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 09:17:03 +0200
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEHCHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 17, 2000 at 11:39:29PM -0400
References: <20000817142207.A5592@ActiveState.com> <LNBBLJKPBEHFEDALKOLCCEHCHAAA.tim_one@email.msn.com>
Message-ID: <20000818091703.T376@xs4all.nl>

On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:

> So how about a runtime test for what's actually important (and it's not
> Monterey!)?
> 
> 	if (sizeof(threadid) <= sizeof(long))
> 		return (long)threadid;
> 
> End of problem, right?  It's a cheap runtime test in a function whose speed
> isn't critical anyway.

Note that this is what autoconf is for. It also helps to group all that
behaviour-testing code together, in one big lump no one pretends to
understand ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From effbot@telia.com  Fri Aug 18 08:35:17 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 09:35:17 +0200
Subject: [Python-Dev] 'import as'
References: <20000817231942.O376@xs4all.nl>
Message-ID: <001901c008e6$dc222760$f2a6b5d4@hagrid>

thomas wrote:
> I have two remaining issues regarding the 'import as' statement, which I'm
> just about ready to commit.

has this been tested with import hooks?

what's passed to the __import__ function's fromlist argument
if you do "from spam import egg as bacon"?

</F>



From thomas@xs4all.net  Fri Aug 18 08:30:49 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 09:30:49 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <001901c008e6$dc222760$f2a6b5d4@hagrid>; from effbot@telia.com on Fri, Aug 18, 2000 at 09:35:17AM +0200
References: <20000817231942.O376@xs4all.nl> <001901c008e6$dc222760$f2a6b5d4@hagrid>
Message-ID: <20000818093049.I27945@xs4all.nl>

On Fri, Aug 18, 2000 at 09:35:17AM +0200, Fredrik Lundh wrote:
> thomas wrote:
> > I have two remaining issues regarding the 'import as' statement, which I'm
> > just about ready to commit.

> has this been tested with import hooks?

Not really, I'm afraid. I don't know how to use import hooks ;-P But nothing
substantial changed, and I took care to make sure 'find_from_args' gave the
same information, still. For what it's worth, the test suite passed fine,
but I don't know if there's a test for import hooks in there.

> what's passed to the __import__ function's fromlist argument
> if you do "from spam import egg as bacon"?

The same as 'from spam import egg', currently. Better ideas are welcome, of
course, especially if you know how to use import hooks, and how they
generally are used. Pointers towards the right sections are also welcome.
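For reference, a non-empty fromlist is exactly what makes __import__ return the leaf module instead of the top-level package. Shown here with the stdlib's json.decoder, since spam/egg are hypothetical names from the thread:

```python
# With no fromlist, __import__ returns the top-level package;
# with a non-empty one, it returns the leaf module itself.
top = __import__("json.decoder")
leaf = __import__("json.decoder", fromlist=["JSONDecoder"])

print(top.__name__)   # json
print(leaf.__name__)  # json.decoder
```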

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Fri Aug 18 09:02:40 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 04:02:40 -0400
Subject: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]  Lockstep iteration - eureka!)
In-Reply-To: <399CEE7F.F2B865D2@nowonder.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>

I'm stifling it, but, FWIW, I've been trying to sell "indexing" for most of
my adult life <wink -- but yes, in my experience too range(len(seq)) is
extraordinarily hard to get across to newbies at first; and I *expect*
[:len(seq)] to be at least as hard>.


> -----Original Message-----
> From: nowonder@stud.ntnu.no [mailto:nowonder@stud.ntnu.no]On Behalf Of
> Peter Schneider-Kamp
> Sent: Friday, August 18, 2000 4:06 AM
> To: Tim Peters
> Cc: python-dev@python.org
> Subject: Re: indexing, indices(), irange(), list.items() (was RE:
> [Python-Dev] Lockstep iteration - eureka!)
>
>
> Tim Peters wrote:
> >
> > Note that Guido rejected all the loop-gimmick proposals ("indexing",
> > indices(), irange(), and list.items()) on Thursday, so let's stifle this
> > debate until after 2.0 (or, even better, until after I'm dead <wink>).
>
> That's sad. :-/
>
> One of the reasons I implemented .items() is that I wanted
> to increase the probability that at least *something* is
> available instead of:
>
>   for i in range(len(list)):
>       e = list[i]
>       ...
>
> or
>
>   for i, e in zip(range(len(list)), list):
>       ...
>
> I'm going to teach Python to a lot of newbies (ca. 30) in
> October. From my experience (I already tried my luck on two
> individuals from that group) 'range(len(list))' is one
> of the harder concepts to get across. Even indices(list)
> would help here.
>
> Peter
> --
> Peter Schneider-Kamp          ++47-7388-7331
> Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
> N-7050 Trondheim              http://schneider-kamp.de




From mal@lemburg.com  Fri Aug 18 09:05:30 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 10:05:30 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <200008180101.NAA15496@s454.cosc.canterbury.ac.nz>
Message-ID: <399CEE49.8E646DC3@lemburg.com>

Greg Ewing wrote:
> 
> > Note that the trailing slash is added by all tab-completing shells that I
> > know.
> 
> This is for the convenience of the user, who is probably going to type
> another pathname component, and also to indicate that the object found
> is a directory. It makes sense in an interactive tool, but not
> necessarily in other places.

Oh, C'mon Greg... haven't you read my reply to this ?

The trailing slash contains important information which might
otherwise not be regainable or only using explicit queries to
the storage system.

The "/" tells the program that the last path component is
a directory. Removing the slash will also remove that information
from the path (and yes: files without extension are legal).

Now, since e.g. posixpath is also used as basis for fiddling
with URLs and other tools using Unix style paths, removing
the slash will result in problems... just look at what your
browser does when you request http://www.python.org/search ...
the server redirects you to search/ to make sure that the 
links embedded in the page are relative to search/ and not
www.python.org/.

Skip, have you already undone that change in CVS ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From tim_one@email.msn.com  Fri Aug 18 09:10:01 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 04:10:01 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818091703.T376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com>

-0 on autoconf for this.

I doubt that Trent ever needs to know more than in this one place the
relative sizes of threadid and a long, and this entire function is braindead
(hence will be gutted someday) anyway.  Using the explicit test makes it
obvious to everyone; winding back thru layers of autoconf crust makes it A
Project and yet another goofy preprocessor symbol cluttering the code.

> -----Original Message-----
> From: Thomas Wouters [mailto:thomas@xs4all.net]
> Sent: Friday, August 18, 2000 3:17 AM
> To: Tim Peters
> Cc: Trent Mick; python-dev@python.org
> Subject: Re: [Python-Dev] pthreads question: typedef ??? pthread_t and
> hacky return statements
>
>
> On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:
>
> > So how about a runtime test for what's actually important (and it's not
> > Monterey!)?
> >
> > 	if (sizeof(threadid) <= sizeof(long))
> > 		return (long)threadid;
> >
> > End of problem, right?  It's a cheap runtime test in a function
> > whose speed isn't critical anyway.
>
> Note that this is what autoconf is for. It also helps to group all that
> behaviour-testing code together, in one big lump no one pretends to
> understand ;)
>
> --
> Thomas Wouters <thomas@xs4all.net>
>
> Hi! I'm a .signature virus! copy me into your .signature file to
> help me spread!




From mal@lemburg.com  Fri Aug 18 09:30:51 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 10:30:51 +0200
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>
Message-ID: <399CF43A.478D7165@lemburg.com>

Tim Peters wrote:
> 
> Note that Guido rejected all the loop-gimmick proposals ("indexing",
> indices(), irange(), and list.items()) on Thursday, so let's stifle this
> debate until after 2.0 (or, even better, until after I'm dead <wink>).

Hey, we still have mxTools which gives you most of those goodies 
and lots more ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From nowonder@nowonder.de  Fri Aug 18 12:07:43 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Fri, 18 Aug 2000 11:07:43 +0000
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
Message-ID: <399D18FF.BD807ED5@nowonder.de>

What about 'indexing' xor 'in' ? Like that:

for i indexing sequence:      # good
for e in sequence:            # good
for i indexing e in sequence: # BAD!

This might help Guido to understand what it does in the
'indexing' case. I admit that the third one may be a
bit harder to parse, so why not *leave it out*?

But then I'm sure this has also been discussed before.
Nevertheless I'll mail Barry and volunteer for a PEP
on this.

[Tim Peters about his life]
> I've been trying to sell "indexing" for most of my adult life

then-I'll-have-to-spend-another-life-on-it-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From sjoerd@oratrix.nl  Fri Aug 18 10:42:38 2000
From: sjoerd@oratrix.nl (Sjoerd Mullender)
Date: Fri, 18 Aug 2000 11:42:38 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
Message-ID: <20000818094239.A3A1931047C@bireme.oratrix.nl>

Your changes for the import X as Y feature introduced a serious bug:
I can no longer run Python at all.

The problem is that in line 2150 of compile.c com_addopname is called
with a NULL last argument, and the first thing com_addopname does is
indirect off of that very argument.  On my machine (and on many other
machines) that results in a core dump.

In case it helps, here is the stack trace.  The crash happens when
importing site.py.  I have not made any changes to my site.py.

>  0 com_addopname(c = 0x7fff1e20, op = 90, n = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":738, 0x1006cb58]
   1 com_import_stmt(c = 0x7fff1e20, n = 0x101e2ad0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2150, 0x10071ecc]
   2 com_node(c = 0x7fff1e20, n = 0x101e2ad0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2903, 0x10074764]
   3 com_node(c = 0x7fff1e20, n = 0x101eaf68) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2855, 0x10074540]
   4 com_node(c = 0x7fff1e20, n = 0x101e2908) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2864, 0x100745b0]
   5 com_node(c = 0x7fff1e20, n = 0x1020d450) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2855, 0x10074540]
   6 com_file_input(c = 0x7fff1e20, n = 0x101e28f0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3137, 0x10075324]
   7 compile_node(c = 0x7fff1e20, n = 0x101e28f0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3241, 0x100759c0]
   8 jcompile(n = 0x101e28f0, filename = 0x7fff2430 = "./../Lib/site.py", base = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3400, 0x10076058]
   9 PyNode_Compile(n = 0x101e28f0, filename = 0x7fff2430 = "./../Lib/site.py") ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3378, 0x10075f7c]
   10 parse_source_module(pathname = 0x7fff2430 = "./../Lib/site.py", fp = 0xfb563c8) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":632, 0x100151a4]
   11 load_source_module(name = 0x7fff28d8 = "site", pathname = 0x7fff2430 = "./../Lib/site.py", fp = 0xfb563c8) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":722, 0x100154c8]
   12 load_module(name = 0x7fff28d8 = "site", fp = 0xfb563c8, buf = 0x7fff2430 = "./../Lib/site.py", type = 1) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1199, 0x1001629c]
   13 import_submodule(mod = 0x101b8478, subname = 0x7fff28d8 = "site", fullname = 0x7fff28d8 = "site") ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1727, 0x10017dc4]
   14 load_next(mod = 0x101b8478, altmod = 0x101b8478, p_name = 0x7fff2d04, buf = 0x7fff28d8 = "site", p_buflen = 0x7fff28d0) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1583, 0x100174c0]
   15 import_module_ex(name = (nil), globals = (nil), locals = (nil), fromlist = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1434, 0x10016d04]
   16 PyImport_ImportModuleEx(name = 0x101d9450 = "site", globals = (nil), locals = (nil), fromlist = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1475, 0x10016fe0]
   17 PyImport_ImportModule(name = 0x101d9450 = "site") ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1408, 0x10016c64]
   18 initsite() ["/ufs/sjoerd/src/Python/dist/src/Python/pythonrun.c":429, 0x10053148]
   19 Py_Initialize() ["/ufs/sjoerd/src/Python/dist/src/Python/pythonrun.c":166, 0x100529c8]
   20 Py_Main(argc = 1, argv = 0x7fff2ec4) ["/ufs/sjoerd/src/Python/dist/src/Modules/main.c":229, 0x10013690]
   21 main(argc = 1, argv = 0x7fff2ec4) ["/ufs/sjoerd/src/Python/dist/src/Modules/python.c":10, 0x10012f24]
   22 __start() ["/xlv55/kudzu-apr12/work/irix/lib/libc/libc_n32_M4/csu/crt1text.s":177, 0x10012ec8]

-- Sjoerd Mullender <sjoerd.mullender@oratrix.com>


From fredrik@pythonware.com  Fri Aug 18 11:42:54 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 12:42:54 +0200
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
References: <20000816172425.A32338@ActiveState.com>
Message-ID: <003001c00901$11fd8ae0$0900a8c0@SPIFF>

trent mick wrote:
>     return (long) *(long *) &threadid;

from what I can tell, pthread_t is a pointer under OSF/1.

I've been using OSF/1 since the early days, and as far as I can
remember, you've never needed to use stupid hacks like that
to convert a pointer to a long integer. an ordinary (long) cast
should be sufficient.

> Could this be changed to
>   return threadid;
> safely?

safely, yes.  but since it isn't a long on all platforms, you might
get warnings from the compiler (see Mark's mail).

:::

from what I can tell, it's compatible with a long on all sane
platforms (Win64 doesn't support pthreads anyway ;-), so I guess the
right thing here is to remove volatile and simply use:

    return (long) threadid;

(Mark: can you try this out on your box?  setting up a Python 2.0
environment on our alphas would take more time than I can spare
right now...)

</F>



From gstein@lyra.org  Fri Aug 18 12:00:34 2000
From: gstein@lyra.org (Greg Stein)
Date: Fri, 18 Aug 2000 04:00:34 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 18, 2000 at 04:10:01AM -0400
References: <20000818091703.T376@xs4all.nl> <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com>
Message-ID: <20000818040034.F17689@lyra.org>

That is a silly approach, Tim. This is exactly what autoconf is for. Using
run-time logic to figure out something that is compile-time is Badness.

And the "but it will eventually be fixed" rationale is bogus. Gee, should we
just start loading bogus patches into Python, knowing that everything will
be fixed in the next version? Whoops. We forgot some. Oh, we can't change
those now. Well, gee. Maybe Py3K will fix it.

I realize that you're only -0 on this, but it should be at least +0...

Cheers,
-g

On Fri, Aug 18, 2000 at 04:10:01AM -0400, Tim Peters wrote:
> -0 on autoconf for this.
> 
> I doubt that Trent ever needs to know more than in this one place the
> relative sizes of threadid and a long, and this entire function is braindead
> (hence will be gutted someday) anyway.  Using the explicit test makes it
> obvious to everyone; winding back thru layers of autoconf crust makes it A
> Project and yet another goofy preprocessor symbol cluttering the code.
> 
> > -----Original Message-----
> > From: Thomas Wouters [mailto:thomas@xs4all.net]
> > Sent: Friday, August 18, 2000 3:17 AM
> > To: Tim Peters
> > Cc: Trent Mick; python-dev@python.org
> > Subject: Re: [Python-Dev] pthreads question: typedef ??? pthread_t and
> > hacky return statements
> >
> >
> > On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:
> >
> > > So how about a runtime test for what's actually important (and it's not
> > > Monterey!)?
> > >
> > > 	if (sizeof(threadid) <= sizeof(long))
> > > 		return (long)threadid;
> > >
> > > End of problem, right?  It's a cheap runtime test in a function
> > > whose speed isn't critical anyway.
> >
> > Note that this is what autoconf is for. It also helps to group all that
> > behaviour-testing code together, in one big lump no one pretends to
> > understand ;)
> >
> > --
> > Thomas Wouters <thomas@xs4all.net>
> >
> > Hi! I'm a .signature virus! copy me into your .signature file to
> > help me spread!
> 
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/


From gmcm@hypernet.com  Fri Aug 18 13:35:42 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 08:35:42 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818090743.S376@xs4all.nl>
References: <1245558070-157553278@hypernet.com>; from gmcm@hypernet.com on Thu, Aug 17, 2000 at 10:07:04PM -0400
Message-ID: <1245520353-159821909@hypernet.com>

Thomas Wouters wrote:
> On Thu, Aug 17, 2000 at 10:07:04PM -0400, Gordon McMillan wrote:

> > Um, maybe I'm not seeing something, but isn't the effect of
> > "import goom.bah as snarf" the same as "from goom import bah as
> > snarf"?
> 
> I don't understand what you're saying here. 'import goom.bah'
> imports goom, then bah, and the resulting module in the local
> namespace is 'goom'. That's existing behaviour (which I find
> perplexing, but I've never run into before ;) which has changed
> in a reliable way: the local name being stored, whatever it would
> have been in a normal import, is changed into the "as-name" by
> "as <name>".

A whole lot rides on what you mean by "resulting" above. If by 
"resulting" you mean "goom", then "import goom.bah as snarf" 
would result in my namespace having "snarf" as an alias for 
"goom", and I would use "bah" as "snarf.bah". In which case 
Greg Ewing is right, and it's "import <dotted name> as ..." 
that should be outlawed, (since that's not what anyone would 
expect).

OTOH, if by "resulting" you meant "bah", things are much 
worse, because it means you must have patched code you didn't 
understand ;-b.

> If you're saying that 'import goom.bah.baz as b' won't do what
> people expect, I agree. (But neither does 'import goom.bah.baz',
> I think :-)

I disagree with the parenthetical comment. Maybe some don't 
understand it the first time they see it, but it has precedent 
(Java), and it's the only form that works in circular imports.
 
> Maybe it's the early hour, but I really don't understand the
> problem here. Of course we end up looking up 'bah' in the other
> namespace, we have to import it. And I don't know what it has to
> do with circular import either ;P

"goom.bah" ends up looking in "goom" when *used*. If, in a 
circular import situation, "goom" is already being imported, an 
"import goom.bah" will succeed, even though it can't access 
"bah" in "goom". The code sees it in sys.modules, sees that 
it's being imported, and says, "Oh heck, let's keep going, it'll 
be there before it gets used".

But "from goom import bah" will fail with an import error 
because "goom" is an empty shell, so there's no way to grab 
"bah" and bind it into the local namespace.
 


- Gordon


From bwarsaw@beopen.com  Fri Aug 18 14:27:05 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 09:27:05 -0400 (EDT)
Subject: [Python-Dev] 'import as'
References: <1245558070-157553278@hypernet.com>
 <1245520353-159821909@hypernet.com>
Message-ID: <14749.14761.275554.898385@anthem.concentric.net>

>>>>> "Gordo" == Gordon McMillan <gmcm@hypernet.com> writes:

    Gordo> A whole lot rides on what you mean by "resulting" above. If
    Gordo> by "resulting" you mean "goom", then "import goom.bah as
    Gordo> snarf" would result in my namespace having "snarf" as an
    Gordo> alias for "goom", and I would use "bah" as "snarf.bah". In
    Gordo> which case Greg Ewing is right, and it's "import <dotted
    Gordo> name> as ..."  that should be outlawed, (since that's not
    Gordo> what anyone would expect).

Right.

    Gordo> OTOH, if by "resulting" you meant "bah", things are much 
    Gordo> worse, because it means you must have patched code you didn't 
    Gordo> understand ;-b.

But I think it /is/ useful behavior for "import <dotted name> as" to
bind the rightmost attribute to the local name.  I agree though that
if that can't be done in a sane way, it has to raise an exception.
But that will frustrate users.

-Barry


From gmcm@hypernet.com  Fri Aug 18 14:28:00 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 09:28:00 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <399CEE49.8E646DC3@lemburg.com>
Message-ID: <1245517214-160010723@hypernet.com>

M.-A. Lemburg wrote:

> ... just look at what your browser does
> when you request http://www.python.org/search ... the server
> redirects you to search/ to make sure that the links embedded in
> the page are relative to search/ and not www.python.org/.

While that seems to be what Apache does, I get 40x's from 
IIS and Netscape server. Greg Ewing's demonstrated a Unix 
where the trailing slash indicates nothing useful, Tim's 
demonstrated that Windows gets confused by a trailing slash 
unless we're talking about the root directory on a drive (and 
BTW, same results if you use backslash).

On Windows, os.path.commonprefix doesn't use normcase 
and normpath, so it's completely useless anyway. (That is, it's 
really a "string" function and has nothing to do with paths).

- Gordon


From nascheme@enme.ucalgary.ca  Fri Aug 18 14:55:41 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 18 Aug 2000 07:55:41 -0600
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <399CF43A.478D7165@lemburg.com>; from M.-A. Lemburg on Fri, Aug 18, 2000 at 10:30:51AM +0200
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com> <399CF43A.478D7165@lemburg.com>
Message-ID: <20000818075541.A14919@keymaster.enme.ucalgary.ca>

On Fri, Aug 18, 2000 at 10:30:51AM +0200, M.-A. Lemburg wrote:
> Hey, we still have mxTools which gives you most of those goodies 
> and lots more ;-)

Yes, I don't understand what's wrong with a function.  It would be nice
if it was a builtin.  IMHO, all this new syntax is a bad idea.

  Neil


From fdrake@beopen.com  Fri Aug 18 15:12:24 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 10:12:24 -0400 (EDT)
Subject: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]  Lockstep iteration - eureka!)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
References: <399CEE7F.F2B865D2@nowonder.de>
 <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
Message-ID: <14749.17480.153311.549655@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > I'm stifling it, but, FWIW, I've been trying to sell "indexing" for most of
 > my adult life <wink -- but yes, in my experience too range(len(seq)) is
 > extraordinarily hard to get across to newbies at first; and I *expect*
 > [:len(seq)] to be at least as hard>.

  And "for i indexing o in ...:" is the best proposal I've seen to
resolve the whole problem in what *I* would describe as a Pythonic
way.  And it's not a new proposal.
  I haven't read the specific patch, but bugs can be fixed.  I guess a
lot of us will just have to disagree with the Guido on this one.  ;-(
  Linguistic coup, anyone?  ;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From thomas@xs4all.net  Fri Aug 18 15:17:46 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 16:17:46 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000818094239.A3A1931047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Fri, Aug 18, 2000 at 11:42:38AM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>
Message-ID: <20000818161745.U376@xs4all.nl>

On Fri, Aug 18, 2000 at 11:42:38AM +0200, Sjoerd Mullender wrote:

> Your changes for the import X as Y feature introduced a serious bug:
> I can no longer run Python at all.

> The problem is that in line 2150 of compile.c com_addopname is called
> with a NULL last argument, and the first thing com_addopname does is
> indirect off of that very argument.  On my machine (and on many other
> machines) that results in a core dump.

Hm. That's very strange. Line 2150 of compile.c calls com_addopname with
'CHILD(CHILD(subn, 0), 0)' as argument. 'subn' is supposed to be a
'dotted_as_name', which always has at least one child (a dotted_name), which
also always has at least one child (a NAME). I don't see how dotted_as_name
and dotted_name can be valid nodes, but the first child of dotted_name be
NULL.

Can you confirm that the tree is otherwise unmodified ? If you have local
patches, can you try to compile and test a 'clean' tree ? I can't reproduce
this on the machines I have access to, so if you could find out what
statement exactly is causing this behaviour, I'd be very grateful. Something
like this should do the trick, changing:

                        } else
                                com_addopname(c, STORE_NAME,
                                              CHILD(CHILD(subn, 0),0));

into

                        } else {
                                if (CHILD(CHILD(subn, 0), 0) == NULL) {
                                        com_error(c, PyExc_SyntaxError,
                                                  "NULL name for import");
                                        return;
                                }
                                com_addopname(c, STORE_NAME,
                                              CHILD(CHILD(subn, 0),0));
                        }

And then recompile, and remove site.pyc if there is one. (Unlikely, if a
crash occurred while compiling site.py, but possible.) This should raise a
SyntaxError on or about the appropriate line, at least identifying what the
problem *could* be ;)

If that doesn't yield anything obvious, and you have the time for it (and
want to do it), some 'print' statements in the debugger might help. (I'm
assuming it works more or less like GDB, in which case 'print n', 'print
n->n_child[1]', 'print subn', 'print subn->n_child[0]' and 'print
subn->n_child[1]' would be useful. I'm also assuming there isn't an easier
way to debug this, like you sending me a corefile, because corefiles
normally aren't very portable :P If it *is* portable, that'd be great.)

> In case it helps, here is the stack trace.  The crash happens when
> importing site.py.  I have not made any changes to my site.py.

Oh, it's probably worth it to re-make the Grammar (just to be sure) and
remove Lib/*.pyc. The bytecode magic changes in the patch, so that last
measure should be unnecessary, but who knows :P

breaky-breaky-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Fri Aug 18 15:21:20 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 10:21:20 -0400 (EDT)
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
In-Reply-To: <399D18FF.BD807ED5@nowonder.de>
References: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
 <399D18FF.BD807ED5@nowonder.de>
Message-ID: <14749.18016.323403.295212@cj42289-a.reston1.va.home.com>

Peter Schneider-Kamp writes:
 > What about 'indexing' xor 'in' ? Like that:
 > 
 > for i indexing sequence:      # good
 > for e in sequence:            # good
 > for i indexing e in sequence: # BAD!
 > 
 > This might help Guido to understand what it does in the
 > 'indexing' case. I admit that the third one may be a
 > bit harder to parse, so why not *leave it out*?

  I hadn't considered *not* using an "in" clause, but that is actually
pretty neat.  I'd like to see all of these allowed; disallowing "for i
indexing e in ...:" reduces the intended functionality substantially.
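For reference, here is roughly what the two allowed forms would be sugar for in current Python ("indexing" itself is only proposed syntax):

```python
seq = ["a", "b", "c"]

# for i indexing seq:
indices = []
for i in range(len(seq)):
    indices.append(i)

# for i indexing e in seq:
pairs = []
for i in range(len(seq)):
    e = seq[i]
    pairs.append((i, e))
```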


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From gmcm@hypernet.com  Fri Aug 18 15:42:20 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 10:42:20 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <14749.14761.275554.898385@anthem.concentric.net>
Message-ID: <1245512755-160278942@hypernet.com>

Barry "5 String" Warsaw wrote:

> But I think it /is/ useful behavior for "import <dotted name> as"
> to bind the rightmost attribute to the local name. 

That is certainly what I would expect (and thus the confusion 
created by my original post).

> I agree
> though that if that can't be done in a sane way, it has to raise
> an exception. But that will frustrate users.

"as" is minorly useful in dealing with name clashes between 
packages, and with reallyreallylongmodulename.

Unfortunately, it's also yet another way of screwing up circular 
imports and reloads, (unless you go whole hog, and use Jim 
Fulton's idea of an association object).

Then there's all the things that can go wrong with relative 
imports (loading the same module twice; masking anything 
outside the package with the same name).

It's not surprising that most import problems posted to c.l.py 
get more wrong answers than right. Unfortunately, there's no 
way to fix it in a backwards compatible way.

So I'm -0: it just adds complexity to an already overly complex 
area.

- Gordon


From bwarsaw@beopen.com  Fri Aug 18 15:55:18 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 10:55:18 -0400 (EDT)
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev] Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>
 <399CF43A.478D7165@lemburg.com>
 <20000818075541.A14919@keymaster.enme.ucalgary.ca>
Message-ID: <14749.20054.495550.467507@anthem.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme@enme.ucalgary.ca> writes:

    NS> On Fri, Aug 18, 2000 at 10:30:51AM +0200, M.-A. Lemburg wrote:
    >> Hey, we still have mxTools which gives you most of those
    >> goodies and lots more ;-)

    NS> Yes, I don't understand what's wrong with a function.  It
    NS> would be nice if it was a builtin.  IMHO, all this new syntax
    NS> is a bad idea.

I agree, but Guido nixed even the builtin.  Let's move on; there's
always Python 2.1.

-Barry


From akuchlin@mems-exchange.org  Fri Aug 18 16:00:37 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 11:00:37 -0400
Subject: [Python-Dev] Adding insint() function
Message-ID: <20000818110037.C27419@kronos.cnri.reston.va.us>

Four modules define insint() functions to insert an integer into a
dictionary in order to initialize constants in their module
dictionaries:

kronos Modules>grep -l insint *.c
pcremodule.c
shamodule.c
socketmodule.c
zlibmodule.c
kronos Modules>          

(Hm... I was involved with 3 of them...)  Other modules don't use a
helper function, but just do PyDict_SetItemString(d, "foo",
PyInt_FromLong(...)) directly.  

This duplication bugs me.  Shall I submit a patch to add an API
convenience function to do this, and change the modules to use it?
Suggested prototype and name: PyDict_InsertInteger( dict *, string,
long)
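For concreteness, the shared helper could look something like this; a sketch against the then-current C API, with the name taken from the suggestion above rather than any existing function:

```c
/* Sketch only: insert a C long into a module dictionary under the
 * given name.  Returns 0 on success, -1 (with the exception set) on
 * failure.  Name and signature are illustrative. */
static int
PyDict_InsertInteger(PyObject *dict, const char *name, long value)
{
    int result = -1;
    PyObject *v = PyInt_FromLong(value);
    if (v != NULL) {
        result = PyDict_SetItemString(dict, name, v);
        Py_DECREF(v);  /* the dict holds its own reference */
    }
    return result;
}
```

Note the Py_DECREF: PyDict_SetItemString does not steal the reference, which is exactly the detail each module's private insint() has to get right today.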

--amk





From bwarsaw@beopen.com  Fri Aug 18 16:06:11 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 11:06:11 -0400 (EDT)
Subject: [Python-Dev] 'import as'
References: <14749.14761.275554.898385@anthem.concentric.net>
 <1245512755-160278942@hypernet.com>
Message-ID: <14749.20707.347217.763385@anthem.concentric.net>

>>>>> "Gordo" == Gordon "Punk Cellist" McMillan <gmcm@hypernet.com> writes:

    Gordo> So I'm -0: it just adds complexity to an already overly
    Gordo> complex area.

I agree, -0 from me too.


From sjoerd@oratrix.nl  Fri Aug 18 16:06:37 2000
From: sjoerd@oratrix.nl (Sjoerd Mullender)
Date: Fri, 18 Aug 2000 17:06:37 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: Your message of Fri, 18 Aug 2000 16:17:46 +0200.
 <20000818161745.U376@xs4all.nl>
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>
 <20000818161745.U376@xs4all.nl>
Message-ID: <20000818150639.6685C31047C@bireme.oratrix.nl>

Ok, problem solved.

The problem was that because of your (I think it was you :-) earlier
change to have a Makefile in Grammar, I had an old graminit.c lying
around in my build directory.  I don't build in the source directory
and the changes for a Makefile in Grammar resulted in a file
graminit.c in the wrong place.  My subsequent change to this part of
the build process resulted in a different place for graminit.c and I
never removed the bogus graminit.c (because I didn't know about it).
However, the compiler found the bogus one, and that's why Python
crashed.

On Fri, Aug 18 2000 Thomas Wouters wrote:

> On Fri, Aug 18, 2000 at 11:42:38AM +0200, Sjoerd Mullender wrote:
> 
> > Your changes for the import X as Y feature introduced a serious bug:
> > I can no longer run Python at all.
> 
> > The problem is that in line 2150 of compile.c com_addopname is called
> > with a NULL last argument, and the first thing com_addopname does is
> > indirect off of that very argument.  On my machine (and on many other
> > machines) that results in a core dump.
> 
> Hm. That's very strange. Line 2150 of compile.c calls com_addopname with
> 'CHILD(CHILD(subn, 0), 0)' as argument. 'subn' is supposed to be a
> 'dotted_as_name', which always has at least one child (a dotted_name), which
> also always has at least one child (a NAME). I don't see how dotted_as_name
> and dotted_name can be valid nodes, but the first child of dotted_name be
> NULL.
> 
> Can you confirm that the tree is otherwise unmodified ? If you have local
> patches, can you try to compile and test a 'clean' tree ? I can't reproduce
> this on the machines I have access to, so if you could find out what
> statement exactly is causing this behaviour, I'd be very grateful. Something
> like this should do the trick, changing:
> 
>                         } else
>                                 com_addopname(c, STORE_NAME,
>                                               CHILD(CHILD(subn, 0),0));
> 
> into
> 
>                         } else {
>                                 if (CHILD(CHILD(subn, 0), 0) == NULL) {
>                                         com_error(c, PyExc_SyntaxError,
>                                                   "NULL name for import");
>                                         return;
>                                 }
>                                 com_addopname(c, STORE_NAME,
>                                               CHILD(CHILD(subn, 0),0));
>                         }
> 
> And then recompile, and remove site.pyc if there is one. (Unlikely, if a
> crash occurred while compiling site.py, but possible.) This should raise a
> SyntaxError on or about the appropriate line, at least identifying what the
> problem *could* be ;)
> 
> If that doesn't yield anything obvious, and you have the time for it (and
> want to do it), some 'print' statements in the debugger might help. (I'm
> assuming it works more or less like GDB, in which case 'print n', 'print
> n->n_child[1]', 'print subn', 'print subn->n_child[0]' and 'print
> subn->n_child[1]' would be useful. I'm also assuming there isn't an easier
> way to debug this, like you sending me a corefile, because corefiles
> normally aren't very portable :P If it *is* portable, that'd be great.)
> 
> > In case it helps, here is the stack trace.  The crash happens when
> > importing site.py.  I have not made any changes to my site.py.
> 
> Oh, it's probably worth it to re-make the Grammar (just to be sure) and
> remove Lib/*.pyc. The bytecode magic changes in the patch, so that last
> measure should be unnecessary, but who knows :P
> 
> breaky-breaky-ly y'rs,
> -- 
> Thomas Wouters <thomas@xs4all.net>
> 
> Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
> 

-- Sjoerd Mullender <sjoerd.mullender@oratrix.com>


From bwarsaw@beopen.com  Fri Aug 18 16:27:41 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 11:27:41 -0400 (EDT)
Subject: [Python-Dev] Adding insint() function
References: <20000818110037.C27419@kronos.cnri.reston.va.us>
Message-ID: <14749.21997.872741.463566@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin@mems-exchange.org> writes:

    AK> Four modules define insint() functions to insert an integer
    AK> into a dictionary in order to initialize constants in their
    AK> module dictionaries:

    | kronos Modules>grep -l insint *.c
    | pcremodule.c
    | shamodule.c
    | socketmodule.c
    | zlibmodule.c
    | kronos Modules>          

    AK> (Hm... I was involved with 3 of them...)  Other modules don't
    AK> use a helper function, but just do PyDict_SetItemString(d,
    AK> "foo", PyInt_FromLong(...)) directly.

    AK> This duplication bugs me.  Shall I submit a patch to add an
    AK> API convenience function to do this, and change the modules to
    AK> use it?  Suggested prototype and name: PyDict_InsertInteger(
    AK> dict *, string, long)

+0, but it should probably be called PyDict_SetItemSomething().  It
seems more related to the other PyDict_SetItem*() functions, even
though in those cases the `*' refers to the type of the key, not the
value.

-Barry


From akuchlin@mems-exchange.org  Fri Aug 18 16:29:47 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 11:29:47 -0400
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <14749.21997.872741.463566@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 11:27:41AM -0400
References: <20000818110037.C27419@kronos.cnri.reston.va.us> <14749.21997.872741.463566@anthem.concentric.net>
Message-ID: <20000818112947.F27419@kronos.cnri.reston.va.us>

On Fri, Aug 18, 2000 at 11:27:41AM -0400, Barry A. Warsaw wrote:
>+0, but it should probably be called PyDict_SetItemSomething().  It
>seems more related to the other PyDict_SetItem*() functions, even
>though in those cases the `*' refers to the type of the key, not the
>value.

PyDict_SetItemInteger seems misleading; PyDict_SetItemStringToInteger 
is simply too long.  PyDict_SetIntegerItem, maybe?  :)

Anyway, I'll start working on a patch and change the name later once
there's a good suggestion.

--amk


From mal@lemburg.com  Fri Aug 18 16:41:14 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 17:41:14 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <1245517214-160010723@hypernet.com>
Message-ID: <399D591A.F909CF9D@lemburg.com>

Gordon McMillan wrote:
> 
> M.-A. Lemburg wrote:
> 
> > ... just look at what your browser does
> > when you request http://www.python.org/search ... the server
> > redirects you to search/ to make sure that the links embedded in
> > the page are relative to search/ and not www.python.org/.
> 
> While that seems to be what Apache does, I get 40x's from
> IIS and Netscape server. Greg Ewing's demonstrated a Unix
> where the trailing slash indicates nothing useful, Tim's
> demonstrated that Windows gets confused by a trailing slash
> unless we're talking about the root directory on a drive (and
> BTW, same results if you use backslash).
> 
> On Windows, os.path.commonprefix doesn't use normcase
> and normpath, so it's completely useless anyway. (That is, it's
> really a "string" function and has nothing to do with paths).

I still don't get it: what's the point in carelessly dropping
valid and useful information for no obvious reason at all ?

Besides the previous behaviour was documented and most probably
used in some apps. Why break those ?

And last but not least: what if the directory in question doesn't
even exist anywhere and is only encoded in the path by the fact
that there is a slash following it ?

Puzzled by needless discussions ;-),
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From fdrake@beopen.com  Fri Aug 18 16:42:59 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 11:42:59 -0400 (EDT)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <14749.21997.872741.463566@anthem.concentric.net>
References: <20000818110037.C27419@kronos.cnri.reston.va.us>
 <14749.21997.872741.463566@anthem.concentric.net>
Message-ID: <14749.22915.717712.613834@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > +0, but it should probably be called PyDict_SetItemSomething().  It
 > seems more related to the other PyDict_SetItem*() functions, even
 > though in those cases the `*' refers to the type of the key, not the
 > value.

  Hmm... How about PyDict_SetItemStringInt() ?  It's still long, but I
don't think that's actually a problem.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From thomas@xs4all.net  Fri Aug 18 17:22:46 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 18:22:46 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <14749.20707.347217.763385@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 11:06:11AM -0400
References: <14749.14761.275554.898385@anthem.concentric.net> <1245512755-160278942@hypernet.com> <14749.20707.347217.763385@anthem.concentric.net>
Message-ID: <20000818182246.V376@xs4all.nl>

On Fri, Aug 18, 2000 at 11:06:11AM -0400, Barry A. Warsaw wrote:

>     Gordo> So I'm -0: it just adds complexity to an already overly
>     Gordo> complex area.

> I agree, -0 from me too.

What are we voting on, here ?

import <name> as <localname> (in general)

or

import <name1>.<nameN>+ as <localname> (where localname turns out to be an alias
for name1)

or

import <name1>.<nameN>*.<nameX> as <localname> (where localname turns out to
be an alias for nameX, that is, the last part of the dotted name that's being
imported)

? 
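With a real dotted module (os.path, as a stand-in for goom.bah), the difference between the second and third options is just which object the new name is bound to; plain assignments here play the role of the proposed 'as':

```python
import os.path   # the pre-'as' dotted import binds the head name, "os"

head_alias = os        # option 2: the alias refers to the head package
tail_alias = os.path   # option 3: the alias refers to the rightmost module
```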

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gmcm@hypernet.com  Fri Aug 18 17:28:49 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 12:28:49 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <399D591A.F909CF9D@lemburg.com>
Message-ID: <1245506365-160663281@hypernet.com>

M.-A. Lemburg wrote:
> Gordon McMillan wrote:
> > 
> > M.-A. Lemburg wrote:
> > 
> > > ... just look at what your browser does
> > > when you request http://www.python.org/search ... the server
> > > redirects you to search/ to make sure that the links embedded
> > > in the page are relative to search/ and not www.python.org/.
> > 
> > While that seems to be what Apache does, I get 40x's from
> > IIS and Netscape server. Greg Ewing's demonstrated a Unix
> > where the trailing slash indicates nothing useful, Tim's
> > demonstrated that Windows gets confused by a trailing slash
> > unless we're talking about the root directory on a drive (and
> > BTW, same results if you use backslash).
> > 
> > On Windows, os.path.commonprefix doesn't use normcase
> > and normpath, so it's completely useless anyway. (That is, it's
> > really a "string" function and has nothing to do with paths).
> 
> I still don't get it: what's the point in carelessly dropping
> valid and useful information for no obvious reason at all ?

I wasn't advocating anything. I was pointing out that it's not 
necessarily "valid" or "useful" information in all contexts.
 
> Besides the previous behaviour was documented and most probably
> used in some apps. Why break those ?

I don't think commonprefix should be changed, precisely 
because it might break apps. I also think it should not live in 
os.path, because it is not an abstract path operation. It's just 
a string operation. But it's there, so the best I can advise is 
not to use it.
 
> And last not least: what if the directory in question doesn't
> even exist anywhere and is only encoded in the path by the fact
> that there is a slash following it ?

If it doesn't exist, it's not a directory with or without a slash 
following it. The fact that Python largely successfully reuses 
os.path code to deal with URLs does not mean that the 
syntax of URLs should be mistaken for the syntax of file 
systems, even at an abstract level. At the level where 
commonprefix operates, abstraction isn't even a concept.

- Gordon


From gmcm@hypernet.com  Fri Aug 18 17:33:12 2000
From: gmcm@hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 12:33:12 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818182246.V376@xs4all.nl>
References: <14749.20707.347217.763385@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 11:06:11AM -0400
Message-ID: <1245506103-160679086@hypernet.com>

Thomas Wouters wrote:
> On Fri, Aug 18, 2000 at 11:06:11AM -0400, Barry A. Warsaw wrote:
> 
> >     Gordo> So I'm -0: it just adds complexity to an already
> >     overly Gordo> complex area.
> 
> > I agree, -0 from me too.
> 
> What are we voting on, here ?
> 
> import <name> as <localname> (in general)

 -0

> import <name1>.<nameN>+ as <localname> (where localname turns out
> an alias for name1)

-1000

> import <name1>.<nameN>*.<nameX> as <localname> (where localname
> turns out an alias for nameX, that is, the last part of the
> dotted name that's being imported)

-0



- Gordon


From thomas@xs4all.net  Fri Aug 18 17:41:31 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 18:41:31 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000818150639.6685C31047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Fri, Aug 18, 2000 at 05:06:37PM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl>
Message-ID: <20000818184131.W376@xs4all.nl>

On Fri, Aug 18, 2000 at 05:06:37PM +0200, Sjoerd Mullender wrote:
> Ok, problem solved.

> The problem was that because of your (I think it was you :-) earlier
> change to have a Makefile in Grammar, I had an old graminit.c lying
> around in my build directory. 

Right. That patch was mine, and I think we should remove it again :P We
aren't changing Grammar *that* much, and we'll just have to 'make Grammar'
individually. Grammar now also gets re-made much too often (though that
doesn't really present any problems, it's just a tad sloppy.) Do we really
want that in the released package ?

The Grammar dependency can't really be solved until dependencies in general
are handled better (or at all), especially between directories. It'll only
create more Makefile spaghetti :P
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Fri Aug 18 17:41:35 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 12:41:35 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <1245506365-160663281@hypernet.com>
References: <399D591A.F909CF9D@lemburg.com>
 <1245506365-160663281@hypernet.com>
Message-ID: <14749.26431.198802.970572@cj42289-a.reston1.va.home.com>

Gordon McMillan writes:
 > I don't think commonprefix should be changed, precisely 
 > because it might break apps. I also think it should not live in 
 > os.path, because it is not an abstract path operation. It's just 
 > a string operation. But it's there, so the best I can advise is 
 > not to use it.

  This works.  Let's accept (some variant of) Skip's desired
functionality as os.path.splitprefix(); this avoids breaking existing
code and uses a name that's consistent with the others.  The result
can be (prefix, [list of suffixes]).  Trailing slashes should be
handled so that os.path.join(prefix, suffix) does the "right thing".
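A sketch of what that splitprefix() could do, assuming the (prefix, [suffixes]) shape described above; neither the name nor the function exists in os.path today:

```python
def splitprefix(paths, sep="/"):
    """Split paths into (common prefix, [per-path suffixes]),
    stopping the prefix at a path-component boundary."""
    split = [p.split(sep) for p in paths]
    prefix_parts = []
    for parts in zip(*split):
        if len(set(parts)) != 1:   # components diverge here
            break
        prefix_parts.append(parts[0])
    prefix = sep.join(prefix_parts)
    suffixes = [p[len(prefix):].lstrip(sep) for p in paths]
    return prefix, suffixes

prefix, suffixes = splitprefix(["/usr/local/lib", "/usr/lib"])
# prefix == "/usr", suffixes == ["local/lib", "lib"]
```

Because the prefix always ends at a separator, joining it back onto any suffix reconstructs the corresponding original path.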


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From fdrake@beopen.com  Fri Aug 18 17:37:02 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 12:37:02 -0400 (EDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818182246.V376@xs4all.nl>
References: <14749.14761.275554.898385@anthem.concentric.net>
 <1245512755-160278942@hypernet.com>
 <14749.20707.347217.763385@anthem.concentric.net>
 <20000818182246.V376@xs4all.nl>
Message-ID: <14749.26158.777771.458507@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > What are we voting on, here ?

  We should be really clear about this, since it is confusing.

 > import <name> as <localname> (in general)

  +1 for this basic usage.

 > import <name1>.<nameN>+ as <localname> (where localname turns out an alias
 > for name1)

  -1, because it's confusing for users

 > import <name1>.<nameN>*.<nameX> as <localname> (where localname turns out an
 > alias for nameX, that is, the last part of the dotted name that's being
 > imported)

  +1 on the idea, but the circular import issue is very real and I'm
not sure of the best way to solve it.
  For now, let's support:

	import name1 as localname

where neither name1 nor localname can be dotted.  The dotted-name1
case can be added when the circular import issue can be dealt with.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From trentm@ActiveState.com  Fri Aug 18 17:54:12 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 09:54:12 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEHNHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 18, 2000 at 02:30:40AM -0400
References: <20000817164137.U17689@lyra.org> <LNBBLJKPBEHFEDALKOLCGEHNHAAA.tim_one@email.msn.com>
Message-ID: <20000818095412.C11316@ActiveState.com>

On Fri, Aug 18, 2000 at 02:30:40AM -0400, Tim Peters wrote:
> [Greg Stein]
> > ...
> > IOW, an x-plat TLS is going to be done at some point. If you need it now,
> > then please do it now. That will help us immeasurably in the long run.
> 
> the former's utter bogosity.  From Trent's POV, I bet the one-liner
> workaround sounds more appealing.
> 

Yes.

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From trentm@ActiveState.com  Fri Aug 18 17:56:24 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 09:56:24 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818040034.F17689@lyra.org>; from gstein@lyra.org on Fri, Aug 18, 2000 at 04:00:34AM -0700
References: <20000818091703.T376@xs4all.nl> <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com> <20000818040034.F17689@lyra.org>
Message-ID: <20000818095624.D11316@ActiveState.com>

On Fri, Aug 18, 2000 at 04:00:34AM -0700, Greg Stein wrote:
> That is a silly approach, Tim. This is exactly what autoconf is for. Using
> run-time logic to figure out something that is compile-time is Badness.
> 
> > > On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:
> > >
> > > > So how about a runtime test for what's actually important (and it's not
> > > > Monterey!)?
> > > >
> > > > 	if (sizeof(threadid) <= sizeof(long))
> > > > 		return (long)threadid;
> > > >
> > > > End of problem, right?  It's a cheap runtime test in a function
> > > > whose speed isn't critical anyway.
> > >

I am inclined to agree with Thomas and Greg on this one. Why not check for
sizeof(pthread_t) if pthread.h exists and test:

#if SIZEOF_PTHREAD_T < SIZEOF_LONG
    return (long)threadid;
#endif


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From tim_one@email.msn.com  Fri Aug 18 18:09:05 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 13:09:05 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818040034.F17689@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEIPHAAA.tim_one@email.msn.com>

[Greg Stein]
> That is a silly approach, Tim. This is exactly what autoconf is for.
> Using run-time logic to figure out something that is compile-time
> is Badness.

Remain -0.  autoconf may work slick as snot on Unix derivatives, but each
new symbol it introduces also serves to make people on other platforms
scratch their heads about what it means and what they're supposed to do with
it in their manual config files.  In this case, the alternative is an
obvious and isolated 1-liner that's transparent on inspection regardless of
the reader's background.  You haven't noted a *downside* to that approach
that I can see, and your technical objection is incorrect:  sizeof is not a
compile-time operation (try it in an #if, but make very sure it does what
you think it does <wink>).




From tim_one@email.msn.com  Fri Aug 18 18:09:07 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 13:09:07 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <003001c00901$11fd8ae0$0900a8c0@SPIFF>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIPHAAA.tim_one@email.msn.com>

[/F]
> from what I can tell, it's compatible with a long on all sane plat-
> forms (Win64 doesn't support pthreads anyway ;-), so I guess the
> right thing here is to remove volatile and simply use:
>
>     return (long) threadid;

That's what the code originally did, and the casting was introduced in
version 2.5.  As for the "volatile", Vladimir reported that he needed that.

This isn't worth the brain cell it's getting.  Put in the hack and move on
already!




From trentm@ActiveState.com  Fri Aug 18 18:23:39 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 10:23:39 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
In-Reply-To: <200008180501.WAA28237@slayer.i.sourceforge.net>; from bwarsaw@users.sourceforge.net on Thu, Aug 17, 2000 at 10:01:22PM -0700
References: <200008180501.WAA28237@slayer.i.sourceforge.net>
Message-ID: <20000818102339.E11316@ActiveState.com>

On Thu, Aug 17, 2000 at 10:01:22PM -0700, Barry Warsaw wrote:
> Update of /cvsroot/python/python/dist/src/Objects
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv28173
> 
> Modified Files:
> 	object.c 
> Log Message:
> make_pair(): When comparing the pointers, they must be cast to integer
> types (i.e. Py_uintptr_t, our spelling of C9X's uintptr_t).  ANSI
> specifies that pointer compares other than == and != to non-related
> structures are undefined.  This quiets an Insure portability warning.
> 
> 
> Index: object.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Objects/object.c,v
> retrieving revision 2.95
> retrieving revision 2.96
> diff -C2 -r2.95 -r2.96
> *** object.c	2000/08/16 12:24:51	2.95
> --- object.c	2000/08/18 05:01:19	2.96
> ***************
> *** 372,375 ****
> --- 372,377 ----
>   {
>   	PyObject *pair;
> + 	Py_uintptr_t iv = (Py_uintptr_t)v;
> + 	Py_uintptr_t iw = (Py_uintptr_t)w;
>   
>   	pair = PyTuple_New(2);
> ***************
> *** 377,381 ****
>   		return NULL;
>   	}
> ! 	if (v <= w) {
>   		PyTuple_SET_ITEM(pair, 0, PyLong_FromVoidPtr((void *)v));
>   		PyTuple_SET_ITEM(pair, 1, PyLong_FromVoidPtr((void *)w));
> --- 379,383 ----
>   		return NULL;
>   	}
> ! 	if (iv <= iw) {
>   		PyTuple_SET_ITEM(pair, 0, PyLong_FromVoidPtr((void *)v));
>   		PyTuple_SET_ITEM(pair, 1, PyLong_FromVoidPtr((void *)w));
> ***************
> *** 488,492 ****
>   	}
>   	if (vtp->tp_compare == NULL) {
> ! 		return (v < w) ? -1 : 1;
>   	}
>   	_PyCompareState_nesting++;
> --- 490,496 ----
>   	}
>   	if (vtp->tp_compare == NULL) {
> ! 		Py_uintptr_t iv = (Py_uintptr_t)v;
> ! 		Py_uintptr_t iw = (Py_uintptr_t)w;
> ! 		return (iv < iw) ? -1 : 1;
>   	}
>   	_PyCompareState_nesting++;
> 

Can't you just do the cast for the comparison instead of making new
variables?

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From bwarsaw@beopen.com  Fri Aug 18 18:41:50 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 13:41:50 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
References: <200008180501.WAA28237@slayer.i.sourceforge.net>
 <20000818102339.E11316@ActiveState.com>
Message-ID: <14749.30046.345520.779328@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm@ActiveState.com> writes:

    TM> Can't you just do the cast for the comparison instead of
    TM> making new variables?

Does it matter?


From trentm@ActiveState.com  Fri Aug 18 18:47:52 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 10:47:52 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
In-Reply-To: <14749.30046.345520.779328@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 01:41:50PM -0400
References: <200008180501.WAA28237@slayer.i.sourceforge.net> <20000818102339.E11316@ActiveState.com> <14749.30046.345520.779328@anthem.concentric.net>
Message-ID: <20000818104752.A15002@ActiveState.com>

On Fri, Aug 18, 2000 at 01:41:50PM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TM" == Trent Mick <trentm@ActiveState.com> writes:
> 
>     TM> Can't you just do the cast for the comparison instead of
>     TM> making new variables?
> 
> Does it matter?

No, I guess not. Just being a nitpicker first thing in the morning. Revving
up for real work. :)

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From tim_one@email.msn.com  Fri Aug 18 18:52:20 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 13:52:20 -0400
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <20000818110037.C27419@kronos.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEJCHAAA.tim_one@email.msn.com>

[Andrew Kuchling]
> Four modules define insint() functions to insert an integer into a

  Five                         or macro

> dictionary in order to initialize constants in their module
> dictionaries:
>
> kronos Modules>grep -l insint *.c
> pcremodule.c
> shamodule.c
> socketmodule.c
> zlibmodule.c
> kronos Modules>

It's actually a macro in shamodule.  Missing is _winreg.c (in the PC
directory).  The perils of duplication manifest in subtle differences among
these guys (like _winreg's inserts a long while the others insert an int --
and _winreg is certainly more correct here because a Python int *is* a C
long; and they differ in treatment of errors, and it's not at all clear
that's intentional).

> ...
> This duplication bugs me.  Shall I submit a patch to add an API
> convenience function to do this, and change the modules to use it?
> Suggested prototype and name: PyDict_InsertInteger( dict *, string,
> long)

+1, provided the treatment of errors is clearly documented.




From akuchlin@mems-exchange.org  Fri Aug 18 18:58:33 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 13:58:33 -0400
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEJCHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 18, 2000 at 01:52:20PM -0400
References: <20000818110037.C27419@kronos.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCIEJCHAAA.tim_one@email.msn.com>
Message-ID: <20000818135833.K27419@kronos.cnri.reston.va.us>

On Fri, Aug 18, 2000 at 01:52:20PM -0400, Tim Peters wrote:
>+1, provided the treatment of errors is clearly documented.

The treatment of errors in module init functions seems to be simply
charge ahead and do the inserts, and then do 'if (PyErr_Occurred())
Py_FatalError()'.  The new function will probably return NULL if
there's an error, but I doubt anyone will check it; it's too ungainly
to write 
  if ( (PyDict_SetItemStringInt(d, "foo", FOO)) == NULL ||
       (PyDict_SetItemStringInt(d, "bar", BAR)) == NULL || 
       ... repeat for 187 more constants ...

--amk
       



From Vladimir.Marangozov@inrialpes.fr  Fri Aug 18 19:17:53 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 18 Aug 2000 20:17:53 +0200 (CEST)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <20000818135833.K27419@kronos.cnri.reston.va.us> from "Andrew Kuchling" at Aug 18, 2000 01:58:33 PM
Message-ID: <200008181817.UAA07799@python.inrialpes.fr>

Andrew Kuchling wrote:
> 
> On Fri, Aug 18, 2000 at 01:52:20PM -0400, Tim Peters wrote:
> >+1, provided the treatment of errors is clearly documented.
> 
> The treatment of errors in module init functions seems to be simply
> charge ahead and do the inserts, and then do 'if (PyErr_Occurred())
> Py_FatalError()'.  The new function will probably return NULL if
> there's an error, but I doubt anyone will check it; it's too ungainly
> to write 
>   if ( (PyDict_SetItemStringInt(d, "foo", FOO)) == NULL ||
>        (PyDict_SetItemStringInt(d, "bar", BAR)) == NULL || 
>        ... repeat for 187 more constants ...

:-)

So name it PyModule_AddConstant(module, name, constant),
which fails with "can't add constant to module" err msg.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From tim_one@email.msn.com  Fri Aug 18 19:24:57 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 14:24:57 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818095624.D11316@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEJGHAAA.tim_one@email.msn.com>

[Trent Mick]
> I am inclined to agree with Thomas and Greg on this one. Why not check for
> sizeof(pthread_t) if pthread.h exists and test:
>
> #if SIZEOF_PTHREAD_T < SIZEOF_LONG
>     return (long)threadid;
> #endif

Change "<" to "<=" and I won't gripe.




From fdrake@beopen.com  Fri Aug 18 19:40:48 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 14:40:48 -0400 (EDT)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <200008181817.UAA07799@python.inrialpes.fr>
References: <20000818135833.K27419@kronos.cnri.reston.va.us>
 <200008181817.UAA07799@python.inrialpes.fr>
Message-ID: <14749.33584.683341.684523@cj42289-a.reston1.va.home.com>

Vladimir Marangozov writes:
 > So name it PyModule_AddConstant(module, name, constant),
 > which fails with "can't add constant to module" err msg.

  Even better!  I expect there should be at least a couple of these;
one for ints, one for strings.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From tim_one@email.msn.com  Fri Aug 18 19:37:19 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 14:37:19 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
In-Reply-To: <20000818102339.E11316@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJGHAAA.tim_one@email.msn.com>

[Trent Mick]
> > ...
> >   	if (vtp->tp_compare == NULL) {
> > ! 		Py_uintptr_t iv = (Py_uintptr_t)v;
> > ! 		Py_uintptr_t iw = (Py_uintptr_t)w;
> > ! 		return (iv < iw) ? -1 : 1;
> >   	}
> Can't you just do the cast for the comparison instead of making new
> variables?

Any compiler worth beans will optimize them out of existence.  In the
meantime, it makes each line (to my eyes) short, clear, and something I can
set a debugger breakpoint on in debug mode if I suspect the cast isn't
working as intended.




From effbot@telia.com  Fri Aug 18 19:42:34 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 20:42:34 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>             <20000818161745.U376@xs4all.nl>  <20000818150639.6685C31047C@bireme.oratrix.nl>
Message-ID: <000001c00945$a8d37e40$f2a6b5d4@hagrid>

sjoerd wrote:

> The problem was that because of your (I think it was you :-) earlier
> change to have a Makefile in Grammar, I had an old graminit.c lying
> around in my build directory.  I don't build in the source directory
> and the changes for a Makefile in Grammar resulted in a file
> graminit.c in the wrong place.

is the Windows build system updated to generate new
graminit files if the Grammar is updated?

or is python development a unix-only thingie these days?

</F>



From tim_one@email.msn.com  Fri Aug 18 20:05:29 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 15:05:29 -0400
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <000001c00945$a8d37e40$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEJIHAAA.tim_one@email.msn.com>

[/F]
> is the Windows build system updated to generate new
> graminit files if the Grammar is updated?

No, not yet.

> or is python development a unix-only thingie these days?

It pretty much always has been!  Just ask Jack <wink>.  It's unreasonable to
expect Unix(tm) weenies to keep the other builds working (although vital
that they tell Python-Dev when they do something "new & improved"), and
counterproductive to imply that their inability to do so should deter them
from improving the build process on their platform.  In some ways, building
is more pleasant under Windows, and if turnabout is fair play the Unix
droids could insist we build them a honking powerful IDE <wink>.




From m.favas@per.dem.csiro.au  Fri Aug 18 20:08:36 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Sat, 19 Aug 2000 03:08:36 +0800
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky
 return statements
References: <20000816172425.A32338@ActiveState.com> <003001c00901$11fd8ae0$0900a8c0@SPIFF>
Message-ID: <399D89B4.476FB5EF@per.dem.csiro.au>

OK - 

return (long) threadid;

compiles without warnings, and all tests pass on OSF/1 (aka Tru64 Unix).
Removing the "volatile" is also fine for me, but may affect Vladimir.
I'm still a bit (ha!) confused by Tim's comments that the function is
bogus for OSF/1 because it throws away half the bits, and will therefore
result in id collisions - this will only happen on platforms where
sizeof(long) is less than sizeof(pointer), which is not OSF/1 (but is
Win64). Also, one of the suggested tests only cast the pointer to a long
if SIZEOF_PTHREAD_T < SIZEOF_LONG - that should surely be <= ...

In summary, whatever issue there was for OSF/1 six (or so) years ago
appears to be no longer relevant - but there will be the truncation
issue for Win64-like platforms.

Mark

Fredrik Lundh wrote:
> 
> trent mick wrote:
> >     return (long) *(long *) &threadid;
> 
> from what I can tell, pthread_t is a pointer under OSF/1.
> 
> I've been using OSF/1 since the early days, and as far as I can
> remember, you've never needed to use stupid hacks like that
> to convert a pointer to a long integer. an ordinary (long) cast
> should be sufficient.
> 
> > Could this be changed to
> >   return threadid;
> > safely?
> 
> safely, yes.  but since it isn't a long on all platforms, you might
> get warnings from the compiler (see Mark's mail).
> 
> :::
> 
> from what I can tell, it's compatible with a long on all sane plat-
> forms (Win64 doesn't support pthreads anyway ;-), so I guess the
> right thing here is to remove volatile and simply use:
> 
>     return (long) threadid;
> 
> (Mark: can you try this out on your box?  setting up a Python 2.0
> environment on our alphas would take more time than I can spare
> right now...)
> 
> </F>

-- 
Email  - m.favas@per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913


From Vladimir.Marangozov@inrialpes.fr  Fri Aug 18 20:09:48 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 18 Aug 2000 21:09:48 +0200 (CEST)
Subject: [Python-Dev] Introducing memprof (was PyErr_NoMemory)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEHJHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 18, 2000 01:43:14 AM
Message-ID: <200008181909.VAA08003@python.inrialpes.fr>

[Tim, on PyErr_NoMemory]
>
> Looks good to me.  And if it breaks something, it will be darned hard to
> tell <wink>.

It's easily demonstrated with the memprof.c module I'd like to introduce
quickly here.

Note: I'll be out of town next week and if someone wants to
play with this, tell me what to do quickly: upload a (postponed) patch
which goes together with obmalloc.c, put it in a web page or remain quiet.

The object allocator is well tested, the memory profiler is not so
thoroughly tested... The interface needs more work IMHO, but it's
already quite useful and fun in its current state <wink>.


Demo:


~/python/dev>python -p
Python 2.0b1 (#9, Aug 18 2000, 20:11:29)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> 
>>> # Note the -p option -- it starts any available profilers through
... # a newly introduced Py_ProfileFlag. Otherwise you'll get funny results
... # if you start memprof in the middle of an execution
... 
>>> import memprof
>>> memprof.__doc__
'This module provides access to the Python memory profiler.'
>>> 
>>> dir(memprof)
['ALIGNMENT', 'ERROR_ABORT', 'ERROR_IGNORE', 'ERROR_RAISE', 'ERROR_REPORT', 'ERROR_STOP', 'MEM_CORE', 'MEM_OBCLASS', 'MEM_OBJECT', '__doc__', '__name__', 'geterrlevel', 'getpbo', 'getprofile', 'getthreshold', 'isprofiling', 'seterrlevel', 'setpbo', 'setproftype', 'setthreshold', 'start', 'stop']
>>> 
>>> memprof.isprofiling()
1
>>> # It's running -- cool. We're now ready to get the current memory profile
... 
>>> print memprof.getprofile.__doc__
getprofile([type]) -> object

Return a snapshot of the current memory profile of the interpreter.
An optional type argument may be provided to request the profile of
a specific memory layer. It must be one of the following constants:

        MEM_CORE    - layer 1: Python core memory
        MEM_OBJECT  - layer 2: Python object memory
        MEM_OBCLASS - layer 3: Python object-specific memory 

If a type argument is not specified, the default profile is returned.
The default profile type can be set with the setproftype() function.
>>> 
>>> mp = memprof.getprofile()
>>> mp
<global memory profile, layer 2, detailed in 33 block size classes>
>>> 
>>> # now see how much mem we're using, it's a 3 tuple
... # (requested mem, minimum allocated mem, estimated mem)
... 
>>> mp.memory
(135038, 142448, 164792)
>>> mp.peakmemory
(137221, 144640, 167032)
>>> 
>>> # indeed, peak values are important. Now let's see what this gives in
... # terms of memory blocks
... 
>>> mp.blocks
(2793, 2793)
>>> mp.peakblocks
(2799, 2799)
>>> 
>>> # Again this is a 2-tuple (requested blocks, allocated blocks)
... # Now let's see the stats of the calls to the allocator.
... 
>>> mp.malloc
(4937, 0, 0)
>>> mp.calloc
(0, 0, 0)
>>> mp.realloc
(43, 0, 0)
>>> mp.free
(2144, 0, 0)
>>> 
>>> # A 3-tuple (nb of calls, nb of errors, nb of warnings by memprof)
... #
... # Good. Now let's see the memory profile detailed by size classes
... they're memory profile objects too, similar to the global profile:
>>>
>>> mp.sizeclass[0]
<size class memory profile, layer 2, block size range [1..8]>
>>> mp.sizeclass[1]
<size class memory profile, layer 2, block size range [9..16]>
>>> mp.sizeclass[2]
<size class memory profile, layer 2, block size range [17..24]>
>>> len(mp.sizeclass)
33
>>> mp.sizeclass[-1]
<size class memory profile, layer 2, block size range [257..-1]>
>>> 
>>> # The last one is for big blocks: 257 bytes and up.
... # Now let's see the detailed memory picture:
>>>
>>> for s in mp.sizeclass:                                                     
...     print "%.2d - " % s.sizeclass, "%8d %8d %8d" % s.memory
... 
00 -         0        0        0
01 -      3696     3776     5664
02 -       116      120      160
03 -     31670    34464    43080
04 -     30015    32480    38976
05 -     10736    11760    13720
06 -     10846    11200    12800
07 -      2664     2816     3168
08 -      1539     1584     1760
09 -      1000     1040     1144
10 -      2048     2112     2304
11 -      1206     1248     1352
12 -       598      624      672
13 -       109      112      120
14 -       575      600      640
15 -       751      768      816
16 -       407      408      432
17 -       144      144      152
18 -       298      304      320
19 -       466      480      504
20 -       656      672      704
21 -       349      352      368
22 -       542      552      576
23 -       188      192      200
24 -       392      400      416
25 -       404      416      432
26 -       640      648      672
27 -       441      448      464
28 -         0        0        0
29 -       236      240      248
30 -       491      496      512
31 -       501      512      528
32 -     31314    31480    31888
>>>
>>> for s in mp.sizeclass:
...     print "%.2d - " % s.sizeclass, "%8d %8d" % s.blocks
... 
00 -         0        0
01 -       236      236
02 -         5        5
03 -      1077     1077
04 -       812      812
05 -       245      245
06 -       200      200
07 -        44       44
08 -        22       22
09 -        13       13
10 -        24       24
11 -        13       13
12 -         6        6
13 -         1        1
14 -         5        5
15 -         6        6
16 -         3        3
17 -         1        1
18 -         2        2
19 -         3        3
20 -         4        4
21 -         2        2
22 -         3        3
23 -         1        1
24 -         2        2
25 -         2        2
26 -         3        3
27 -         2        2
28 -         0        0
29 -         1        1
30 -         2        2
31 -         2        2
32 -        51       51
>>>
>>> # Note that we just started the interpreter and analysed its initial
... # memory profile. You can repeat this game at any point of time,
... # look at the stats and enjoy a builtin memory profiler.
... #
... # Okay, now to the point on PyErr_NoMemory: but we need to restart
... # Python without "-p"
>>>
~/python/dev>python 
Python 2.0b1 (#9, Aug 18 2000, 20:11:29)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> 
>>> import memprof
>>> memprof.isprofiling()
0
>>> memprof.start()
memprof: freeing unknown block (0x40185e60)
memprof: freeing unknown block (0x40175098)
memprof: freeing unknown block (0x40179288)
>>>
>>> # See? We're freeing unknown blocks for memprof.
... # Okay, enough. See the docs for more:
... 
>>> print memprof.seterrlevel.__doc__
seterrlevel(flags) -> None

Set the error level of the profiler. The provided argument instructs the
profiler on how tolerant it should be against any detected simple errors
or memory corruption. The following non-exclusive values are recognized:

    ERROR_IGNORE - ignore silently any detected errors
    ERROR_REPORT - report all detected errors to stderr
    ERROR_STOP   - stop the profiler on the first detected error
    ERROR_RAISE  - raise a MemoryError exception for all detected errors
    ERROR_ABORT  - report the first error as fatal and abort immediately

The default error level is ERROR_REPORT.
>>> 
>>> # So here's your PyErr_NoMemory effect:
... 
>>> memprof.seterrlevel(memprof.ERROR_REPORT | memprof.ERROR_RAISE)
>>> 
>>> import test.regrtest
memprof: resizing unknown block (0x82111b0)
memprof: raised MemoryError.
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "./Lib/test/regrtest.py", line 39, in ?
    import random
  File "./Lib/random.py", line 23, in ?
    import whrandom
  File "./Lib/whrandom.py", line 40, in ?
    class whrandom:
MemoryError: memprof: resizing unknown block (0x82111b0)
>>> 
>>> # Okay, gotta run. There are no docs for the moment. Just the source
... and function docs. (and to avoid another exception...)
>>>
>>> memprof.seterrlevel(memprof.ERROR_IGNORE)
>>>
>>> for i in dir(memprof):
...     x = memprof.__dict__[i]
...     if hasattr(x, "__doc__"):
...             print ">>>>>>>>>>>>>>>>>>>>>>>>>>>>> [%s]" % i
...             print x.__doc__
...             print '='*70
... 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [geterrlevel]
geterrlevel() -> errflags

Get the current error level of the profiler.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [getpbo]
getpbo() -> int

Return the fixed per block overhead (pbo) used for estimations.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [getprofile]
getprofile([type]) -> object

Return a snapshot of the current memory profile of the interpreter.
An optional type argument may be provided to request the profile of
a specific memory layer. It must be one of the following constants:

        MEM_CORE    - layer 1: Python core memory
        MEM_OBJECT  - layer 2: Python object memory
        MEM_OBCLASS - layer 3: Python object-specific memory 

If a type argument is not specified, the default profile is returned.
The default profile type can be set with the setproftype() function.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [getthreshold]
getthreshold() -> int

Return the size threshold (in bytes) between small and big blocks.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [isprofiling]
isprofiling() -> 1 if profiling is currently in progress, 0 otherwise.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [seterrlevel]
seterrlevel(flags) -> None

Set the error level of the profiler. The provided argument instructs the
profiler on how tolerant it should be against any detected simple errors
or memory corruption. The following non-exclusive values are recognized:

    ERROR_IGNORE - ignore silently any detected errors
    ERROR_REPORT - report all detected errors to stderr
    ERROR_STOP   - stop the profiler on the first detected error
    ERROR_RAISE  - raise a MemoryError exception for all detected errors
    ERROR_ABORT  - report the first error as fatal and abort immediately

The default error level is ERROR_REPORT.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [setpbo]
setpbo(int) -> None

Set the fixed per block overhead (pbo) used for estimations.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [setproftype]
setproftype(type) -> None

Set the default profile type returned by getprofile() without arguments.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [setthreshold]
setthreshold(int) -> None

Set the size threshold (in bytes) between small and big blocks.
The maximum is 256. The argument is rounded up to the ALIGNMENT.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [start]
start() -> None

Start the profiler. If it has been started, this function has no effect.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [stop]
stop() -> None

Stop the profiler. If it has been stopped, this function has no effect.
======================================================================


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From tim_one@email.msn.com  Fri Aug 18 20:11:11 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 15:11:11 -0400
Subject: [Python-Dev] RE: Introducing memprof (was PyErr_NoMemory)
In-Reply-To: <200008181909.VAA08003@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEJJHAAA.tim_one@email.msn.com>

[Tim, on PyErr_NoMemory]
> Looks good to me.  And if it breaks something, it will be darned hard to
> tell <wink>.

[Vladimir Marangozov]
> It's easily demonstrated with the memprof.c module I'd like to introduce
> quickly here.
>
> Note: I'll be out of town next week and if someone wants to
> play with this, tell me what to do quickly: upload a (postponed) patch
> which goes together with obmalloc.c, put it in a web page or remain
> quiet.
>
> The object allocator is well tested, the memory profiler is not so
> thoroughly tested... The interface needs more work IMHO, but it's
> already quite useful and fun in its current state <wink>.
> ...

My bandwidth is consumed by 2.0 issues, so I won't look at it.  On the
chance that Guido gets hit by a bus, though, and I have time to kill at his
funeral, it would be nice to have it available on SourceForge.  Uploading a
postponed patch sounds fine!




From tim_one@email.msn.com  Fri Aug 18 20:26:31 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 15:26:31 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky  return statements
In-Reply-To: <399D89B4.476FB5EF@per.dem.csiro.au>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEJLHAAA.tim_one@email.msn.com>

[Mark Favas]
> return (long) threadid;
>
> compiles without warnings, and all tests pass on OSF/1 (aka Tru64 Unix).
> Removing the "volatile" is also fine for me, but may affect Vladimir.
> I'm still a bit (ha!) confused by Tim's comments that the function is
> bogus for OSF/1 because it throws away half the bits, and will therefore
> result in id collisions - this will only happen on platforms where
> sizeof(long) is less than sizeof(pointer), which is not OSF/1

Pure guess on my part -- couldn't imagine why a compiler would warn *unless*
bits were being lost.  Are you running this on an Alpha?  The comment in the
code specifically names "Alpha OSF/1" as the culprit.  I don't know anything
about OSF/1; perhaps "Alpha" is implied.

> ...
> In summary, whatever issue there was for OSF/1 six (or so) years ago
> appears to be no longer relevant - but there will be the truncation
> issue for Win64-like platforms.

And there's Vladimir's "volatile" hack.




From effbot@telia.com  Fri Aug 18 20:37:36 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 21:37:36 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
References: <LNBBLJKPBEHFEDALKOLCOEJIHAAA.tim_one@email.msn.com>
Message-ID: <00e501c0094b$c9ee2ac0$f2a6b5d4@hagrid>

tim peters wrote:


> [/F]
> > is the Windows build system updated to generate new
> > graminit files if the Grammar are updated?
> 
> No, not yet.
> 
> > or is python development a unix-only thingie these days?
> 
> It pretty much always has been!  Just ask Jack <wink>.  It's unreasonable to
> expect Unix(tm) weenies to keep the other builds working (although vital
> that they tell Python-Dev when they do something "new & improved"), and
> counterproductive to imply that their inability to do so should deter them
> from improving the build process on their platform. 

well, all I expect from them is that the repository should
be in a consistent state at all times.

(in other words, never assume that just because generated
files are rebuilt by the unix makefiles, they don't have to be
checked into the repository).

for a moment, sjoerd's problem report made me think that
someone had messed up here...  but I just checked, and
he hadn't ;-)

</F>



From thomas@xs4all.net  Fri Aug 18 20:43:34 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 21:43:34 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <000001c00945$a8d37e40$f2a6b5d4@hagrid>; from effbot@telia.com on Fri, Aug 18, 2000 at 08:42:34PM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid>
Message-ID: <20000818214333.X376@xs4all.nl>

On Fri, Aug 18, 2000 at 08:42:34PM +0200, Fredrik Lundh wrote:
> sjoerd wrote:

> > The problem was that because of your (I think it was you :-) earlier
> > change to have a Makefile in Grammar, I had an old graminit.c lying
> > around in my build directory.  I don't build in the source directory
> > and the changes for a Makefile in Grammar resulted in a file
> > graminit.c in the wrong place.

> is the Windows build system updated to generate new
> graminit files if the Grammar are updated?

No, and that's one more reason to reverse my patch ! :-) Note that I didn't
*add* the Makefile, I only added Grammar to the
directories-to-run-make-in-by-default. If the Grammar is changed, you
already need a way to run pgen (whose source resides in Parser/) to
generate the new graminit.c/graminit.h files. I have no way of knowing
whether that is the case for the windows build files. The CVS tree should
always contain up to date graminit.c/.h files!

The reason it was added was the multitude of Grammar-changing patches on SF,
and the number of people that forgot to run 'make' in Grammar/ after
applying them. I mentioned adding Grammar/ to the directories to be made,
Guido said it was a good idea, and no one complained about it until after it
was added ;P I think we can drop the idea, though, at least for (alpha, beta,
final) releases.

> or is python development a unix-only thingie these days?

Well, *my* python development is a unix-only thingie, mostly because I don't
have a compiler for under Windows. If anyone wants to send me or point me to
the canonical Windows compiler & debugger and such, in a way that won't set
me back a couple of megabucks, I'd be happy to test & debug windows as well.
Gives me *two* uses for Windows: games and Python ;)

My-boss-doesn't-pay-me-to-work-on-Windows-ly y'rs,

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From m.favas@per.dem.csiro.au  Fri Aug 18 21:33:21 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Sat, 19 Aug 2000 04:33:21 +0800
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky
 return statements
References: <LNBBLJKPBEHFEDALKOLCEEJLHAAA.tim_one@email.msn.com>
Message-ID: <399D9D91.3E76ED8D@per.dem.csiro.au>

Tim Peters wrote:
> 
> [Mark Favas]
> > return (long) threadid;
> >
> > compiles without warnings, and all tests pass on OSF/1 (aka Tru64 Unix).
> > Removing the "volatile" is also fine for me, but may affect Vladimir.
> > I'm still a bit (ha!) confused by Tim's comments that the function is
> > bogus for OSF/1 because it throws away half the bits, and will therefore
> > result in id collisions - this will only happen on platforms where
> > sizeof(long) is less than sizeof(pointer), which is not OSF/1
> 
> Pure guess on my part -- couldn't imagine why a compiler would warn *unless*
> bits were being lost.  Are you running this on an Alpha?  The comment in the
> code specifically names "Alpha OSF/1" as the culprit.  I don't know anything
> about OSF/1; perhaps "Alpha" is implied.

Yep - I'm running on an Alpha. The name of the OS has undergone a couple
of, um, appellation transmogrifications since the first Alpha was
produced by DEC: OSF/1 -> Digital Unix -> Tru64 Unix, although uname has
always reported "OSF1". (I don't think that there's any other
implementation of OSF/1 left these days... not that uses the name,
anyway.)

> 
> > ...
> > In summary, whatever issue there was for OSF/1 six (or so) years ago
> > appears to be no longer relevant - but there will be the truncation
> > issue for Win64-like platforms.
> 
> And there's Vladimir's "volatile" hack.

Wonder if that also is still relevant (was it required because of the
long * long * cast?)...

-- 
Email  - m.favas@per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913


From bwarsaw@beopen.com  Fri Aug 18 21:45:03 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 16:45:03 -0400 (EDT)
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>
 <20000818161745.U376@xs4all.nl>
 <20000818150639.6685C31047C@bireme.oratrix.nl>
 <000001c00945$a8d37e40$f2a6b5d4@hagrid>
 <20000818214333.X376@xs4all.nl>
Message-ID: <14749.41039.166847.942483@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    TW> No, and that's one more reason to reverse my patch ! :-) Note
    TW> that I didn't *add* the Makefile, I only added Grammar to the
    TW> directories-to-run-make-in-by-default. If the Grammar is
    TW> changed, you already need a way to run pgen (of which the
    TW> source rests in Parser/) to generate the new
    TW> graminit.c/graminit.h files. I have no way of knowing whether
    TW> that is the case for the windows build files. The CVS tree
    TW> should always contain up to date graminit.c/.h files!

I don't think you need to reverse your patch because of this, although
I haven't been following this thread closely.  Just make sure that if
you commit a Grammar change, you must commit the newly generated
graminit.c and graminit.h files.

This is no different than if you change configure.in; you must commit
both that file and the generated configure file.

-Barry


From akuchlin@mems-exchange.org  Fri Aug 18 21:48:37 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 16:48:37 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101055] Cookie.py
In-Reply-To: <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Fri, Aug 18, 2000 at 04:06:20PM -0400
References: <200008181951.MAA30358@bush.i.sourceforge.net> <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
Message-ID: <20000818164837.A8423@kronos.cnri.reston.va.us>

[Overquoting for the sake of python-dev readers]

On Fri, Aug 18, 2000 at 04:06:20PM -0400, Fred L. Drake, Jr. wrote:
>amk writes:
> > I have a copy of Tim O'Malley's Cookie.py,v file (in order to preserve the
> > version history).  I can either ask the SF admins to drop it into
> > the right place in the CVS repository, but will that screw up the
> > Python 1.6 tagging in some way?  (I would expect not, but who
> > knows?)
>
>  That would have no effect on any of the Python tagging.  It's
>probably worthwhile making sure there are no tags in the ,v file, but
>that can be done after it gets dropped in place.
>  Now, Greg Stein will tell us that dropping this into place is the
>wrong thing to do.  What it *will* screw up is people asking for the
>state of Python at a specific date before the file was actually added;
>they'll get this file even for when it wasn't in the Python CVS tree.
>I can live with that, but we should make a policy decision for the
>Python tree regarding this sort of thing.

Excellent point.  GvR's probably the only person whose ruling matters
on this point; I'll try to remember it and bring it up whenever he
gets back (next week, right?).

>  Don't -- it's not worth it.

I hate throwing away information that stands even a tiny chance of
being useful; good thing the price of disk storage keeps dropping, eh?
The version history might contain details that will be useful in
future debugging or code comprehension, so I dislike losing it
forever.

(My minimalist side is saying that the enhanced Web tools should be a
separately downloadable package.  But you probably guessed that
already...)

--amk


From thomas@xs4all.net  Fri Aug 18 21:56:07 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 22:56:07 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <14749.41039.166847.942483@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 04:45:03PM -0400
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000818214333.X376@xs4all.nl> <14749.41039.166847.942483@anthem.concentric.net>
Message-ID: <20000818225607.Z376@xs4all.nl>

On Fri, Aug 18, 2000 at 04:45:03PM -0400, Barry A. Warsaw wrote:

> This is no different than if you change configure.in; you must commit
> both that file and the generated configure file.

Yes, but more critically so, since it'll screw up more than a couple of
defines on a handful of systems :-) However, this particular change in the
make process doesn't address this at all. It would merely serve to mask this
problem, in the event of someone committing a change to Grammar but not to
graminit.*. The reasoning behind the change was "if you change
Grammar/Grammar, and then type 'make', graminit.* should be regenerated
automatically, before they are used in other files." I thought the change
was a small and reasonable one, but now I don't think so anymore ;P On the
other hand, perhaps the latest changes (not mine) fixed it for real.

But I still think that if this particular makefile setup is used in
releases, 'pgen' should at least be made a tad less verbose.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Fri Aug 18 22:04:01 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 23:04:01 +0200
Subject: [Python-Dev] [Patch #101055] Cookie.py
In-Reply-To: <20000818164837.A8423@kronos.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Fri, Aug 18, 2000 at 04:48:37PM -0400
References: <200008181951.MAA30358@bush.i.sourceforge.net> <14749.38716.228254.649957@cj42289-a.reston1.va.home.com> <20000818164837.A8423@kronos.cnri.reston.va.us>
Message-ID: <20000818230401.A376@xs4all.nl>

On Fri, Aug 18, 2000 at 04:48:37PM -0400, Andrew Kuchling wrote:

[ About adding Cookie.py including CVS history ]

> I hate throwing away information that stands even a tiny chance of
> being useful; good thing the price of disk storage keeps dropping, eh?
> The version history might contain details that will be useful in
> future debugging or code comprehension, so I dislike losing it
> forever.

It would be moderately nice to have the versioning info, though I think Fred
has a point when he says that it might be confusing for people: it would
look like the file had been in the CVS repository the whole time, and it
would be very hard to see when the file had been added to Python. And what
about new versions ? Would this file be adopted by Python, would changes by
the original author be incorporated ? How about version history for those
changes ? In my experience, the way it usually goes is that
such files are updated only periodically. Would those updates incorporate
the history of those changes as well ?

Unless Cookie.py is really split off, and we're going to maintain a separate
version, I don't think it's worth worrying about the version history as
such. Pointing to the 'main' version and its history should be enough.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From bwarsaw@beopen.com  Fri Aug 18 22:13:31 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:13:31 -0400 (EDT)
Subject: [Python-Dev] gettext in the standard library
Message-ID: <14749.42747.411862.940207@anthem.concentric.net>

Apologies for duplicates to those of you already on python-dev...

I've been working on merging all the various implementations of Python
interfaces to the GNU gettext libraries.  I've worked from code
contributed by Martin, James, and Peter.  I now have something that
seems to work fairly well so I thought I'd update you all.

After looking at all the various wizzy and experimental stuff in these
implementations, I opted for simplicity, mostly just so I could get my
head around what was needed.  My goal was to build a fast C wrapper
module around the C library, and to provide a pure Python
implementation of an identical API for platforms without GNU gettext.

I started with Martin's libintlmodule, renamed it _gettext and cleaned
up the C code a bit.  This provides gettext(), dgettext(),
dcgettext(), textdomain(), and bindtextdomain() functions.  The
gettext.py module imports these, and if it succeeds, it's done.

If that fails, then there's a bunch of code, mostly derived from
Peter's fintl.py module, that reads the binary .mo files and does the
look ups itself.  Note that Peter's module only supported the GNU
gettext binary format, and that's all mine does too.  It should be
easy to support other binary formats (Solaris?) by overriding one
method in one class, and contributions are welcome.
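That override-one-method extension point might look roughly like this (a
sketch only; the class and method names here are my assumptions, not the
actual gettext.py API):

```python
class Translations:
    def __init__(self, fp=None):
        self._catalog = {}
        if fp is not None:
            self._parse(fp)

    def _parse(self, fp):
        # The GNU .mo binary parser would live here; a subclass
        # overrides this one method to support another binary
        # format (Solaris, say).
        raise NotImplementedError

    def gettext(self, message):
        # Fall back to the untranslated string, as gettext does.
        return self._catalog.get(message, message)

class DictTranslations(Translations):
    def _parse(self, fp):
        # Toy "format": the file object is just a dict here.
        self._catalog.update(fp)
```

A subclass for another catalog format replaces only _parse() and inherits
the lookup machinery unchanged.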

James's stuff looked cool too, what I grokked of it :) but I think
those should be exported as higher level features.  I didn't include
the ability to write .mo files or the exported Catalog objects.  I
haven't used the I18N services enough to know whether these are
useful.

I added one convenience function, gettext.install().  If you call
this, it inserts the gettext.gettext() function into the builtins
namespace as `_'.  You'll often want to do this, based on the I18N
translatable strings marking conventions.  Note that importing gettext
does /not/ install by default!

And since (I think) you'll almost always want to call bindtextdomain()
and textdomain(), you can pass the domain and localedir in as
arguments to install.  Thus, the simple and quick usage pattern is:

    import gettext
    gettext.install('mydomain', '/my/locale/dir')

    print _('this is a localized message')
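A rough sketch of what install() is described as doing: drop the translation
function into the builtins namespace as `_', so every module sees it without
an explicit import.  (The module is spelled builtins in today's Python,
__builtin__ in the Python of this message; the identity "translation" below
is just for illustration.)

```python
import builtins  # __builtin__ in Python 1.x/2.x

def install(translate):
    # Make `_' visible everywhere, as gettext.install() is said to do.
    builtins._ = translate

install(lambda s: s)  # identity translation, for illustration only
assert _('this is a localized message') == 'this is a localized message'
```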

I think it'll be easier to critique this stuff if I just check it in.
Before I do, I still need to write up a test module and hack together
docos.  In the meantime, here's the module docstring for gettext.py.
Talk amongst yourselves. :)

-Barry

"""Internationalization and localization support.

This module provides internationalization (I18N) and localization (L10N)
support for your Python programs by providing an interface to the GNU gettext
message catalog library.

I18N refers to the operation by which a program is made aware of multiple
languages.  L10N refers to the adaptation of your program, once
internationalized, to the local language and cultural habits.  In order to
provide multilingual messages for your Python programs, you need to take the
following steps:

    - prepare your program by specially marking translatable strings
    - run a suite of tools over your marked program files to generate raw
      messages catalogs
    - create language specific translations of the message catalogs
    - use this module so that message strings are properly translated

In order to prepare your program for I18N, you need to look at all the strings
in your program.  Any string that needs to be translated should be marked by
wrapping it in _('...') -- i.e. a call to the function `_'.  For example:

    filename = 'mylog.txt'
    message = _('writing a log message')
    fp = open(filename, 'w')
    fp.write(message)
    fp.close()

In this example, the string `writing a log message' is marked as a candidate
for translation, while the strings `mylog.txt' and `w' are not.

The GNU gettext package provides a tool, called xgettext, that scans C and C++
source code looking for these specially marked strings.  xgettext generates
what are called `.pot' files, essentially structured human readable files
which contain every marked string in the source code.  These .pot files are
copied and handed over to translators who write language-specific versions for
every supported language.

For I18N Python programs however, xgettext won't work; it doesn't understand
the myriad of string types supported by Python.  The standard Python
distribution provides a tool called pygettext that does though (usually in the
Tools/i18n directory).  This is a command line script that supports a similar
interface as xgettext; see its documentation for details.  Once you've used
pygettext to create your .pot files, you can use the standard GNU gettext
tools to generate your machine-readable .mo files, which are what's used by
this module and the GNU gettext libraries.

In the simple case, to use this module then, you need only add the following
bit of code to the main driver file of your application:

    import gettext
    gettext.install()

This sets everything up so that your _('...') function calls Just Work.  In
other words, it installs `_' in the builtins namespace for convenience.  You
can skip this step and do it manually with the equivalent code:

    import gettext
    import __builtin__
    __builtin__.__dict__['_'] = gettext.gettext

Once you've done this, you probably want to call bindtextdomain() and
textdomain() to get the domain set up properly.  Again, for convenience, you
can pass the domain and localedir to install to set everything up in one fell
swoop:

    import gettext
    gettext.install('mydomain', '/my/locale/dir')

"""


From tim_one@email.msn.com  Fri Aug 18 22:13:29 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 17:13:29 -0400
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000818214333.X376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEKCHAAA.tim_one@email.msn.com>

[Thomas Wouters]
> No, and that's one more reason to reverse my patch ! :-) Note
> that I didn't *add* the Makefile, I only added Grammar to the
> directories-to-run-make-in-by-default.
> ...
> The reason it was added was the multitude of Grammar-changing
> patches on SF, and the number of people that forgot to run 'make'
> in Grammar/ after applying them. I mentioned adding Grammar/ to
> the directories to be made, Guido said it was a good idea, and
> noone complained to it until after it was added ;P

And what exactly is the complaint?  It's nice to build things that are out
of date;  I haven't used Unix(tm) for some time, but I vaguely recall that
was "make"'s purpose in life <wink>.  Or is it that the grammar files are
getting rebuilt unnecessarily?

> ...
> My-boss-doesn't-pay-me-to-work-on-Windows-ly y'rs,

Your boss *pays* you?!  Wow.  No wonder you get so much done.




From bwarsaw@beopen.com  Fri Aug 18 22:16:07 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:16:07 -0400 (EDT)
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>
 <20000818161745.U376@xs4all.nl>
 <20000818150639.6685C31047C@bireme.oratrix.nl>
 <000001c00945$a8d37e40$f2a6b5d4@hagrid>
 <20000818214333.X376@xs4all.nl>
 <14749.41039.166847.942483@anthem.concentric.net>
 <20000818225607.Z376@xs4all.nl>
Message-ID: <14749.42903.342401.245594@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    TW> Yes, but more critically so, since it'll screw up more than a
    TW> couple of defines on a handful of systems :-)

Yes, but a "cvs update" should always clue you in that those files
need committing too.  Everyone always does a "cvs update" before
committing any files, right? :)
    
    TW> But I still think that if this particular makefile setup is
    TW> used in releases, 'pgen' should at least be made a tad less
    TW> verbose.

pgen also leaks like a sieve, but it's not worth patching. ;}

-Barry


From bwarsaw@beopen.com  Fri Aug 18 22:17:14 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:17:14 -0400 (EDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101055] Cookie.py
References: <200008181951.MAA30358@bush.i.sourceforge.net>
 <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
 <20000818164837.A8423@kronos.cnri.reston.va.us>
Message-ID: <14749.42970.845587.90980@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin@mems-exchange.org> writes:

    AK> I hate throwing away information that stands even a tiny
    AK> chance of being useful; good thing the price of disk storage
    AK> keeps dropping, eh?  The version history might contain details
    AK> that will be useful in future debugging or code comprehension,
    AK> so I dislike losing it forever.

I agree.  Let's try to keep the revision history for Cookie.py.

-Barry


From tim_one@email.msn.com  Fri Aug 18 22:18:56 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 17:18:56 -0400
Subject: [Python-Dev] [PEP 214] FW: sys.stndout syntax
Message-ID: <LNBBLJKPBEHFEDALKOLCIEKDHAAA.tim_one@email.msn.com>

For those of you who stopped reading comp.lang.python, note that this is
*typical* of the advice users get in the absence of syntactic "print" sugar.

it's-not-exactly-intuitive-that-a-"print"-stmt-isn't-meant-to-
    do-output<wink>-ly y'rs  - tim

-----Original Message-----
From: python-list-admin@python.org
[mailto:python-list-admin@python.org]On Behalf Of Jeff Kunce
Sent: Friday, August 18, 2000 3:41 PM
To: python-list@python.org
Subject: Re: sys.stndout syntax


> on win32, to get printed output to a file i use
> >>> sys.stndout = open('file.txt', 'w')
> how do i get it to go back to the command line

#Here is the specific solution for those who remember the secret password:
sys.stdout = open('file.txt', 'w')
print 'this is written to file.txt'
sys.stdout.close()
sys.stdout = sys.__stdout__
print 'this is written to console'

# Here is the general solution if you can't be bothered with remembering
# secret passwords:
original_stdout = sys.stdout
sys.stdout = open('file.txt', 'w')
print 'this is written to file.txt'
sys.stdout.close()
sys.stdout = original_stdout
print 'this is written to console'
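The general solution above can be wrapped in a small helper that restores the
original stream even if the printing code raises (my sketch, not anyone's
proposed API; sys.stdout.write is used so the same code runs on old and new
Pythons):

```python
import sys

def with_stdout_to(filename, func):
    # Run func() with sys.stdout redirected to filename, restoring
    # the original stream afterwards -- even if func() raises.
    saved = sys.stdout
    sys.stdout = open(filename, 'w')
    try:
        func()
    finally:
        sys.stdout.close()
        sys.stdout = saved

with_stdout_to('file.txt',
               lambda: sys.stdout.write('this is written to file.txt\n'))
sys.stdout.write('this is written to console\n')
```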


  --Jeff


--
http://www.python.org/mailman/listinfo/python-list




From mal@lemburg.com  Fri Aug 18 22:21:23 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 23:21:23 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
Message-ID: <399DA8D3.70E85C58@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> Apologies for duplicates to those of you already on python-dev...
> 
> I've been working on merging all the various implementations of Python
> interfaces to the GNU gettext libraries.  I've worked from code
> contributed by Martin, James, and Peter.  I now have something that
> seems to work fairly well so I thought I'd update you all.
> 
> After looking at all the various wizzy and experimental stuff in these
> implementations, I opted for simplicity, mostly just so I could get my
> head around what was needed.  My goal was to build a fast C wrapper
> module around the C library, and to provide a pure Python
> implementation of an identical API for platforms without GNU gettext.

Sounds cool.

I know that gettext is a standard, but from a technology POV
I would have implemented this as a codec which can then be plugged
in wherever l10n is needed, since strings have the new .encode()
method which could just as well be used to convert the string not
only into a different encoding, but also into a different language.
Anyway, just a thought...

What I'm missing in your doc-string is a reference as to how
well gettext works together with Unicode. After all, i18n is
among other things about international character sets.
Have you done any experiments in this area ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From bwarsaw@beopen.com  Fri Aug 18 22:19:12 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:19:12 -0400 (EDT)
Subject: [Python-Dev] [Patch #101055] Cookie.py
References: <200008181951.MAA30358@bush.i.sourceforge.net>
 <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
 <20000818164837.A8423@kronos.cnri.reston.va.us>
 <20000818230401.A376@xs4all.nl>
Message-ID: <14749.43088.855537.355621@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:
    TW> It would be moderately nice to have the versioning info,
    TW> though I think Fred has a point when he says that it might be
    TW> confusing for people: it would look like the file had been in
    TW> the CVS repository the whole time, and it would be very hard
    TW> to see where the file had been added to Python.

I don't think that's true, because the file won't have the tag
information in it.  That could be a problem in and of itself, but I
dunno.

-Barry


From tim_one@email.msn.com  Fri Aug 18 22:41:18 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 17:41:18 -0400
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <14749.42903.342401.245594@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEKEHAAA.tim_one@email.msn.com>

>     TW> But I still think that if this particular makefile setup is
>     TW> used in releases, 'pgen' should at least be made a tad less
>     TW> verbose.

[Barry]
> pgen also leaks like a sieve, but it's not worth patching. ;}

Huh!  And here I thought Thomas was suggesting we shorten its name to "pge".

or-even-"p"-if-we-wanted-it-a-lot-less-verbose-ly y'rs  - tim




From bwarsaw@beopen.com  Fri Aug 18 22:49:23 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:49:23 -0400 (EDT)
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
 <399DA8D3.70E85C58@lemburg.com>
Message-ID: <14749.44899.573649.483154@anthem.concentric.net>

>>>>> "M" == M  <mal@lemburg.com> writes:

    M> I know that gettext is a standard, but from a technology POV I
    M> would have implemented this as codec wich can then be plugged
    M> wherever l10n is needed, since strings have the new .encode()
    M> method which could just as well be used to convert not only the
    M> string into a different encoding, but also a different
    M> language.  Anyway, just a thought...

That might be cool to play with, but I haven't done anything with
Python's Unicode stuff (and painfully little with gettext too) so
right now I don't see how they'd fit together.  My gut reaction is
that gettext could be the lower level interface to
string.encode(language).
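That layering could be mocked up along these lines (a toy sketch: the catalog
and lookup function are invented for illustration, and a real
string.encode(language) would have to go through the codec registry):

```python
# Toy model of "translation as encoding": look the string up in a
# per-language catalog, falling back to the original as gettext does.
CATALOGS = {'de': {'hello': 'hallo'}}

def translate(s, language):
    # The hypothetical low-level call behind s.encode(language).
    return CATALOGS.get(language, {}).get(s, s)

assert translate('hello', 'de') == 'hallo'      # known string: translated
assert translate('goodbye', 'de') == 'goodbye'  # unknown: passed through
```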

    M> What I'm missing in your doc-string is a reference as to how
    M> well gettext works together with Unicode. After all, i18n is
    M> among other things about international character sets.
    M> Have you done any experiments in this area ?

No, but I've thought about it, and I don't think the answer is good.
The GNU gettext functions take and return char*'s, which probably
isn't very compatible with Unicode.  _gettext therefore takes and
returns PyStringObjects.

We could do better with the pure-Python implementation, and that might
be a good reason to forgo any performance gains or platform-dependent
benefits you'd get with _gettext.  Of course the trick is using the
Unicode-unaware tools to build .mo files containing Unicode strings.
I don't track GNU gettext development closely enough to know whether
they are addressing Unicode issues or not.

-Barry


From effbot@telia.com  Fri Aug 18 23:06:35 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sat, 19 Aug 2000 00:06:35 +0200
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky   return statements
References: <LNBBLJKPBEHFEDALKOLCEEJLHAAA.tim_one@email.msn.com> <399D9D91.3E76ED8D@per.dem.csiro.au>
Message-ID: <006801c00960$944da200$f2a6b5d4@hagrid>

tim wrote:
> > Pure guess on my part -- couldn't imagine why a compiler would warn *unless*
> > bits were being lost.

the compiler doesn't warn about bits being lost -- it complained
because the code was returning a pointer from a function declared
to return a long integer.

(explicitly casting the pthread_t to a long gets rid of the warning).

mark wrote:
> > > In summary, whatever issue there was for OSF/1 six (or so) years ago
> > > appears to be no longer relevant - but there will be the truncation
> > > issue for Win64-like platforms.
> > 
> > And there's Vladimir's "volatile" hack.
> 
> Wonder if that also is still relevant (was it required because of the
> long * long * cast?)...

probably.  messing up when someone abuses pointer casts is one thing,
but if the AIX compiler cannot cast a long to a long, it's broken beyond
repair ;-)

frankly, the code is just plain broken.  instead of adding even more dumb
hacks, just fix it.  here's how it should be done:

    return (long) pthread_self(); /* look! no variables! */

or change

 /* Jump through some hoops for Alpha OSF/1 */

to

 /* Jump through some hoops because Tim Peters wants us to ;-) */

</F>



From bwarsaw@beopen.com  Fri Aug 18 23:03:24 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 18:03:24 -0400 (EDT)
Subject: [Python-Dev] [PEP 214] FW: sys.stndout syntax
References: <LNBBLJKPBEHFEDALKOLCIEKDHAAA.tim_one@email.msn.com>
Message-ID: <14749.45740.432586.615745@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one@email.msn.com> writes:

    TP> For those of you who stopped reading comp.lang.python, note
    TP> that this is *typical* of the advice users get in the absence
    TP> of syntactic "print" sugar.

Which is of course broken if, say, you print an object whose str()
raises an exception.


From tim_one@email.msn.com  Fri Aug 18 23:08:55 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 18:08:55 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky   return statements
In-Reply-To: <006801c00960$944da200$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEKIHAAA.tim_one@email.msn.com>

[/F]
> the compiler doesn't warn about bits being lost -- it complained
> because the code was returning a pointer from a function declared
> to return a long integer.
>
> (explicitly casting the pthread_t to a long gets rid of the warning).

For the umpty-umpth time, the code with the simple cast to long is what was
there originally.  The convoluted casting was added later to stop "Alpha
OSF/1" compiler complaints.  Perhaps the compiler no longer complains,
though, or perhaps the one or two people who have tried it since don't have
a version of the compiler that cares about it.

> ...
> frankly, the code is just plain broken.  instead of adding even more dumb
> hacks, just fix it.  here's how it should be done:
>
>     return (long) pthread_self(); /* look! no variables! */

Fine by me, provided that works on all current platforms, and it's
understood that the function is inherently hosed anyway (the cast to long is
inherently unsafe, and we're still doing nothing to meet the promise in the
docs that this function returns a non-zero integer).

> or change
>
>  /* Jump through some hoops for Alpha OSF/1 */
>
> to
>
>  /* Jump through some hoops because Tim Peters wants us to ;-) */

Also fine by me, provided that works on all current platforms, and it's
understood that the function is inherently hosed anyway (the cast to long is
inherently unsafe, and we're still doing nothing to meet the promise in the
docs that this function returns a non-zero integer).




From tim_one@email.msn.com  Fri Aug 18 23:14:25 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 18:14:25 -0400
Subject: [Python-Dev] [PEP 214] FW: sys.stndout syntax
In-Reply-To: <14749.45740.432586.615745@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEKJHAAA.tim_one@email.msn.com>

>     TP> For those of you who stopped reading comp.lang.python, note
>     TP> that this is *typical* of the advice users get in the absence
>     TP> of syntactic "print" sugar.

[Barry]
> Which is of course broken, if say, you print an object that has a
> str() that raises an exception.

Yes, and if you follow that thread on c.l.py, you'll find that it's also
typical for the suggestions to get more and more convoluted (for that and
other reasons).




From barry@scottb.demon.co.uk  Fri Aug 18 23:36:28 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Fri, 18 Aug 2000 23:36:28 +0100
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <20000814094440.0BC7F303181@snelboot.oratrix.nl>
Message-ID: <000501c00964$c00e0de0$060210ac@private>


> -----Original Message-----
> From: python-dev-admin@python.org [mailto:python-dev-admin@python.org]On
> Behalf Of Jack Jansen
> Sent: 14 August 2000 10:45
> To: Guido van Rossum
> Cc: Vladimir Marangozov; Python core developers
> Subject: Re: [Python-Dev] Preventing recursion core dumps
> 
> 
> Isn't the solution to this problem to just implement PyOS_CheckStack() for 
> unix?

	And for Windows...

	I still want to control the recursion depth for other reasons than
	preventing crashes. Especially when I have embedded Python inside my
	app. (Currently I have to defend against a GPF under Windows when
	def x(): x() is called.)

		Barry



From barry@scottb.demon.co.uk  Fri Aug 18 23:50:39 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Fri, 18 Aug 2000 23:50:39 +0100
Subject: [Python-Dev] Terminal programs (was: Python-dev summary: Jul 1-15)
In-Reply-To: <20000718124144.M29590@lyra.org>
Message-ID: <000601c00966$bac6f890$060210ac@private>

I can second Tera Term Pro. It is one of the few VT100 emulators that gets
the emulation right. Many terminal programs get the emulation wrong, often
badly. Without the docs for the VT series terminals, a developer won't know
the details of how the escape sequences should work, and apps will fail.

	Barry (Ex-DEC VT expert)

> -----Original Message-----
> From: python-dev-admin@python.org [mailto:python-dev-admin@python.org]On
> Behalf Of Greg Stein
> Sent: 18 July 2000 20:42
> To: python-dev@python.org
> Subject: [Python-Dev] Terminal programs (was: Python-dev summary: Jul
> 1-15)
> 
> 
> On Tue, Jul 18, 2000 at 10:13:21AM -0400, Andrew Kuchling wrote:
> > Thanks to everyone who made some suggestions.  The more minor
> > edits have been made, but I haven't added any of the threads I missed
> > because doing a long stretch of Emacs editing in this lame Windows terminal
> > program will drive me insane, so I just posted the summary to python-list.
> > 
> > <rant>How is it possible for Microsoft to not get a VT100-compatible
> > terminal program working correctly?  VT100s have been around since,
> > when, the 1970s?  Can anyone suggest a Windows terminal program that
> > *doesn't* suck dead bunnies through a straw?</rant>
> 
> yes.
> 
> I use "Tera Term Pro" with the SSH extensions. That gives me an excellent
> Telnet app, and it gives me SSH. I have never had a problem with it.
> 
> [ initially, there is a little tweakiness with creating the "known_hosts"
>   file, but you just hit "continue" and everything is fine after that. ]
> 
> Tera Term Pro can be downloaded from some .jp address. I think there is a
> 16-bit vs 32-bit program. I use the latter. The SSL stuff is located in Oz,
> me thinks.
> 
> I've got it on the laptop. Great stuff.
> 
> Cheers,
> -g
> 
> -- 
> Greg Stein, http://www.lyra.org/
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev
> 


From james@daa.com.au  Sat Aug 19 01:54:30 2000
From: james@daa.com.au (James Henstridge)
Date: Sat, 19 Aug 2000 08:54:30 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399DA8D3.70E85C58@lemburg.com>
Message-ID: <Pine.LNX.4.21.0008190846110.25020-100000@james.daa.com.au>

On Fri, 18 Aug 2000, M.-A. Lemburg wrote:

> What I'm missing in your doc-string is a reference as to how
> well gettext works together with Unicode. After all, i18n is
> among other things about international character sets.
> Have you done any experiments in this area ?

At the C level, gettext supports Unicode only to the extent that the
catalog happens to be encoded in UTF-8.

As an example, since GTK (a GUI toolkit) is moving to Pango (a library that
allows display of multiple languages at once), all the catalogs for GTK
programs will have to be re-encoded in UTF-8.

I don't know if it is worth adding explicit support to the python gettext
module though.

James.

-- 
Email: james@daa.com.au
WWW:   http://www.daa.com.au/~james/




From fdrake@beopen.com  Sat Aug 19 02:16:33 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 21:16:33 -0400 (EDT)
Subject: [Python-Dev] [Patch #101055] Cookie.py
In-Reply-To: <14749.43088.855537.355621@anthem.concentric.net>
References: <200008181951.MAA30358@bush.i.sourceforge.net>
 <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
 <20000818164837.A8423@kronos.cnri.reston.va.us>
 <20000818230401.A376@xs4all.nl>
 <14749.43088.855537.355621@anthem.concentric.net>
Message-ID: <14749.57329.966314.171906@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > I don't think that's true, because the file won't have the tag
 > information in it.  That could be a problem in and of itself, but I
 > dunno.

  The confusion isn't from the tags, but the dates; if the ,v was
created 2 years ago, asking for the python tree as of a year ago
(using -D <date>) will include the file, even though it wasn't part of
our repository then.  Asking for a specific tag (using -r <tag>) will
properly not include it unless there's a matching tag there.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From james@daa.com.au  Sat Aug 19 02:26:44 2000
From: james@daa.com.au (James Henstridge)
Date: Sat, 19 Aug 2000 09:26:44 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <14749.42747.411862.940207@anthem.concentric.net>
Message-ID: <Pine.LNX.4.21.0008190854480.25020-100000@james.daa.com.au>

On Fri, 18 Aug 2000, Barry A. Warsaw wrote:

> 
> Apologies for duplicates to those of you already on python-dev...
> 
> I've been working on merging all the various implementations of Python
> interfaces to the GNU gettext libraries.  I've worked from code
> contributed by Martin, James, and Peter.  I now have something that
> seems to work fairly well so I thought I'd update you all.
> 
> After looking at all the various wizzy and experimental stuff in these
> implementations, I opted for simplicity, mostly just so I could get my
> head around what was needed.  My goal was to build a fast C wrapper
> module around the C library, and to provide a pure Python
> implementation of an identical API for platforms without GNU gettext.

Sounds good.  Most of the experimental stuff in my module turned out to
not be very useful.  Having a simple gettext module plus your pyxgettext
script should be enough.

> 
> I started with Martin's libintlmodule, renamed it _gettext and cleaned
> up the C code a bit.  This provides gettext(), dgettext(),
> dcgettext(), textdomain(), and bindtextdomain() functions.  The
> gettext.py module imports these, and if it succeeds, it's done.
> 
> If that fails, then there's a bunch of code, mostly derived from
> Peter's fintl.py module, that reads the binary .mo files and does the
> look ups itself.  Note that Peter's module only supported the GNU
> gettext binary format, and that's all mine does too.  It should be
> easy to support other binary formats (Solaris?) by overriding one
> method in one class, and contributions are welcome.

I think support for the Solaris big-endian .mo format would probably be
a good idea.  It is not very difficult and doesn't really add to the
complexity.

> 
> James's stuff looked cool too, what I grokked of it :) but I think
> those should be exported as higher level features.  I didn't include
> the ability to write .mo files or the exported Catalog objects.  I
> haven't used the I18N services enough to know whether these are
> useful.

As I said above, most of that turned out not to be very useful.  Did you
include any of the language selection code in the last version of my
gettext module?  It gave behaviour very close to C gettext in this
respect.  It expands the locale name given by the user using the
locale.alias files found on the systems, then decomposes that into the
simpler forms.  For instance, if LANG=en_GB, then my gettext module would
search for catalogs by names:
  ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']

This also allows things like expanding LANG=catalan to:
  ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
(provided the appropriate locale.alias files are found)

If you missed the version I sent you, I can send it again.  It stripped
out a lot of the experimental code, giving a much simpler module.
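James's expansion rules above can be sketched as a small helper.  This is a
hypothetical reconstruction, not his actual module code: `expand_languages`
is an invented name, and it skips the locale.alias lookup step he mentions.

```python
def expand_languages(lang):
    # Decompose a locale name like 'en_GB.ISO8859-1' into progressively
    # more general catalog names to search, ending with the 'C' fallback.
    # (Sketch only; the real code first expands aliases such as
    # LANG=catalan via the system's locale.alias files.)
    if '.' in lang:
        language, codeset = lang.split('.', 1)
    else:
        language, codeset = lang, None
    if '_' in language:
        base, territory = language.split('_', 1)
    else:
        base, territory = language, None

    candidates = []
    if territory and codeset:
        candidates.append('%s_%s.%s' % (base, territory, codeset))
    if territory:
        candidates.append('%s_%s' % (base, territory))
    if codeset:
        candidates.append('%s.%s' % (base, codeset))
    candidates.append(base)
    candidates.append('C')   # ultimate fallback: no translation
    return candidates

print(expand_languages('en_GB.ISO8859-1'))
# -> ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
```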

> 
> I added one convenience function, gettext.install().  If you call
> this, it inserts the gettext.gettext() function into the builtins
> namespace as `_'.  You'll often want to do this, based on the I18N
> translatable strings marking conventions.  Note that importing gettext
> does /not/ install by default!

That sounds like a good idea that will make things a lot easier in the
common case.

> 
> And since (I think) you'll almost always want to call bindtextdomain()
> and textdomain(), you can pass the domain and localedir in as
> arguments to install.  Thus, the simple and quick usage pattern is:
> 
>     import gettext
>     gettext.install('mydomain', '/my/locale/dir')
> 
>     print _('this is a localized message')
> 
> I think it'll be easier to critique this stuff if I just check it in.
> Before I do, I still need to write up a test module and hack together
> docos.  In the meantime, here's the module docstring for gettext.py.
> Talk amongst yourselves. :)
> 
> -Barry

James.

-- 
Email: james@daa.com.au
WWW:   http://www.daa.com.au/~james/




From Vladimir.Marangozov@inrialpes.fr  Sat Aug 19 04:27:20 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 05:27:20 +0200 (CEST)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <14749.33584.683341.684523@cj42289-a.reston1.va.home.com> from "Fred L. Drake, Jr." at Aug 18, 2000 02:40:48 PM
Message-ID: <200008190327.FAA10001@python.inrialpes.fr>

Fred L. Drake, Jr. wrote:
> 
> 
> Vladimir Marangozov writes:
>  > So name it PyModule_AddConstant(module, name, constant),
>  > which fails with "can't add constant to module" err msg.
> 
>   Even better!  I expect there should be at least a couple of these;
> one for ints, one for strings.
> 

What about something like this (untested):

------------------------------------------------------------------------
int
PyModule_AddObject(PyObject *m, char *name, PyObject *o)
{
        if (!PyModule_Check(m) || o == NULL)
                return -1;
        if (PyDict_SetItemString(((PyModuleObject *)m)->md_dict, name, o))
                return -1;
        Py_DECREF(o);
        return 0;
}

#define PyModule_AddConstant(m, x) \
        PyModule_AddObject(m, #x, PyInt_FromLong(x))

#define PyModule_AddString(m, x) \
        PyModule_AddObject(m, x, PyString_FromString(x))

------------------------------------------------------------------------
void 
initmymodule(void)
{
        int CONSTANT = 123456;
        char *STR__doc__  = "Vlad";

        PyObject *m = Py_InitModule4("mymodule"...);


 
        if (PyModule_AddString(m, STR__doc__) ||
            PyModule_AddConstant(m, CONSTANT) ||
            ... 
        {
            Py_FatalError("can't init mymodule");
        }
}           


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From cgw@fnal.gov  Sat Aug 19 04:55:21 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Fri, 18 Aug 2000 22:55:21 -0500 (CDT)
Subject: [Python-Dev] RE: compile.c: problem with duplicate argument bugfix
Message-ID: <14750.1321.978274.117748@buffalo.fnal.gov>

I'm catching up on the python-dev archives and see your message.

Note that I submitted a patch back in May to fix this same problem:

 http://www.python.org/pipermail/patches/2000-May/000638.html

There you will find a working patch, and a detailed discussion which
explains why your approach results in a core-dump.

I submitted this patch back before Python moved over to SourceForge,
there was a small amount of discussion about it and then the word from
Guido was "I'm too busy to look at this now", and the patch got
dropped on the floor.



From tim_one@email.msn.com  Sat Aug 19 05:11:41 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 00:11:41 -0400
Subject: [Python-Dev] RE: [Patches] [Patch #101055] Cookie.py
In-Reply-To: <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELAHAAA.tim_one@email.msn.com>

Moving this over from patches to python-dev.

My 2 cents:  The primary job of a source control system is to maintain an
accurate and easily retrieved historical record of a project.  Tim
O'Malley's ,v file records the history of his project, Python's should
record the history of its.  While a handful of people at CNRI have been able
to (or could, if they were of a common mind to) keep track of a handful of
exceptions in their heads, Python's CVS tree is available to the world now,
and approximately nobody looking at it will have any knowledge of this
discussion.  If they ask CVS for a date-based snapshot of the past, they're
using CVS for what it's *designed* for, and they should get back what they
asked for.

Have these kinds of tricks already been played in the CVS tree?  I'm mildly
concerned about that too, because whenever license or copyright issues are
in question, an accurate historical record is crucial ("Now, Mr. Kuchling,
isn't it true that you deliberately sabotaged the history of the Python
project in order to obscure your co-conspirators' theft of my client's
intellectual property?" <0.9 wink>).

let's-honor-the-past-by-keeping-it-honest-ly y'rs  - tim

> -----Original Message-----
> From: patches-admin@python.org [mailto:patches-admin@python.org]On
> Behalf Of Fred L. Drake, Jr.
> Sent: Friday, August 18, 2000 4:06 PM
> To: noreply@sourceforge.net
> Cc: akuchlin@mems-exchange.org; patches@python.org
> Subject: Re: [Patches] [Patch #101055] Cookie.py
>
>
>
> noreply@sourceforge.net writes:
>  > I have a copy of Tim O'Malley's ,v file (in order to preserve the
>  > version history).  I can either ask the SF admins to drop it into
>  > the right place in the CVS repository, but will that screw up the
>  > Python 1.6 tagging in some way?  (I would expect not, but who
>  > knows?)
>
>   That would have no effect on any of the Python tagging.  It's
> probably worthwhile making sure there are no tags in the ,v file, but
> that can be done after it gets dropped in place.
>   Now, Greg Stein will tell us that dropping this into place is the
> wrong thing to do.  What it *will* screw up is people asking for the
> state of Python at a specific date before the file was actually added;
> they'll get this file even for when it wasn't in the Python CVS tree.
> I can live with that, but we should make a policy decision for the
> Python tree regarding this sort of thing.
>
>  > The second option would be for me to retrace Cookie.py's
>  > development -- add revision 1.1, check in revision 1.2 with the
>  > right log message, check in revision 1.3, &c.  Obviously I'd prefer
>  > to not do this.
>
>   Don't -- it's not worth it.
>
>
>   -Fred
>
> --
> Fred L. Drake, Jr.  <fdrake at beopen.com>
> BeOpen PythonLabs Team Member
>
>
> _______________________________________________
> Patches mailing list
> Patches@python.org
> http://www.python.org/mailman/listinfo/patches




From cgw@fnal.gov  Sat Aug 19 05:31:06 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Fri, 18 Aug 2000 23:31:06 -0500 (CDT)
Subject: [Python-Dev] Eureka! (Re: test_fork fails --with-thread)
Message-ID: <14750.3466.34096.504552@buffalo.fnal.gov>


Last month there was a flurry of discussion, around 

http://www.python.org/pipermail/python-dev/2000-July/014208.html

about problems arising when combining threading and forking.  I've
been reading through the python-dev archives and as far as I can tell
this problem has not yet been resolved.

Well, I think I understand what's going on and I have a patch that
fixes the problem.

Contrary to some folklore, you *can* use fork() in threaded code; you
just have to be a bit careful about locks...

Rather than write up a long-winded explanation myself, allow me to
quote:

-----------------------------------------------------------------
from "man pthread_atfork":

       ... recall that fork(2) duplicates the whole memory space,
       including mutexes in their current locking state, but only the
       calling thread: other threads are not running in the child
       process. Thus, if a mutex is locked by a thread other than
       the thread calling fork, that  mutex  will  remain  locked
       forever in the child process, possibly blocking the execu-
       tion of the child process. 

and from http://www.lambdacs.com/newsgroup/FAQ.html#Q120

  Q120: Calling fork() from a thread 

  > Can I fork from within a thread ?

  Absolutely.

  > If that is not explicitly forbidden, then what happens to the
  > other threads in the child process ?

  There ARE no other threads in the child process. Just the one that
  forked. If your application/library has background threads that need
  to exist in a forked child, then you should set up an "atfork" child
  handler (by calling pthread_atfork) to recreate them. And if you use
  mutexes, and want your application/library to be "fork safe" at all,
  you also need to supply an atfork handler set to pre-lock all your
  mutexes in the parent, then release them in the parent and child
  handlers.  Otherwise, ANOTHER thread might have a mutex locked when
  one thread forks -- and because the owning thread doesn't exist in
  the child, the mutex could never be released. (And, worse, whatever
  data is protected by the mutex is in an unknown and inconsistent
  state.)

-------------------------------------------------------------------

Below is a patch (I will also post this to SourceForge)

Notes on the patch:

1) I didn't make use of pthread_atfork, because I don't know how
   portable it is.  So, if somebody uses "fork" in a C extension there
   will still be trouble.

2) I'm deliberately not cleaning up the old lock before creating 
   the new one, because the lock destructors also do error-checking.
   It might be better to add a PyThread_reset_lock function to all the
   thread_*.h files, but I'm hesitant to do this because of the amount
   of testing required.


Patch:

Index: Modules/signalmodule.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Modules/signalmodule.c,v
retrieving revision 2.53
diff -c -r2.53 signalmodule.c
*** Modules/signalmodule.c	2000/08/03 02:34:44	2.53
--- Modules/signalmodule.c	2000/08/19 03:37:52
***************
*** 667,672 ****
--- 667,673 ----
  PyOS_AfterFork(void)
  {
  #ifdef WITH_THREAD
+ 	PyEval_ReInitThreads();
  	main_thread = PyThread_get_thread_ident();
  	main_pid = getpid();
  #endif
Index: Parser/intrcheck.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Parser/intrcheck.c,v
retrieving revision 2.39
diff -c -r2.39 intrcheck.c
*** Parser/intrcheck.c	2000/07/31 15:28:04	2.39
--- Parser/intrcheck.c	2000/08/19 03:37:54
***************
*** 206,209 ****
--- 206,212 ----
  void
  PyOS_AfterFork(void)
  {
+ #ifdef WITH_THREAD
+ 	PyEval_ReInitThreads();
+ #endif
  }
Index: Python/ceval.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/ceval.c,v
retrieving revision 2.191
diff -c -r2.191 ceval.c
*** Python/ceval.c	2000/08/18 19:53:25	2.191
--- Python/ceval.c	2000/08/19 03:38:06
***************
*** 142,147 ****
--- 142,165 ----
  		Py_FatalError("PyEval_ReleaseThread: wrong thread state");
  	PyThread_release_lock(interpreter_lock);
  }
+ 
+ /* This function is called from PyOS_AfterFork to ensure that newly
+    created child processes don't hold locks referring to threads which
+    are not running in the child process.  (This could also be done using
+    pthread_atfork mechanism, at least for the pthreads implementation) */
+ void
+ PyEval_ReInitThreads(void)
+ {
+ 	if (!interpreter_lock)
+ 		return;
+ 	/*XXX Can't use PyThread_free_lock here because it does too
+ 	  much error-checking.  Doing this cleanly would require
+ 	  adding a new function to each thread_*.h.  Instead, just
+ 	  create a new lock and waste a little bit of memory */
+ 	interpreter_lock = PyThread_allocate_lock();
+ 	PyThread_acquire_lock(interpreter_lock, 1);
+ 	main_thread = PyThread_get_thread_ident();
+ }
  #endif
  
  /* Functions save_thread and restore_thread are always defined so




From esr@thyrsus.com  Sat Aug 19 06:17:17 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sat, 19 Aug 2000 01:17:17 -0400
Subject: [Python-Dev] Request for help w/ bsddb module
In-Reply-To: <20000817224632.A525@207-172-146-154.s154.tnt3.ann.va.dialup.rcn.com>; from amk@s154.tnt3.ann.va.dialup.rcn.com on Thu, Aug 17, 2000 at 10:46:32PM -0400
References: <20000817224632.A525@207-172-146-154.s154.tnt3.ann.va.dialup.rcn.com>
Message-ID: <20000819011717.L835@thyrsus.com>

A.M. Kuchling <amk@s154.tnt3.ann.va.dialup.rcn.com>:
> (Can this get done in time for Python 2.0?  Probably.  Can it get
> tested in time for 2.0?  Ummm....)

I have zero experience with writing C extensions, so I'm probably not
best deployed on this.  But I'm willing to help with testing.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"As to the species of exercise, I advise the gun. While this gives
[only] moderate exercise to the body, it gives boldness, enterprise,
and independence to the mind.  Games played with the ball and others
of that nature, are too violent for the body and stamp no character on
the mind. Let your gun, therefore, be the constant companion to your
walks."
        -- Thomas Jefferson, writing to his teenaged nephew.


From tim_one@email.msn.com  Sat Aug 19 06:11:28 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 01:11:28 -0400
Subject: [Python-Dev] Who can make test_fork1 fail?
Message-ID: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>

Note that a patch has been posted to SourceForge that purports to solve
*some* thread vs fork problems:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101226&group_id=5470

Since nobody has made real progress on figuring out why test_fork1 fails on
some systems, would somebody who is able to make it fail please just try
this patch & see what happens?

understanding-is-better-than-a-fix-but-i'll-settle-for-the-latter-ly
    y'rs  - tim




From cgw@fnal.gov  Sat Aug 19 06:26:33 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Sat, 19 Aug 2000 00:26:33 -0500 (CDT)
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>
Message-ID: <14750.6793.342815.211141@buffalo.fnal.gov>

Tim Peters writes:

 > Since nobody has made real progress on figuring out why test_fork1 
 > fails on some systems,  would somebody who is able to make it fail
 > please just try this patch & see what happens?

Or try this program (based on Neil's example), which will fail almost
immediately unless you apply my patch:


import thread
import os, sys
import time

def doit(name):
    while 1:
        if os.fork()==0:
            print name, 'forked', os.getpid()
            os._exit(0)
        r = os.wait()

for x in range(50):
    name = 't%s'%x
    print 'starting', name
    thread.start_new_thread(doit, (name,))

time.sleep(300)



From tim_one@email.msn.com  Sat Aug 19 06:59:12 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 01:59:12 -0400
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <14750.6793.342815.211141@buffalo.fnal.gov>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELGHAAA.tim_one@email.msn.com>

[Tim]
> Since nobody has made real progress on figuring out why test_fork1
> fails on some systems,  would somebody who is able to make it fail
> please just try this patch & see what happens?

[Charles G Waldman]
> Or try this program (based on Neil's example), which will fail almost
> immediately unless you apply my patch:

Not "or", please, "both".  Without understanding the problem in detail, we
have no idea how many bugs are lurking here.  For example, Python allocates
at least two locks besides "the global lock", and "doing something" about
the latter alone may not help with all the failing test cases.  Note too
that the pthread_atfork docs were discussed earlier, and neither Guido nor I
were able to dream up a scenario that accounted for the details of most
failures people *saw*:  we both stumbled into another (and the same) failing
scenario, but it didn't match the stacktraces people posted (which showed
deadlocks/hangs in the *parent* thread; but at a fork, only the state of the
locks in the child "can" get screwed up).  The patch you posted should plug
the "deadlock in the child" scenario we did understand, but that scenario
didn't appear to be relevant in most cases.

The more info the better, let's just be careful to test *everything* that
failed before writing this off.




From ping@lfw.org  Sat Aug 19 07:43:18 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Fri, 18 Aug 2000 23:43:18 -0700 (PDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818182246.V376@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10008182338190.416-100000@skuld.lfw.org>

My $0.02.

+1 on:    import <modname> as <localmodname>
          import <pkgname> as <localpkgname>

+1 on:    from <modname> import <symname> as <localsymname>
          from <pkgname> import <modname> as <localmodname>

+1 on:    from <pkgname>.<modname> import <symname> as <localsymname>
          from <pkgname>.<pkgname> import <modname> as <localmodname>


-1 on *either* meaning of:

          import <pkgname>.<modname> as <localname>

...as it's not clear what the correct meaning is.

If the intent of this last form is to import a sub-module of a
package into the local namespace with an aliased name, then you
can just say

          from <pkgname> import <modname> as <localname>

and the meaning is then quite clear.
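For concreteness, here is how the competing spellings read with the
eventual `import ... as` syntax, using the standard library's `os.path` as
the example (the alias names are arbitrary; note that released Python
ultimately gave the contested `import <pkg>.<mod> as <name>` form the
submodule-binding meaning):

```python
import os.path as p          # the contested form: binds the submodule
from os import path as p2    # the form endorsed above: unambiguous
from os.path import join as j

# Both spellings end up naming the same module object.
print(p is p2)   # -> True
```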



-- ?!ng



From ping@lfw.org  Sat Aug 19 07:38:10 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Fri, 18 Aug 2000 23:38:10 -0700 (PDT)
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items()
In-Reply-To: <14749.18016.323403.295212@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008182336010.416-100000@skuld.lfw.org>

On Fri, 18 Aug 2000, Fred L. Drake, Jr. wrote:
>   I hadn't considered *not* using an "in" clause, but that is actually
> pretty neat.  I'd like to see all of these allowed; disallowing "for i
> indexing e in ...:" reduces the intended functionality substantially.

I like them all as well (and had previously assumed that the "indexing"
proposal included the "for i indexing sequence" case!).

While we're sounding off on the issue, i'm quite happy (+1) on both of:

          for el in seq:
          for i indexing seq:
          for i indexing el in seq:

and

          for el in seq:
          for i in indices(seq):
          for i, el in irange(seq):

with a slight preference for the former.
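The second set of spellings could be prototyped today as plain functions.
`indices` and `irange` here are hypothetical helpers carrying the
proposal's names; what `irange` computes is essentially what later shipped
as the builtin `enumerate`, minus the laziness.

```python
def indices(seq):
    # The valid indexes of seq, in order.
    return list(range(len(seq)))

def irange(seq):
    # (index, element) pairs for seq.
    return [(i, seq[i]) for i in range(len(seq))]

for i, el in irange(['a', 'b', 'c']):
    print(i, el)
```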


-- ?!ng



From loewis@informatik.hu-berlin.de  Sat Aug 19 08:25:20 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 19 Aug 2000 09:25:20 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399DA8D3.70E85C58@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com>
Message-ID: <200008190725.JAA26022@pandora.informatik.hu-berlin.de>

> What I'm missing in your doc-string is a reference as to how
> well gettext works together with Unicode. After all, i18n is
> among other things about international character sets.
> Have you done any experiments in this area ?

I have, to some degree. As others pointed out, gettext maps byte
arrays to byte arrays. However, in the GNU internationalization
project, it is convention to put an entry like

msgid ""
msgstr ""
"Project-Id-Version: GNU grep 2.4\n"
"POT-Creation-Date: 1999-11-13 11:33-0500\n"
"PO-Revision-Date: 1999-12-07 10:10+01:00\n"
"Last-Translator: Martin von L=F6wis <martin@mira.isdn.cs.tu-berlin.de>\n"
"Language-Team: German <de@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=3DISO-8859-1\n"
"Content-Transfer-Encoding: 8-bit\n"

into the catalog, which can be accessed as the translation of the empty
string. It typically has a charset= element, which allows one to analyse
what character set is used in the catalog. Of course, this is a
convention only, so it may not be present. If it is absent, and
conversion to Unicode is requested, it is probably a good idea to
assume UTF-8 (as James indicated, that will be the GNOME coded
character set for catalogs, for example).
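The convention Martin describes can be exploited to discover a catalog's
charset: fetch the translation of the empty msgid and parse its
Content-Type line.  A sketch; `catalog_charset` is an invented name, and
for simplicity it takes the metadata text directly rather than going
through gettext.

```python
def catalog_charset(metadata):
    # Pull the charset= parameter out of the header entry (the
    # translation of msgid "").  Returns None when the convention
    # isn't followed, in which case UTF-8 is a reasonable guess.
    for line in metadata.splitlines():
        if line.lower().startswith('content-type:'):
            _, _, value = line.partition(':')
            for part in value.split(';'):
                part = part.strip()
                if part.lower().startswith('charset='):
                    return part[len('charset='):]
    return None
```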

In any case, I think it is a good idea to support retrieval of
translated strings as Unicode objects. I can think of two alternative
interfaces:

gettext.gettext(msgid, unicode=1)
#or
gettext.unigettext(msgid)

Of course, if applications install _, they'd know whether they want
unicode or byte strings, so _ would still take a single argument.

However, I don't think that this feature must be there at the first
checkin; I'd volunteer to work on a patch after Barry has installed
his code, and after I got some indication what the interface should
be.

Regards,
Martin


From tim_one@email.msn.com  Sat Aug 19 10:19:23 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 05:19:23 -0400
Subject: [Python-Dev] RE: Call for reviewer!
In-Reply-To: <B5BF7652.7B39%dgoodger@bigfoot.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOELJHAAA.tim_one@email.msn.com>

[David Goodger]
> I thought the "backwards compatibility" issue might be a sticking point ;>
> And I *can* see why.
>
> So, if I were to rework the patch to remove the incompatibility, would it
> fly or still be shot down?

I'm afraid "shot down" is the answer, but it's no reflection on the quality
of your work.  Guido simply doesn't want any enhancements of any kind to
getopt to be distributed in the standard library.  He made that very clear
in a conference call with the PythonLabs team, and as the interim 2.0
release manager du jour I pass that on in his absence.

This wasn't a surprise to me, as there's a very long history of rejected
getopt patches.  There are certainly users who *want* fancier getopt
capabilities!  The problem in making them happy is threefold:  (1) most
users don't (as the lack of positive response in this thread on Python-Dev
confirms); (2) users who do want them seem unable to agree on how they
should work (witness the bifurcation in your own patch set); and, (3) Guido
actively wants to keep the core getopt simple in the absence of both demand
for, and consensus on, more than it offers already.

This needn't mean your work is dead.  It will find users if you make it
available on the web, and even in the core: Andrew Kuchling pointed out
that the Distutils folks are keen to have a fancier getopt for their own
purposes:

[Andrew]
> Note that there's Lib/distutils/fancy_getopt.py.  The docstring reads:
>
> Wrapper around the standard getopt module that provides the following
> additional features:
>  * short and long options are tied together
>  * options have help strings, so fancy_getopt could potentially
>    create a complete usage summary
>  * options set attributes of a passed-in object

So you might want to talk to Greg Ward about that too (Greg is the
Distutils Dood).

[back to David]
> ...
> BUT WAIT, THERE'S MORE! As part of the deal, you get a free
> test_getopt.py regression test module! Act now; vote +1! (Actually,
> you'll get that no matter what you vote. I'll remove the getoptdict-
> specific stuff and resubmit it if this patch is rejected.)

We don't have to ask Guido about *that*:  a test module for getopt would be
accepted with extreme (yet intangible <wink>) gratitude.  Thank you!




From mal@lemburg.com  Sat Aug 19 10:28:32 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 19 Aug 2000 11:28:32 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de>
Message-ID: <399E5340.B00811EF@lemburg.com>

Martin von Loewis wrote:
> 
> In any case, I think it is a good idea to support retrieval of
> translated strings as Unicode objects. I can think of two alternative
> interfaces:
> 
> gettext.gettext(msgid, unicode=1)
> #or
> gettext.unigettext(msgid)
> 
> Of course, if applications install _, they'd know whether they want
> unicode or byte strings, so _ would still take a single argument.

Hmm, if your catalogs are encoded in UTF-8 and use non-ASCII
chars then the traditional API would have to raise encoding
errors -- probably not a good idea since the errors would be
hard to deal with in large applications.

Perhaps the return value type of .gettext() should be given on
the .install() call: e.g. encoding='utf-8' would have .gettext()
return a string using UTF-8 while encoding='unicode' would have
it return Unicode objects.
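That idea can be sketched in present-day Python (the class, method, and parameter names here are invented for illustration, not an actual API):

```python
class Catalog:
    """Sketch: choose .gettext()'s return type at .install() time."""

    def __init__(self, entries):
        self._entries = entries        # msgid -> msgstr stored as UTF-8 bytes
        self._encoding = "unicode"

    def install(self, encoding="unicode"):
        # encoding='unicode' -> return Unicode objects;
        # any other value    -> return byte strings in that charset
        self._encoding = encoding

    def gettext(self, msgid):
        raw = self._entries.get(msgid)
        if raw is None:
            raw = msgid.encode("utf-8")    # untranslated: echo the msgid
        text = raw.decode("utf-8")
        if self._encoding == "unicode":
            return text
        return text.encode(self._encoding)
```

With this scheme the traditional byte-string API never has to raise encoding errors behind the application's back; the application states the desired result type once, up front.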
 
[Which makes me think: perhaps I should add a new codec which
does pretty much the same as the unicode() call: convert the
input data to Unicode ?!]

> However, I don't think that this feature must be there at the first
> checkin; I'd volunteer to work on a patch after Barry has installed
> his code, and after I got some indication what the interface should
> be.

Right.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Sat Aug 19 10:37:28 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 19 Aug 2000 11:37:28 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
 <399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net>
Message-ID: <399E5558.C7B6029B@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal@lemburg.com> writes:
> 
>     M> I know that gettext is a standard, but from a technology POV I
>     M> would have implemented this as a codec which can then be plugged
>     M> wherever l10n is needed, since strings have the new .encode()
>     M> method which could just as well be used to convert not only the
>     M> string into a different encoding, but also a different
>     M> language.  Anyway, just a thought...
> 
> That might be cool to play with, but I haven't done anything with
> Python's Unicode stuff (and painfully little with gettext too) so
> right now I don't see how they'd fit together.  My gut reaction is
> that gettext could be the lower level interface to
> string.encode(language).

Oh, codecs are not just about Unicode. Normal string objects
also have an .encode() method which can be used for these
purposes as well.
 
>     M> What I'm missing in your doc-string is a reference as to how
>     M> well gettext works together with Unicode. After all, i18n is
>     M> among other things about international character sets.
>     M> Have you done any experiments in this area ?
> 
> No, but I've thought about it, and I don't think the answer is good.
> The GNU gettext functions take and return char*'s, which probably
> isn't very compatible with Unicode.  _gettext therefore takes and
> returns PyStringObjects.

Martin mentioned the possibility of using UTF-8 for the
catalogs and then decoding them into Unicode. That should be
a reasonable way of getting .gettext() to talk Unicode :-)
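In present-day terms the round trip is a one-liner; the msgstr bytes below are hypothetical catalog content:

```python
# The msgstr bytes as stored in a UTF-8 encoded catalog entry:
raw = b"Gr\xc3\xb6\xc3\x9fe"    # UTF-8 for "Groesse" with o-umlaut and eszett
text = raw.decode("utf-8")      # -> a Unicode string
```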
 
> We could do better with the pure-Python implementation, and that might
> be a good reason to forgo any performance gains or platform-dependent
> benefits you'd get with _gettext.  Of course the trick is using the
> Unicode-unaware tools to build .mo files containing Unicode strings.
> I don't track GNU gettext development closely enough to know whether
> they are addressing Unicode issues or not.

Just dreaming a little here: I would prefer that we use some
form of XML to write the catalogs. XML comes with Unicode support
and tools for writing XML are available too. We'd only need
a way to transform XML into catalog files of some Python-specific,
platform-independent format (it should be possible to create .mo files
from XML too).
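Purely as a thought experiment, such an XML catalog could be loaded with the standard XML tools; the schema below (catalog/message/msgid/msgstr element names) is invented for this sketch:

```python
import xml.etree.ElementTree as ET

# A tiny invented schema -- the element names are illustrative only.
CATALOG_XML = b"""<?xml version="1.0" encoding="UTF-8"?>
<catalog lang="de">
  <message><msgid>Error</msgid><msgstr>Fehler</msgstr></message>
  <message><msgid>Warning</msgid><msgstr>Warnung</msgstr></message>
</catalog>"""

def load_catalog(data):
    """Turn an XML catalog into a msgid -> msgstr mapping."""
    root = ET.fromstring(data)
    return {m.findtext("msgid"): m.findtext("msgstr")
            for m in root.findall("message")}
```

Since XML carries its encoding declaration, Unicode handling comes for free; a compiler from this format to binary .mo would be a separate, small tool.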

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Sat Aug 19 10:44:19 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 19 Aug 2000 11:44:19 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <Pine.LNX.4.21.0008190854480.25020-100000@james.daa.com.au>
Message-ID: <399E56F3.53799860@lemburg.com>

James Henstridge wrote:
> 
> On Fri, 18 Aug 2000, Barry A. Warsaw wrote:
> 
> > I started with Martin's libintlmodule, renamed it _gettext and cleaned
> > up the C code a bit.  This provides gettext(), dgettext(),
> > dcgettext(), textdomain(), and bindtextdomain() functions.  The
> > gettext.py module imports these, and if it succeeds, it's done.
> >
> > If that fails, then there's a bunch of code, mostly derived from
> > Peter's fintl.py module, that reads the binary .mo files and does the
> > look ups itself.  Note that Peter's module only supported the GNU
> > gettext binary format, and that's all mine does too.  It should be
> > easy to support other binary formats (Solaris?) by overriding one
> > method in one class, and contributions are welcome.
> 
> I think support for the Solaris big-endian format .mo files would probably be
> a good idea.  It is not very difficult and doesn't really add to the
> complexity.
> 
> >
> > James's stuff looked cool too, what I grokked of it :) but I think
> > those should be exported as higher level features.  I didn't include
> > the ability to write .mo files or the exported Catalog objects.  I
> > haven't used the I18N services enough to know whether these are
> > useful.
> 
> As I said above, most of that turned out not to be very useful.  Did you
> include any of the language selection code in the last version of my
> gettext module?  It gave behaviour very close to C gettext in this
> respect.  It expands the locale name given by the user using the
> locale.alias files found on the systems, then decomposes that into the
> simpler forms.  For instance, if LANG=en_GB, then my gettext module would
> search for catalogs by names:
>   ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
> 
> This also allows things like expanding LANG=catalan to:
>   ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
> (provided the appropriate locale.alias files are found)
> 
> If you missed that that version I sent you I can send it again.  It
> stripped out a lot of the experimental code giving a much simpler module.

Uhm, can't you make some use of the new APIs in locale.py
for this ?

locale.py has a whole new set of encoding aware support for
LANG variables. It supports Unix and Windows (thanks to /F).
 
--
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mwh21@cam.ac.uk  Sat Aug 19 10:52:00 2000
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 19 Aug 2000 10:52:00 +0100
Subject: [Python-Dev] [Patch #101055] Cookie.py
In-Reply-To: "Fred L. Drake, Jr."'s message of "Fri, 18 Aug 2000 21:16:33 -0400 (EDT)"
References: <200008181951.MAA30358@bush.i.sourceforge.net> <14749.38716.228254.649957@cj42289-a.reston1.va.home.com> <20000818164837.A8423@kronos.cnri.reston.va.us> <20000818230401.A376@xs4all.nl> <14749.43088.855537.355621@anthem.concentric.net> <14749.57329.966314.171906@cj42289-a.reston1.va.home.com>
Message-ID: <m3itsxpohr.fsf@atrus.jesus.cam.ac.uk>

"Fred L. Drake, Jr." <fdrake@beopen.com> writes:

> Barry A. Warsaw writes:
>  > I don't think that's true, because the file won't have the tag
>  > information in it.  That could be a problem in and of itself, but I
>  > dunno.
> 
>   The confusion isn't from the tags, but the dates; if the ,v was
> created 2 years ago, asking for the python tree as of a year ago
> (using -D <date>) will include the file, even though it wasn't part of
> our repository then.  Asking for a specific tag (using -r <tag>) will
> properly not include it unless there's a matching tag there.

Is it feasible to hack the dates in the ,v file so that it looks like
all the revisions happened between say

2000.08.19.10.50.00

and

2000.08.19.10.51.00

?  This probably has problems too, but they will be more subtle...

Cheers,
M.

-- 
  That's why the smartest companies use Common Lisp, but lie about it
  so all their competitors think Lisp is slow and C++ is fast.  (This
  rumor has, however, gotten a little out of hand. :)
                                        -- Erik Naggum, comp.lang.lisp



From Vladimir.Marangozov@inrialpes.fr  Sat Aug 19 11:23:12 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 12:23:12 +0200 (CEST)
Subject: [Python-Dev] RE: Introducing memprof (was PyErr_NoMemory)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEJJHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 18, 2000 03:11:11 PM
Message-ID: <200008191023.MAA11071@python.inrialpes.fr>

Tim Peters wrote:
> 
> My bandwidth is consumed by 2.0 issues, so I won't look at it.  On the
> chance that Guido gets hit by a bus, though, and I have time to kill at his
> funeral, it would be nice to have it available on SourceForge.  Uploading a
> postponed patch sounds fine!

Done. Both patches are updated and relative to current CVS:

Optional object malloc:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101104&group_id=5470

Optional memory profiler:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101229&group_id=5470

Let me insist again that these are totally optional and off by default
(lately, a recurrent wish of mine regarding proposed features). 

Since they're optional, off by default, and constitute a solid base for
further work on mem + GC, and despite the tiny imperfections I see in
the profiler, I think I'm gonna push a bit, given that I'm pretty
confident in the code and that it barely affects anything.

So while I'm out of town, my mailbox would be happy to register any
opinions that the python-dev crowd might have (I'm thinking of Barry
and Neil Schemenauer in particular). Also, when BDFL is back from
Palo Alto, give him a chance to emit a statement (although I know
he's not a memory fan <wink>).

I'll *try* to find some time for docs and test cases, but I'd like to
get some preliminary feedback first (especially if someone cares to try
this on a 64-bit machine). That's it for now.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From fdrake@beopen.com  Sat Aug 19 13:44:00 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Sat, 19 Aug 2000 08:44:00 -0400 (EDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <Pine.LNX.4.10.10008182338190.416-100000@skuld.lfw.org>
References: <20000818182246.V376@xs4all.nl>
 <Pine.LNX.4.10.10008182338190.416-100000@skuld.lfw.org>
Message-ID: <14750.33040.285051.600113@cj42289-a.reston1.va.home.com>

Ka-Ping Yee writes:
 > If the intent of this last form is to import a sub-module of a
 > package into the local namespace with an aliased name, then you
 > can just say
 > 
 >           from <pkgname> import <modname> as <localname>

  I could live with this restriction, and this expression is
unambiguous (a good thing for Python!).


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From Moshe Zadka <moshez@math.huji.ac.il>  Sat Aug 19 14:54:21 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Sat, 19 Aug 2000 16:54:21 +0300 (IDT)
Subject: [Python-Dev] Intent to document: Cookie.py
Message-ID: <Pine.GSO.4.10.10008191645250.7468-100000@sundial>

This is just a notice that I'm currently in the middle of documenting
Cookie. I should be finished sometime today. This is just to stop anyone
else from wasting his time -- if you got time to kill, you can write a
test suite <wink>

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From james@daa.com.au  Sat Aug 19 15:14:12 2000
From: james@daa.com.au (James Henstridge)
Date: Sat, 19 Aug 2000 22:14:12 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399E56F3.53799860@lemburg.com>
Message-ID: <Pine.LNX.4.21.0008192202520.25394-100000@james.daa.com.au>

On Sat, 19 Aug 2000, M.-A. Lemburg wrote:

> > As I said above, most of that turned out not to be very useful.  Did you
> > include any of the language selection code in the last version of my
> > gettext module?  It gave behaviour very close to C gettext in this
> > respect.  It expands the locale name given by the user using the
> > locale.alias files found on the systems, then decomposes that into the
> > simpler forms.  For instance, if LANG=en_GB, then my gettext module would
> > search for catalogs by names:
> >   ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
> > 
> > This also allows things like expanding LANG=catalan to:
> >   ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
> > (provided the appropriate locale.alias files are found)
> > 
> > If you missed that that version I sent you I can send it again.  It
> > stripped out a lot of the experimental code giving a much simpler module.
> 
> Uhm, can't you make some use of the new APIs in locale.py
> for this ?
> 
> locale.py has a whole new set of encoding aware support for
> LANG variables. It supports Unix and Windows (thanks to /F).

Well, my module can do a little more than that.  It also handles the case
of several locales listed in the LANG environment variable.  And locale.py
doesn't look like it handles decomposition of a locale like
ll_CC.encoding@modifier into the other matching forms in the correct
precedence order.
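The decomposition being described could be sketched like this (the function name and the ISO8859-1 default are my own; real locale.alias expansion is omitted):

```python
def expand_locale(name, default_encoding="ISO8859-1"):
    """Decompose "ll_CC.encoding@modifier" into fallback candidates,
    most specific first, ending with the "C" locale."""
    name = name.partition("@")[0]          # drop @modifier in this sketch
    base, _, encoding = name.partition(".")
    lang, _, territory = base.partition("_")
    encoding = encoding or default_encoding
    forms = [lang + "_" + territory, lang] if territory else [lang]
    candidates = []
    for form in forms:
        candidates.append(form + "." + encoding)
        candidates.append(form)
    candidates.append("C")
    return candidates
```

For LANG=en_GB this yields the same search order quoted earlier in the thread.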

Maybe something to do this sort of decomposition would fit better in
locale.py though.

This sort of thing is very useful for people who know more than one
language, and doesn't seem to be handled by plain setlocale().

>  
> --
> Marc-Andre Lemburg

James.

-- 
Email: james@daa.com.au
WWW:   http://www.daa.com.au/~james/




From fdrake@beopen.com  Sat Aug 19 15:14:27 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Sat, 19 Aug 2000 10:14:27 -0400 (EDT)
Subject: [Python-Dev] Intent to document: Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008191645250.7468-100000@sundial>
References: <Pine.GSO.4.10.10008191645250.7468-100000@sundial>
Message-ID: <14750.38467.274688.274349@cj42289-a.reston1.va.home.com>

Moshe Zadka writes:
 > This is just a notice that I'm currently in the middle of documenting
 > Cookie. I should be finished sometime today. This is just to stop anyone
 > else from wasting his time -- if you got time to kill, you can write a
 > test suite <wink>

  Great, thanks!  Just check it in as libcookie.tex when you're ready,
and I'll check the markup for details.  Someone familiar with the
module can proof it for content.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From m.favas@per.dem.csiro.au  Sat Aug 19 15:24:18 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Sat, 19 Aug 2000 22:24:18 +0800
Subject: [Python-Dev] [Fwd: Who can make test_fork1 fail?]
Message-ID: <399E9892.35A1AC79@per.dem.csiro.au>

Message-ID: <399E5A71.C54C8055@per.dem.csiro.au>
Date: Sat, 19 Aug 2000 17:59:13 +0800
From: Mark Favas <m.favas@per.dem.csiro.au>
Organization: CSIRO Exploration & Mining
X-Mailer: Mozilla 4.73 [en] (X11; U; OSF1 V4.0 alpha)
X-Accept-Language: en
MIME-Version: 1.0
To: cgw@fnal.gov
Subject: Who can make test_fork1 fail?
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

[Charles]
>Or try this program (based on Neil's example), which will fail almost
>immediately unless you apply my patch:
<snip>

Just a data point: - said program runs happily on <OSF1/Digital
Unix/Tru64 Unix>...

-- 
Mark




From tim_one@email.msn.com  Sat Aug 19 18:34:28 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 13:34:28 -0400
Subject: [Python-Dev] New anal crusade
Message-ID: <LNBBLJKPBEHFEDALKOLCGEMCHAAA.tim_one@email.msn.com>

Has anyone tried compiling Python under gcc with

    -Wmissing-prototypes -Wstrict-prototypes

?  Someone on Python-Help just complained about warnings under that mode,
but it's unclear to me which version of Python they were talking about.




From Vladimir.Marangozov@inrialpes.fr  Sat Aug 19 19:01:52 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 20:01:52 +0200 (CEST)
Subject: [Python-Dev] New anal crusade
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEMCHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 19, 2000 01:34:28 PM
Message-ID: <200008191801.UAA17999@python.inrialpes.fr>

Tim Peters wrote:
> 
> Has anyone tried compiling Python under gcc with
> 
>     -Wmissing-prototypes -Wstrict-prototypes
> 
> ?  Someone on Python-Help just complained about warnings under that mode,
> but it's unclear to me which version of Python they were talking about.

Just tried it. Indeed, there are a couple of warnings. Wanna list?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From tim_one@email.msn.com  Sat Aug 19 19:33:57 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 14:33:57 -0400
Subject: [Python-Dev] New anal crusade
In-Reply-To: <200008191801.UAA17999@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEMGHAAA.tim_one@email.msn.com>

[Tim, on gcc -Wmissing-prototypes -Wstrict-prototypes]

[Vladimir]
> Just tried it. Indeed, there are a couple of warnings. Wanna list?

Not me personally, no.  The very subtle <wink> implied request in that was
that someone who *can* run gcc this way actually commit to doing so as a
matter of course, and fix warnings as they pop up.  But, in the absence of
joy, the occasional one-shot list is certainly better than nothing.




From Vladimir.Marangozov@inrialpes.fr  Sat Aug 19 19:58:18 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 20:58:18 +0200 (CEST)
Subject: [Python-Dev] New anal crusade
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEMGHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 19, 2000 02:33:57 PM
Message-ID: <200008191858.UAA31550@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Tim, on gcc -Wmissing-prototypes -Wstrict-prototypes]
> 
> [Vladimir]
> > Just tried it. Indeed, there are a couple of warnings. Wanna list?
> 
> Not me personally, no.  The very subtle <wink> implied request in that was
> that someone who *can* run gcc this way actually commit to doing so as a
> matter of course, and fix warnings as they pop up.  But, in the absence of
> joy, the occasional one-shot list is certainly better than nothing.

Sorry, I'm running after my plane (and I need to run fast <wink>) so please
find another volunteer. They're mostly ansification thingies, as expected.

Here's the list from the default ./configure, make, on Linux, so that
even someone without gcc can fix them <wink>:

----------------------------------------------------------------------------

pgenmain.c:43: warning: no previous prototype for `Py_Exit'
pgenmain.c:169: warning: no previous prototype for `PyOS_Readline'

myreadline.c:66: warning: no previous prototype for `PyOS_StdioReadline'

intrcheck.c:138: warning: no previous prototype for `PyErr_SetInterrupt'
intrcheck.c:191: warning: no previous prototype for `PyOS_FiniInterrupts'

fileobject.c:253: warning: function declaration isn't a prototype
fileobject.c:302: warning: function declaration isn't a prototype

floatobject.c:242: warning: no previous prototype for `PyFloat_AsStringEx'
floatobject.c:285: warning: no previous prototype for `PyFloat_AsString'

unicodeobject.c:548: warning: no previous prototype for `_PyUnicode_AsDefaultEncodedString'
unicodeobject.c:5142: warning: no previous prototype for `_PyUnicode_Init'
unicodeobject.c:5159: warning: no previous prototype for `_PyUnicode_Fini'

codecs.c:423: warning: no previous prototype for `_PyCodecRegistry_Init'
codecs.c:434: warning: no previous prototype for `_PyCodecRegistry_Fini'

frozenmain.c:34: warning: no previous prototype for `Py_FrozenMain'

getmtime.c:30: warning: no previous prototype for `PyOS_GetLastModificationTime'

import.c:2269: warning: no previous prototype for `initimp'

marshal.c:771: warning: no previous prototype for `PyMarshal_Init'

pyfpe.c:21: warning: no previous prototype for `PyFPE_dummy'

pythonrun.c: In function `run_pyc_file':
pythonrun.c:880: warning: function declaration isn't a prototype

dynload_shlib.c:49: warning: no previous prototype for `_PyImport_GetDynLoadFunc'

In file included from thread.c:125:
thread_pthread.h:205: warning: no previous prototype for `PyThread__exit_thread'

getopt.c:48: warning: no previous prototype for `getopt'

./threadmodule.c:389: warning: no previous prototype for `initthread'
./gcmodule.c:698: warning: no previous prototype for `initgc'
./regexmodule.c:661: warning: no previous prototype for `initregex'
./pcremodule.c:633: warning: no previous prototype for `initpcre'
./posixmodule.c:3698: warning: no previous prototype for `posix_strerror'
./posixmodule.c:5456: warning: no previous prototype for `initposix'
./signalmodule.c:322: warning: no previous prototype for `initsignal'
./_sre.c:2301: warning: no previous prototype for `init_sre'
./arraymodule.c:792: warning: function declaration isn't a prototype
./arraymodule.c:1511: warning: no previous prototype for `initarray'
./cmathmodule.c:412: warning: no previous prototype for `initcmath'
./mathmodule.c:254: warning: no previous prototype for `initmath'
./stropmodule.c:1205: warning: no previous prototype for `initstrop'
./structmodule.c:1225: warning: no previous prototype for `initstruct'
./timemodule.c:571: warning: no previous prototype for `inittime'
./operator.c:251: warning: no previous prototype for `initoperator'
./_codecsmodule.c:628: warning: no previous prototype for `init_codecs'
./unicodedata.c:277: warning: no previous prototype for `initunicodedata'
./ucnhash.c:107: warning: no previous prototype for `getValue'
./ucnhash.c:179: warning: no previous prototype for `initucnhash'
./_localemodule.c:408: warning: no previous prototype for `init_locale'
./fcntlmodule.c:322: warning: no previous prototype for `initfcntl'
./pwdmodule.c:129: warning: no previous prototype for `initpwd'
./grpmodule.c:136: warning: no previous prototype for `initgrp'
./errnomodule.c:74: warning: no previous prototype for `initerrno'
./mmapmodule.c:940: warning: no previous prototype for `initmmap'
./selectmodule.c:339: warning: no previous prototype for `initselect'
./socketmodule.c:2366: warning: no previous prototype for `init_socket'
./md5module.c:275: warning: no previous prototype for `initmd5'
./shamodule.c:550: warning: no previous prototype for `initsha'
./rotormodule.c:621: warning: no previous prototype for `initrotor'
./newmodule.c:205: warning: no previous prototype for `initnew'
./binascii.c:1014: warning: no previous prototype for `initbinascii'
./parsermodule.c:2637: warning: no previous prototype for `initparser'
./cStringIO.c:643: warning: no previous prototype for `initcStringIO'
./cPickle.c:358: warning: no previous prototype for `cPickle_PyMapping_HasKey'
./cPickle.c:2287: warning: no previous prototype for `Pickler_setattr'
./cPickle.c:4518: warning: no previous prototype for `initcPickle'

main.c:33: warning: function declaration isn't a prototype
main.c:79: warning: no previous prototype for `Py_Main'
main.c:292: warning: no previous prototype for `Py_GetArgcArgv'

getbuildinfo.c:34: warning: no previous prototype for `Py_GetBuildInfo'
./Modules/getbuildinfo.c:34: warning: no previous prototype for `Py_GetBuildInfo'


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From guido@python.org  Fri Aug 18 20:13:14 2000
From: guido@python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 15:13:14 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>
Message-ID: <011301c00a1f$927e7980$7aa41718@beopen.com>

I'm reading this thread off-line. I'm feeling responsible because I gave Skip
the green light. I admit that that was a mistake: I didn't recall the
purpose of commonprefix() correctly, and didn't refresh my memory by
reading the docs. I think Tim is right: as the docs say, the function was
*intended* to work on a character basis. This doesn't mean that it doesn't
belong in os.path! Note that os.path.dirname() will reliably return the
common directory, exactly because the trailing slash is kept.
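To make the point concrete: commonprefix() compares character by character, so its result need not be a real path component, and dirname() of that result recovers the common directory. A quick illustration:

```python
import os.path

paths = ["/usr/local/lib", "/usr/lib/python1.6"]
prefix = os.path.commonprefix(paths)   # character-wise: '/usr/l'
common_dir = os.path.dirname(prefix)   # '/usr', an actual directory
```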

I propose:

- restore the old behavior on all platforms
- add to the docs that to get the common directory you use dirname()
- add testcases that check that this works on all platforms

- don't add commonpathprefix(), because dirname() already does it

Note that I've only read email up to Thursday morning. If this has been
superseded by more recent resolution, I'll reconsider; but if it's still up
in the air this should be it.

It doesn't look like the change made it into 1.6.

--Guido




From guido@python.org  Fri Aug 18 20:20:06 2000
From: guido@python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 15:20:06 -0400
Subject: [Python-Dev] PEP 214, extended print statement
References: <14747.22851.266303.28877@anthem.concentric.net><Pine.GSO.4.10.10008170915050.24783-100000@sundial><20000817083023.J376@xs4all.nl> <14747.63511.725610.771162@anthem.concentric.net>
Message-ID: <011401c00a1f$92db8da0$7aa41718@beopen.com>

I'm still reading my email off-line on the plane. I've now read PEP 214 and
think I'll reverse my opinion: it's okay. Barry, check it in! (And change
the SF PM status to 'Accepted'.) I think I'll start using it for error
messages: errors should go to stderr, but it's often inconvenient, so in
minor scripts instead of doing

  sys.stderr.write("Error: can't open %s\n" % filename)

I often write

  print "Error: can't open", filename

which is incorrect but more convenient. I can now start using

  print >>sys.stderr, "Error: can't open", filename

--Guido




From guido@python.org  Fri Aug 18 20:23:37 2000
From: guido@python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 15:23:37 -0400
Subject: [Python-Dev] PyErr_NoMemory
References: <200008171509.RAA20891@python.inrialpes.fr>
Message-ID: <011501c00a1f$939bd060$7aa41718@beopen.com>

> The current PyErr_NoMemory() function reads:
>
> PyObject *
> PyErr_NoMemory(void)
> {
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
>         else
>                 /* this will probably fail since there's no memory and
hee,
>                    hee, we have to instantiate this class
>                 */
>                 PyErr_SetNone(PyExc_MemoryError);
>
>         return NULL;
> }
>
> thus overriding any previous exceptions unconditionally. This is a
> problem when the current exception already *is* PyExc_MemoryError,
> notably when we have a chain (cascade) of memory errors. It is a
> problem because the original memory error and eventually its error
> message is lost.
>
> I suggest to make this code look like:
>
> PyObject *
> PyErr_NoMemory(void)
> {
> if (PyErr_ExceptionMatches(PyExc_MemoryError))
> /* already current */
> return NULL;
>
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
> ...
>
> If nobody sees a problem with this, I'm very tempted to check it in.
> Any objections?

+1. The cascading memory error seems a likely scenario indeed: something
returns a memory error, the error handling does some more stuff, and hits
more memory errors.

--Guido





From guido@python.org  Fri Aug 18 21:57:15 2000
From: guido@python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 16:57:15 -0400
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net>
Message-ID: <011601c00a1f$9923d460$7aa41718@beopen.com>

Paul Prescod wrote:

> I don't think of iterators as indexing in terms of numbers. Otherwise I
> could do this:
>
> >>> a={0:"zero",1:"one",2:"two",3:"three"}
> >>> for i in a:
> ...     print i
> ...
>
> So from a Python user's point of view, for-looping has nothing to do
> with integers. From a Python class/module creator's point of view it
> does have to do with integers. I wouldn't be either surprised nor
> disappointed if that changed one day.

Bingo!

I've long had an idea for generalizing 'for' loops using iterators. This is
more a Python 3000 thing, but I'll explain it here anyway because I think
it's relevant. Perhaps this should become a PEP?  (Maybe we should have a
series of PEPs with numbers in the 3000 range for Py3k ideas?)

The statement

  for <variable> in <object>: <block>

should translate into this kind of pseudo-code:

  # variant 1
  __temp = <object>.newiterator()
  while 1:
      try: <variable> = __temp.next()
      except ExhaustedIterator: break
      <block>

or perhaps (to avoid the relatively expensive exception handling):

  # variant 2
  __temp = <object>.newiterator()
  while 1:
      __flag, <variable> = __temp.next()
      if not __flag: break
      <block>

In variant 1, the next() method returns the next object or raises
ExhaustedIterator. In variant 2, the next() method returns a tuple (<flag>,
<value>) where <flag> is 1 to indicate that <value> is valid or 0 to
indicate that there are no more items available. I'm not crazy about the
exception, but I'm even less crazy about the more complex next() return
value (careful observers may have noticed that I'm rarely crazy about flag
variables :-).

Another argument for variant 1 is that variant 2 changes what <variable> is
after the loop is exhausted, compared to current practice: currently, it
keeps the last valid value assigned to it. Most likely, the next() method
returns None when the sequence is exhausted. It doesn't make a lot of sense
to require it to return the last item of the sequence -- there may not *be*
a last item, if the sequence is empty, and not all sequences make it
convenient to keep hanging on to the last item in the iterator, so it's best
to specify that next() returns (0, None) when the sequence is exhausted.
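Spelled out as a runnable sketch (the class name and the (flag, value) convention here are hypothetical, following the pseudo-code above), variant 2 would look like:

```python
class FlagIterator:
    """Hypothetical variant-2 iterator: next() returns (flag, value)."""
    def __init__(self, seq):
        self.seq = seq
        self.ind = 0

    def next(self):
        if self.ind >= len(self.seq):
            return (0, None)      # exhausted: flag 0, value None
        val = self.seq[self.ind]
        self.ind += 1
        return (1, val)           # flag 1: val is a valid item

# Hand expansion of "for x in ...":
result = []
it = FlagIterator(["a", "b", "c"])
while 1:
    flag, x = it.next()
    if not flag:
        break
    result.append(x)
# Note the drawback mentioned above: after the loop, x has been
# clobbered with None instead of keeping its last valid value.
```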

(It would be tempting to suggest a variant 1a where instead of raising an
exception, next() returns None when the sequence is exhausted, but this
won't fly: you couldn't iterate over a list containing some items that are
None.)

Side note: I believe that the iterator approach could actually *speed up*
iteration over lists compared to today's code. This is because currently the
iteration index is a Python integer object that is stored on the stack.
This means an integer add with overflow check, allocation, and deallocation
on each iteration! But the iterator for lists (and other basic sequences)
could easily store the index as a C int! (As long as the sequence's length
is stored in an int, the index can't overflow.)

[Warning: thinking aloud ahead!]

Once we have the concept of iterators, we could support explicit use of them
as well. E.g. we could use a variant of the for statement to iterate over an
existing iterator:

  for <variable> over <iterator>: <block>

which would (assuming variant 1 above) translate to:

  while 1:
      try: <variable> = <iterator>.next()
      except ExhaustedIterator: break
      <block>

This could be used in situations where you have a loop iterating over the
first half of a sequence and a second loop that iterates over the remaining
items:

  it = something.newiterator()
  for x over it:
      if time_to_start_second_loop(): break
      do_something()
  for x over it:
      do_something_else()

Note that the second for loop doesn't reset the iterator -- it just picks up
where the first one left off! (Detail: the x that caused the break in the
first loop doesn't get dealt with in the second loop.)

I like the iterator concept because it allows us to do things lazily. There
are lots of other possibilities for iterators. E.g. mappings could have
several iterator variants to loop over keys, values, or both, in sorted or
hash-table order. Sequences could have an iterator for traversing them
backwards, and a few other ones for iterating over their index set (cf.
indices()) and over (index, value) tuples (cf. irange()). Files could be
their own iterator where the iterator is almost the same as readline()
except it raises ExhaustedIterator at EOF instead of returning "".  A tree
datastructure class could have an associated iterator class that maintains a
"finger" into the tree.

Hm, perhaps iterators could be their own iterator? Then if 'it' were an
iterator, it.newiterator() would return a reference to itself (not a copy).
Then we wouldn't even need the 'over' alternative syntax. Maybe the method
should be called iterator() then, not newiterator(), to avoid suggesting
anything about the newness of the returned iterator.
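In hindsight this is exactly the protocol Python later adopted: asking an iterator for an iterator hands back the same object, so iteration just continues where it left off. In present-day terms:

```python
it = iter([1, 2, 3])
# An iterator is its own iterator: same object, no copy, no reset.
assert iter(it) is it
first = next(it)       # consume one element...
rest = list(it)        # ...and "re-iterating" continues after it
```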

Other ideas:

- Iterators could have a backup() method that moves the index back (or
raises an exception if this feature is not supported, e.g. when reading data
from a pipe).

- Iterators over certain sequences might support operations on the
underlying sequence at the current position of the iterator, so that you
could iterate over a sequence and occasionally insert or delete an item (or
a slice).

Of course iterators also connect to generators. The basic list iterator
doesn't need coroutines or anything, it can be done as follows:

  class Iterator:
      def __init__(self, seq):
          self.seq = seq
          self.ind = 0
      def next(self):
          if self.ind >= len(self.seq): raise ExhaustedIterator
          val = self.seq[self.ind]
          self.ind += 1
          return val

so that <list>.iterator() could just return Iterator(<list>) -- at least
conceptually.
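Made self-contained (with a hypothetical ExhaustedIterator exception, as in variant 1), the class above drives the two-part loop from earlier in this message, with the 'for ... over' form expanded by hand:

```python
class ExhaustedIterator(Exception):
    """Hypothetical exception signalling an exhausted iterator."""

class Iterator:
    def __init__(self, seq):
        self.seq = seq
        self.ind = 0
    def next(self):
        if self.ind >= len(self.seq):
            raise ExhaustedIterator
        val = self.seq[self.ind]
        self.ind += 1
        return val

first, second = [], []
it = Iterator([0, 1, 2, 3, 4, 5])
while 1:                      # for x over it: ... break on 3
    try:
        x = it.next()
    except ExhaustedIterator:
        break
    if x == 3:
        break
    first.append(x)
while 1:                      # for x over it: picks up where we left off
    try:
        x = it.next()
    except ExhaustedIterator:
        break
    second.append(x)
```

Note that the 3 which triggered the break is consumed by the first loop but handled by neither.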

But for other data structures the amount of state needed might be
cumbersome. E.g. a tree iterator needs to maintain a stack, and it's much
easier to code that using a recursive Icon-style generator than by using an
explicit stack. On the other hand, I remember reading an article a while ago
(in Dr. Dobbs?) by someone who argued (in a C++ context) that such recursive
solutions are very inefficient, and that an explicit stack (1) is really not
that hard to code, and (2) gives much more control over the memory and time
consumption of the code. On the third hand, some forms of iteration really
*are* expressed much more clearly using recursion. On the fourth hand, I
disagree with Matthias ("Dr. Scheme") Felleisen about recursion as the root
of all iteration. Anyway, I believe that iterators (as explained above) can
be useful even if we don't have generators (in the Icon sense, which I
believe means coroutine-style).

--Guido




From akuchlin@mems-exchange.org  Sat Aug 19 22:15:53 2000
From: akuchlin@mems-exchange.org (A.M. Kuchling)
Date: Sat, 19 Aug 2000 17:15:53 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
Message-ID: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>

The handwritten BSDDB3 module has just started actually functioning.
It now runs the dbtest.py script without core dumps or reported
errors.  Code is at ftp://starship.python.net/pub/crew/amk/new/ ; grab
db.py and the most recent _bsddb.c.

I started from Greg Smith's 3.1.x port of Robin Dunn's module.  You'll
have to struggle a bit with integrating it into Greg's package and
compiling it (replacing db.py with my version, and modifying Setup to
compile _bsddb.c).  I haven't integrated it more, because I'm not sure
how we want to proceed with it.  Robin/Greg, do you want to continue
to maintain the package?  ...in which case I'll contribute the code to one
or both of you.  Or, I can take over maintaining the package, or we
can try to get the module into Python 2.0, but with the feature freeze
well advanced, I'm doubtful that it'll get in.

Still missing: Cursor objects still aren't implemented -- Martin, if
you haven't started yet, let me know and I'll charge ahead with them
tomorrow.  Docstrings.  More careful type-checking of function
objects.  Finally, general tidying, re-indenting, and a careful
reading to catch any stupid errors that I made.  

--amk



From esr@thyrsus.com  Sat Aug 19 22:37:27 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Sat, 19 Aug 2000 17:37:27 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>; from amk@s222.tnt1.ann.va.dialup.rcn.com on Sat, Aug 19, 2000 at 05:15:53PM -0400
References: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
Message-ID: <20000819173727.A4015@thyrsus.com>

A.M. Kuchling <amk@s222.tnt1.ann.va.dialup.rcn.com>:
> The handwritten BSDDB3 module has just started actually functioning.
> It now runs the dbtest.py script without core dumps or reported
> errors.  Code is at ftp://starship.python.net/pub/crew/amk/new/ ; grab
> db.py and the most recent _bsddb.c.

I see I wasn't on the explicit addressee list.  But if you can get any good
use out of another pair of hands, I'm willing.

> I started from Greg Smith's 3.1.x port of Robin Dunn's module.  You'll
> have to struggle a bit with integrating it into Greg's package and
> compiling it (replacing db.py with my version, and modifying Setup to
> compile _bsddb.c).  I haven't integrated it more, because I'm not sure
> how we want to proceed with it.  Robin/Greg, do you want to continue
> to maintain the package?  ...in which I'll contribute the code to one
> or both of you.  Or, I can take over maintaining the package, or we
> can try to get the module into Python 2.0, but with the feature freeze
> well advanced, I'm doubtful that it'll get in.

I'm +1 for slipping this one in under the wire, if it matters.

I'm not just randomly pushing a feature here -- I think the multiple-reader/
one-writer atomicity guarantees this will give us will be extremely important
for CGI programmers, who often need a light-duty database facility with exactly
this kind of concurrency guarantee.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The people of the various provinces are strictly forbidden to have in their
possession any swords, short swords, bows, spears, firearms, or other types
of arms. The possession of unnecessary implements makes difficult the
collection of taxes and dues and tends to foment uprisings.
        -- Toyotomi Hideyoshi, dictator of Japan, August 1588


From martin@loewis.home.cs.tu-berlin.de  Sat Aug 19 22:52:56 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 19 Aug 2000 23:52:56 +0200
Subject: [Python-Dev] Re: BSDDB 3 module now somewhat functional
In-Reply-To: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
 (amk@s222.tnt1.ann.va.dialup.rcn.com)
References: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
Message-ID: <200008192152.XAA00691@loewis.home.cs.tu-berlin.de>

> Still missing: Cursor objects still aren't implemented -- Martin, if
> you haven't started yet, let me know and I'll charge ahead with them
> tomorrow.

No, I haven't started yet, so go ahead.

Regards,
Martin


From trentm@ActiveState.com  Sun Aug 20 00:59:40 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sat, 19 Aug 2000 16:59:40 -0700
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 19, 2000 at 01:11:28AM -0400
References: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>
Message-ID: <20000819165940.A21864@ActiveState.com>

On Sat, Aug 19, 2000 at 01:11:28AM -0400, Tim Peters wrote:
> Note that a patch has been posted to SourceForge that purports to solve
> *some* thread vs fork problems:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101226&group_id=5470
> 
> Since nobody has made real progress on figuring out why test_fork1 fails on
> some systems, would somebody who is able to make it fail please just try
> this patch & see what happens?
> 

That patch *seems* to fix it for me. As before, I can get test_fork to fail
intermittently (i.e. it doesn't hang every time I run it) without the patch
and cannot get it to hang at all with the patch.

Would you like me to run and provide the instrumented output that I showed
last time this topic came up?


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From tim_one@email.msn.com  Sun Aug 20 01:45:32 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 20:45:32 -0400
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <20000819165940.A21864@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEMPHAAA.tim_one@email.msn.com>

[Trent Mick, on
//sourceforge.net/patch/?func=detailpatch&patch_id=101226&group_id=5470
]
> That patch *seems* to fix it for me. As before, I can get test_fork
> to fail intermittently (i.e. it doesn't hang every time I run it) without
> the patch and cannot get it to hang at all with the patch.

Thanks a bunch, Trent!  (That's a Minnesotaism -- maybe that's far enough
North that it sounds natural to you, though <wink>.)

> Would you like me to run and provide the instrumented output that
> I showed last time this topic came up?

Na, it's enough to know that the problem appears to have gone away, and
since this was-- in some sense --the simplest of the test cases (just one
fork), it provides the starkest contrast we're going to get between the
behaviors people are seeing and my utter failure to account for them.  OTOH,
we knew the global lock *should be* a problem here (just not the problem we
actually saw!), and Charles is doing the right kind of thing to make that go
away.

I still encourage everyone to run all the tests that failed on all the SMP
systems they can get hold of, before and after the patch.  I'll talk with
Guido about it too (the patch is still a bit too hacky to put out in the
world with pride <wink>).




From dgoodger@bigfoot.com  Sun Aug 20 05:53:05 2000
From: dgoodger@bigfoot.com (David Goodger)
Date: Sun, 20 Aug 2000 00:53:05 -0400
Subject: [Python-Dev] Re: Call for reviewer!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELJHAAA.tim_one@email.msn.com>
Message-ID: <B5C4DC70.7D6C%dgoodger@bigfoot.com>

on 2000-08-19 05:19, Tim Peters (tim_one@email.msn.com) wrote:
> I'm afraid "shot down" is the answer...

That's too bad. Thanks for the 'gentle' explanation. This 'crusader' knows
when to give up on a lost cause. ;>

>> test_getopt.py
> 
> We don't have to ask Guido abut *that*:  a test module for getopt would be
> accepted with extreme (yet intangible <wink>) gratitude.  Thank you!

Glad to contribute. You'll find a regression test module for the current
getopt.py as revised patch #101110. I based it on some existing Lib/test/
modules, but haven't found the canonical example or instruction set. Is
there one?

FLASH: Tim's been busy. Just received the official rejections & acceptance
of test_getopt.py.

-- 
David Goodger    dgoodger@bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)



From Moshe Zadka <moshez@math.huji.ac.il>  Sun Aug 20 06:19:28 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Sun, 20 Aug 2000 08:19:28 +0300 (IDT)
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <20000819173727.A4015@thyrsus.com>
Message-ID: <Pine.GSO.4.10.10008200817510.13651-100000@sundial>

On Sat, 19 Aug 2000, Eric S. Raymond wrote:

> I'm +1 for slipping this one in under the wire, if it matters.
> 
> I'm not just randomly pushing a feature here -- I think the multiple-reader/
> one-writer atomicity guarantees this will give us will be extremely important
> for CGI programmers, who often need a light-duty database facility with exactly
> this kind of concurrency guarantee.

I think this is a job for PAL (aka PEP206) -- PAL hasn't feature freezed
yet, which makes it the perfect place to get stuff for 2.0.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From greg@electricrain.com  Sun Aug 20 07:04:51 2000
From: greg@electricrain.com (Gregory P. Smith)
Date: Sat, 19 Aug 2000 23:04:51 -0700
Subject: [Python-Dev] Re: BSDDB 3 module now somewhat functional
In-Reply-To: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>; from amk@s222.tnt1.ann.va.dialup.rcn.com on Sat, Aug 19, 2000 at 05:15:53PM -0400
References: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
Message-ID: <20000819230451.A22669@yyz.electricrain.com>

On Sat, Aug 19, 2000 at 05:15:53PM -0400, A.M. Kuchling wrote:
> The handwritten BSDDB3 module has just started actually functioning.
> It now runs the dbtest.py script without core dumps or reported
> errors.  Code is at ftp://starship.python.net/pub/crew/amk/new/ ; grab
> db.py and the most recent _bsddb.c.
> 
> I started from Greg Smith's 3.1.x port of Robin Dunn's module.  You'll
> have to struggle a bit with integrating it into Greg's package and
> compiling it (replacing db.py with my version, and modifying Setup to
> compile _bsddb.c).  I haven't integrated it more, because I'm not sure
> how we want to proceed with it.  Robin/Greg, do you want to continue
> to maintain the package?  ...in which I'll contribute the code to one
> or both of you.  Or, I can take over maintaining the package, or we
> can try to get the module into Python 2.0, but with the feature freeze
> well advanced, I'm doubtful that it'll get in.

I just did a quick scan over your code and liked what I saw.  I was
thinking it'd be cool if someone did this (write a non-SWIG version based
on mine) but knew I wouldn't have time right now.  Thanks!  Note that I
haven't tested your module or looked closely to see if anything looks odd.

I'm happy to keep maintaining the bsddb3 module until it makes its way
into a future Python version.  I don't have a lot of time for it, but send
me updates/fixes as you make them (I'm not on python-dev at the moment).
If your C version is working well, I'll make a new release sometime next
week after I test it a bit more in our application on a few platforms
(linux, linux alpha and win98).

> Still missing: Cursor objects still aren't implemented -- Martin, if
> you haven't started yet, let me know and I'll charge ahead with them
> tomorrow.  Docstrings.  More careful type-checking of function
> objects.  Finally, general tidying, re-indenting, and a careful
> reading to catch any stupid errors that I made.  

It looked like you were keeping the same interface (good!), so I
recommend simply stealing the docstrings from mine if you haven't already
and reviewing them to make sure they make sense.  I pretty much pasted
trimmed down forms of the docs that come with BerkeleyDB in to make them
as well as using some of what Robin had from before.

Also, unless someone actually tests the Recno format databases, should
we even bother to include support for it?  I haven't tested them at all.
If nothing else, writing some Recno tests for dbtest.py would be a good
idea before including it.

Greg

-- 
Gregory P. Smith   gnupg/pgp: http://suitenine.com/greg/keys/
                   C379 1F92 3703 52C9 87C4  BE58 6CDA DB87 105D 9163


From tim_one@email.msn.com  Sun Aug 20 07:11:52 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 20 Aug 2000 02:11:52 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <Pine.GSO.4.10.10008200817510.13651-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCEENHHAAA.tim_one@email.msn.com>

[esr]
> I'm +1 for slipping this one in under the wire, if it matters.

[Moshe Zadka]
> I think this is a job for PAL (aka PEP206) -- PAL hasn't feature freezed
> yet, which makes it the perfect place to get stuff for 2.0.

I may be an asshole, but I'm not an idiot:  note that the planned release
date (PEP 200) for 2.0b1 is a week from Monday.  And since there is only one
beta cycle planned, *nothing* goes in except bugfixes after 2.0b1 is
released.  Guido won't like that, but he's not the release manager, and when
I'm replaced by the real release manager on Tuesday, he'll agree with me on
this and Guido will get noogied to death if he opposes us <wink>.

So whatever tricks you want to try to play, play 'em fast.

not-that-i-believe-the-beta-release-date-will-be-met-anyway-
    but-i-won't-admit-that-until-after-it-slips-ly y'rs  - tim




From Moshe Zadka <moshez@math.huji.ac.il>  Sun Aug 20 07:17:12 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Sun, 20 Aug 2000 09:17:12 +0300 (IDT)
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEENHHAAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008200915410.13651-100000@sundial>

On Sun, 20 Aug 2000, Tim Peters wrote:

> [esr]
> > I'm +1 for slipping this one in under the wire, if it matters.
> 
> [Moshe Zadka]
> > I think this is a job for PAL (aka PEP206) -- PAL hasn't feature freezed
> > yet, which makes it the perfect place to get stuff for 2.0.
> 
> I may be an asshole, but I'm not an idiot:  note that the planned release
> date (PEP 200) for 2.0b1 is a week from Monday.  And since there is only one
> beta cycle planned, *nothing* goes in except bugfixes after 2.0b1 is
> released. 

But that's irrelevant. The sumo interpreter will be a different release,
and will probably be based on 2.0 for core. So what if it's only available
a month after 2.0 is ready?

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From tim_one@email.msn.com  Sun Aug 20 07:24:31 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 20 Aug 2000 02:24:31 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <Pine.GSO.4.10.10008200915410.13651-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENIHAAA.tim_one@email.msn.com>

[Moshe]
> But that's irrelevant. The sumo interpreter will be a different release,
> and will probably be based on 2.0 for core. So what if it's only available
> only a month after 2.0 is ready?

Like I said, I may be an idiot, but I'm not an asshole -- have fun!




From sjoerd@oratrix.nl  Sun Aug 20 10:22:28 2000
From: sjoerd@oratrix.nl (Sjoerd Mullender)
Date: Sun, 20 Aug 2000 11:22:28 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: Your message of Fri, 18 Aug 2000 20:42:34 +0200.
 <000001c00945$a8d37e40$f2a6b5d4@hagrid>
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl>
 <000001c00945$a8d37e40$f2a6b5d4@hagrid>
Message-ID: <20000820092229.F3A2D31047C@bireme.oratrix.nl>

Why don't we handle graminit.c/graminit.h the same way as we currently
handle configure/config.h.in?  The person making a change to
configure.in is responsible for running autoconf and checking in the
result.  Similarly, the person making a change to Grammar should
regenerate graminit.c/graminit.h and check in the result.  In fact,
that is exactly what happened in this particular case.  I'd say there
isn't really a reason to create graminit.c/graminit.h automatically
whenever you do a build of Python.  Even worse, when you have a
read-only copy of the source and you build in a different directory
(and that used to be supported) the current setup will break since it
tries to overwrite Python/graminit.c and Include/graminit.h.

I'd say, go back to the old situation, possibly with a simple Makefile
rule added so that you *can* build graminit, but one that is not used
automatically.

On Fri, Aug 18 2000 "Fredrik Lundh" wrote:

> sjoerd wrote:
> 
> > The problem was that because of your (I think it was you :-) earlier
> > change to have a Makefile in Grammar, I had an old graminit.c lying
> > around in my build directory.  I don't build in the source directory
> > and the changes for a Makefile in Grammar resulted in a file
> > graminit.c in the wrong place.
> 
> is the Windows build system updated to generate new
> graminit files if the Grammar are updated?
> 
> or is python development a unix-only thingie these days?
> 
> </F>
> 
> 

-- Sjoerd Mullender <sjoerd.mullender@oratrix.com>


From thomas@xs4all.net  Sun Aug 20 10:41:14 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 11:41:14 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000820092229.F3A2D31047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Sun, Aug 20, 2000 at 11:22:28AM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl>
Message-ID: <20000820114114.A4797@xs4all.nl>

On Sun, Aug 20, 2000 at 11:22:28AM +0200, Sjoerd Mullender wrote:

> I'd say, go back to the old situation, possibly with a simple Makefile
> rule added so that you *can* build graminit, but one that is not used
> automatically.

That *is* the old situation. The rule of making graminit as a matter of
course was for convenience with patches that change grammar. Now that most
have been checked in, and we've seen what havoc and confusion making
graminit automatically can cause, I'm all for going back to that too ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From loewis@informatik.hu-berlin.de  Sun Aug 20 11:51:16 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Sun, 20 Aug 2000 12:51:16 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399E5340.B00811EF@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de> <399E5340.B00811EF@lemburg.com>
Message-ID: <200008201051.MAA05259@pandora.informatik.hu-berlin.de>

> Hmm, if your catalogs are encoded in UTF-8 and use non-ASCII
> chars then the traditional API would have to raise encoding
> errors

I don't know what you mean by "traditional" here. The gettext.gettext
implementation in Barry's patch will return the UTF-8 encoded byte
string, instead of raising encoding errors - no code conversion takes
place.

> Perhaps the return value type of .gettext() should be given on
> the .install() call: e.g. encoding='utf-8' would have .gettext()
> return a string using UTF-8 while encoding='unicode' would have
> it return Unicode objects.

No. You should have the option of either receiving byte strings, or
Unicode strings. If you want byte strings, you should get the ones
appearing in the catalog.

Regards,
Martin


From loewis@informatik.hu-berlin.de  Sun Aug 20 11:59:28 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Sun, 20 Aug 2000 12:59:28 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399E5558.C7B6029B@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net>
 <399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net> <399E5558.C7B6029B@lemburg.com>
Message-ID: <200008201059.MAA05292@pandora.informatik.hu-berlin.de>

> Martin mentioned the possibility of using UTF-8 for the
> catalogs and then decoding them into Unicode. That should be
> a reasonable way of getting .gettext() to talk Unicode :-)

You misunderstood. Using UTF-8 in the catalogs is independent from
using Unicode. You can have the catalogs in UTF-8, and still access
the catalog as byte strings, and you can have the catalog in Latin-1,
and convert that to unicode strings upon retrieval.
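The independence of the two choices is easy to demonstrate without any catalog machinery (the German string is just an example msgstr):

```python
msgstr = "Virtueller Speicher erschöpft."    # example catalog entry
as_utf8 = msgstr.encode("utf-8")             # what a UTF-8 catalog stores
as_latin1 = msgstr.encode("latin-1")         # what a Latin-1 catalog stores
# The stored byte forms differ, but either can be handed out raw,
# or decoded on retrieval to the same text:
decoded = (as_utf8.decode("utf-8"), as_latin1.decode("latin-1"))
```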

> Just dreaming a little here: I would prefer that we use some
> form of XML to write the catalogs.

Well, I hope that won't happen. We have excellent tools dealing with
the catalogs, and I see no value in replacing

#: src/grep.c:183 src/grep.c:200 src/grep.c:300 src/grep.c:408 src/kwset.c:184
#: src/kwset.c:190
msgid "memory exhausted"
msgstr "Virtueller Speicher erschöpft."

with

<entry>
  <sourcelist>
    <source file="src/grep.c" line="183"/>
    <source file="src/grep.c" line="200"/>
    <source file="src/grep.c" line="300"/>
    <source file="src/grep.c" line="408"/>
    <source file="src/kwset.c" line="184"/>
    <source file="src/kwset.c" line="190"/>
  </sourcelist>
  <msgid>memory exhausted</msgid>
  <msgstr>Virtueller Speicher erschöpft.</msgstr>
</entry>

> XML comes with Unicode support and tools for writing XML are
> available too.

Well, the catalog files also "come with unicode support", meaning that
you can write them in UTF-8 if you want; and tools could be easily
extended to process UCS-2 input if anybody desires.

OTOH, the tools for writing po files are much more advanced than any
XML editor I know.

> We'd only need a way to transform XML into catalog files of some
> Python specific platform independent format (should be possible to
> create .mo files from XML too).

Or we could convert the XML catalogs in Uniforum-style catalogs, and
then use the existing tools.

Regards,
Martin


From sjoerd@oratrix.nl  Sun Aug 20 12:26:05 2000
From: sjoerd@oratrix.nl (Sjoerd Mullender)
Date: Sun, 20 Aug 2000 13:26:05 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: Your message of Sun, 20 Aug 2000 11:41:14 +0200.
 <20000820114114.A4797@xs4all.nl>
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl>
 <20000820114114.A4797@xs4all.nl>
Message-ID: <20000820112605.BF61431047C@bireme.oratrix.nl>

Here's another pretty serious bug.  Can you verify that this time it
isn't my configurations?

Try this:

from encodings import cp1006, cp1026

I get the error
ImportError: cannot import name cp1026
but if I try to import the two modules separately I get no error.

-- Sjoerd Mullender <sjoerd.mullender@oratrix.com>


From thomas@xs4all.net  Sun Aug 20 14:51:17 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 15:51:17 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000820112605.BF61431047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Sun, Aug 20, 2000 at 01:26:05PM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl> <20000820114114.A4797@xs4all.nl> <20000820112605.BF61431047C@bireme.oratrix.nl>
Message-ID: <20000820155117.C4797@xs4all.nl>

On Sun, Aug 20, 2000 at 01:26:05PM +0200, Sjoerd Mullender wrote:

> Here's another pretty serious bug.  Can you verify that this time it
> isn't my configurations?

It isn't your config, this is a genuine bug. I'll be checking in a quick fix
in a few minutes, and start thinking about a test case that would've caught
this.
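A minimal regression test needs nothing beyond the failing statement itself plus a check that both names were really bound (module names as in Sjoerd's report, assuming both code pages ship with the encodings package):

```python
# Regression sketch: a from-import naming two submodules must bind both.
from encodings import cp1006, cp1026

bound = (cp1006.__name__, cp1026.__name__)
```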

> Try this:
> from encodings import cp1006, cp1026

> I get the error
> ImportError: cannot import name cp1026
> but if I try to import the two modules separately I get no error.

Yes. 'find_from_args' wasn't trying hard enough to find out what the
arguments to an import were. Previously, all it had to do was scan the
bytecodes immediately following an 'IMPORT_NAME' for IMPORT_FROM statements,
and record its names. However, now that IMPORT_FROM generates a STORE, it
stops looking after the first IMPORT_FROM. This worked fine for normal
object-retrieval imports, which don't use the list generated by
find_from_args, but not for dynamic loading tricks such as 'encodings' uses.

The fix I made was to unconditionally jump over 5 bytes, after an
IMPORT_FROM, rather than 2 (2 for the oparg, 1 for the next instruction (a
STORE) and two more for the oparg for the STORE).

This does present a problem for the proposed change in semantics for the
'as' clause, though. If we allow all expressions that yield valid l-values
in import-as and from-import-as, we can't easily find out what the import
arguments are by examining the future bytecode stream. (It might be
possible, if we changed the POP_TOP to, say, END_IMPORT, which pops the
module from the stack and can be used to end the search for import
arguments.)

However, I find this hackery a bit appalling :) Why are we constructing a
list of import arguments at runtime, from compile-time information, if all
that information is available at compile time too ? And more easily so ?
What would break if I made IMPORT_NAME retrieve the from-arguments from a
list, which is built on the stack by com_import_stmt ? Or is there a more
convenient way of passing a variable list of strings to a bytecode ? It
won't really affect performance, since find_from_args is called for all
imports anyway.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From effbot@telia.com  Sun Aug 20 15:34:31 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sun, 20 Aug 2000 16:34:31 +0200
Subject: [Python-Dev] [ Patch #101238 ] PyOS_CheckStack for Windows
References: <20000815104723.A27306@ActiveState.com> <005401c006ec$a95a74a0$f2a6b5d4@hagrid>
Message-ID: <019801c00ab3$c59e8d20$f2a6b5d4@hagrid>

I've prepared a patch based on the PyOS_CheckStack code
I posted earlier:

http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101238&group_id=5470

among other things, this fixes the recursive __repr__/__str__
problems under Windows.  it also makes it easier to use Python
with non-standard stack sizes (e.g. when embedding).

some additional notes:

- the new function was added to pythonrun.c, mostly because
it was already declared in pythonrun.h...

- for the moment, it's only enabled if you're using MSVC.  if any-
one here knows if structured exceptions are widely supported by
Windows compilers, let me know.

- it would probably be a good idea to make it an __inline function
(and put the entire function in the header file instead), but I don't
recall if MSVC does the right thing in that case, and I haven't had
time to try it out just yet...
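The recursive __repr__/__str__ case mentioned above is the classic trigger for blowing the C stack; a sketch of how a CPython with recursion/stack checking in place turns it into a catchable error instead of a crash:

```python
# A self-referential repr: each call to repr() re-enters __repr__.
# With stack checking (and the recursion limit), the interpreter
# raises RecursionError rather than overrunning the C stack.
class Evil:
    def __repr__(self):
        return repr(self)

try:
    repr(Evil())
except RecursionError as exc:
    print("caught", type(exc).__name__)
```

Without a check like PyOS_CheckStack (or the recursion limit), this is exactly the kind of input that segfaults the interpreter.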

enjoy /F



From sjoerd.mullender@oratrix.com  Sun Aug 20 15:54:43 2000
From: sjoerd.mullender@oratrix.com (Sjoerd Mullender)
Date: Sun, 20 Aug 2000 16:54:43 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl> <20000820114114.A4797@xs4all.nl> <20000820112605.BF61431047C@bireme.oratrix.nl> <20000820155117.C4797@xs4all.nl>
Message-ID: <399FF133.63B83A52@oratrix.com>

This seems to have done the trick.  Thanks.

Thomas Wouters wrote:
> 
> On Sun, Aug 20, 2000 at 01:26:05PM +0200, Sjoerd Mullender wrote:
> 
> > Here's another pretty serious bug.  Can you verify that this time it
> > isn't my configurations?
> 
> It isn't your config, this is a genuine bug. I'll be checking in a quick fix
> in a few minutes, and start thinking about a test case that would've caught
> this.
> 
> > Try this:
> > from encodings import cp1006, cp1026
> 
> > I get the error
> > ImportError: cannot import name cp1026
> > but if I try to import the two modules separately I get no error.
> 
> Yes. 'find_from_args' wasn't trying hard enough to find out what the
> argument to an import were. Previously, all it had to do was scan the
> bytecodes immediately following an 'IMPORT_NAME' for IMPORT_FROM statements,
> and record its names. However, now that IMPORT_FROM generates a STORE, it
> stops looking after the first IMPORT_FROM. This worked fine for normal
> object-retrieval imports, which don't use the list generated by
> find_from_args, but not for dynamic loading tricks such as 'encodings' uses.
> 
> The fix I made was to unconditionally jump over 5 bytes, after an
> IMPORT_FROM, rather than 2 (2 for the oparg, 1 for the next instruction (a
> STORE) and two more for the oparg for the STORE)
> 
> This does present a problem for the proposed change in semantics for the
> 'as' clause, though. If we allow all expressions that yield valid l-values
> in import-as and from-import-as, we can't easily find out what the import
> arguments are by examining the future bytecode stream. (It might be
> possible, if we changed the POP_TOP to, say, END_IMPORT, that pops the
> module from the stack and can be used to end the search for import
> arguments.
> 
> However, I find this hackery a bit appalling :) Why are we constructing a
> list of import arguments at runtime, from compile-time information, if all
> that information is available at compile time too ? And more easily so ?
> What would break if I made IMPORT_NAME retrieve the from-arguments from a
> list, which is built on the stack by com_import_stmt ? Or is there a more
> convenient way of passing a variable list of strings to a bytecode ? It
> won't really affect performance, since find_from_args is called for all
> imports anyway.
> 
> --
> Thomas Wouters <thomas@xs4all.net>
> 
> Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev


From nascheme@enme.ucalgary.ca  Sun Aug 20 16:53:28 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Sun, 20 Aug 2000 09:53:28 -0600
Subject: [Python-Dev] Re: Eureka! (Re: test_fork fails --with-thread)
In-Reply-To: <14750.3466.34096.504552@buffalo.fnal.gov>; from Charles G Waldman on Fri, Aug 18, 2000 at 11:31:06PM -0500
References: <14750.3466.34096.504552@buffalo.fnal.gov>
Message-ID: <20000820095328.A25233@keymaster.enme.ucalgary.ca>

On Fri, Aug 18, 2000 at 11:31:06PM -0500, Charles G Waldman wrote:
> Well, I think I understand what's going on and I have a patch that
> fixes the problem.

Yes!  With this patch my nasty little tests run successfully on both
single and dual CPU Linux machines.  It's still a mystery how the child
can screw up the parent after the fork.  Oh well.

  Neil


From trentm@ActiveState.com  Sun Aug 20 18:15:52 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Sun, 20 Aug 2000 10:15:52 -0700
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <B5C4DC70.7D6C%dgoodger@bigfoot.com>; from dgoodger@bigfoot.com on Sun, Aug 20, 2000 at 12:53:05AM -0400
References: <LNBBLJKPBEHFEDALKOLCOELJHAAA.tim_one@email.msn.com> <B5C4DC70.7D6C%dgoodger@bigfoot.com>
Message-ID: <20000820101552.A24181@ActiveState.com>

On Sun, Aug 20, 2000 at 12:53:05AM -0400, David Goodger wrote:
> Glad to contribute. You'll find a regression test module for the current
> getopt.py as revised patch #101110. I based it on some existing Lib/test/
> modules, but haven't found the canonical example or instruction set. Is
> there one?

I don't really think there is; it's kind of folklore. There are some good
examples to follow in the existing test suite, and Skip Montanaro wrote a
README for writing tests and using the test suite
(.../dist/src/Lib/test/README).

Really, the testing framework is extremely simple, which is one of its
benefits. There isn't a whole lot of depth left to grok once you've written
one test_XXX.py.


Trent

-- 
Trent Mick
TrentM@ActiveState.com


From tim_one@email.msn.com  Sun Aug 20 18:46:35 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 20 Aug 2000 13:46:35 -0400
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <20000820101552.A24181@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>

[David Goodger]
> Glad to contribute. You'll find a regression test module for the current
> getopt.py as revised patch #101110. I based it on some existing
> Lib/test/ modules, but haven't found the canonical example or instruction
> set. Is there one?

[Trent Mick]
> I don't really think there is. Kind of folk lore. There are some good
> examples to follow in the existing test suite. Skip Montanaro
> wrote a README for writing tests and using the test suite
> (.../dist/src/Lib/test/README).
>
> Really, the testing framework is extremely simple. Which is one of its
> benefits. There is not a whole lot of depth that one has not
> grokked just by writing one test_XXX.py.

What he said.  The README explains it well, and I think the only thing you
(David) missed in your patch was the need to generate the "expected output"
file via running regrtest once with -g on the new test case.

I'd add one thing:  people use "assert" *way* too much in the test suite.
It's usually much better to just print what you got, and rely on regrtest's
output-comparison to complain if what you get isn't what you expected.  The
primary reason for this is that asserts "vanish" when Python is run
using -O, so running regrtest in -O mode simply doesn't test *anything*
caught by an assert.
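The -O behaviour Tim describes is easy to demonstrate (a sketch; it spawns the running interpreter on itself):

```python
import subprocess
import sys

# Run a deliberately failing assert with and without -O.
# Under -O, asserts are compiled away entirely, so the failure
# goes completely unnoticed and the process exits cleanly.
plain = subprocess.run(
    [sys.executable, "-c", "assert False"],
    stderr=subprocess.DEVNULL,
).returncode
optimized = subprocess.run(
    [sys.executable, "-O", "-c", "assert False"],
    stderr=subprocess.DEVNULL,
).returncode
print(plain, optimized)  # non-zero without -O, 0 with -O
```

This is why a regrtest run under -O silently skips every check expressed only as an assert.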

A compromise is to do both:

    print what_i_expected, what_i_got
    assert what_i_expected == what_i_got

In Python 3000, I expect we'll introduce a new binary infix operator

    !!!

so that

    print x !!! y

both prints x and y and asserts that they're equal <wink>.




From mal@lemburg.com  Sun Aug 20 18:57:32 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sun, 20 Aug 2000 19:57:32 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <Pine.LNX.4.21.0008192202520.25394-100000@james.daa.com.au>
Message-ID: <39A01C0C.E6BA6CCB@lemburg.com>

James Henstridge wrote:
> 
> On Sat, 19 Aug 2000, M.-A. Lemburg wrote:
> 
> > > As I said above, most of that turned out not to be very useful.  Did you
> > > include any of the language selection code in the last version of my
> > > gettext module?  It gave behaviour very close to C gettext in this
> > > respect.  It expands the locale name given by the user using the
> > > locale.alias files found on the systems, then decomposes that into the
> > > simpler forms.  For instance, if LANG=en_GB, then my gettext module would
> > > search for catalogs by names:
> > >   ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
> > >
> > > This also allows things like expanding LANG=catalan to:
> > >   ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
> > > (provided the appropriate locale.alias files are found)
> > >
> > > If you missed that that version I sent you I can send it again.  It
> > > stripped out a lot of the experimental code giving a much simpler module.
> >
> > Uhm, can't you make some use of the new APIs in locale.py
> > for this ?
> >
> > locale.py has a whole new set of encoding aware support for
> > LANG variables. It supports Unix and Windows (thanks to /F).
> 
> Well, it can do a little more than that.  It will also handle the case of
> a number of locales listed in the LANG environment variable.  It also
> doesn't look like it handles decomposition of a locale like
> ll_CC.encoding@modifier into other matching encodings in the correct
> precedence order.
> 
> Maybe something to do this sort of decomposition would fit better in
> locale.py though.
> 
> This sort of thing is very useful for people who know more than one
> language, and doesn't seem to be handled by plain setlocale()

I'm not sure I can follow you here: are you saying that your
support in gettext.py does more or less than what's present
in locale.py ?

If it's more, I think it would be a good idea to add those
parts to locale.py.
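The first step of the expansion James describes, resolving a bare locale name via the alias tables into a full ll_CC.encoding form, did end up in the standard locale module; a sketch (the full fallback list, e.g. en_GB.ISO8859-1 -> en_GB -> en -> C, is what his gettext module computed on top of this):

```python
import locale

# normalize() consults the built-in alias table and fills in the
# default encoding for the locale, e.g. "en_GB" -> "en_GB.ISO8859-1".
full = locale.normalize("en_GB")
print(full)

# A rough sketch of the decomposition into less specific fallbacks
# (assumption: this mirrors, but is not, the gettext module's logic):
lang, _, enc = full.partition(".")
base = lang.split("_")[0]
print([full, lang, base, "C"])
```

Whether this decomposition belongs in locale.py or gettext.py is exactly the open question in the thread.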

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From bckfnn@worldonline.dk  Sun Aug 20 19:34:53 2000
From: bckfnn@worldonline.dk (Finn Bock)
Date: Sun, 20 Aug 2000 18:34:53 GMT
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>
Message-ID: <39a024b1.5036672@smtp.worldonline.dk>

[Tim Peters]

>I'd add one thing:  people use "assert" *way* too much in the test suite.

I'll second that.

>It's usually much better to just print what you got, and rely on regrtest's
>output-comparison to complain if what you get isn't what you expected.  The
>primary reason for this is that asserts "vanish" when Python is run
>using -O, so running regrtest in -O mode simply doesn't test *anything*
>caught by an assert.

It can also stop the test script from being used with JPython. A
difference that is acceptable (perhaps by necessity) will prevent the
remaining tests from executing.

regards,
finn


From akuchlin@mems-exchange.org  Sun Aug 20 21:44:02 2000
From: akuchlin@mems-exchange.org (A.M. Kuchling)
Date: Sun, 20 Aug 2000 16:44:02 -0400
Subject: [Python-Dev] ANN: BerkeleyDB 2.9.0 (experimental)
Message-ID: <200008202044.QAA01842@207-172-111-161.s161.tnt1.ann.va.dialup.rcn.com>

This is an experimental release of a rewritten version of the
BerkeleyDB module by Robin Dunn.  Starting from Greg Smith's version,
which supports the 3.1.x versions of Sleepycat's DB library, I've
translated the SWIG wrapper into hand-written C code.  

Warnings: this module is experimental, so don't put it to production
use.  I've only compiled the code with the current Python CVS tree;
there might be glitches with 1.5.2 which will need to be fixed.
Cursor objects are implemented, but completely untested; methods might
not work or might dump core.  (DB and DbEnv objects *are* tested, and
seem to work fine.)

Grab the code from this FTP directory: 
     ftp://starship.python.net/pub/crew/amk/new/

Please report problems to me.  Thanks!

--amk


From thomas@xs4all.net  Sun Aug 20 21:49:18 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 22:49:18 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: <200008202002.NAA13530@delerium.i.sourceforge.net>; from noreply@sourceforge.net on Sun, Aug 20, 2000 at 01:02:32PM -0700
References: <200008202002.NAA13530@delerium.i.sourceforge.net>
Message-ID: <20000820224918.D4797@xs4all.nl>

On Sun, Aug 20, 2000 at 01:02:32PM -0700, noreply@sourceforge.net wrote:

> Summary: Allow all assignment expressions after 'import something as'

> Date: 2000-Aug-19 21:29
> By: twouters
> 
> Comment:
> This absurdly simple patch (4 lines changed in 2 files) turns 'import-as'
> and 'from-import-as' into true assignment expressions: the name given
> after 'as' can be any expression that is a valid l-value:

> >>> from sys import version_info as (maj,min,pl,relnam,relno)          
> >>> maj,min,pl,relnam,relno
> (2, 0, 0, 'beta', 1)

[snip other examples]

> This looks so natural, I would almost treat this as a bugfix instead of a
> new feature ;)

> -------------------------------------------------------
> 
> Date: 2000-Aug-20 20:02
> By: nowonder
> 
> Comment:
> Looks fine. Works as I expect. Doesn't break old code. I hope Guido likes
> it (assigned to gvanrossum).

Actually, it *will* break old code. Try using some of those tricks on, say,
'encodings', like so (excessively convoluted to prove a point ;):

>>> x = {}
>>> from encodings import cp1006 as x[oct(1)], cp1026 as x[hex(20)]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ImportError: cannot import name cp1026

I've another patch waiting which I'll upload after some cleanup, which
circumvents this. The problem is that find_from_args is having a hard time
figuring out how 'import' is being called, exactly. So instead, I create a
list *before* calling import, straight from the information available at
compile time. (It's only a list because it is currently a list; I would
prefer to make it a tuple instead, but I don't know if that would break
stuff.)

That patch is necessary to be able to support this new behaviour, but I
think it's worth checking in even if this patch is rejected -- it speeds up
pystone ! :-) Basically it moves the logic of finding out what the import
arguments are to com_import_stmt() (at compiletime), rather than the
'IMPORT_NAME' bytecode (at runtime).

The only downside is that it adds all 'from-import' arguments to the
co_consts list (as PyString objects) as well as where they already are, the
co_names list (as normal strings). I don't think that's a high price to pay,
myself, and mayhaps the actual storage use could be reduced by making the
one point to the data of the other. Not sure if it's worth it, though.
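This is, in fact, how later CPythons ended up working: the from-import argument list is built at compile time and stored as a constant tuple (it did become a tuple, not a list), visible directly in co_consts. A sketch against a modern Python 3:

```python
# Compile a from-import and look at the compiled constants:
# the names after 'import' appear as a single constant tuple,
# which IMPORT_NAME receives at runtime instead of having to
# rediscover them by scanning bytecode.
code = compile("from encodings import cp1006, cp1026", "<string>", "exec")
print(code.co_consts)  # contains ('cp1006', 'cp1026')
```

The storage duplication Thomas worries about (names in both co_consts and co_names) is the same trade-off current CPython makes.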

I've just uploaded the other patch, it can be found here:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101243&group_id=5470

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From dgoodger@bigfoot.com  Sun Aug 20 22:08:05 2000
From: dgoodger@bigfoot.com (David Goodger)
Date: Sun, 20 Aug 2000 17:08:05 -0400
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call
 for reviewer!)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>
Message-ID: <B5C5C0F4.7E0D%dgoodger@bigfoot.com>

on 2000-08-20 13:46, Tim Peters (tim_one@email.msn.com) wrote:
> What he said.  The README explains it well...

Missed that. Will read it & update the test module.

> In Python 3000, I expect we'll introduce a new binary infix operator
> 
> !!!

Looking forward to more syntax in future releases of Python. I'm sure you'll
lead the way, Tim.
-- 
David Goodger    dgoodger@bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)



From thomas@xs4all.net  Sun Aug 20 22:17:30 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 23:17:30 +0200
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <B5C5C0F4.7E0D%dgoodger@bigfoot.com>; from dgoodger@bigfoot.com on Sun, Aug 20, 2000 at 05:08:05PM -0400
References: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com> <B5C5C0F4.7E0D%dgoodger@bigfoot.com>
Message-ID: <20000820231730.F4797@xs4all.nl>

On Sun, Aug 20, 2000 at 05:08:05PM -0400, David Goodger wrote:
> on 2000-08-20 13:46, Tim Peters (tim_one@email.msn.com) wrote:

> > In Python 3000, I expect we'll introduce a new binary infix operator
> > 
> > !!!
> 
> Looking forward to more syntax in future releases of Python. I'm sure you'll
> lead the way, Tim.

I think you just witnessed some of Tim's legendary wit ;) I suspect most
Python programmers would shoot Guido, Tim, whoever wrote the patch that
added the new syntax, and then themselves, if that ever made it into Python
;)

Good-thing-I-can't-legally-carry-guns-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From nowonder@nowonder.de  Mon Aug 21 00:22:10 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Sun, 20 Aug 2000 23:22:10 +0000
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment
 expressions after 'import something as'
References: <200008202002.NAA13530@delerium.i.sourceforge.net> <20000820224918.D4797@xs4all.nl>
Message-ID: <39A06822.5360596D@nowonder.de>

Thomas Wouters wrote:
> 
> > Date: 2000-Aug-20 20:02
> > By: nowonder
> >
> > Comment:
> > Looks fine. Works as I expect. Doesn't break old code. I hope Guido likes
> > it (assigned to gvanrossum).
> 
> Actually, it *will* break old code. Try using some of those tricks on, say,
> 'encodings', like so (excessively convoluted to prove a point ;):

Actually, I meant that it won't break any existing code (because there
is no code using 'import x as y' yet).

Although I don't understand your example (because the word "encoding"
makes me want to stick my head into the sand), I am fine with your shift
of the list building to compile time. When I realized what IMPORT_NAME
does at runtime, I wondered if this was really necessary.

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From nowonder@nowonder.de  Mon Aug 21 00:54:09 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Sun, 20 Aug 2000 23:54:09 +0000
Subject: [Python-Dev] OT: How to send CVS update mails?
Message-ID: <39A06FA1.C5EB34D1@nowonder.de>

Sorry, but I cannot figure out how to make SourceForge send
updates whenever there is a CVS commit (checkins mailing
list).

I need this for another project, so if someone remembers
how to do this, please tell me.

off-topic-and-terribly-sorri-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From greg@cosc.canterbury.ac.nz  Mon Aug 21 02:50:49 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 21 Aug 2000 13:50:49 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14749.26431.198802.970572@cj42289-a.reston1.va.home.com>
Message-ID: <200008210150.NAA15911@s454.cosc.canterbury.ac.nz>

"Fred L. Drake, Jr." <fdrake@beopen.com>:

> Let's accept (some variant) or Skip's desired functionality as
> os.path.splitprefix(); The result can be (prefix, [list of suffixes]).

+1

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From guido@beopen.com  Mon Aug 21 05:37:46 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 20 Aug 2000 23:37:46 -0500
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: Your message of "Sun, 20 Aug 2000 22:49:18 +0200."
 <20000820224918.D4797@xs4all.nl>
References: <200008202002.NAA13530@delerium.i.sourceforge.net>
 <20000820224918.D4797@xs4all.nl>
Message-ID: <200008210437.XAA22075@cj20424-a.reston1.va.home.com>

> > Summary: Allow all assignment expressions after 'import something as'

-1.  Hypergeneralization.

By the way, notice that

  import foo.bar

places 'foo' in the current namespace, after ensuring that 'foo.bar'
is defined.

What should

  import foo.bar as spam

assign to spam?  I hope foo.bar, not foo.

I note that the CVS version doesn't support this latter syntax at all;
it should be fixed, even though the same effect can be had with

  from foo import bar as spam
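For the record, released Pythons settled this the way Guido hoped: 'import foo.bar as spam' binds the submodule, while plain 'import foo.bar' still binds the top-level package. A sketch using the real os.path (the foo.bar in the mail is hypothetical):

```python
import sys

# 'import foo.bar' binds the top-level name: after this, 'os' is
# bound in the current namespace (and os.path is importable).
import os.path
print(type(os))  # 'os' is what got bound

# 'import foo.bar as spam' binds the submodule itself:
import os.path as p
print(p is sys.modules["os.path"])  # True: p is the submodule, not os
```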

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From tim_one@email.msn.com  Mon Aug 21 06:08:25 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 01:08:25 -0400
Subject: [Python-Dev] Py_MakePendingCalls
Message-ID: <LNBBLJKPBEHFEDALKOLCCEPMHAAA.tim_one@email.msn.com>

Does anyone ever call Py_MakePendingCalls?  It's an undocumented entry point
in ceval.c.  I'd like to get rid of it.  Guido sez:

    The only place I know that uses it was an old Macintosh module I
    once wrote to play sounds asynchronously.  I created
    Py_MakePendingCalls() specifically for that purpose.  It may be
    best to get rid of it.

It's not best if anyone is using it despite its undocumented nature, but is
best otherwise.




From Moshe Zadka <moshez@math.huji.ac.il>  Mon Aug 21 06:56:31 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Mon, 21 Aug 2000 08:56:31 +0300 (IDT)
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for
 reviewer!)
In-Reply-To: <20000820231730.F4797@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008210855550.8603-100000@sundial>

On Sun, 20 Aug 2000, Thomas Wouters wrote:

> I think you just witnessed some of Tim's legendary wit ;) I suspect most
> Python programmers would shoot Guido, Tim, whoever wrote the patch that
> added the new syntax, and then themselves, if that ever made it into Python
> ;)
> 
> Good-thing-I-can't-legally-carry-guns-ly y'rs,

Oh, I'm sure ESR will let you use one of his for this purpose.

it's-a-worth-goal-ly y'rs, Z.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From nowonder@nowonder.de  Mon Aug 21 09:30:02 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Mon, 21 Aug 2000 08:30:02 +0000
Subject: [Python-Dev] Re: compile.c: problem with duplicate argument bugfix
References: <14750.1321.978274.117748@buffalo.fnal.gov>
Message-ID: <39A0E88A.2E2DB35E@nowonder.de>

Charles G Waldman wrote:
> 
> I'm catching up on the python-dev archives and see your message.
> 
> Note that I submitted a patch back in May to fix this same problem:
> 
>  http://www.python.org/pipermail/patches/2000-May/000638.html
> 
> There you will find a working patch, and a detailed discussion which
> explains why your approach results in a core-dump.

I had a look. This problem was fixed by removing the call to
PyErr_Clear() from (at that time) line 359 in Objects/object.c.

If you think your patch is a better solution/still needed, please
explain why. Thanks anyway.

> I submitted this patch back before Python moved over to SourceForge,
> there was a small amount of discussion about it and then the word from
> Guido was "I'm too busy to look at this now", and the patch got
> dropped on the floor.

a-patch-manager-can-be-a-good-thing---even-web-based-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From gstein@lyra.org  Mon Aug 21 08:57:06 2000
From: gstein@lyra.org (Greg Stein)
Date: Mon, 21 Aug 2000 00:57:06 -0700
Subject: [Python-Dev] OT: How to send CVS update mails?
In-Reply-To: <39A06FA1.C5EB34D1@nowonder.de>; from nowonder@nowonder.de on Sun, Aug 20, 2000 at 11:54:09PM +0000
References: <39A06FA1.C5EB34D1@nowonder.de>
Message-ID: <20000821005706.D11327@lyra.org>

Take a look at CVSROOT/loginfo and CVSROOT/syncmail in the Python repository.

Cheers,
-g

On Sun, Aug 20, 2000 at 11:54:09PM +0000, Peter Schneider-Kamp wrote:
> Sorry, but I cannot figure out how to make SourceForge send
> updates whenever there is a CVS commit (checkins mailing
> list).
> 
> I need this for another project, so if someone remembers
> how to do this, please tell me.
> 
> off-topic-and-terribly-sorri-ly y'rs
> Peter
> -- 
> Peter Schneider-Kamp          ++47-7388-7331
> Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
> N-7050 Trondheim              http://schneider-kamp.de
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/


From gstein@lyra.org  Mon Aug 21 08:58:41 2000
From: gstein@lyra.org (Greg Stein)
Date: Mon, 21 Aug 2000 00:58:41 -0700
Subject: [Python-Dev] Py_MakePendingCalls
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEPMHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Aug 21, 2000 at 01:08:25AM -0400
References: <LNBBLJKPBEHFEDALKOLCCEPMHAAA.tim_one@email.msn.com>
Message-ID: <20000821005840.E11327@lyra.org>

Torch the sucker. It is a pain for free-threading.

(and no: I don't use it, nor do I know anything that uses it)

Cheers,
-g

On Mon, Aug 21, 2000 at 01:08:25AM -0400, Tim Peters wrote:
> Does anyone ever call Py_MakePendingCalls?  It's an undocumented entry point
> in ceval.c.  I'd like to get rid of it.  Guido sez:
> 
>     The only place I know that uses it was an old Macintosh module I
>     once wrote to play sounds asynchronously.  I created
>     Py_MakePendingCalls() specifically for that purpose.  It may be
>     best to get rid of it.
> 
> It's not best if anyone is using it despite its undocumented nature, but is
> best otherwise.
> 
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/


From martin@loewis.home.cs.tu-berlin.de  Mon Aug 21 08:57:54 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Mon, 21 Aug 2000 09:57:54 +0200
Subject: [Python-Dev] ANN: BerkeleyDB 2.9.0 (experimental)
Message-ID: <200008210757.JAA08643@loewis.home.cs.tu-berlin.de>

Hi Andrew,

I just downloaded your new module, and found a few problems with it:

- bsddb3.db.hashopen does not work, as Db() is called with no
  arguments; it expects at least one argument. The same holds for btopen
  and rnopen.

- The Db() function should accept None as an argument (or no argument),
  as invoking db_create with a NULL environment creates a "standalone
  database".

- Error recovery appears to be missing; I'm not sure whether this is
  the fault of the library or the fault of the module, though:

>>> from bsddb3 import db
>>> e=db.DbEnv()
>>> e.open("/tmp/aaa",db.DB_CREATE)
>>> d=db.Db(e)
>>> d.open("foo",db.DB_HASH,db.DB_CREATE)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
_bsddb.error: (22, 'Das Argument ist ung\374ltig')
>>> d.open("foo",db.DB_HASH,db.DB_CREATE)
zsh: segmentation fault  python

BTW, I still don't know what argument exactly was invalid ...

Regards,
Martin


From mal@lemburg.com  Mon Aug 21 10:42:04 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 11:42:04 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
 <399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net> <399E5558.C7B6029B@lemburg.com> <200008201059.MAA05292@pandora.informatik.hu-berlin.de>
Message-ID: <39A0F96C.EA0D0D4B@lemburg.com>

Martin von Loewis wrote:
> 
> > Just dreaming a little here: I would prefer that we use some
> > form of XML to write the catalogs.
> 
> Well, I hope that won't happen. We have excellent tools dealing with
> the catalogs, and I see no value in replacing
> 
> #: src/grep.c:183 src/grep.c:200 src/grep.c:300 src/grep.c:408 src/kwset.c:184
> #: src/kwset.c:190
> msgid "memory exhausted"
> msgstr "Virtueller Speicher erschöpft."
> 
> with
> 
> <entry>
>   <sourcelist>
>     <source file="src/grep.c" line="183"/>
>     <source file="src/grep.c" line="200"/>
>     <source file="src/grep.c" line="300"/>
>     <source file="src/grep.c" line="408"/>
>     <source file="src/kwset.c" line="184"/>
>     <source file="src/kwset.c" line="190"/>
>   </sourcelist>
>   <msgid>memory exhausted</msgid>
>   <msgstr>Virtueller Speicher erschöpft.</msgstr>
> </entry>

Well, it's the same argument as always: better have one format
which fits all than a new format for every application. XML
suits these tasks nicely and is becoming more and more accepted
these days.
 
> > XML comes with Unicode support and tools for writing XML are
> > available too.
> 
> Well, the catalog files also "come with unicode support", meaning that
> you can write them in UTF-8 if you want; and tools could be easily
> extended to process UCS-2 input if anybody desires.
> 
> OTOH, the tools for writing po files are much more advanced than any
> XML editor I know.
> 
> > We'd only need a way to transform XML into catalog files of some
> > Python specific platform independent format (should be possible to
> > create .mo files from XML too).
> 
> Or we could convert the XML catalogs in Uniforum-style catalogs, and
> then use the existing tools.

True.

Was just a thought...
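A minimal sketch of the conversion Martin suggests: turning the XML entry format floated above back into a Uniforum-style po entry, so the existing tools can be reused (xml.etree is used purely for illustration; it postdates this discussion):

```python
import xml.etree.ElementTree as ET

# One catalog entry in the hypothetical XML format from the thread.
ENTRY = """<entry>
  <sourcelist>
    <source file="src/grep.c" line="183"/>
    <source file="src/kwset.c" line="184"/>
  </sourcelist>
  <msgid>memory exhausted</msgid>
  <msgstr>Virtueller Speicher erschöpft.</msgstr>
</entry>"""

entry = ET.fromstring(ENTRY)
# Emit the equivalent Uniforum-style po entry.
for src in entry.iter("source"):
    print("#: %s:%s" % (src.get("file"), src.get("line")))
print('msgid "%s"' % entry.findtext("msgid"))
print('msgstr "%s"' % entry.findtext("msgstr"))
```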
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Mon Aug 21 10:30:20 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 11:30:20 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de> <399E5340.B00811EF@lemburg.com> <200008201051.MAA05259@pandora.informatik.hu-berlin.de>
Message-ID: <39A0F6AC.B9C0FCC9@lemburg.com>

Martin von Loewis wrote:
> 
> > Hmm, if your catalogs are encoded in UTF-8 and use non-ASCII
> > chars then the traditional API would have to raise encoding
> > errors
> 
> I don't know what you mean by "traditional" here. The gettext.gettext
> implementation in Barry's patch will return the UTF-8 encoded byte
> string, instead of raising encoding errors - no code conversion takes
> place.

True.
 
> > Perhaps the return value type of .gettext() should be given on
> > the .install() call: e.g. encoding='utf-8' would have .gettext()
> > return a string using UTF-8 while encoding='unicode' would have
> > it return Unicode objects.
> 
> No. You should have the option of either receiving byte strings, or
> Unicode strings. If you want byte strings, you should get the ones
> appearing in the catalog.

So you're all for the two different API versions ? After some
more thinking, I think I agree. The reason is that the lookup
itself will have to be Unicode-aware too:

gettext.unigettext(u"Löschen") would have to convert u"Löschen"
to UTF-8, then look this up and convert the returned value
back to Unicode.

gettext.gettext(u"Löschen") will fail with ASCII default encoding.
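For what it's worth, the lookup MAL describes can be sketched in a few lines. The catalog contents and the name `unigettext` are purely illustrative here, not the real gettext API:

```python
# Sketch of a Unicode-aware catalog lookup: encode the Unicode message
# id to UTF-8, look it up in a byte-string catalog, and decode the
# returned value back to Unicode.  Catalog and function name are
# illustrative assumptions.
CATALOG = {
    u"Löschen".encode("utf-8"): u"Delete".encode("utf-8"),
}

def unigettext(message):
    key = message.encode("utf-8")
    translated = CATALOG.get(key, key)   # fall back to the id itself
    return translated.decode("utf-8")
```

With this, `unigettext(u"Löschen")` yields `u"Delete"`, while an id missing from the catalog comes back unchanged.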

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Mon Aug 21 11:04:04 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 12:04:04 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com>
Message-ID: <39A0FE94.1AF5FABF@lemburg.com>

This is a multi-part message in MIME format.
--------------D181A42EF954A5F1E101C9DB
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Guido van Rossum wrote:
> 
> Paul Prescod wrote:
> 
> > I don't think of iterators as indexing in terms of numbers. Otherwise I
> > could do this:
> >
> > >>> a={0:"zero",1:"one",2:"two",3:"three"}
> > >>> for i in a:
> > ...     print i
> > ...
> >
> > So from a Python user's point of view, for-looping has nothing to do
> > with integers. From a Python class/module creator's point of view it
> > does have to do with integers. I would be neither surprised nor
> > disappointed if that changed one day.
> 
> Bingo!
> 
> I've long had an idea for generalizing 'for' loops using iterators. This is
> more a Python 3000 thing, but I'll explain it here anyway because I think
> it's relevant. Perhaps this should become a PEP?  (Maybe we should have a
> series of PEPs with numbers in the 3000 range for Py3k ideas?)
> 
> The statement
> 
>   for <variable> in <object>: <block>
> 
> should translate into this kind of pseudo-code:
> 
>   # variant 1
>   __temp = <object>.newiterator()
>   while 1:
>       try: <variable> = __temp.next()
>       except ExhaustedIterator: break
>       <block>
> 
> or perhaps (to avoid the relatively expensive exception handling):
> 
>   # variant 2
>   __temp = <object>.newiterator()
>   while 1:
>       __flag, <variable> = __temp.next()
>       if not __flag: break
>       <block>
> 
> In variant 1, the next() method returns the next object or raises
> ExhaustedIterator. In variant 2, the next() method returns a tuple (<flag>,
> <variable>) where <flag> is 1 to indicate that <variable> is valid or 0 to
> indicate that there are no more items available. I'm not crazy about the
> exception, but I'm even less crazy about the more complex next() return
> value (careful observers may have noticed that I'm rarely crazy about flag
> variables :-).
> 
> Another argument for variant 1 is that variant 2 changes what <variable> is
> after the loop is exhausted, compared to current practice: currently, it
> keeps the last valid value assigned to it. Most likely, the next() method
> returns None when the sequence is exhausted. It doesn't make a lot of sense
> to require it to return the last item of the sequence -- there may not *be*
> a last item, if the sequence is empty, and not all sequences make it
> convenient to keep hanging on to the last item in the iterator, so it's best
> to specify that next() returns (0, None) when the sequence is exhausted.
> 
> (It would be tempting to suggest a variant 1a where instead of raising an
> exception, next() returns None when the sequence is exhausted, but this
> won't fly: you couldn't iterate over a list containing some items that are
> None.)

How about a third variant:

#3:
__iter = <object>.iterator()
while __iter:
   <variable> = __iter.next()
   <block>

This adds a slot call, but removes the malloc overhead introduced
by returning a tuple for every iteration (which is likely to be
a performance problem).

Another possibility would be using an iterator attribute
to get at the variable:

#4:
__iter = <object>.iterator()
while 1:
   if not __iter.next():
        break
   <variable> = __iter.value
   <block>
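A minimal runnable sketch of variant #4 (the class name and the driver loop are illustrative only, filled in from the pseudo-code above):

```python
# Variant #4: next() returns a truth value, and the current item is
# fetched from a .value attribute -- no tuple allocation per step.
class ListIterator:
    def __init__(self, seq):
        self.seq = seq
        self.index = -1
        self.value = None

    def next(self):
        self.index += 1
        try:
            self.value = self.seq[self.index]
            return 1
        except IndexError:
            return 0

items = []
it = ListIterator(["a", "b", "c"])
while 1:
    if not it.next():
        break
    items.append(it.value)
```

After the loop, `items` holds `["a", "b", "c"]`; an empty sequence terminates on the first `next()` call.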

> Side note: I believe that the iterator approach could actually *speed up*
> iteration over lists compared to today's code. This is because currently the
> iteration index is a Python integer object that is stored on the stack.
> This means an integer add with overflow check, allocation, and deallocation
> on each iteration! But the iterator for lists (and other basic sequences)
> could easily store the index as a C int! (As long as the sequence's length
> is stored in an int, the index can't overflow.)

You might want to check out the counterobject.c approach I used
to speed up the current for-loop in Python 1.5's ceval.c:
it's basically a mutable C integer which is used instead of
the current Python integer index.

The details can be found in my old patch:

  http://starship.python.net/crew/lemburg/mxPython-1.5.patch.gz

> [Warning: thinking aloud ahead!]
> 
> Once we have the concept of iterators, we could support explicit use of them
> as well. E.g. we could use a variant of the for statement to iterate over an
> existing iterator:
> 
>   for <variable> over <iterator>: <block>
> 
> which would (assuming variant 1 above) translate to:
> 
>   while 1:
>       try: <variable> = <iterator>.next()
>       except ExhaustedIterator: break
>       <block>
> 
> This could be used in situations where you have a loop iterating over the
> first half of a sequence and a second loop that iterates over the remaining
> items:
> 
>   it = something.newiterator()
>   for x over it:
>       if time_to_start_second_loop(): break
>       do_something()
>   for x over it:
>       do_something_else()
> 
> Note that the second for loop doesn't reset the iterator -- it just picks up
> where the first one left off! (Detail: the x that caused the break in the
> first loop doesn't get dealt with in the second loop.)
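(In later Pythons this works exactly as described; a sketch of the two-loop hand-off, including the detail that the item causing the break is consumed by the first loop:)

```python
# An explicit iterator handed to two for loops resumes rather than
# restarts; the element that triggered the break is skipped entirely.
it = iter([1, 2, 3, 4, 5])
first, second = [], []
for x in it:
    if x == 3:          # time to start the second loop
        break
    first.append(x)
for x in it:
    second.append(x)
```

Here `first` ends up as `[1, 2]` and `second` as `[4, 5]`; the 3 that caused the break appears in neither.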
> 
> I like the iterator concept because it allows us to do things lazily. There
> are lots of other possibilities for iterators. E.g. mappings could have
> several iterator variants to loop over keys, values, or both, in sorted or
> hash-table order. Sequences could have an iterator for traversing them
> backwards, and a few other ones for iterating over their index set (cf.
> indices()) and over (index, value) tuples (cf. irange()). Files could be
> their own iterator where the iterator is almost the same as readline()
> except it raises ExhaustedIterator at EOF instead of returning "".  A tree
> datastructure class could have an associated iterator class that maintains a
> "finger" into the tree.
> 
> Hm, perhaps iterators could be their own iterator? Then if 'it' were an
> iterator, it.newiterator() would return a reference to itself (not a copy).
> Then we wouldn't even need the 'over' alternative syntax. Maybe the method
> should be called iterator() then, not newiterator(), to avoid suggesting
> anything about the newness of the returned iterator.
> 
> Other ideas:
> 
> - Iterators could have a backup() method that moves the index back (or
> raises an exception if this feature is not supported, e.g. when reading data
> from a pipe).
> 
> - Iterators over certain sequences might support operations on the
> underlying sequence at the current position of the iterator, so that you
> could iterate over a sequence and occasionally insert or delete an item (or
> a slice).

FYI, I've attached a module which I've been using a while for
iteration. The code is very simple and implements the #4 variant
described above.
 
> Of course iterators also connect to generators. The basic list iterator
> doesn't need coroutines or anything, it can be done as follows:
> 
>   class Iterator:
>       def __init__(self, seq):
>           self.seq = seq
>           self.ind = 0
>       def next(self):
>           if self.ind >= len(self.seq): raise ExhaustedIterator
>           val = self.seq[self.ind]
>           self.ind += 1
>           return val
> 
> so that <list>.iterator() could just return Iterator(<list>) -- at least
> conceptually.
> 
> But for other data structures the amount of state needed might be
> cumbersome. E.g. a tree iterator needs to maintain a stack, and it's much
> easier to code that using a recursive Icon-style generator than by using an
> explicit stack. On the other hand, I remember reading an article a while ago
> (in Dr. Dobbs?) by someone who argued (in a C++ context) that such recursive
> solutions are very inefficient, and that an explicit stack (1) is really not
> that hard to code, and (2) gives much more control over the memory and time
> consumption of the code. On the third hand, some forms of iteration really
> *are* expressed much more clearly using recursion. On the fourth hand, I
> disagree with Matthias ("Dr. Scheme") Felleisen about recursion as the root
> of all iteration. Anyway, I believe that iterators (as explained above) can
> be useful even if we don't have generators (in the Icon sense, which I
> believe means coroutine-style).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/
--------------D181A42EF954A5F1E101C9DB
Content-Type: text/python; charset=us-ascii;
 name="Iterator.py"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline;
 filename="Iterator.py"

""" Generic object iterators.

    (c) Copyright Marc-Andre Lemburg; All Rights Reserved.
    See the documentation for further information on copyrights,
    or contact the author (mal@lemburg.com).

"""
import exceptions

class Error(exceptions.StandardError):
    pass

class Iterator:

    value = None                # Current value

    def __init__(self,obj,startpos=0):

        self.obj = obj
        self.index = startpos
        self.startpos = startpos
        self.step = 1

    def set(self,pos):

        self.index = self.startpos = pos 
        self.value = self.obj[pos]

    def reset(self):

        self.index = pos = self.startpos
        self.value = self.obj[pos]

    def next(self):

        self.index = i = self.index + self.step
        if i < 0:
            return 0
        try:
            self.value = self.obj[i]
            return 1
        except IndexError:
            return 0

    def prev(self):

        self.index = i = self.index - self.step
        if i < 0:
            return 0
        try:
            self.value = self.obj[i]
            return 1
        except IndexError:
            return 0

    def __getitem__(self,i):

        if i != 0:
            self.index = i = self.index + self.step
        else:
            i = self.index
        if i < 0:
            raise IndexError
        self.value = v = self.obj[i]
        return v

ForwardIterator = Iterator

class BackwardIterator(Iterator):

    def __init__(self,obj,startpos=None):

        self.obj = obj
        if startpos is not None:
            self.index = startpos
        else:
            self.index = startpos = len(obj) - 1
        self.startpos = startpos
        self.value = self.obj[startpos]
        self.step = -1

class CallIterator(Iterator):

    def __init__(self,obj):

        self.obj = obj
        self.index = 0

    def set(self,pos):

        raise Error,'.set() not supported'

    def reset(self):

        raise Error,'.reset() not supported'

    def next(self):

        self.index = self.index + 1
        try:
            v = self.obj()
            if not v:
                return 0
            self.value = v
            return 1
        except IndexError:
            return 0

    def prev(self):

        raise Error,'.prev() not supported'

    def __getitem__(self,i):

        self.index = self.index + 1
        v = self.obj()
        if not v:
            raise IndexError
        self.value = v
        return v


def _test():
    i = BackwardIterator(range(1,10))
    for x in i:
        print x
    print
    i.reset()
    while 1:
        print i.value
        if not i.next():
            break
    print
    filereader = CallIterator(open('/usr/dict/words').readline)
    for line in filereader:
        pass
    print 'Read %i lines' % filereader.index

if __name__ == '__main__':
    _test()

--------------D181A42EF954A5F1E101C9DB--



From loewis@informatik.hu-berlin.de  Mon Aug 21 11:25:01 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Mon, 21 Aug 2000 12:25:01 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <39A0F6AC.B9C0FCC9@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de> <399E5340.B00811EF@lemburg.com> <200008201051.MAA05259@pandora.informatik.hu-berlin.de> <39A0F6AC.B9C0FCC9@lemburg.com>
Message-ID: <200008211025.MAA14212@pandora.informatik.hu-berlin.de>

> So you're all for the two different API versions ? After some
> more thinking, I think I agree. The reason is that the lookup
> itself will have to be Unicode-aware too:
> 
> gettext.unigettext(u"Löschen") would have to convert u"Löschen"
> to UTF-8, then look this up and convert the returned value
> back to Unicode.

I did not even think of using Unicode as *keys* to the lookup. The GNU
translation project recommends that messages in the source code be in
English. This is good advice, as translators producing, say, Japanese
translations are likely to have more problems with German input than
with English input.

So I'd say that message ids can safely be byte strings, especially as
I believe that the gettext tools treat them that way, as well. If
authors really have to put non-ASCII into message ids, they should use
\x escapes. I have never seen such a message, though (and I have
translated a number of message catalogs).

Regards,
Martin



From loewis@informatik.hu-berlin.de  Mon Aug 21 11:31:25 2000
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Mon, 21 Aug 2000 12:31:25 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <39A0F96C.EA0D0D4B@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net>
 <399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net> <399E5558.C7B6029B@lemburg.com> <200008201059.MAA05292@pandora.informatik.hu-berlin.de> <39A0F96C.EA0D0D4B@lemburg.com>
Message-ID: <200008211031.MAA14260@pandora.informatik.hu-berlin.de>

> Well, it's the same argument as always: better have one format
> which fits all than a new format for every application. XML
> suits these tasks nicely and is becoming more and more accepted
> these days.

I believe this is a misleading claim. First, XML is not *one* format;
it is rather a "meta format": you still need the document type
definition (valid vs. well-formed). Furthermore, the XML argument is
good if there is no established data format for some application. If
there is already an accepted format, I see no value in converting that
to XML.

Regards,
Martin


From fredrik@pythonware.com  Mon Aug 21 11:43:47 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Mon, 21 Aug 2000 12:43:47 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com>
Message-ID: <020401c00b5c$b07f1870$0900a8c0@SPIFF>

mal wrote:
> How about a third variant:
> 
> #3:
> __iter = <object>.iterator()
> while __iter:
>    <variable> = __iter.next()
>    <block>

how does that one terminate?

maybe you meant something like:

    __iter = <object>.iterator()
    while __iter:
        <variable> = __iter.next()
        if <variable> is <sentinel>:
            break
        <block>

(where <sentinel> could be __iter itself...)
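(For reference, this sentinel pattern is exactly what the later two-argument `iter(callable, sentinel)` form captures; a sketch with illustrative sample data standing in for readline:)

```python
# Call a zero-argument callable until it returns the sentinel, then
# stop -- the sentinel itself is not yielded.
data = ["one", "two", "", "never reached"]   # "" plays the role of EOF
readline = iter(data).__next__
collected = list(iter(readline, ""))
```

`collected` ends up as `["one", "two"]`; everything after the sentinel is never consumed.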

</F>



From thomas@xs4all.net  Mon Aug 21 11:59:44 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 12:59:44 +0200
Subject: [Python-Dev] iterators
In-Reply-To: <020401c00b5c$b07f1870$0900a8c0@SPIFF>; from fredrik@pythonware.com on Mon, Aug 21, 2000 at 12:43:47PM +0200
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <020401c00b5c$b07f1870$0900a8c0@SPIFF>
Message-ID: <20000821125944.K4797@xs4all.nl>

On Mon, Aug 21, 2000 at 12:43:47PM +0200, Fredrik Lundh wrote:
> mal wrote:
> > How about a third variant:
> > 
> > #3:
> > __iter = <object>.iterator()
> > while __iter:
> >    <variable> = __iter.next()
> >    <block>

> how does that one terminate?

__iter should evaluate to false once it's "empty". 

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fredrik@pythonware.com  Mon Aug 21 12:08:05 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Mon, 21 Aug 2000 13:08:05 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <020401c00b5c$b07f1870$0900a8c0@SPIFF>
Message-ID: <024301c00b60$15168fe0$0900a8c0@SPIFF>

I wrote:
> mal wrote:
> > How about a third variant:
> > 
> > #3:
> > __iter = <object>.iterator()
> > while __iter:
> >    <variable> = __iter.next()
> >    <block>
> 
> how does that one terminate?

brain disabled.  sorry.

</F>



From thomas@xs4all.net  Mon Aug 21 13:03:06 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 14:03:06 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: <200008210437.XAA22075@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Aug 20, 2000 at 11:37:46PM -0500
References: <200008202002.NAA13530@delerium.i.sourceforge.net> <20000820224918.D4797@xs4all.nl> <200008210437.XAA22075@cj20424-a.reston1.va.home.com>
Message-ID: <20000821140306.L4797@xs4all.nl>

On Sun, Aug 20, 2000 at 11:37:46PM -0500, Guido van Rossum wrote:

> > > Summary: Allow all assignment expressions after 'import something as'
> -1.  Hypergeneralization.

I don't think it's hypergeneralization. In fact, people might expect it[*],
if we claim that the 'import-as' syntax is a shortcut for the current
practice of

   import somemod
   sspam = somemod.somesubmod.spam

(or similar constructs.) However, I realize you're under a lot of pressure
to Pronounce a number of things now that you're back, and we can always
change this later (if you change your mind.) I dare to predict, though, that
we'll see questions about why this isn't generalized, on c.l.py.

(*] I know 'people might expect it' and 'hypergeneralization' aren't
mutually exclusive, but you know what I mean.)

> By the way, notice that
>   import foo.bar
> places 'foo' in the current namespace, after ensuring that 'foo.bar'
> is defined.

Oh, I noticed ;) We had a small thread about that, this weekend. The subject
was something like ''import as'' or so.

> What should
>   import foo.bar as spam
> assign to spam?  I hope foo.bar, not foo.

The original patch assigned foo to spam, not foo.bar. Why ? Well, all the
patch did was use a different name for the STORE operation that follows an
IMPORT_NAME. To elaborate, 'import foo.bar' does this:

    IMPORT_NAME "foo.bar"
    <resulting object, 'foo', is pushed on the stack>
    STORE_NAME "foo"

and all the patch did was replace the "foo" in STORE_NAME with the name
given in the "as" clause.
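The bytecode shape Thomas describes can be inspected with the dis module; a sketch (using `os.path` as the stand-in dotted import, and assuming CPython's compiler, which still emits this pattern):

```python
import dis

# "import foo.bar" compiles to an IMPORT_NAME of the full dotted name,
# followed by a store of only the top-level package name.
code = compile("import os.path", "<example>", "exec")
instrs = [(i.opname, i.argval) for i in dis.get_instructions(code)]
```

Scanning `instrs` shows `("IMPORT_NAME", "os.path")` followed by `("STORE_NAME", "os")` -- the store uses the bare package name, which is exactly what the patch's STORE rewrite changed.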

> I note that the CVS version doesn't support this latter syntax at all;
> it should be fixed, even though the same effect can be had with
>   from foo import bar as spam

Well, "general consensus" (where the general was a three-headed beast, see
the thread I mentioned) prompted me to make it illegal for now. At least
no one is going to rely on it just yet ;) Making it work as you suggest
requires a separate approach, though. I'll think about how to do it best.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Mon Aug 21 14:52:34 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 15:52:34 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <200008211335.GAA27170@slayer.i.sourceforge.net>; from bwarsaw@users.sourceforge.net on Mon, Aug 21, 2000 at 06:35:40AM -0700
References: <200008211335.GAA27170@slayer.i.sourceforge.net>
Message-ID: <20000821155234.M4797@xs4all.nl>

On Mon, Aug 21, 2000 at 06:35:40AM -0700, Barry Warsaw wrote:
> Update of /cvsroot/python/python/nondist/peps
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv27120

> Modified Files:
> 	pep-0000.txt 
> Log Message:
> PEP 202, change tim's email address to tpeters -- we really need a key
> for these.

>    I   200  pep-0200.txt  Python 2.0 Release Schedule           jhylton
>    SA  201  pep-0201.txt  Lockstep Iteration                    bwarsaw
> !  S   202  pep-0202.txt  List Comprehensions                   tim_one
>    S   203  pep-0203.txt  Augmented Assignments                 twouters
>    S   204  pep-0204.txt  Range Literals                        twouters


I thought the last column was the SourceForge username ? I don't have an
email address that reads 'twouters', except the SF one, anyway, and I
thought tim had 'tim_one' there. Or did he change it ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From james@daa.com.au  Mon Aug 21 15:02:21 2000
From: james@daa.com.au (James Henstridge)
Date: Mon, 21 Aug 2000 22:02:21 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <39A01C0C.E6BA6CCB@lemburg.com>
Message-ID: <Pine.LNX.4.21.0008210948070.15515-100000@james.daa.com.au>

On Sun, 20 Aug 2000, M.-A. Lemburg wrote:

> James Henstridge wrote:
> > Well, it can do a little more than that.  It will also handle the case of
> > a number of locales listed in the LANG environment variable.  It also
> > doesn't look like it handles decomposition of a locale like
> > ll_CC.encoding@modifier into other matching encodings in the correct
> > precedence order.
> > 
> > Maybe something to do this sort of decomposition would fit better in
> > locale.py though.
> > 
> > This sort of thing is very useful for people who know more than one
> > language, and doesn't seem to be handled by plain setlocale()
> 
> I'm not sure I can follow you here: are you saying that your
> support in gettext.py does more or less than what's present
> in locale.py ?
> 
> If it's more, I think it would be a good idea to add those
> parts to locale.py.

It does a little more than the current locale.py.

I just checked the current locale module, and it gives a ValueError
exception when LANG is set to something like en_AU:fr_FR.  This sort of
thing should be handled by the python interface to gettext, as it is by
the C implementation (and I am sure that most programmers would not expect
such an error from the locale module).

The code in my gettext module handles that case.

James.

-- 
Email: james@daa.com.au
WWW:   http://www.daa.com.au/~james/




From MarkH@ActiveState.com  Mon Aug 21 15:48:09 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Tue, 22 Aug 2000 00:48:09 +1000
Subject: [Python-Dev] configure.in, C++ and Linux
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEMEDFAA.MarkH@ActiveState.com>

I'm pretty new to all of this, so please bear with me.

I create a Setup.in that references some .cpp or .cxx files.  When I create
the Makefile, the command line generated for building these C++ sources is
similar to:

  $(CCC) $(CCSHARE) ...

However, CCC is never set anywhere....

Looking at configure.in, there appears to be support for setting this CCC
variable.  However, it was commented out in revision 1.113 - a checkin by
Guido, December 1999, with the comment:
"""
Patch by Geoff Furnish to make compiling with C++ more gentle.
(The configure script is regenerated, not from his patch.)
"""

Digging a little deeper, I find that config/Makefile.pre.in and
config/makesetup both have references to CCC that account for the
references in my Makefile.  Unfortunately, my knowledge doesn't yet stretch
to knowing exactly where these files come from :-)

Surely all of this isn't correct.  Can anyone tell me what is going on, or
what I am doing wrong?

Thanks,

Mark.







From mal@lemburg.com  Mon Aug 21 15:59:36 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 16:59:36 +0200
Subject: [Python-Dev] Adding more LANG support to locale.py (Re: gettext in
 the standard library)
References: <Pine.LNX.4.21.0008210948070.15515-100000@james.daa.com.au>
Message-ID: <39A143D8.5595B7C4@lemburg.com>

James Henstridge wrote:
> 
> On Sun, 20 Aug 2000, M.-A. Lemburg wrote:
> 
> > James Henstridge wrote:
> > > Well, it can do a little more than that.  It will also handle the case of
> > > a number of locales listed in the LANG environment variable.  It also
> > > doesn't look like it handles decomposition of a locale like
> > > ll_CC.encoding@modifier into other matching encodings in the correct
> > > precedence order.
> > >
> > > Maybe something to do this sort of decomposition would fit better in
> > > locale.py though.
> > >
> > > This sort of thing is very useful for people who know more than one
> > > language, and doesn't seem to be handled by plain setlocale()
> >
> > I'm not sure I can follow you here: are you saying that your
> > support in gettext.py does more or less than what's present
> > in locale.py ?
> >
> > If it's more, I think it would be a good idea to add those
> > parts to locale.py.
> 
> It does a little more than the current locale.py.
> 
> I just checked the current locale module, and it gives a ValueError
> exception when LANG is set to something like en_AU:fr_FR.  This sort of
> thing should be handled by the python interface to gettext, as it is by
> the C implementation (and I am sure that most programmers would not expect
> such an error from the locale module).

That usage of LANG is new to me... I wonder how well such
multiple-locale settings fit the current API.
 
> The code in my gettext module handles that case.

Would you be willing to supply a patch to locale.py which
adds multiple LANG options to the interface ?

I guess we'd need a new API getdefaultlocales() [note the trailing
"s"] which will then return a list of locale tuples rather than
a single one. The standard getdefaultlocale() should then return
whatever is considered to be the standard locale when using the
multiple locale notation for LANG.
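A sketch of what such a getdefaultlocales() might do with a GNU-style colon-separated LANG value (the name and return shape follow the proposal above; this is not an existing locale.py API):

```python
# Decompose a multi-locale LANG value such as "en_AU:fr_FR" into a
# precedence-ordered list of (language_code, encoding) tuples.
def getdefaultlocales(lang):
    locales = []
    for entry in lang.split(":"):
        if not entry:
            continue
        code, _, encoding = entry.partition(".")
        locales.append((code, encoding or None))
    return locales
```

The standard getdefaultlocale() could then simply return the first tuple of this list.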

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From bwarsaw@beopen.com  Mon Aug 21 16:05:13 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 21 Aug 2000 11:05:13 -0400 (EDT)
Subject: [Python-Dev] OT: How to send CVS update mails?
References: <39A06FA1.C5EB34D1@nowonder.de>
 <20000821005706.D11327@lyra.org>
Message-ID: <14753.17705.775721.360133@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein@lyra.org> writes:

    GS> Take a look at CVSROOT/loginfo and CVSROOT/syncmail in the
    GS> Python repository.

Just to complete the picture, add CVSROOT/checkoutlist.

-Barry


From akuchlin@mems-exchange.org  Mon Aug 21 16:06:16 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Mon, 21 Aug 2000 11:06:16 -0400
Subject: [Python-Dev] configure.in, C++ and Linux
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEMEDFAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Tue, Aug 22, 2000 at 12:48:09AM +1000
References: <ECEPKNMJLHAPFFJHDOJBEEMEDFAA.MarkH@ActiveState.com>
Message-ID: <20000821110616.A547@kronos.cnri.reston.va.us>

On Tue, Aug 22, 2000 at 12:48:09AM +1000, Mark Hammond wrote:
>Digging a little deeper, I find that config/Makefile.pre.in and
>config/makesetup both have references to CCC that account for the
>references in my Makefile.  Unfortunately, my knowledge doesn't yet stretch
>to knowing exactly where these files come from :-)

The Makefile.pre.in is probably from Misc/Makefile.pre.in, which has a
reference to $(CCC); Modules/Makefile.pre.in is more up to date and
uses $(CXX).  Modules/makesetup also refers to $(CCC), and probably
needs to be changed to use $(CXX), matching Modules/Makefile.pre.in.

Given that we want to encourage the use of the Distutils,
Misc/Makefile.pre.in should be deleted to avoid having people use it.

--amk



From fdrake@beopen.com  Mon Aug 21 17:01:38 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 12:01:38 -0400 (EDT)
Subject: [Python-Dev] OT: How to send CVS update mails?
In-Reply-To: <39A06FA1.C5EB34D1@nowonder.de>
References: <39A06FA1.C5EB34D1@nowonder.de>
Message-ID: <14753.21090.492033.754101@cj42289-a.reston1.va.home.com>

Peter Schneider-Kamp writes:
 > Sorry, but I cannot figure out how to make SourceForge send
 > updates whenever there is a CVS commit (checkins mailing
 > list).

  I wrote up some instructions at:

http://sfdocs.sourceforge.net/sfdocs/display_topic.php?topicid=52


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From fdrake@beopen.com  Mon Aug 21 17:49:00 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 12:49:00 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python import.c,2.146,2.147
In-Reply-To: <200008211635.JAA09187@slayer.i.sourceforge.net>
References: <200008211635.JAA09187@slayer.i.sourceforge.net>
Message-ID: <14753.23932.816392.92125@cj42289-a.reston1.va.home.com>

Barry Warsaw writes:
 > Thomas reminds me to bump the MAGIC number for the extended print
 > opcode additions.

  You also need to document the new opcodes in Doc/lib/libdis.tex.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From tim_one@email.msn.com  Mon Aug 21 18:21:32 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 13:21:32 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <20000821155234.M4797@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>

[Thomas Wouters]
> I thought the last column was the SourceForge username ?
> I don't have an email address that reads 'twouters', except the
> SF one, anyway, and I thought tim had 'tim_one' there. Or did he
> change it ?

I don't know what the last column means.  What would you like it to mean?
Perhaps a complete email address, or (what a concept!) the author's *name*,
would be best.

BTW, my current employer assigned "tpeters@beopen.com" to me.  I was just
"tim" for the first 15 years of my career, and then "tim_one" when you kids
started using computers faster than me and took "tim" everywhere before I
got to it.  Now even "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one"
now.  I have given up all hope of retaining an online identity.

the-effbot-is-next-ly y'rs  - tim




From cgw@fnal.gov  Mon Aug 21 18:47:04 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Mon, 21 Aug 2000 12:47:04 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps
 pep-0000.txt,1.24,1.25
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
References: <20000821155234.M4797@xs4all.nl>
 <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <14753.27416.760084.528198@buffalo.fnal.gov>

Tim Peters writes:

 >  I have given up all hope of retaining an online identity.

And have you seen http://www.timpeters.com ?

(I don't know how you find the time to take those stunning color
photographs!)



From bwarsaw@beopen.com  Mon Aug 21 19:29:27 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 21 Aug 2000 14:29:27 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
References: <20000821155234.M4797@xs4all.nl>
 <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <14753.29959.135594.439438@anthem.concentric.net>

I don't know why I haven't seen Thomas's reply yet, but in any event...

>>>>> "TP" == Tim Peters <tim_one@email.msn.com> writes:

    TP> [Thomas Wouters]
    >> I thought the last column was the SourceForge username?  I
    >> don't have an email address that reads 'twouters', except the
    >> SF one, anyway, and I thought tim had 'tim_one' there. Or did
    >> he change it ?

    TP> I don't know what the last column means.  What would you like
    TP> it to mean?  Perhaps a complete email address, or (what a
    TP> concept!) the author's *name*, would be best.

    TP> BTW, my current employer assigned "tpeters@beopen.com" to me.
    TP> I was just "tim" for the first 15 years of my career, and then
    TP> "tim_one" when you kids started using computers faster than me
    TP> and took "tim" everywhere before I got to it.  Now even
    TP> "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one" now.
    TP> I have given up all hope of retaining an online identity.

I'm not sure what it should mean either, except as a shorthand way to
identify the owner of the PEP.  Most important is that each line fit
in 80 columns!

Perhaps we can do away with the filename column, since that's easily
calculated?

I had originally thought the owner should be the mailbox on
SourceForge, but then I thought maybe it ought to be the mailbox given
in the Author: field of the PEP.  Perhaps the Real Name is best after
all, if we can reclaim some horizontal space.

unsure-ly, y'rs,
-Barry


From thomas@xs4all.net  Mon Aug 21 20:02:58 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 21:02:58 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <14753.29959.135594.439438@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 21, 2000 at 02:29:27PM -0400
References: <20000821155234.M4797@xs4all.nl> <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com> <14753.29959.135594.439438@anthem.concentric.net>
Message-ID: <20000821210258.B4933@xs4all.nl>

On Mon, Aug 21, 2000 at 02:29:27PM -0400, Barry A. Warsaw wrote:

> I don't know why I haven't seen Thomas's reply yet, but in any event...

Strange, it went to python-dev not long after the checkin. Tim quoted about
the entire email, though, so you didn't miss much. The name-calling and
snide remarks weren't important anyway :)

> I'm not sure what it should mean either, except as a shorthand way to
> identify the owner of the PEP.  Most important is that each line fit
> in 80 columns!

> I had originally thought the owner should be the mailbox on
> SourceForge, but then I thought maybe it ought to be the mailbox given
> in the Author: field of the PEP.

Emails in the Author field are likely to be too long to fit in that list,
even if you remove the filename. I'd say go for the SF username, for three
reasons:

  1) it's a name developers know and love (or hate)
  2) more information on a user can be found through SourceForge
  3) that SF email address should work, too. It's where patch updates and
     stuff are sent, so most people are likely to have it forwarding to
     their PEP author address.

Also, it's the path of least resistance. All you need to change is
'tpeters' into 'tim_one' :-)
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Mon Aug 21 20:11:37 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 21:11:37 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Aug 21, 2000 at 01:21:32PM -0400
References: <20000821155234.M4797@xs4all.nl> <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <20000821211137.C4933@xs4all.nl>

On Mon, Aug 21, 2000 at 01:21:32PM -0400, Tim Peters wrote:

> BTW, my current employer assigned "tpeters@beopen.com" to me.  I was just
> "tim" for the first 15 years of my career, and then "tim_one" when you kids
> started using computers faster than me and took "tim" everywhere before I
> got to it.  Now even "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one"
> now.  I have given up all hope of retaining an online identity.

For the first few years online, I was known as 'zonny'. Chosen because my
first online experience, The Digital City of Amsterdam (a local freenet),
was a free service, and I'd forgotten the password of 'thomas', 'sonny',
'twouters' and 'thomasw'. And back then you couldn't get the password
changed :-) So 'zonny' it was, even when I started working there and
could've changed it. And I was happy with it, because I could use 'zonny'
everywhere; no one had apparently ever thought of that name (no surprise
there, eh? :)

And then two years after I started with that name, I ran into another
'zonny' in some American MUD or another. (I believe it was TinyTIM(*), for
those who know about such things.) And it was a girl, and she had been using
it for years as well! So to avoid confusion I started using 'thomas', and
have had the luck of not needing another name until Mailman moved to
SourceForge :-) But ever since then, I don't believe *any* name is not
already taken. You'll just have to live with the confusion.

*) This is really true. There was a MUD called TinyTIM (actually an offshoot
of TinyMUSH) and it had a shitload of bots, too. It was one of the most
amusing senseless MU*s out there, with a lot of Pythonic humour.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@beopen.com  Mon Aug 21 21:30:41 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 21 Aug 2000 15:30:41 -0500
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: Your message of "Mon, 21 Aug 2000 14:03:06 +0200."
 <20000821140306.L4797@xs4all.nl>
References: <200008202002.NAA13530@delerium.i.sourceforge.net> <20000820224918.D4797@xs4all.nl> <200008210437.XAA22075@cj20424-a.reston1.va.home.com>
 <20000821140306.L4797@xs4all.nl>
Message-ID: <200008212030.PAA26887@cj20424-a.reston1.va.home.com>

> > > > Summary: Allow all assignment expressions after 'import
> > > > something as'

[GvR]
> > -1.  Hypergeneralization.

[TW]
> I don't think it's hypergeneralization. In fact, people might expect it[*],
> if we claim that the 'import-as' syntax is a shortcut for the current
> practice of
> 
>    import somemod
>    sspam = somemod.somesubmod.spam
> 
> (or similar constructs.) However, I realize you're under a lot of pressure
> to Pronounce a number of things now that you're back, and we can always
> change this later (if you change your mind.) I dare to predict, though, that
> we'll see questions about why this isn't generalized, on c.l.py.

I kind of doubt it, because it doesn't look useful.

I do want "import foo.bar as spam" back, assigning foo.bar to spam.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From Moshe Zadka <moshez@math.huji.ac.il>  Mon Aug 21 20:42:50 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Mon, 21 Aug 2000 22:42:50 +0300 (IDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps
 pep-0000.txt,1.24,1.25
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008212241020.7563-100000@sundial>

On Mon, 21 Aug 2000, Tim Peters wrote:

> BTW, my current employer assigned "tpeters@beopen.com" to me.  I was just
> "tim" for the first 15 years of my career, and then "tim_one" when you kids
> started using computers faster than me and took "tim" everywhere before I
> got to it.  Now even "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one"
> now.  I have given up all hope of retaining an online identity.

Hah! That's all I have to say to you! 
Being the only moshez in the Free Software/Open Source community and 
the only zadka which cares about the internet has certainly made my life
easier (can you say zadka.com?) 

on-the-other-hand-people-use-moshe-as-a-generic-name-ly y'rs, Z.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From mal@lemburg.com  Mon Aug 21 21:22:10 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 22:22:10 +0200
Subject: [Python-Dev] Doc-strings for class attributes ?!
Message-ID: <39A18F72.A0EADEA7@lemburg.com>

Lately I was busy extracting documentation from a large
Python application. 

Everything worked just fine building on existing doc-strings and
the nice Python reflection features, but I came across one 
thing to which I didn't find a suitable Python-style solution:
inline documentation for class attributes.

We already have doc-strings for modules, classes, functions
and methods, but there is no support for documenting class
attributes in a way that:

1. is local to the attribute definition itself
2. doesn't affect the attribute object in any way (e.g. by
   adding wrappers of some sort)
3. behaves well under class inheritance
4. is available online

After some thinking and experimenting with different ways
of achieving the above I came up with the following solution
which looks very Pythonesque to me:

class C:
        " class C doc-string "

        a = 1
        " attribute C.a doc-string "

        b = 2
        " attribute C.b doc-string "

The compiler would handle these cases as follows:

" class C doc-string " -> C.__doc__
" attribute C.a doc-string " -> C.__doc__a__
" attribute C.b doc-string " -> C.__doc__b__

All of the above is perfectly valid Python syntax. Support
should be easy to add to the byte code compiler. The
name mangling assures that attribute doc-strings

a) participate in class inheritance and
b) are treated as special attributes (following the __xxx__
   convention)

Also, the look&feel of this convention is similar to that
of the other existing conventions: the doc string follows
the definition of the object.

What do you think about this idea ?
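As a sketch of how the convention could be consumed: no Python compiler
implements the proposed mangling, so the example below assigns the
__doc__<name>__ attribute by hand to stand in for the compiler's work;
the helper name attr_doc is purely illustrative.

```python
class C:
    "class C doc-string"
    a = 1
    # What the compiler would generate from the doc-string following
    # "a = 1" under the proposal (assigned manually here):
    __doc__a__ = "attribute C.a doc-string"

class D(C):
    "class D doc-string"

def attr_doc(klass, name):
    # Ordinary attribute lookup walks the base classes, so attribute
    # doc-strings would participate in inheritance for free.
    return getattr(klass, "__doc__%s__" % name, None)

print(attr_doc(C, "a"))   # -> attribute C.a doc-string
print(attr_doc(D, "a"))   # -> same string, inherited from C
```

Note that __doc__a__ escapes private-name mangling because it ends in
two underscores, which is exactly what point b) above relies on.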

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From fdrake@beopen.com  Mon Aug 21 21:32:22 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 16:32:22 -0400 (EDT)
Subject: [Python-Dev] regression test question
Message-ID: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>

  I'm working on bringing the parser module up to date and introducing
a regression test for it.  (And if the grammar stops changing, it may
actually get finished!)
  I'm having a bit of a problem, though:  the test passes when run as
a script, but not when run via the regression test framework.  The
problem is *not* with the output file.  I'm getting an exception from
the module which is not expected, and is only raised when it runs
using the regression framework.
  Has anyone else encountered a similar problem?  I've checked to make
sure the right versions of parsermodule.so and test_parser.py are being
picked up.
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From gvwilson@nevex.com  Mon Aug 21 21:48:11 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Mon, 21 Aug 2000 16:48:11 -0400 (EDT)
Subject: [Python-Dev] Doc-strings for class attributes ?!
In-Reply-To: <39A18F72.A0EADEA7@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008211644050.24709-100000@akbar.nevex.com>

> Marc-Andre Lemburg wrote:
> We already have doc-strings for modules, classes, functions and
> methods, but there is no support for documenting class attributes in a
> way that:
> 
> 1. is local to the attribute definition itself
> 2. doesn't affect the attribute object
> 3. behaves well under class inheritance
> 4. is available online
> 
> [proposal]
> class C:
>         " class C doc-string "
> 
>         a = 1
>         " attribute C.a doc-string "
> 
>         b = 2
>         " attribute C.b doc-string "
>
> What do you think about this idea ?

Greg Wilson:
I think it would be useful, but as we've discussed elsewhere, I think that
if the doc string mechanism is going to be extended, I would like it to
allow multiple chunks of information to be attached to objects (functions,
modules, class variables, etc.), so that different developers and tools
can annotate programs without colliding.

Thanks,
Greg



From tim_one@email.msn.com  Mon Aug 21 23:01:54 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 18:01:54 -0400
Subject: [Python-Dev] Looking for an "import" expert
Message-ID: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com>

Fred Drake opened an irritating <wink> bug report (#112436).

Cut to the chase:

regrtest.py imports test_support.
test_support.verbose is 1 after that.
regrtest's main() reaches into test_support and
    stomps on test_support.verbose, usually setting to 0.

Now in my build directory, if I run

   python ..\lib\test\regrtest.py test_getopt

the test passes.  However, it *shouldn't* (and the crux of Fred's bug report
is that it does fail when he runs regrtest in an old & deprecated way).

What happens is that test_getopt.py has this near the top:

    from test.test_support import verbose

and this is causing *another* copy of the test_support module to get loaded,
and *its* verbose attr is 1.

So when we run test_getopt "normally" via regrtest, it incorrectly believes
that verbose is 1, and the "expected result" file (which I generated via
regrtest -g) is in fact verbose-mode output.

If I change the import at the top of test_getopt.py to

    from test import test_support
    from test_support import verbose

then test_getopt.py sees the 0 it's supposed to see.

The story is exactly the same, btw, if I run regrtest while *in* the test
directory (so this has nothing to do with that I usually run regrtest from
my build directory).

Now what *Fred* does is equivalent to getting into a Python shell and typing

>>> from test import regrtest
>>> regrtest.main()

and in *that* case (the original) test_getopt sees the 0 it's supposed to
see.

I confess I lost track of how fancy Python imports work a long time ago.  Can
anyone make sense of these symptoms?  Why is a 2nd version of test_support
getting loaded, and why only sometimes?





From fdrake@beopen.com  Mon Aug 21 23:10:53 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 18:10:53 -0400 (EDT)
Subject: [Python-Dev] regression test question
In-Reply-To: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>
References: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>
Message-ID: <14753.43245.685276.857116@cj42289-a.reston1.va.home.com>

I wrote:
 >   I'm having a bit of a problem, though:  the test passes when run as
 > a script, but not when run via the regression test framework.  The
 > problem is *not* with the output file.  I'm getting an exception from
 > the module which is not expected, and is only raised when it runs
 > using the regression framework.

  Of course, I managed to track this down to a bug in my own code.  I
wasn't clearing an error that should have been cleared, and that was
affecting the result in strange ways.
  I'm not at all sure why it didn't affect the results in more cases,
but that may just mean I need more variation in my tests.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From thomas@xs4all.net  Mon Aug 21 23:13:32 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 22 Aug 2000 00:13:32 +0200
Subject: [Python-Dev] regression test question
In-Reply-To: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Mon, Aug 21, 2000 at 04:32:22PM -0400
References: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>
Message-ID: <20000822001331.D4933@xs4all.nl>

On Mon, Aug 21, 2000 at 04:32:22PM -0400, Fred L. Drake, Jr. wrote:

>   I'm working on bringing the parser module up to date and introducing
> a regression test for it.  (And if the grammar stops changing, it may
> actually get finished!)

Well, I have augmented assignment in the queue, but that's about it for
Grammar changing patches ;)

>   I'm having a bit of a problem, though:  the test passes when run as
> a script, but not when run via the regression test framework.  The
> problem is *not* with the output file.  I'm getting an exception from
> the module which is not expected, and is only raised when it runs
> using the regression framework.
>   Has anyone else encountered a similar problem?  I've checked to make
> sure the right version or parsermodule.so and test_parser.py are being
> picked up.

I've seen this kind of problem when writing the pty test suite: fork() can
do nasty things to the regression test suite. You have to make sure the
child process exits brutally, in all cases, and *does not output anything*,
etc. I'm not sure if that's your problem though. Another issue I've had to
deal with was with a signal/threads problem on BSDI: enabling threads
screwed up random tests *after* the signal or thread test (never could
figure out what triggered it ;)

(This kind of problem is generic: several test modules, like test_signal,
set 'global' attributes to test something, and don't always reset them. If
you type ^C at the right moment in the test process, test_signal doesn't
remove the SIGINT-handler, and subsequent ^C's don't do anything other than
saying 'HandlerBC called' and failing the test ;))

I'm guessing this is what your parser test is hitting: the regression tester
itself sets something differently from running it directly. Try importing
the test from a script rather than calling it directly ? Did you remember to
set PYTHONPATH and such, like 'make test' does ? Did you use '-tt' like
'make test' does ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Mon Aug 21 23:30:11 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 18:30:11 -0400
Subject: [Python-Dev] Looking for an "import" expert
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEDAHBAA.tim_one@email.msn.com>

> ...
> What happens is that test_getopt.py has this near the top:
>
>     from test.test_support import verbose
>
> and this is causing *another* copy of the test_support module to
> get loaded, and *its* verbose attr is 1.

Maybe adding these lines after that import will help clarify it for you
(note that you can't print to stdout without screwing up what regrtest
expects):

import sys
print >> sys.stderr, sys.modules["test_support"], \
                     sys.modules["test.test_support"]

(hot *damn* is that more pleasant than pasting stuff together by hand!).

When I run it, I get

<module 'test_support' from '..\lib\test\test_support.pyc'>
<module 'test.test_support' from
    'C:\CODE\PYTHON\DIST\SRC\lib\test\test_support.pyc'>

so they clearly are distinct modules.




From guido@beopen.com  Tue Aug 22 01:00:41 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 21 Aug 2000 19:00:41 -0500
Subject: [Python-Dev] Looking for an "import" expert
In-Reply-To: Your message of "Mon, 21 Aug 2000 18:01:54 -0400."
 <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com>
Message-ID: <200008220000.TAA27482@cj20424-a.reston1.va.home.com>

If the tests are run "the modern way" (python ../Lib/test/regrtest.py)
then the test module is the script directory and it is on the path, so
"import test_support" sees and loads a toplevel module test_support.
Then "import test.test_support" sees a package test with a
test_support submodule which is assumed to be a different one, so it
is loaded again.

But if the tests are run via "import test.autotest" (or "import
test.regrtest; test.regrtest.main()"), the "import test_support" knows
that the importing module is in the test package, so it first tries to
import the test_support submodule from that package, so
test.test_support and (plain) test_support are the same.

Conclusion: inside the test package, never refer explicitly to the
test package.  Always use "import test_support".  Never "import
test.test_support" or "from test.test_support import verbose" or "from
test import test_support".
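The dual-load trap is easy to reproduce outside the test suite.  The
sketch below (hypothetical names demopkg/support_mod, written against a
current interpreter) puts a package's own directory on sys.path, the
way Lib/test ends up there when regrtest.py is run as a script, and
imports the same file under two names:

```python
import os
import sys
import tempfile

# Build a throwaway package:  root/demopkg/support_mod.py
root = tempfile.mkdtemp()
pkgdir = os.path.join(root, "demopkg")
os.mkdir(pkgdir)
open(os.path.join(pkgdir, "__init__.py"), "w").close()
with open(os.path.join(pkgdir, "support_mod.py"), "w") as f:
    f.write("verbose = 1\n")

# The package directory *itself* goes on the path, alongside its parent.
sys.path[:0] = [root, pkgdir]

import support_mod              # loaded as a top-level module
import demopkg.support_mod      # the same file, loaded a second time

assert support_mod is not demopkg.support_mod
demopkg.support_mod.verbose = 0  # stomping on one copy...
assert support_mod.verbose == 1  # ...leaves the other untouched
```

This is exactly the symptom above: regrtest stomps on one copy's
verbose attribute while test_getopt reads the other copy's.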

This is one for the README!

I've fixed this by checking in a small patch to test_getopt.py and the
corresponding output file (because of the bug, the output file was
produced under verbose mode).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From MarkH@ActiveState.com  Tue Aug 22 02:58:15 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Tue, 22 Aug 2000 11:58:15 +1000
Subject: [Python-Dev] configure.in, C++ and Linux
In-Reply-To: <20000821110616.A547@kronos.cnri.reston.va.us>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGENODFAA.MarkH@ActiveState.com>

[Andrew]

> The Makefile.pre.in is probably from Misc/Makefile.pre.in, which has a
> reference to $(CCC); Modules/Makefile.pre.in is more up to date and
> uses $(CXX).  Modules/makesetup also refers to $(CCC), and probably
> needs to be changed to use $(CXX), matching Modules/Makefile.pre.in.

This is a bug in the install script then - it installs the CCC version into
/usr/local/lib/python2.0/config.

Also, the online extending-and-embedding instructions explicitly tell you
to use the Misc/ version
(http://www.python.org/doc/current/ext/building-on-unix.html)

> Given that we want to encourage the use of the Distutils,
> Misc/Makefile.pre.in should be deleted to avoid having people use it.

That may be a little drastic :-)

So, as far as I can tell, we have a problem: following the most widely
available instructions, building a new C++ extension module on
Linux will fail.  Can someone confirm it is indeed a bug that I should
file?  (Or maybe a patch I should submit?)

Mark.



From guido@beopen.com  Tue Aug 22 04:32:18 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 21 Aug 2000 22:32:18 -0500
Subject: [Python-Dev] iterators
In-Reply-To: Your message of "Mon, 21 Aug 2000 12:04:04 +0200."
 <39A0FE94.1AF5FABF@lemburg.com>
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com>
 <39A0FE94.1AF5FABF@lemburg.com>
Message-ID: <200008220332.WAA02661@cj20424-a.reston1.va.home.com>

[BDFL]
> > The statement
> > 
> >   for <variable> in <object>: <block>
> > 
> > should translate into this kind of pseudo-code:
> > 
> >   # variant 1
> >   __temp = <object>.newiterator()
> >   while 1:
> >       try: <variable> = __temp.next()
> >       except ExhaustedIterator: break
> >       <block>
> > 
> > or perhaps (to avoid the relatively expensive exception handling):
> > 
> >   # variant 2
> >   __temp = <object>.newiterator()
> >   while 1:
> >       __flag, <variable> = __temp.next()
> >       if not __flag: break
> >       <block>

[MAL]
> How about a third variant:
> 
> #3:
> __iter = <object>.iterator()
> while __iter:
>    <variable> = __iter.next()
>    <block>
> 
> This adds a slot call, but removes the malloc overhead introduced
> by returning a tuple for every iteration (which is likely to be
> a performance problem).

Are you sure the slot call doesn't cause some malloc overhead as well?

Anyway, the problem with this one is that it requires a dynamic
iterator (one that generates values on the fly, e.g. something reading
lines from a pipe) to hold on to the next value between "while __iter"
and "__iter.next()".

> Another possibility would be using an iterator attribute
> to get at the variable:
> 
> #4:
> __iter = <object>.iterator()
> while 1:
>    if not __iter.next():
>         break
>    <variable> = __iter.value
>    <block>

Uglier than any of the others.

> You might want to check out the counterobject.c approach I used
> to speed up the current for-loop in Python 1.5's ceval.c:
> it's basically a mutable C integer which is used instead of
> the current Python integer index.

> The details can be found in my old patch:
> 
>   http://starship.python.net/crew/lemburg/mxPython-1.5.patch.gz

Ah, yes, that's what I was thinking of.

> """ Generic object iterators.
[...]

Thanks -- yes, that's what I was thinking of.  Did you just whip this
up?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Tue Aug 22 08:58:12 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 09:58:12 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com>
 <39A0FE94.1AF5FABF@lemburg.com> <200008220332.WAA02661@cj20424-a.reston1.va.home.com>
Message-ID: <39A23294.B2DB3C77@lemburg.com>

Guido van Rossum wrote:
> 
> [BDFL]
> > > The statement
> > >
> > >   for <variable> in <object>: <block>
> > >
> > > should translate into this kind of pseudo-code:
> > >
> > >   # variant 1
> > >   __temp = <object>.newiterator()
> > >   while 1:
> > >       try: <variable> = __temp.next()
> > >       except ExhaustedIterator: break
> > >       <block>
> > >
> > > or perhaps (to avoid the relatively expensive exception handling):
> > >
> > >   # variant 2
> > >   __temp = <object>.newiterator()
> > >   while 1:
> > >       __flag, <variable> = __temp.next()
> > >       if not __flag: break
> > >       <block>
> 
> [MAL]
> > How about a third variant:
> >
> > #3:
> > __iter = <object>.iterator()
> > while __iter:
> >    <variable> = __iter.next()
> >    <block>
> >
> > This adds a slot call, but removes the malloc overhead introduced
> > by returning a tuple for every iteration (which is likely to be
> > a performance problem).
> 
> Are you sure the slot call doesn't cause some malloc overhead as well?

Quite likely not, since the slot in question doesn't generate
Python objects (nb_nonzero).
 
> Anyway, the problem with this one is that it requires a dynamic
> iterator (one that generates values on the fly, e.g. something reading
> lines from a pipe) to hold on to the next value between "while __iter"
> and "__iter.next()".

Hmm, that depends on how you look at it: I was thinking in terms
of reading from a file -- feof() is true as soon as the end of
file is reached. The same could be done for iterators.

We might also consider a mixed approach:

#5:
__iter = <object>.iterator()
while __iter:
   try:
       <variable> = __iter.next()
   except ExhaustedIterator:
       break
   <block>

Some iterators may want to signal the end of iteration using
an exception, others via the truth test prior to calling .next(),
e.g. a list iterator can easily implement the truth test
variant, while an iterator with complex .next() processing
might want to use the exception variant.
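For concreteness, here is what a sequence iterator supporting variant #5
might look like.  The class and exception names are illustrative only;
this predates the __iter__/__next__ protocol Python eventually grew, so
the sketch spells out a truth test plus an explicit next() method:

```python
class ExhaustedIterator(Exception):
    pass

class ListIterator:
    def __init__(self, seq):
        self.seq = seq
        self.pos = 0

    def __nonzero__(self):
        # Truth-test variant: cheap for sequences, where the number of
        # remaining items is known in advance.
        return self.pos < len(self.seq)

    __bool__ = __nonzero__  # same slot under a modern interpreter

    def next(self):
        # Exception variant, for iterators that only discover the end
        # when they try to produce the next value.
        if self.pos >= len(self.seq):
            raise ExhaustedIterator
        value = self.seq[self.pos]
        self.pos += 1
        return value

# The expansion of "for x in obj" under variant #5:
result = []
it = ListIterator([1, 2, 3])
while it:
    try:
        x = it.next()
    except ExhaustedIterator:
        break
    result.append(x)

assert result == [1, 2, 3]
```

A file-reading iterator would simply make __nonzero__ report feof()-style
state and lean on the exception path when it can't know in advance.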

Another possibility would be using exception class objects
as singleton indicators of "end of iteration":

#6:
__iter = <object>.iterator()
while 1:
   try:
       rc = __iter.next()
   except ExhaustedIterator:
       break
   else:
       if rc is ExhaustedIterator:
           break
   <variable> = rc
   <block>

> > Another possibility would be using an iterator attribute
> > to get at the variable:
> >
> > #4:
> > __iter = <object>.iterator()
> > while 1:
> >    if not __iter.next():
> >         break
> >    <variable> = __iter.value
> >    <block>
> 
> Uglier than any of the others.
> 
> > You might want to check out the counterobject.c approach I used
> > to speed up the current for-loop in Python 1.5's ceval.c:
> > it's basically a mutable C integer which is used instead of
> > the current Python integer index.
> 
> > The details can be found in my old patch:
> >
> >   http://starship.python.net/crew/lemburg/mxPython-1.5.patch.gz
> 
> Ah, yes, that's what I was thinking of.
> 
> > """ Generic object iterators.
> [...]
> 
> Thanks -- yes, that's what I was thinking of.  Did you just whip this
> up?

The file says: Feb 2000... I don't remember what I wrote it for;
it's in my lib/ dir, meaning it qualified as a general-purpose
utility :-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Tue Aug 22 09:01:40 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 10:01:40 +0200
Subject: [Python-Dev] Looking for an "import" expert
References: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com> <200008220000.TAA27482@cj20424-a.reston1.va.home.com>
Message-ID: <39A23364.356E9EE4@lemburg.com>

Guido van Rossum wrote:
> 
> If the tests are run "the modern way" (python ../Lib/test/regrtest.py)
> then the test module is the script directory and it is on the path, so
> "import test_support" sees and loads a toplevel module test_support.
> Then "import test.test_support" sees a package test with a
> test_support submodule which is assumed to be a different one, so it
> is loaded again.
> 
> But if the tests are run via "import test.autotest" (or "import
> test.regrtest; test.regrtest.main()" the "import test_support" knows
> that the importing module is in the test package, so it first tries to
> import the test_support submodule from that package, so
> test.test_support and (plain) test_support are the same.
> 
> Conclusion: inside the test package, never refer explicitly to the
> test package.  Always use "import test_support".  Never "import
> test.test_support" or "from test.test_support import verbose" or "from
> test import test_support".

I'd rather suggest a different convention: *always* import
using the full path, i.e. "from test import test_support".

This scales much better and also avoids a nasty problem with
Python pickles related to much the same problem Tim found here:
dual import of subpackage modules (note that pickle will always
do the full path import).
 
> This is one for the README!
> 
> I've fixed this by checking in a small patch to test_getopt.py and the
> corresponding output file (because of the bug, the output file was
> produced under verbose mode).
> 
> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
> 

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jack@oratrix.nl  Tue Aug 22 10:34:20 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Tue, 22 Aug 2000 11:34:20 +0200
Subject: [Python-Dev] New anal crusade
In-Reply-To: Message by "Tim Peters" <tim_one@email.msn.com> ,
 Sat, 19 Aug 2000 13:34:28 -0400 , <LNBBLJKPBEHFEDALKOLCGEMCHAAA.tim_one@email.msn.com>
Message-ID: <20000822093420.AE00B303181@snelboot.oratrix.nl>

> Has anyone tried compiling Python under gcc with
> 
>     -Wmissing-prototypes -Wstrict-prototypes

I have a similar set of options (actually it's difficult to turn them off if 
you're checking for prototypes:-) which make the CodeWarrior compiler for 
the Mac strict about prototypes: it complains about external functions 
being declared without a prototype in scope. I was initially baffled by this: 
why would it want a prototype if the function definition is ANSI-style 
anyway? But it turns out it's a really neat warning: usually, if you define an 
external function without a prototype in scope, it means it isn't declared in a .h 
file, which means that either (a) it shouldn't have been extern in the 
first place, or (b) you've duplicated the prototype in an external declaration 
somewhere else, which means the prototypes aren't necessarily identical.

For Python this option produces warnings for all the init routines, which is 
to be expected, but also for various other things (I seem to remember there's 
a couple in the GC code). If anyone is interested I can print them out and 
send them here.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From jack@oratrix.nl  Tue Aug 22 10:45:41 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Tue, 22 Aug 2000 11:45:41 +0200
Subject: [Python-Dev] iterators
In-Reply-To: Message by "Guido van Rossum" <guido@python.org> ,
 Fri, 18 Aug 2000 16:57:15 -0400 , <011601c00a1f$9923d460$7aa41718@beopen.com>
Message-ID: <20000822094542.71467303181@snelboot.oratrix.nl>

>   it = something.newiterator()
>   for x over it:
>       if time_to_start_second_loop(): break
>       do_something()
>   for x over it:
>       do_something_else()

Another, similar, paradigm I find myself often using is something like
    tmplist = []
    for x in origlist:
        if x.has_some_property():
           tmplist.append(x)
        else:
           do_something()
    for x in tmplist:
        do_something_else()

I think I'd like it if iterators could do something like
    it = origlist.iterator()
    for x in it:
        if x.has_some_property():
           it.pushback()
        else:
           do_something()
    for x in it:
        do_something_else()
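A runnable sketch of the pushback idea, using the iterator protocol Python later standardized. The class and method names are hypothetical; pushed-back items reappear on the *next* pass:

```python
# Sketch of a pushback iterator: items pushed back during one pass
# are yielded again on the following pass.
class PushbackIterator:
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._pending = []      # items saved for a later pass
        self._last = None       # most recently yielded item

    def __iter__(self):
        return self

    def __next__(self):
        try:
            self._last = next(self._it)
        except StopIteration:
            # End of this pass: pushed-back items form the next pass.
            if self._pending:
                self._it = iter(self._pending)
                self._pending = []
            raise
        return self._last

    def pushback(self):
        self._pending.append(self._last)

it = PushbackIterator([1, 2, 3, 4])
first, second = [], []
for x in it:
    if x % 2:               # "has_some_property"
        it.pushback()
    else:
        first.append(x)     # "do_something"
for x in it:
    second.append(x)        # "do_something_else"
assert first == [2, 4] and second == [1, 3]
```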

--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From guido@beopen.com  Tue Aug 22 14:03:28 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 22 Aug 2000 08:03:28 -0500
Subject: [Python-Dev] iterators
In-Reply-To: Your message of "Tue, 22 Aug 2000 09:58:12 +0200."
 <39A23294.B2DB3C77@lemburg.com>
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <200008220332.WAA02661@cj20424-a.reston1.va.home.com>
 <39A23294.B2DB3C77@lemburg.com>
Message-ID: <200008221303.IAA09992@cj20424-a.reston1.va.home.com>

> > [MAL]
> > > How about a third variant:
> > >
> > > #3:
> > > __iter = <object>.iterator()
> > > while __iter:
> > >    <variable> = __iter.next()
> > >    <block>
> > >
> > > This adds a slot call, but removes the malloc overhead introduced
> > > by returning a tuple for every iteration (which is likely to be
> > > a performance problem).
> > 
> > Are you sure the slot call doesn't cause some malloc overhead as well?
> 
> Quite likely not, since the slot in question doesn't generate
> Python objects (nb_nonzero).

Agreed only for built-in objects like lists.  For class instances this
would be way more expensive, because of the two calls vs. one!

> > Ayway, the problem with this one is that it requires a dynamic
> > iterator (one that generates values on the fly, e.g. something reading
> > lines from a pipe) to hold on to the next value between "while __iter"
> > and "__iter.next()".
> 
> Hmm, that depends on how you look at it: I was thinking in terms
> of reading from a file -- feof() is true as soon as the end of
> file is reached. The same could be done for iterators.

But feof() needs to read an extra character into the buffer if the
buffer is empty -- so it needs buffering!  That's what I'm trying to
avoid.

> We might also consider a mixed approach:
> 
> #5:
> __iter = <object>.iterator()
> while __iter:
>    try:
>        <variable> = __iter.next()
>    except ExhaustedIterator:
>        break
>    <block>
> 
> Some iterators may want to signal the end of iteration using
> an exception, others via the truth test prior to calling .next(),
> e.g. a list iterator can easily implement the truth test
> variant, while an iterator with complex .next() processing
> might want to use the exception variant.

Belt and suspenders.  What does this buy you over "while 1"?

> Another possibility would be using exception class objects
> as singleton indicators of "end of iteration":
> 
> #6:
> __iter = <object>.iterator()
> while 1:
>    try:
>        rc = __iter.next()
>    except ExhaustedIterator:
>        break
>    else:
>        if rc is ExhaustedIterator:
>            break
>    <variable> = rc
>    <block>

Then I'd prefer to use a single protocol:

    #7:
    __iter = <object>.iterator()
    while 1:
       rc = __iter.next()
       if rc is ExhaustedIterator:
	   break
       <variable> = rc
       <block>

This means there's a special value that you can't store in lists
though, and that would bite some introspection code (e.g. code listing
all internal objects).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From fredrik@pythonware.com  Tue Aug 22 13:27:11 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Tue, 22 Aug 2000 14:27:11 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is...
Message-ID: <002201c00c34$4cbb9b50$0900a8c0@SPIFF>

well, see for yourself:
http://www.pythonlabs.com/logos.html





From thomas.heller@ion-tof.com  Tue Aug 22 14:13:20 2000
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Tue, 22 Aug 2000 15:13:20 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
References: <200008221210.FAA25857@slayer.i.sourceforge.net>
Message-ID: <001501c00c3a$becdaac0$4500a8c0@thomasnb>

> Update of /cvsroot/python/python/dist/src/PCbuild
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv25776
> 
> Modified Files:
> python20.wse 
> Log Message:
> Thomas Heller noticed that the wrong registry entry was written for
> the DLL.  Replace
>  %_SYSDEST_%\Python20.dll
> with
>  %_DLLDEST_%\Python20.dll.
> 
Unfortunately, there was a bug in my bug-report.

%DLLDEST% would have been correct.
Sorry: Currently I don't have time to test the fix.

Thomas Heller



From MarkH@ActiveState.com  Tue Aug 22 14:32:25 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Tue, 22 Aug 2000 23:32:25 +1000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
In-Reply-To: <001501c00c3a$becdaac0$4500a8c0@thomasnb>
Message-ID: <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com>

> > Modified Files:
> > python20.wse
> > Log Message:
> > Thomas Heller noticed that the wrong registry entry was written for
> > the DLL.  Replace
> >  %_SYSDEST_%\Python20.dll
> > with
> >  %_DLLDEST_%\Python20.dll.
> >
> Unfortunately, there was a bug in my bug-report.

Actually, there is no need to write that entry at all!  It should be
removed.  I thought it was, ages ago.

Mark.



From thomas@xs4all.net  Tue Aug 22 14:35:13 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 22 Aug 2000 15:35:13 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is...
In-Reply-To: <002201c00c34$4cbb9b50$0900a8c0@SPIFF>; from fredrik@pythonware.com on Tue, Aug 22, 2000 at 02:27:11PM +0200
References: <002201c00c34$4cbb9b50$0900a8c0@SPIFF>
Message-ID: <20000822153512.H4933@xs4all.nl>

On Tue, Aug 22, 2000 at 02:27:11PM +0200, Fredrik Lundh wrote:

> well, see for yourself:
> http://www.pythonlabs.com/logos.html

Oh, that reminds me, the FAQ needs adjusting ;) It still says:
"""
1.2. Why is it called Python?

Apart from being a computer scientist, I'm also a fan of "Monty Python's
Flying Circus" (a BBC comedy series from the seventies, in the -- unlikely
-- case you didn't know). It occurred to me one day that I needed a name
that was short, unique, and slightly mysterious. And I happened to be
reading some scripts from the series at the time... So then I decided to
call my language Python. But Python is not a joke. And don't you associate
it with dangerous reptiles either! (If you need an icon, use an image of the
16-ton weight from the TV series or of a can of SPAM :-)
"""
 
And while I'm at it, I hope I can say without offending anyone that I hope
the logo is open for criticism. Few (if any?) python species look that
green, making the logo look more like an adder. And I think the more
majestic python species are cooler on a logo, anyway ;) If whoever makes the
logos wants, I can go visit the small reptile zoo around the corner from
where I live and snap some pictures of the pythons they have there (they
have great tiger pythons, including an albino one!)

I-still-like-the-shirt-though!-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas.heller@ion-tof.com  Tue Aug 22 14:52:16 2000
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Tue, 22 Aug 2000 15:52:16 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
References: <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com>
Message-ID: <002b01c00c40$2e7b32c0$4500a8c0@thomasnb>

> > > Modified Files:
> > > python20.wse
> > > Log Message:
> > > Thomas Heller noticed that the wrong registry entry was written for
> > > the DLL.  Replace
> > >  %_SYSDEST_%\Python20.dll
> > > with
> > >  %_DLLDEST_%\Python20.dll.
> > >
> > Unfortunately, there was a bug in my bug-report.
> 
> Actually, there is no need to write that entry at all!  It should be
> removed.  I thought it was, ages ago.

I would like to use this entry to find the Python interpreter
belonging to a certain registry entry.

How would you do it if this entry is missing?
Guess the name python<major-version/minor-version>.dll???

Thomas Heller



From guido@beopen.com  Tue Aug 22 16:04:33 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 22 Aug 2000 10:04:33 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
In-Reply-To: Your message of "Tue, 22 Aug 2000 23:32:25 +1000."
 <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com>
Message-ID: <200008221504.KAA01151@cj20424-a.reston1.va.home.com>

> From: "Mark Hammond" <MarkH@ActiveState.com>
> To: "Thomas Heller" <thomas.heller@ion-tof.com>, <python-dev@python.org>
> Date: Tue, 22 Aug 2000 23:32:25 +1000
> 
> > > Modified Files:
> > > python20.wse
> > > Log Message:
> > > Thomas Heller noticed that the wrong registry entry was written for
> > > the DLL.  Replace
> > >  %_SYSDEST_%\Python20.dll
> > > with
> > >  %_DLLDEST_%\Python20.dll.
> > >
> > Unfortunately, there was a bug in my bug-report.

Was that last line Thomas Heller speaking?  I didn't see that mail!
(Maybe because my machine crashed due to an unexpected power outage a
few minutes ago.)

> Actually, there is no need to write that entry at all!  It should be
> removed.  I thought it was, ages ago.

OK, will do, but for 2.0 only.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From MarkH@ActiveState.com  Tue Aug 22 15:04:08 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Wed, 23 Aug 2000 00:04:08 +1000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
In-Reply-To: <002b01c00c40$2e7b32c0$4500a8c0@thomasnb>
Message-ID: <ECEPKNMJLHAPFFJHDOJBOEOPDFAA.MarkH@ActiveState.com>

[Me, about removing the .DLL entry from the registry]

> > Actually, there is no need to write that entry at all!  It should be
> > removed.  I thought it was, ages ago.

[Thomas]
> I would like to use this entry to find the python-interpreter
> belonging to a certain registry entry.
>
> How would you do it if this entry is missing?
> Guess the name python<major-version/minor-version>.dll???

I think I am responsible for this registry entry in the first place.
Pythonwin/COM etc. went down the path of locating and loading the Python
DLL from the registry, but it has since all been long removed.

The basic problem is that there is only _one_ acceptable Python DLL for a
given version, regardless of what a particular registry entry says!  If the
registry points to the "wrong" DLL, things start to go wrong pretty quickly,
and in not-so-obvious ways!

I think it is better to LoadLibrary("Python%d.dll") (or GetModuleHandle()
if you know Python is initialized) - this is what the system itself will
soon be doing to start Python up anyway!

Mark.



From skip@mojam.com (Skip Montanaro)  Tue Aug 22 15:45:27 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Tue, 22 Aug 2000 09:45:27 -0500 (CDT)
Subject: [Python-Dev] commonprefix - the beast just won't die...
Message-ID: <14754.37383.913840.582313@beluga.mojam.com>

I reverted the changes to {posix,nt,dos}path.commonprefix this morning,
updated the tests (still to be checked in) and was starting to work on
documentation changes, when I realized that something Guido said about using
dirname to trim to the common directory prefix is probably not correct.
Here's an example.  The common prefix of ["/usr/local", "/usr/local/bin"] is
"/usr/local".  If you blindly apply dirname to that (which is what I think
Guido suggested as the way to make commonprefix do what I wanted), you wind
up with "/usr", which isn't going to be correct on most Unix flavors.
Instead, you need to check that the prefix doesn't exist or isn't a
directory before applying dirname.  (And of course, that only works on the
machine containing the paths in question.  You should be able to import
posixpath on a Mac and feed it Unix-style paths, which you won't be able to
check for existence.)

Based on this problem, I'm against documenting using dirname to trim the
commonprefix output to a directory prefix.  I'm going to submit a patch with
the test case and minimal documentation changes and leave it at that for
now.
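The string-level behaviour Skip describes is easy to check (posixpath is used here so the example is platform-independent):

```python
import posixpath

# commonprefix is a pure string operation:
paths = ["/usr/local", "/usr/local/bin"]
prefix = posixpath.commonprefix(paths)
assert prefix == "/usr/local"

# Blindly applying dirname discards a real directory component:
assert posixpath.dirname(prefix) == "/usr"
```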

Skip


From mal@lemburg.com  Tue Aug 22 15:43:50 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 16:43:50 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <200008220332.WAA02661@cj20424-a.reston1.va.home.com>
 <39A23294.B2DB3C77@lemburg.com> <200008221303.IAA09992@cj20424-a.reston1.va.home.com>
Message-ID: <39A291A6.3DC7A4E3@lemburg.com>

Guido van Rossum wrote:
> 
> > > [MAL]
> > > > How about a third variant:
> > > >
> > > > #3:
> > > > __iter = <object>.iterator()
> > > > while __iter:
> > > >    <variable> = __iter.next()
> > > >    <block>
> > > >
> > > > This adds a slot call, but removes the malloc overhead introduced
> > > > by returning a tuple for every iteration (which is likely to be
> > > > a performance problem).
> > >
> > > Are you sure the slot call doesn't cause some malloc overhead as well?
> >
> > Quite likely not, since the slot in question doesn't generate
> > Python objects (nb_nonzero).
> 
> Agreed only for built-in objects like lists.  For class instances this
> would be way more expensive, because of the two calls vs. one!

True.
 
> > > Ayway, the problem with this one is that it requires a dynamic
> > > iterator (one that generates values on the fly, e.g. something reading
> > > lines from a pipe) to hold on to the next value between "while __iter"
> > > and "__iter.next()".
> >
> > Hmm, that depends on how you look at it: I was thinking in terms
> > of reading from a file -- feof() is true as soon as the end of
> > file is reached. The same could be done for iterators.
> 
> But feof() needs to read an extra character into the buffer if the
> buffer is empty -- so it needs buffering!  That's what I'm trying to
> avoid.

Ok.
 
> > We might also consider a mixed approach:
> >
> > #5:
> > __iter = <object>.iterator()
> > while __iter:
> >    try:
> >        <variable> = __iter.next()
> >    except ExhaustedIterator:
> >        break
> >    <block>
> >
> > Some iterators may want to signal the end of iteration using
> > an exception, others via the truth test prior to calling .next(),
> > e.g. a list iterator can easily implement the truth test
> > variant, while an iterator with complex .next() processing
> > might want to use the exception variant.
> 
> Belt and suspenders.  What does this buy you over "while 1"?

It gives you two possible ways to signal "end of iteration".
But your argument about Python iterators (as opposed to
builtin ones) applies here as well, so I withdraw this one :-)
 
> > Another possibility would be using exception class objects
> > as singleton indicators of "end of iteration":
> >
> > #6:
> > __iter = <object>.iterator()
> > while 1:
> >    try:
> >        rc = __iter.next()
> >    except ExhaustedIterator:
> >        break
> >    else:
> >        if rc is ExhaustedIterator:
> >            break
> >    <variable> = rc
> >    <block>
> 
> Then I'd prefer to use a single protocol:
> 
>     #7:
>     __iter = <object>.iterator()
>     while 1:
>        rc = __iter.next()
>        if rc is ExhaustedIterator:
>            break
>        <variable> = rc
>        <block>
> 
> This means there's a special value that you can't store in lists
> though, and that would bite some introspection code (e.g. code listing
> all internal objects).

Which brings us back to the good old "end of iteration" == raise
an exception logic :-)

Would this really hurt all that much in terms of performance ?
I mean, todays for-loop code uses IndexError for much the same
thing...
 
    #8:
    __iter = <object>.iterator()
    while 1:
       try:
           <variable> = __iter.next()
       except ExhaustedIterator:
           break
       <block>

Since this will be written in C, we don't even have the costs
of setting up an exception block.

I would still suggest that the iterator provides the current
position and iteration value as attributes. This avoids some
caching of those values and also helps when debugging code
using introspection tools.

The positional attribute will probably have to be optional
since not all iterators can supply this information, but
the .value attribute is certainly within range (it would
cache the value returned by the last .next() or .prev()
call).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Tue Aug 22 16:16:38 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 22 Aug 2000 17:16:38 +0200
Subject: [Python-Dev] commonprefix - the beast just won't die...
In-Reply-To: <14754.37383.913840.582313@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 22, 2000 at 09:45:27AM -0500
References: <14754.37383.913840.582313@beluga.mojam.com>
Message-ID: <20000822171638.I4933@xs4all.nl>

On Tue, Aug 22, 2000 at 09:45:27AM -0500, Skip Montanaro wrote:

> I reverted the changes to {posix,nt,dos}path.commonprefix this morning,
> updated the tests (still to be checked in) and was starting to work on
> documentation changes, when I realized that something Guido said about using
> dirname to trim to the common directory prefix is probably not correct.
> Here's an example.  The common prefix of ["/usr/local", "/usr/local/bin"] is
> "/usr/local".  If you blindly apply dirname to that (which is what I think
> Guido suggested as the way to make commonprefix do what I wanted, you wind
> up with "/usr", which isn't going to be correct on most Unix flavors.
> Instead, you need to check that the prefix doesn't exist or isn't a
> directory before applying dirname.

And even that won't work, in a case like this:

/home/swenson/
/home/swen/

(common prefix would be /home/swen, which is a directory) or cases like
this:

/home/swenson/
/home/swenniker/

where another directory called
/home/swen

exists.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal@lemburg.com  Tue Aug 22 19:14:56 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 20:14:56 +0200
Subject: [Python-Dev] Adding doc-strings to attributes [with patch]
Message-ID: <39A2C320.ADF5E80F@lemburg.com>

This is a multi-part message in MIME format.
--------------04B8F46BE4C7B93B2B5D8B87
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Here's a patch which roughly implements the proposed attribute
doc-string syntax and semantics:

class C:
        " class C doc-string "

        a = 1
        " attribute C.a doc-string "

        b = 2
        " attribute C.b doc-string "

The compiler would handle these cases as follows:

" class C doc-string " -> C.__doc__
" attribute C.a doc-string " -> C.__doc__a__
" attribute C.b doc-string " -> C.__doc__b__

The name mangling assures that attribute doc-strings

a) participate in class inheritance and
b) are treated as special attributes (following the __xxx__
   convention)

Also, the look&feel of this convention is similar to that
of the other existing conventions: the doc string follows
the definition of the object.
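The intended effect can be simulated by writing the mangled assignments by hand (the patch would generate the __doc__<name>__ bindings automatically at compile time):

```python
# Hand-written simulation of the assignments the proposed patch would emit.
class C:
    "class C doc-string"

    a = 1
    __doc__a__ = "attribute C.a doc-string"

    b = 2
    __doc__b__ = "attribute C.b doc-string"

class D(C):
    pass

# (a) the doc-strings participate in class inheritance:
assert D.__doc__a__ == "attribute C.a doc-string"
# (b) they follow the __xxx__ special-attribute convention:
assert C.__doc__ == "class C doc-string"
```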

The patch is a little rough in the sense that binding the
doc-string to the attribute name is done using a helper
variable that is not reset by non-expressions during the
compile. Shouldn't be too hard to fix though... at least
not for one of you compiler wizards ;-)

What do you think ?

[I will probably have to write a PEP for this...]

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/
--------------04B8F46BE4C7B93B2B5D8B87
Content-Type: text/plain; charset=us-ascii;
 name="attrdocstrings.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline;
 filename="attrdocstrings.patch"

--- CVS-Python/Python/compile.c	Tue Aug 22 10:31:06 2000
+++ Python+Unicode/Python/compile.c	Tue Aug 22 19:59:30 2000
@@ -293,10 +293,11 @@ struct compiling {
 	int c_last_addr, c_last_line, c_lnotab_next;
 #ifdef PRIVATE_NAME_MANGLING
 	char *c_private;	/* for private name mangling */
 #endif
 	int c_tmpname;		/* temporary local name counter */
+        char *c_last_name;       /* last assigned name */
 };
 
 
 /* Error message including line number */
 
@@ -435,12 +436,13 @@ com_init(struct compiling *c, char *file
 	c->c_stacklevel = 0;
 	c->c_maxstacklevel = 0;
 	c->c_firstlineno = 0;
 	c->c_last_addr = 0;
 	c->c_last_line = 0;
-	c-> c_lnotab_next = 0;
+	c->c_lnotab_next = 0;
 	c->c_tmpname = 0;
+	c->c_last_name = 0;
 	return 1;
 	
   fail:
 	com_free(c);
  	return 0;
@@ -1866,10 +1868,11 @@ com_assign_name(struct compiling *c, nod
 {
 	REQ(n, NAME);
 	com_addopname(c, assigning ? STORE_NAME : DELETE_NAME, n);
 	if (assigning)
 		com_pop(c, 1);
+	c->c_last_name = STR(n);
 }
 
 static void
 com_assign(struct compiling *c, node *n, int assigning)
 {
@@ -1974,18 +1977,40 @@ com_assign(struct compiling *c, node *n,
 		}
 	}
 }
 
 /* Forward */ static node *get_rawdocstring(node *);
+/* Forward */ static PyObject *get_docstring(node *);
 
 static void
 com_expr_stmt(struct compiling *c, node *n)
 {
 	REQ(n, expr_stmt); /* testlist ('=' testlist)* */
-	/* Forget it if we have just a doc string here */
-	if (!c->c_interactive && NCH(n) == 1 && get_rawdocstring(n) != NULL)
+	/* Handle attribute doc-strings here */
+	if (!c->c_interactive && NCH(n) == 1) {
+	    node *docnode = get_rawdocstring(n);
+	    if (docnode != NULL) {
+		if (c->c_last_name) {
+		    PyObject *doc = get_docstring(docnode);
+		    int i = com_addconst(c, doc);
+		    char docname[420];
+#if 0
+		    printf("found doc-string '%s' for '%s'\n", 
+			   PyString_AsString(doc),
+			   c->c_last_name);
+#endif
+		    sprintf(docname, "__doc__%.400s__", c->c_last_name);
+		    com_addoparg(c, LOAD_CONST, i);
+		    com_push(c, 1);
+		    com_addopnamestr(c, STORE_NAME, docname);
+		    com_pop(c, 1);
+		    c->c_last_name = NULL;
+		    Py_DECREF(doc);
+		}
 		return;
+	    }
+	}
 	com_node(c, CHILD(n, NCH(n)-1));
 	if (NCH(n) == 1) {
 		if (c->c_interactive)
 			com_addbyte(c, PRINT_EXPR);
 		else

--------------04B8F46BE4C7B93B2B5D8B87--



From Donald Beaudry <donb@init.com>  Tue Aug 22 20:16:39 2000
From: Donald Beaudry <donb@init.com> (Donald Beaudry)
Date: Tue, 22 Aug 2000 15:16:39 -0400
Subject: [Python-Dev] Adding insint() function
References: <20000818110037.C27419@kronos.cnri.reston.va.us>
Message-ID: <200008221916.PAA17130@zippy.init.com>

Andrew Kuchling <akuchlin@mems-exchange.org> wrote,
> This duplication bugs me.  Shall I submit a patch to add an API
> convenience function to do this, and change the modules to use it?
> Suggested prototype and name: PyDict_InsertInteger( dict *, string,
> long)

+0 but...

...why not:

   PyDict_SetValueString(PyObject* dict, char* key, char* fmt, ...)

and

   PyDict_SetValue(PyObject* dict, PyObject* key, char* fmt, ...)

where the fmt is Py_BuildValue() format string and the ... is, of
course, the argument list.

--
Donald Beaudry                                     Ab Initio Software Corp.
                                                   201 Spring Street
donb@init.com                                      Lexington, MA 02421
                      ...Will hack for sushi...


From ping@lfw.org  Tue Aug 22 21:02:31 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Tue, 22 Aug 2000 13:02:31 -0700 (PDT)
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import something as'
In-Reply-To: <200008212030.PAA26887@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008212144360.416-100000@skuld.lfw.org>

On Mon, 21 Aug 2000, Guido van Rossum wrote:
> > > > > Summary: Allow all assignment expressions after 'import
> > > > > something as'
[...]
> I kind of doubt it, because it doesn't look useful.

Looks potentially useful to me.  If nothing else, it's certainly
easier to explain than any other behaviour i could think of, since
assignment is already well-understood.

> I do want "import foo.bar as spam" back, assigning foo.bar to spam.

No no no no.  Or at least let's step back and look at the whole
situation first.

"import foo.bar as spam" makes me uncomfortable because:

    (a) It's not clear whether spam should get foo or foo.bar, as
        evidenced by the discussion between Gordon and Thomas.

    (b) There's a straightforward and unambiguous way to express
        this already: "from foo import bar as spam".

    (c) It's not clear whether this should work only for modules
        named bar, or any symbol named bar.


Before packages, the only two forms of the import statement were:

    import <module>
    from <module> import <symbol>

After packages, the permitted forms are now:

    import <module>
    import <package>
    import <pkgpath>.<module>
    import <pkgpath>.<package>
    from <module> import <symbol>
    from <package> import <module>
    from <pkgpath>.<module> import <symbol>
    from <pkgpath>.<package> import <module>

where a <pkgpath> is a dot-separated list of package names.

With "as" clauses, we could permit:

    import <module> as <localmodule>
    import <package> as <localpackage>
??  import <pkgpath>.<module> as <localmodule>
??  import <pkgpath>.<package> as <localpackage>
??  import <module>.<symbol> as <localsymbol>
??  import <pkgpath>.<module>.<symbol> as <localsymbol>
    from <module> import <symbol> as <localsymbol>
    from <package> import <symbol> as <localsymbol>
    from <pkgpath>.<module> import <symbol> as <localsymbol>
    from <pkgpath>.<package> import <module> as <localmodule>

It's not clear that we should allow "as" on the forms marked with
??, since the other six clearly identify the thing being renamed
and they do not.

Also note that all the other forms using "as" assign exactly one
thing: the name after the "as".  Would the forms marked with ??
assign just the name after the "as" (consistent with the other
"as" forms), or also the top-level package name as well (consistent
with the current behaviour of "import <pkgpath>.<module>")?

That is, would

    import foo.bar as spam

define just spam or both foo and spam?

All these questions make me uncertain...
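For what it's worth, the answer Python eventually settled on for (a) is that "import foo.bar as spam" binds the submodule itself, matching the unambiguous form (b):

```python
# Both spellings end up naming the very same submodule object:
import os.path as via_import
from os import path as via_from

assert via_import is via_from
```

Only the name after "as" is bound, unlike the plain "import os.path", which binds the top-level name "os".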


-- ?!ng



From jack@oratrix.nl  Tue Aug 22 23:03:24 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 23 Aug 2000 00:03:24 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is...
In-Reply-To: Message by Thomas Wouters <thomas@xs4all.net> ,
 Tue, 22 Aug 2000 15:35:13 +0200 , <20000822153512.H4933@xs4all.nl>
Message-ID: <20000822220329.19A91E266F@oratrix.oratrix.nl>

Ah, forget about the snake. It was an invention of
those-who-watch-blue-screens, and I guess Guido only jumped on the
bandwagon because those-who-gave-us-bluescreens offered him lots of
money or something.

On Real Machines Python still uses the One And Only True Python Icon,
and will continue to do so by popular demand (although
those-who-used-to-watch-bluescreens-but-saw-the-light occasionally
complain:-).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 


From tim_one@email.msn.com  Wed Aug 23 03:43:04 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 22 Aug 2000 22:43:04 -0400
Subject: Not commonprefix (was RE: [Python-Dev] commonprefix - the beast just won't die...)
In-Reply-To: <20000822171638.I4933@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEGDHBAA.tim_one@email.msn.com>

[Skip Montanaro]
> I reverted the changes to {posix,nt,dos}path.commonprefix this morning,
> updated the tests (still to be checked in) and was starting to work on
> documentation changes, when I realized that something Guido
> said about using dirname to trim to the common directory prefix is
> probably not correct.  Here's an example.  The common prefix of
>     ["/usr/local", "/usr/local/bin"]
> is
>     "/usr/local"
> If you blindly apply dirname to that (which is what I think Guido
> suggested as the way to make commonprefix do what I wanted, you wind
> up with
>     "/usr"
> which isn't going to be correct on most Unix flavors.  Instead, you
> need to check that the prefix doesn't exist or isn't a directory
> before applying dirname.

[Thomas Wouters]
> And even that won't work, in a case like this:
>
> /home/swenson/
> /home/swen/
>
> (common prefix would be /home/swen, which is a directory)

Note that Guido's suggestion does work for that, though.

> or cases like this:
>
> /home/swenson/
> /home/swenniker/
>
> where another directory called
> /home/swen
>
> exists.

Ditto.  This isn't coincidence:  Guido's suggestion works as-is *provided
that* each dirname in the original collection ends with a path separator.
Change Skip's example to

    ["/usr/local/", "/usr/local/bin/"]
                ^ stuck in slashes ^

and Guido's suggestion works fine too.  But these are purely
string-crunching functions, and "/usr/local" *screams* "directory" to people
thanks to its specific name.  Let's make the test case absurdly "simple":

    ["/x/y", "/x/y"]

What's the "common (directory) prefix" here?  Well, there's simply no way to
know at this level.  It's /x/y if y is a directory, or /x if y's just a
regular file.  Guido's suggestion returns /x in this case, or /x/y if you
add trailing slashes to both.  If you don't tell a string gimmick which
inputs are and aren't directories, you can't expect it to guess.
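The trailing-slash variant of Guido's recipe is easy to check with a few lines of Python. This is just a sketch of the string-crunching recipe under discussion (posixpath is used so the behaviour is the same on any platform; the helper name is made up here), not a proposed stdlib function:

```python
import posixpath

def common_dir(paths):
    # Pure string-crunching, exactly as discussed above: take the
    # character-wise common prefix, then strip the last component.
    # Assumes every input path ends with a path separator.
    return posixpath.dirname(posixpath.commonprefix(paths))

print(common_dir(["/usr/local/", "/usr/local/bin/"]))  # /usr/local
print(common_dir(["/home/swenson/", "/home/swen/"]))   # /home
print(common_dir(["/x/y/", "/x/y/"]))                  # /x/y
```

Without the trailing slashes, the same calls return /usr, /home and /x respectively, which is the ambiguity Tim describes: a plain string gimmick cannot know whether the last component names a directory.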

I'll say again, if you want a new function, press for one!  Just leave
commonprefix alone.





From tim_one@email.msn.com  Wed Aug 23 05:32:32 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 00:32:32 -0400
Subject: [Python-Dev] 2.0 Release Plans
Message-ID: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>

PythonLabs had a high-decibel meeting today (well, Monday), culminating in
Universal Harmony.  Jeremy will be updating PEP 200 accordingly.  Just three
highlights:

+ The release schedule has Officially Slipped by one week:  2.0b1 will ship
a week from this coming Monday.  There are too many Open and Accepted
patches backed up to meet the original date.  Also problems cropping up,
like new std tests failing to pass every day (you're not supposed to do
that, you know!  consider yourself clucked at), and patches having to be
redone because of other patches getting checked in.  We want to take the
extra week to do this right, and give you more space to do *your* part
right.

+ While only one beta release is scheduled at this time, we reserve the
right to make a second beta release if significant problems crop up during
the first beta period.  Of course that would cause additional slippage of
2.0 final, if it becomes necessary.  Note that "no features after 2.0b1 is
out!" still stands, regardless of how many beta releases there are.

+ I changed the Patch Manager guidelines at

     http://python.sourceforge.net/sf-faq.html#a1

to better reflect the way we're actually using the beast.  In a nutshell,
Rejected has been changed to mean "this patch is dead, period"; and Open
patches that are awaiting resolution of complaints should remain Open.

All right.  Time for inspiration.  From my point of view, you've all had
waaaaaay too much sleep in August!  Pull non-stop all-nighters until 2.0b1
is out the door, or go work on some project for sissies -- like Perl 6.0 or
the twelve thousandth implementation of Scheme <wink>.

no-guts-no-glory-slow-and-steady-wins-the-race-ly y'rs  - tim




From esr@thyrsus.com  Wed Aug 23 06:16:01 2000
From: esr@thyrsus.com (Eric S. Raymond)
Date: Wed, 23 Aug 2000 01:16:01 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 23, 2000 at 12:32:32AM -0400
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
Message-ID: <20000823011601.D29063@thyrsus.com>

Tim Peters <tim_one@email.msn.com>:
> All right.  Time for inspiration.  From my point of view, you've all had
> waaaaaay too much sleep in August!  Pull non-stop all-nighters until 2.0b1
> is out the door, or go work on some project for sissies -- like Perl 6.0 or
> the twelve thousandth implementation of Scheme <wink>.
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Hey!  I *resemble* that remark!

I don't think I'm presently responsible for anything critical.  If I've
spaced something, somebody tell me now.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

"What, then, is law [government]? It is the collective organization of
the individual right to lawful defense."
	-- Frederic Bastiat, "The Law"


From tim_one@email.msn.com  Wed Aug 23 07:57:07 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 02:57:07 -0400
Subject: [Python-Dev] ...and the new name for our favourite little language is...
In-Reply-To: <20000822153512.H4933@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGLHBAA.tim_one@email.msn.com>

[F]
> well, see for yourself:
>     http://www.pythonlabs.com/logos.html

We should explain that.  I'll let Bob Weiner (BeOpen's CTO) do it instead,
though, because he explained it well to us:

<BOB>
From: weiner@beopen.com
Sent: Wednesday, August 23, 2000 1:23 AM

Just to clarify, the intent of these is for use by companies or individuals
who choose on their own to link back to the PythonLabs site and show their
support for BeOpen's work on Python.  Use of any such branding is wholly
voluntary, as you might expect.

To clarify even further, we recognize and work with many wonderful parties
who contribute to Python.  We expect to continue to put out source releases
called just `Python', and brand platform-specific releases which we produce
and quality-assure ourselves as `BeOpen Python' releases.  This is similar
to what other companies do in the Linux space and other open source arenas.
We know of another company branding their Python release; this helps
potential customers differentiate offerings in the largely undifferentiated
open source space.

We believe it is important and we meet with companies every week who tell us
they want one or more companies behind the development, productization and
support of Python (like Red Hat or SuSE behind Linux).  Connecting the
BeOpen name to Python is one way in which we can help them know that we
indeed do provide these services for Python.  The BeOpen name was chosen
very carefully to encourage people to take an open approach in their
technology deployments, so we think this is a good association for Python to
have and hope that many Python users will choose to help support these
efforts.

We're also very open to working with other Python-related firms to help
build broader use and acceptance of Python.  Mail
<pythonlabs-info@beopen.com> if you'd like to work on a partnership
together.

</BOB>

See?  It's not evil.  *All* American CTOs say "space" and "arena" too much,
so don't gripe about that either.  I can tell you that BeOpen isn't exactly
getting rich off their Python support so far, wrestling with CNRI is
exhausting in more ways than one, and Bob Weiner is a nice man.  Up to this
point, his support of PythonLabs has been purely philanthropic!  If you
appreciate that, you *might* even consider grabbing a link.

[Thomas Wouters]
> Oh, that reminds me, the FAQ needs adjusting ;) It still says:
> """
> 1.2. Why is it called Python?
>
> Apart from being a computer scientist, I'm also a fan of "Monty Python's
> Flying Circus" (a BBC comedy series from the seventies, in the -- unlikely
> -- case you didn't know). It occurred to me one day that I needed a name
> that was short, unique, and slightly mysterious. And I happened to be
> reading some scripts from the series at the time... So then I decided to
> call my language Python. But Python is not a joke. And don't you associate
> it with dangerous reptiles either! (If you need an icon, use an
> image of the
> 16-ton weight from the TV series or of a can of SPAM :-)
> """

Yes, that needs to be rewritten.  Here you go:

    Apart from being a computer scientist, I'm also a fan of
    "Monty BeOpen Python's Flying Circus" (a BBC comedy series from
    the seventies, in the -- unlikely -- case you didn't know). It
    occurred to me one day that I needed a name that was short, unique,
    and slightly mysterious. And I happened to be reading some scripts
    from the series at the time... So then I decided to call my language
    BeOpen Python. But BeOpen Python is not a joke. And don't you associate
    it with dangerous reptiles either! (If you need an icon, use an image
    of the decidedly *friendly* BeOpen reptiles at
    http://www.pythonlabs.com/logos.html).

> And while I'm at it, I hope I can say without offending anyone that I
> hope the logo is open for criticism.

You can hope all you like, and I doubt you're offending anyone, but the logo
is nevertheless not open for criticism:  the BDFL picked it Himself!  Quoth
Guido, "I think he's got a definite little smile going".  Besides, if you
don't like this logo, you're going to be sooooooooo disappointed when you
get a PythonLabs T-shirt.

> ...
> I-still-like-the-shirt-though!-ly y'rs,

Good!  In that case, I'm going to help you with your crusade after all:

Hi! I'm a .signature virus! copy me into your .signature file to
help me spread!




From mal@lemburg.com  Wed Aug 23 08:44:56 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 23 Aug 2000 09:44:56 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
Message-ID: <39A380F8.3D1C86F6@lemburg.com>

Tim Peters wrote:
> 
> PythonLabs had a high-decibel meeting today (well, Monday), culminating in
> Universal Harmony.  Jeremy will be updating PEP 200 accordingly.  Just three
> highlights:
> 
> + The release schedule has Officially Slipped by one week:  2.0b1 will ship
> a week from this coming Monday.  There are too many Open and Accepted
> patches backed up to meet the original date.  Also problems cropping up,
> like new std tests failing to pass every day (you're not supposed to do
> that, you know!  consider yourself clucked at), and patches having to be
> redone because of other patches getting checked in.  We want to take the
> extra week to do this right, and give you more space to do *your* part
> right.
> 
> + While only one beta release is scheduled at this time, we reserve the
> right to make a second beta release if significant problems crop up during
> the first beta period.  Of course that would cause additional slippage of
> 2.0 final, if it becomes necessary.  Note that "no features after 2.0b1 is
> out!" still stands, regardless of how many beta releases there are.

Does this mean I can still slip in that minor patch to allow
for attribute doc-strings in 2.0b1, provided I write up a short
PEP really fast ;-) ?

BTW, what's the new standard on releasing ideas to the dev public?
I know I'll have to write a PEP, but where should I put the
patch? Into the SF patch manager or on a separate page on the
Internet?

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Wed Aug 23 08:36:04 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 09:36:04 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0204.txt,1.3,1.4
In-Reply-To: <200008230542.WAA02168@slayer.i.sourceforge.net>; from bwarsaw@users.sourceforge.net on Tue, Aug 22, 2000 at 10:42:00PM -0700
References: <200008230542.WAA02168@slayer.i.sourceforge.net>
Message-ID: <20000823093604.M4933@xs4all.nl>

On Tue, Aug 22, 2000 at 10:42:00PM -0700, Barry Warsaw wrote:
> Update of /cvsroot/python/python/nondist/peps
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv2158
> 
> Modified Files:
> 	pep-0204.txt 
> Log Message:
> Editorial review, including:
> 
>     - Rearrange and standardize headers
>     - Removed ^L's
>     - Spellchecked
>     - Indentation and formatting
>     - Added reference to PEP 202

Damn, I'm glad I didn't rewrite it on my laptop yesterday. This looks much
better, Barry, thanx ! Want to co-author it ? :) (I really need to get
myself some proper (X)Emacs education so I can do cool things like
two-spaces-after-finished-sentences too)

> Thomas, if the open issues have been decided, they can be `closed' in
> this PEP, and then it should probably be marked as Accepted.

Well, that would require me to force the open issues, because they haven't
been decided. They have hardly been discussed ;) I'm not sure how to
properly close them, however. For instance: I would say "not now" to ranges
of something other than PyInt objects, and the same to the idea of
generators. But the issues remain open for debate in future versions. Should
there be a 'closed issues' section, or should I just not mention them and
have people start a new PEP and gather the ideas anew when the time comes ?

(And a Decision (either a consensus one or a BDFL one) would be nice on
whether the two new PyList_ functions should be part of the API or not. The
rest of the issues I can handle.)

Don't forget, I'm a newbie in standards texts. Be gentle ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From nowonder@nowonder.de  Wed Aug 23 11:17:33 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Wed, 23 Aug 2000 10:17:33 +0000
Subject: [Python-Dev] ...and the new name for our favourite little language
 is...
References: <LNBBLJKPBEHFEDALKOLCGEGLHBAA.tim_one@email.msn.com>
Message-ID: <39A3A4BD.C30E4729@nowonder.de>

[Tim]
> get a PythonLabs T-shirt.

[Thomas]
> I-still-like-the-shirt-though!-ly y'rs,

Okay, folks. What's the matter? I don't see any T-shirt
references on http://pythonlabs.com. Where? How?

help-me-with-my-crusade-too-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From thomas@xs4all.net  Wed Aug 23 10:01:23 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 11:01:23 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is...
In-Reply-To: <39A3A4BD.C30E4729@nowonder.de>; from nowonder@nowonder.de on Wed, Aug 23, 2000 at 10:17:33AM +0000
References: <LNBBLJKPBEHFEDALKOLCGEGLHBAA.tim_one@email.msn.com> <39A3A4BD.C30E4729@nowonder.de>
Message-ID: <20000823110122.N4933@xs4all.nl>

On Wed, Aug 23, 2000 at 10:17:33AM +0000, Peter Schneider-Kamp wrote:
> [Tim]
> > get a PythonLabs T-shirt.
> 
> [Thomas]
> > I-still-like-the-shirt-though!-ly y'rs,

> Okay, folks. What's the matter? I don't see any T-shirt
> references on http://pythonlabs.com. Where? How?

We were referring to the PythonLabs T-shirt that was given out (in limited
numbers, I do believe, since my perl-hugging colleague only got me one, and
couldn't get one for himself & the two python-learning colleagues *) at
O'Reilly's Open Source Conference. It has the PythonLabs logo on front (the
green snake, on a simple black background, in a white frame) with
'PYTHONLABS' underneath the logo, and on the back it says 'PYTHONLABS.COM'
and 'There Is Only One Way To Do It.'. 

I'm sure they'll have some more at the next IPC ;)

(* As a result, I can't wear this T-shirt to work, just like my X-Files
T-shirt, for fear of being forced to leave without it ;)
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Moshe Zadka <moshez@math.huji.ac.il>  Wed Aug 23 10:21:25 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Wed, 23 Aug 2000 12:21:25 +0300 (IDT)
Subject: [Python-Dev] ...and the new name for our favourite little language
 is...
In-Reply-To: <20000823110122.N4933@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008231219190.8650-100000@sundial>

On Wed, 23 Aug 2000, Thomas Wouters wrote:

> We were referring to the PythonLabs T-shirt that was given out (in limited
> numbers, I do believe, since my perl-hugging colleague only got me one, and
> couldn't get one for himself & the two python-learning colleagues *) at
> O'Reilly's Open Source Conference. It has the PythonLabs logo on front (the
> green snake, on a simple black background, in a white frame) with
> 'PYTHONLABS' underneath the logo, and on the back it says 'PYTHONLABS.COM'
> and 'There Is Only One Way To Do It.'. 
> 
> I'm sure they'll have some more at the next IPC ;)

Can't they sell them over the net (at copyleft or something)? I'd love
to buy one for me and my friends, and maybe one for everyone in the
first Python-IL meeting..

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From fdrake@beopen.com  Wed Aug 23 15:38:12 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 23 Aug 2000 10:38:12 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <39A380F8.3D1C86F6@lemburg.com>
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
 <39A380F8.3D1C86F6@lemburg.com>
Message-ID: <14755.57812.111681.750661@cj42289-a.reston1.va.home.com>

M.-A. Lemburg writes:
 > Does this mean I can still slip in that minor patch to allow
 > for attribute doc-strings in 2.0b1 provided I write up a short
 > PEP really fast ;-) ?

  Write a PEP if you like; I think I'd really like to look at this
before you change any code, and I've not had a chance to read your
messages about this yet.  This is *awfully* late to be making a
change that hasn't been substantially hashed out and reviewed, and I'm
under the impression that this is pretty new (the past week or so).

 > BTW, what's the new standard on releasing ideas to the dev public ?
 > I know I'll have to write a PEP, but where should I put the
 > patch ? Into the SF patch manager or on a separate page on the
 > Internet ?

  Patches should still go to the SF patch manager.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From guido@beopen.com  Wed Aug 23 17:22:04 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 11:22:04 -0500
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import something as'
In-Reply-To: Your message of "Tue, 22 Aug 2000 13:02:31 MST."
 <Pine.LNX.4.10.10008212144360.416-100000@skuld.lfw.org>
References: <Pine.LNX.4.10.10008212144360.416-100000@skuld.lfw.org>
Message-ID: <200008231622.LAA02275@cj20424-a.reston1.va.home.com>

> On Mon, 21 Aug 2000, Guido van Rossum wrote:
> > > > > > Summary: Allow all assignment expressions after 'import
> > > > > > something as'
> [...]
> > I kind of doubt it, because it doesn't look useful.

[Ping]
> Looks potentially useful to me.  If nothing else, it's certainly
> easier to explain than any other behaviour i could think of, since
> assignment is already well-understood.

KISS suggests not to add it.  We had a brief discussion about this at
our 2.0 planning meeting and nobody there thought it would be worth
it, and several of us felt it would be asking for trouble.

> > I do want "import foo.bar as spam" back, assigning foo.bar to spam.
> 
> No no no no.  Or at least let's step back and look at the whole
> situation first.
> 
> "import foo.bar as spam" makes me uncomfortable because:
> 
>     (a) It's not clear whether spam should get foo or foo.bar, as
>         evidenced by the discussion between Gordon and Thomas.

As far as I recall that conversation, it's just that Thomas (more or
less accidentally) implemented what was easiest from the
implementation's point of view without thinking about what it should
mean.  *Of course* it should mean what I said if it's allowed.  Even
Thomas agrees to that now.

>     (b) There's a straightforward and unambiguous way to express
>         this already: "from foo import bar as spam".

Without syntax coloring that looks like word soup to me.

  import foo.bar as spam

uses fewer words to say the same thing more clearly.

>     (c) It's not clear whether this should work only for modules
>         named bar, or any symbol named bar.

Same as for import: bar must be a submodule (or subpackage) in package
foo.

> Before packages, the only two forms of the import statement were:
> 
>     import <module>
>     from <module> import <symbol>
> 
> After packages, the permitted forms are now:
> 
>     import <module>
>     import <package>
>     import <pkgpath>.<module>
>     import <pkgpath>.<package>
>     from <module> import <symbol>
>     from <package> import <module>
>     from <pkgpath>.<module> import <symbol>
>     from <pkgpath>.<package> import <module>

You're creating more cases than necessary to get a grip on this.  This
is enough, if you realize that a package is also a module and the
package path doesn't add any new cases:

  import <module>
  from <module> import <symbol>
  from <package> import <module>

> where a <pkgpath> is a dot-separated list of package names.
> 
> With "as" clauses, we could permit:
> 
>     import <module> as <localmodule>
>     import <package> as <localpackage>
> ??  import <pkgpath>.<module> as <localmodule>
> ??  import <pkgpath>.<package> as <localpackage>
> ??  import <module>.<symbol> as <localsymbol>
> ??  import <pkgpath>.<module>.<symbol> as <localsymbol>
>     from <module> import <symbol> as <localsymbol>
>     from <package> import <symbol> as <localsymbol>
>     from <pkgpath>.<module> import <symbol> as <localsymbol>
>     from <pkgpath>.<package> import <module> as <localmodule>

Let's simplify that to:

  import <module> as <localname>
  from <module> import <symbol> as <localname>
  from <package> import <module> as <localname>

> It's not clear that we should allow "as" on the forms marked with
> ??, since the other six clearly identify the thing being renamed
> and they do not.
> 
> Also note that all the other forms using "as" assign exactly one
> thing: the name after the "as".  Would the forms marked with ??
> assign just the name after the "as" (consistent with the other
> "as" forms), or also the top-level package name as well (consistent
> with the current behaviour of "import <pkgpath>.<module>")?
> 
> That is, would
> 
>     import foo.bar as spam
> 
> define just spam or both foo and spam?

Aargh!  Just spam, of course!

> All these questions make me uncertain...

Not me.
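For what it's worth, the binding Guido describes ("just spam, of course!") can be illustrated with a package that already exists; the `import <module>.<submodule> as <name>` form was still hypothetical at the time of this thread, but today's Python gives it exactly this meaning:

```python
# 'import os.path as p' binds only the single name after 'as', and
# binds it to the submodule itself -- i.e. "just spam", not foo too.
import os.path as p

assert p is __import__("os").path   # p is the submodule os.path
# Note that the top-level name 'os' is NOT bound by the import above;
# only 'p' has been added to this namespace.
```

In other words, the form ends up equivalent to `from os import path as p`, which is Ping's point (b); Guido's objection is only to the readability of that spelling.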

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Wed Aug 23 16:38:31 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 23 Aug 2000 17:38:31 +0200
Subject: [Python-Dev] Attribute Docstring PEP (2.0 Release Plans)
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
 <39A380F8.3D1C86F6@lemburg.com> <14755.57812.111681.750661@cj42289-a.reston1.va.home.com>
Message-ID: <39A3EFF7.A4D874EC@lemburg.com>

This is a multi-part message in MIME format.
--------------E566E4B4F5AFC0BF9ECC11DD
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

"Fred L. Drake, Jr." wrote:
> 
> M.-A. Lemburg writes:
>  > Does this mean I can still slip in that minor patch to allow
>  > for attribute doc-strings in 2.0b1 provided I write up a short
>  > PEP really fast ;-) ?
> 
>   Write a PEP if you like; I think I'd really like to look at this
> before you change any code, and I've not had a chance to read your
>  > messages about this yet.  This is *awfully* late to be making a
> change that hasn't been substantially hashed out and reviewed, and I'm
> under the impression that this is pretty new (the past week or so).

FYI, I've attached the pre-PEP below (I also sent it to Barry
for review).

This PEP is indeed very new, but AFAIK it doesn't harm any existing
code and also doesn't add much code complexity to achieve what it's
doing (see the patch).

>  > BTW, what's the new standard on releasing ideas to the dev public ?
>  > I know I'll have to write a PEP, but where should I put the
>  > patch ? Into the SF patch manager or on a separate page on the
>  > Internet ?
> 
>   Patches should still go to the SF patch manager.

Here it is:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101264&group_id=5470

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/
--------------E566E4B4F5AFC0BF9ECC11DD
Content-Type: text/plain; charset=us-ascii;
 name="pep-0224.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline;
 filename="pep-0224.txt"

PEP: 224
Title: Attribute Docstrings
Version: $Revision: 1.0 $
Owner: mal@lemburg.com (Marc-Andre Lemburg)
Python-Version: 2.0
Status: Draft
Created: 23-Aug-2000
Type: Standards Track


Introduction

    This PEP describes the "attribute docstring" proposal for Python
    2.0. This PEP tracks the status and ownership of this feature. It
    contains a description of the feature and outlines changes
    necessary to support the feature. The CVS revision history of this
    file contains the definitive historical record.


Rationale

    This PEP proposes a small addition to the way Python currently
    handles docstrings embedded in Python code. 

    Until now, Python only handles the case of docstrings which appear
    directly after a class definition, a function definition or as the
    first string literal in a module. These string literals are added
    to the objects in question under the __doc__ attribute and are from
    then on available to introspection tools which can extract the
    contained information for help, debugging and documentation
    purposes.

    Docstrings appearing in locations other than the ones mentioned are
    simply ignored and don't result in any code generation.

    Here is an example:

    class C:
	    " class C doc-string "

	    a = 1
	    " attribute C.a doc-string (1)"

	    b = 2
	    " attribute C.b doc-string (2)"

    The docstrings (1) and (2) are currently being ignored by the
    Python byte code compiler, but could obviously be put to good use
    for documenting the named assignments that precede them.
    
    This PEP proposes to also make use of these cases by proposing
    semantics for adding their content to the objects in which they
    appear under new generated attribute names.

    The original idea behind this approach which also inspired the
    above example was to enable inline documentation of class
    attributes, which can currently only be documented in the class'
    docstring or using comments which are not available for
    introspection.


Implementation

    Docstrings are handled by the byte code compiler as expressions.
    The current implementation special cases the few locations
    mentioned above to make use of these expressions, but otherwise
    ignores the strings completely.

    To enable use of these docstrings for documenting named
    assignments (which is the natural way of defining e.g. class
    attributes), the compiler will have to keep track of the last
    assigned name and then use this name to assign the content of the
    docstring to an attribute of the containing object, by means of
    storing it as a constant which is then added to the object's
    namespace at object construction time.

    In order to preserve features like inheritance and hiding of
    Python's special attributes (ones with leading and trailing double
    underscores), a special name mangling has to be applied which
    uniquely identifies the docstring as belonging to the name
    assignment and allows finding the docstring later on by inspecting
    the namespace.

    The following name mangling scheme achieves all of the above:

		      __doc__<attributename>__

    To keep track of the last assigned name, the byte code compiler
    stores this name in a variable of the compiling structure. This
    variable defaults to NULL. When it sees a docstring, it then
    checks the variable and uses the name as basis for the above name
    mangling to produce an implicit assignment of the docstring to the
    mangled name. It then resets the variable to NULL to avoid
    duplicate assignments.

    If the variable does not point to a name (i.e. is NULL), no
    assignments are made.  These will continue to be ignored like
    before.  All classical docstrings fall under this case, so no
    duplicate assignments are done.

    In the above example this would result in the following new class
    attributes to be created:

    C.__doc__a__ == " attribute C.a doc-string (1)"
    C.__doc__b__ == " attribute C.b doc-string (2)"

    A patch to the current CVS version of Python 2.0 which implements
    the above is available on SourceForge at [1].
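    The proposed mangling can be emulated by hand in current Python,
    which may make the intent concrete. The explicit __doc__a__ /
    __doc__b__ assignments below stand in for what the patched
    compiler would generate, and the lookup helper is an invented
    name, purely for illustration:

```python
class C:
    "class C doc-string"

    a = 1
    # What the patched compiler would generate from the docstring
    # following 'a = 1' (note: names ending in two underscores are
    # exempt from class-private name mangling, so this is safe):
    __doc__a__ = " attribute C.a doc-string (1)"

    b = 2
    __doc__b__ = " attribute C.b doc-string (2)"

def attribute_docstring(klass, name):
    # Introspection helper: look the mangled name up on the class;
    # getattr() means docstrings are inherited by subclasses, just
    # like any other attribute.
    return getattr(klass, "__doc__%s__" % name, None)

assert attribute_docstring(C, "a") == " attribute C.a doc-string (1)"
assert attribute_docstring(C, "missing") is None
```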


Caveats of the Implementation
    
    Since the implementation does not reset the compiling structure
    variable when processing a non-expression, e.g. a function definition,
    the last assigned name remains active until either the next assignment
    or the next occurrence of a docstring.

    This can lead to cases where the docstring and assignment may be
    separated by other expressions:

    class C:
	"C doc string"

	b = 2

	def x(self):
	    "C.x doc string"
	    y = 3
	    return 1

	"b's doc string"

    Since the definition of method "x" currently does not reset the
    used assignment name variable, it is still valid when the compiler
    reaches the docstring "b's doc string", which is then assigned
    to the mangled name __doc__b__ despite the intervening function
    definition.

Copyright

    This document has been placed in the Public Domain.


References

    [1]
http://sourceforge.net/patch/?func=detailpatch&patch_id=101264&group_id=5470



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

--------------E566E4B4F5AFC0BF9ECC11DD--



From tim_one@email.msn.com  Wed Aug 23 16:40:46 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 11:40:46 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <39A380F8.3D1C86F6@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>

[MAL]
> Does this mean I can still slip in that minor patch to allow
> for attribute doc-strings in 2.0b1 provided I write up a short
> PEP really fast ;-) ?

2.0 went into feature freeze the Monday *before* this one!  So, no.  The "no
new features after 2.0b1" refers mostly to the patches currently in Open and
Accepted:  if *they're* not checked in before 2.0b1 goes out, they don't get
into 2.0 either.

Ideas that were accepted by Guido for 2.0 before last Monday aren't part of
the general "feature freeze".  Any new feature proposed *since* then has
been Postponed without second thought.  Guido accepted several ideas before
feature freeze that still haven't been checked in (in some cases, still not
coded!), and just dealing with them has already caused a slip in the
schedule.  We simply can't afford to entertain new ideas as well right
now (indeed, that's why "feature freeze" exists:  focus).

For you in particular <wink>, how about dealing with Open patch 100899?
It's been assigned to you for 5 weeks, and if you're not going to review it
or kick /F in the butt, assign it to someone else.

> BTW, what's the new standard on releasing ideas to the dev public ?
> I know I'll have to write a PEP, but where should I put the
> patch ? Into the SF patch manager or on a separate page on the
> Internet ?

The PEP should be posted to both Python-Dev and comp.lang.python after its
first stab is done.  If you don't at least post a link to the patch in the
SF Patch Manager, the patch doesn't officially exist.  I personally prefer
one-stop shopping, and SF is the Python Developer's Mall; but there's no
rule about that yet (note that 100899's patch was apparently so big SF
wouldn't accept it, so /F *had* to post just a URL to the Patch Manager).




From bwarsaw@beopen.com  Wed Aug 23 17:01:32 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Wed, 23 Aug 2000 12:01:32 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
 <39A380F8.3D1C86F6@lemburg.com>
Message-ID: <14755.62812.185580.367242@anthem.concentric.net>

>>>>> "M" == M  <mal@lemburg.com> writes:

    M> Does this mean I can still slip in that minor patch to allow
    M> for attribute doc-strings in 2.0b1 provided I write up a short
    M> PEP really fast ;-) ?

Well, it's really the 2.0 release manager's job to disappoint you, so
I won't. :) But yes, a PEP would probably be required.  However, after
our group meeting yesterday, I'm changing the requirements for PEP
number assignment.  You need to send me a rough draft, not just an
abstract (there are too many incomplete PEPs already).

    M> BTW, what's the new standard on releasing ideas to the dev public?
    M> I know I'll have to write a PEP, but where should I put the
    M> patch?  Into the SF patch manager or on a separate page on the
    M> Internet?

Better to put the patches on SF.

-Barry


From bwarsaw@beopen.com  Wed Aug 23 17:09:32 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Wed, 23 Aug 2000 12:09:32 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0204.txt,1.3,1.4
References: <200008230542.WAA02168@slayer.i.sourceforge.net>
 <20000823093604.M4933@xs4all.nl>
Message-ID: <14755.63292.825567.868362@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    TW> Damn, I'm glad I didn't rewrite it on my laptop
    TW> yesterday. This looks much better, Barry, thanx ! Want to
    TW> co-author it ? :)

Naw, that's what an editor is for (actually, I thought an editor was
for completely covering your desktop like lox on a bagel).
    
    TW> (I really need to get myself some proper (X)Emacs education so
    TW> I can do cool things like two-spaces-after-finished-sentences
    TW> too)

Heh, that's just finger training, but I do it only because it works
well with XEmacs's paragraph filling.

    TW> Well, that would require me to force the open issues, because
    TW> they haven't been decided. They have hardly been discussed ;)
    TW> I'm not sure how to properly close them, however. For
    TW> instance: I would say "not now" to ranges of something other
    TW> than PyInt objects, and the same to the idea of
    TW> generators. But the issues remain open for debate in future
    TW> versions. Should there be a 'closed issues' section, or should
    TW> I just not mention them and have people start a new PEP and
    TW> gather the ideas anew when the time comes ?

    TW> (And a Decision (either a consensus one or a BDFL one) would
    TW> be nice on whether the two new PyList_ functions should be
    TW> part of the API or not. The rest of the issues I can handle.)

The thing to do is to request BDFL pronouncement on those issues for
2.0, and write them up in a "BDFL Pronouncements" section at the end
of the PEP.  See PEP 201 for an example.  You should probably email
Guido directly and ask him to rule.  If he doesn't, then they'll get
vetoed by default once 2.0beta1 is out.

IMO, if some extension of range literals is proposed for a future
release of Python, then we'll issue a new PEP for those.

-Barry


From mal@lemburg.com  Wed Aug 23 16:56:17 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 23 Aug 2000 17:56:17 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
Message-ID: <39A3F421.415107E6@lemburg.com>

Tim Peters wrote:
> 
> [MAL]
> > Does this mean I can still slip in that minor patch to allow
> > for attribute doc-strings in 2.0b1 provided I write up a short
> > PEP really fast ;-) ?
> 
> 2.0 went into feature freeze the Monday *before* this one!  So, no.  The "no
> new features after 2.0b1" refers mostly to the patches currently in Open and
> Accepted:  if *they're* not checked in before 2.0b1 goes out, they don't get
> into 2.0 either.

Ah, ok. 

Pity I just started to do some heavy doc-string extraction
last week... oh, well.
 
> Ideas that were accepted by Guido for 2.0 before last Monday aren't part of
> the general "feature freeze".  Any new feature proposed *since* then has
> been Postponed without second thought.  Guido accepted several ideas before
> feature freeze that still haven't been checked in (in some cases, still not
> coded!), and just dealing with them has already caused a slip in the
> schedule.  We simply can't afford to entertain new ideas now (indeed,
> that's why "feature freeze" exists:  focus).
> 
> For you in particular <wink>, how about dealing with Open patch 100899?
> It's been assigned to you for 5 weeks, and if you're not going to review it
> or kick /F in the butt, assign it to someone else.

AFAIK, Fredrik hasn't continued work on that patch and some
important parts are still missing, e.g. the generator scripts
and a description of how the whole thing works.

It's not that important though, since the patch is a space
optimization of what is already in Python 2.0 (and has been
for quite a while now): the Unicode database.
 
Perhaps I should simply postpone the patch to 2.1?!

> > BTW, what's the new standard on releasing ideas to the dev public?
> > I know I'll have to write a PEP, but where should I put the
> > patch?  Into the SF patch manager or on a separate page on the
> > Internet?
> 
> The PEP should be posted to both Python-Dev and comp.lang.python after its
> first stab is done.  If you don't at least post a link to the patch in the
> SF Patch Manager, the patch doesn't officially exist.  I personally prefer
> one-stop shopping, and SF is the Python Developer's Mall; but there's no
> rule about that yet (note that 100899's patch was apparently so big SF
> wouldn't accept it, so /F *had* to post just a URL to the Patch Manager).

I've just posted the PEP here, CCed it to Barry and uploaded the
patch to SF. I'll post it to c.l.p tomorrow (don't know what that's
good for though, since I don't read c.l.p anymore).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jeremy@beopen.com  Wed Aug 23 18:49:28 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Wed, 23 Aug 2000 13:49:28 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
References: <39A380F8.3D1C86F6@lemburg.com>
 <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
Message-ID: <14756.3752.23014.786587@bitdiddle.concentric.net>

I wanted to confirm: Tim is channeling the release manager just
fine.  We are in feature freeze for 2.0.

Jeremy


From jeremy@beopen.com  Wed Aug 23 18:55:34 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Wed, 23 Aug 2000 13:55:34 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <39A3F421.415107E6@lemburg.com>
References: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
 <39A3F421.415107E6@lemburg.com>
Message-ID: <14756.4118.865603.363166@bitdiddle.concentric.net>

>>>>> "MAL" == M-A Lemburg <mal@lemburg.com> writes:
>>>>> "TP" == Tim Peters <tpeters@beopen.com> writes:

  TP> For you in particular <wink>, how about dealing with Open patch
  TP> 100899?  It's been assigned to you for 5 weeks, and if you're not
  TP> going to review it or kick /F in the butt, assign it to someone
  TP> else.

  MAL> AFAIK, Fredrik hasn't continued work on that patch and some
  MAL> important parts are still missing, e.g. the generator scripts
  MAL> and a description of how the whole thing works.

  MAL> It's not that important though, since the patch is a space
  MAL> optimization of what is already in Python 2.0 (and has been for
  MAL> quite a while now): the Unicode database.
 
  MAL> Perhaps I should simply postpone the patch to 2.1?!

Thanks for clarifying the issue with this patch.  

I would like to see some compression in the release, but agree that it
is not an essential optimization.  People have talked about it for a
couple of months, and we haven't found someone to work on it because
at various times pirx and /F said they were working on it.

If we don't hear from /F by tomorrow promising he will finish it before
the beta release, let's postpone it.

Jeremy


From tim_one@email.msn.com  Wed Aug 23 19:32:20 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 14:32:20 -0400
Subject: Patch 100899 [Unicode compression] (was RE: [Python-Dev] 2.0 Release Plans)
In-Reply-To: <14756.4118.865603.363166@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com>

[Jeremy Hylton]
> I would like to see some compression in the release, but agree that it
> is not an essential optimization.  People have talked about it for a
> couple of months, and we haven't found someone to work on it because
> at various times pirx and /F said they were working on it.
>
> If we don't hear from /F by tomorrow promising he will finish it before
> the beta release, let's postpone it.

There was an *awful* lot of whining about the size increase without this
optimization, and the current situation violates the "no compiler warnings!"
rule too (at least under MSVC 6).  That means it's going to fail to compile
at all on *some* feebler system.  We said we'd put it in, so I'm afraid I
think it falls on PythonLabs to finish it if /F can't.




From thomas@xs4all.net  Wed Aug 23 19:59:20 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 20:59:20 +0200
Subject: Patch 100899 [Unicode compression] (was RE: [Python-Dev] 2.0 Release Plans)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 23, 2000 at 02:32:20PM -0400
References: <14756.4118.865603.363166@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com>
Message-ID: <20000823205920.A7566@xs4all.nl>

On Wed, Aug 23, 2000 at 02:32:20PM -0400, Tim Peters wrote:
> [Jeremy Hylton]
> > I would like to see some compression in the release, but agree that it
> > is not an essential optimization.  People have talked about it for a
> > couple of months, and we haven't found someone to work on it because
> > at various times pirx and /F said they were working on it.
> >
> > If we don't hear from /F by tomorrow promising he will finish it before
> > the beta release, let's postpone it.

> There was an *awful* lot of whining about the size increase without this
> optimization, and the current situation violates the "no compiler warnings!"
> rule too (at least under MSVC 6).

For the record, you can't compile unicodedatabase.c with g++ because of its
size: g++ complains that the switch is too large to compile. Under gcc it
compiles, but only by trying really, really hard, and I don't know how it
performs under other versions of gcc (in particular the more heavily
optimizing ones -- those might run into other limits.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Wed Aug 23 20:00:33 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 15:00:33 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <003b01c00d1f$3ef70fe0$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIOHBAA.tim_one@email.msn.com>

[Fredrik Lundh]

[on patch 100899]
> mal has reviewed the patch, and is waiting for an update
> from me.

Thanks!  On that basis, I've reassigned the patch to you.

> PS. the best way to get me to do something is to add a
> task to the task manager.

Yikes!  I haven't looked at the thing since the day after I enabled it
<wink> -- thanks for the clue.

> I currently have three things on my slate:
>
>     17333 add os.popen2 support for Unix

Guido definitely wants this for 2.0, but there's no patch for it and no
entry in PEP 200.  Jeremy, please add it.

>     17334 add PyErr_Format to errors module
>     17335 add compressed unicode database

Those two are in Open patches, and both assigned to you.

> if I missed something, let me know.

In your email (to Guido and me) from Monday, 31-July-2000,

> so to summarize, Python 2.0 will support the following
> hex-escapes:
>
>    \xNN
>    \uNNNN
>    \UNNNNNNNN
>
> where the last two are only supported in Unicode and
> SRE strings.
>
> I'll provide patches later this week, once the next SRE
> release is wrapped up (later tonight, I hope).

This apparently fell through the cracks, and I finally remembered it last
Friday, and added them to PEP 200 recently.  Guido wants this in 2.0, and
accepted them long before feature-freeze.  I'm currently writing a PEP for
the \x change (because it has a surreal chance of breaking old code).  I
haven't written any code for it.  The new \U escape is too minor to need a
PEP (according to me).
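[As a quick illustration of the three escape forms, shown here with modern
Python semantics where every string literal is Unicode -- under the 2.0
proposal, \u and \U would be accepted only in Unicode and SRE strings:]

```python
# The three hex-escape forms under discussion.  In Python 3 every
# string literal is Unicode, so all three forms are accepted.
s_x = "\x41"        # \xNN: exactly two hex digits
s_u = "\u0041"      # \uNNNN: exactly four hex digits
s_U = "\U00000041"  # \UNNNNNNNN: exactly eight hex digits

assert s_x == s_u == s_U == "A"
```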




From effbot@telia.com  Wed Aug 23 17:28:58 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 18:28:58 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
Message-ID: <003b01c00d1f$3ef70fe0$f2a6b5d4@hagrid>

tim wrote:
> For you in particular <wink>, how about dealing with Open patch 100899?
> It's been assigned to you for 5 weeks, and if you're not going to review it
> or kick /F in the butt, assign it to someone else.

mal has reviewed the patch, and is waiting for an update
from me.

</F>

PS. the best way to get me to do something is to add a
task to the task manager.  I currently have three things
on my slate:

    17333 add os.popen2 support for Unix 
    17334 add PyErr_Format to errors module 
    17335 add compressed unicode database 

if I missed something, let me know.



From thomas@xs4all.net  Wed Aug 23 20:29:47 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 21:29:47 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src README,1.89,1.90
In-Reply-To: <200008231901.MAA31275@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Wed, Aug 23, 2000 at 12:01:47PM -0700
References: <200008231901.MAA31275@slayer.i.sourceforge.net>
Message-ID: <20000823212946.B7566@xs4all.nl>

On Wed, Aug 23, 2000 at 12:01:47PM -0700, Guido van Rossum wrote:
> Update of /cvsroot/python/python/dist/src
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv31228
> 
> Modified Files:
> 	README 
> Log Message:
> Updated some URLs; removed mention of copyright (we'll have to add
> something in later after that discussion is over); remove explanation
> of 2.0 version number.

I submit that this file needs some more editing for 2.0. For instance, it
mentions that 'some modules' will not compile on old SunOS compilers because
they are written in ANSI C. It also has a section on threads which needs to
be rewritten to reflect that threads are *on* by default, and explain how to
turn them off. I also think it should put some more emphasis on editing
Modules/Setup, which is commonly forgotten by newbies. Either that or make
some more things 'standard', like '*shared*'.

(It mentions '... editing a file, typing make, ...' in the overview, but
doesn't actually mention which file to edit until much later, in a sideways
kind of way in the machine-specific section, and even later in a separate
section.)
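
To make that concrete, the kind of Modules/Setup edit meant here might
look something like this (the module line is purely illustrative):

```
# Modules/Setup (excerpt) -- hypothetical example edits.
# Build every module listed after this line as a shared library:
*shared*

# Uncomment a module line to enable it (zlib shown as an example):
#zlib zlibmodule.c -lz
```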

It also has some teensy small bugs: it says "uncomment" when it should say
"comment out" in the Cray T3E section, and it's "glibc2" or "libc6", not
"glibc6", in the Linux section. (it's glibc version 2, but the interface
number is 6.) I would personally suggest removing that entire section, it's
a bit outdated. But the same might go for other sections!

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From effbot@telia.com  Wed Aug 23 20:50:21 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 21:50:21 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCIEIOHBAA.tim_one@email.msn.com>
Message-ID: <006601c00d3b$610f5440$f2a6b5d4@hagrid>

tim wrote:
> > I currently have three things on my slate:
> >
> >     17333 add os.popen2 support for Unix
> 
> Guido definitely wants this for 2.0, but there's no patch for it and no
> entry in PEP 200.  Jeremy, please add it.

to reduce my load somewhat, maybe someone who does
Python 2.0 development on a Unix box could produce that
patch?

(all our unix boxes are at the office, but I cannot run CVS
over SSH from there -- and sorting that one out will take
more time than I have right now...)

:::

anyway, fixing this is pretty straightforward:

1) move the class (etc) from popen2.py to os.py

2) modify the "if hasattr" stuff; change

    # popen2.py
    if hasattr(os, "popen2"):
        def popen2(...):
            # compatibility code, using os.popen2
    else:
        def popen2(...):
            # unix implementation

to

    # popen2.py
    def popen2(...):
        # compatibility code

    # os.py
    def popen2(...):
        # unix implementation, with the order of
        # the return values changed to (child_stdin,
        # child_stdout, child_stderr)
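
[A minimal runnable sketch of that step-2 dispatch; the popen2_compat
name and the NotImplementedError fallback are invented for illustration,
the real fallback would be the pure-Python unix implementation:]

```python
import os

def popen2_compat(cmd):
    # Compatibility shim: delegate to the platform's os.popen2 if it
    # exists; otherwise fall back to (here, merely signal the need
    # for) a pure-Python implementation.
    if hasattr(os, "popen2"):
        return os.popen2(cmd)
    raise NotImplementedError("no os.popen2 on this platform")
```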

:::

> > so to summarize, Python 2.0 will support the following
> > hex-escapes:
> >
> >    \xNN
> >    \uNNNN
> >    \UNNNNNNNN
> >
> > where the last two are only supported in Unicode and
> > SRE strings.
> 
> This apparently fell through the cracks, and I finally remembered it last
> Friday, and added them to PEP 200 recently.  Guido wants this in 2.0, and
> accepted them long before feature-freeze.  I'm currently writing a PEP for
> the \x change (because it has a surreal chance of breaking old code).  I
> haven't written any code for it.  The new \U escape is too minor to need a
> PEP (according to me).

if someone else can do the popen2 stuff, I'll take care
of this one!

</F>



From effbot@telia.com  Wed Aug 23 22:47:01 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 23:47:01 +0200
Subject: [Python-Dev] anyone tried Python 2.0 with Tk 8.3.2?
Message-ID: <002001c00d4b$acf32ca0$f2a6b5d4@hagrid>

doesn't work too well for me -- Tkinter._test() tends to hang
when I press quit (not every time, though).  the only way to
shut down the process is to reboot.

any ideas?

(msvc 5, win95).

</F>



From tim_one@email.msn.com  Wed Aug 23 22:30:23 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 17:30:23 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <006601c00d3b$610f5440$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEJIHBAA.tim_one@email.msn.com>

[/F, on "add os.popen2 support for Unix"]
> to reduce my load somewhat, maybe someone who does
> Python 2.0 development on a Unix box could produce that
> patch?

Sounds like a more than reasonable idea to me; heck, AFAIK, until you
mentioned you thought it was on your plate, we didn't think it was on
*anyone's* plate.  It simply "came up" on its own at the PythonLabs mtg
yesterday (which I misidentified as "Monday" in an earlier post).

Can we get a volunteer here?  Here's /F's explanation:

> anyway, fixing this is pretty straightforward:
>
> 1) move the class (etc) from popen2.py to os.py
>
> 2) modify the "if hasattr" stuff; change
>
>     # popen2.py
>     if hasattr(os, "popen2"):
>         def popen2(...):
>             # compatibility code, using os.popen2
>     else:
>         def popen2(...):
>             # unix implementation
>
> to
>
>     # popen2.py
>     def popen2(...):
>         # compatibility code
>
>     # os.py
>     def popen2(...):
>         # unix implementation, with the order of
>         # the return values changed to (child_stdin,
>         # child_stdout, child_stderr)

[on \x, \u and \U]
> if someone else can do the popen2 stuff, I'll take care
> of this one!

It's a deal as far as I'm concerned.  Thanks!  I'll finish the \x PEP
anyway, though, as it's already in progress.

Jeremy, please update PEP 200 accordingly (after you volunteer to do the
os.popen2 etc bit for Unix(tm) <wink>).




From effbot@telia.com  Wed Aug 23 22:59:50 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 23:59:50 +0200
Subject: [Python-Dev] Re: anyone tried Python 2.0 with Tk 8.3.2?
Message-ID: <000d01c00d4d$78a81c60$f2a6b5d4@hagrid>

I wrote:


> doesn't work too well for me -- Tkinter._test() tends to hang
> when I press quit (not every time, though).  the only way to
> shut down the process is to reboot.

hmm.  it looks like it's more likely to hang if the program
uses unicode strings.

    Tkinter._test() hangs about two times out of three

    same goes for a simple test program that passes a
    unicode string constant (containing Latin-1 chars)
    to a Label

    the same test program using a Latin-1 string (which,
    I suppose, is converted to Unicode inside Tk) hangs
    in about 1/3 of the runs.

    the same test program with a pure ASCII string
    never hangs...

confusing.

</F>



From thomas@xs4all.net  Wed Aug 23 22:53:45 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 23:53:45 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
Message-ID: <20000823235345.C7566@xs4all.nl>

While re-writing the PyNumber_InPlace*() functions in augmented assignment
to something Guido and I agree on should be the Right Way, I found something
that *might* be a bug. But I'm not sure.

The PyNumber_*() methods for binary operations (found in abstract.c) have
the following construct:

        if (v->ob_type->tp_as_number != NULL) {
                PyObject *x = NULL;
                PyObject * (*f)(PyObject *, PyObject *);
                if (PyNumber_Coerce(&v, &w) != 0)
                        return NULL;
                if ((f = v->ob_type->tp_as_number->nb_xor) != NULL)
                        x = (*f)(v, w);
                Py_DECREF(v);
                Py_DECREF(w);
                if (f != NULL)
                        return x;
        }

(This is after a check if either argument is an instance object, so both are
C objects here.) Now, I'm not sure how coercion is supposed to work, but I
see one problem here: 'v' can be changed by PyNumber_Coerce(), and the new
object's tp_as_number pointer could be NULL. I bet it's pretty unlikely that
(numeric) coercion of a numeric object and an unspecified object turns up a
non-numeric object, but I don't see anything guaranteeing it won't, either.

Is this a non-issue, or should I bother adding the extra check to the
current binary operations (and the new inplace ones)?
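
[For what it's worth, here is a toy Python model of the hazard; all names
are invented -- the real code is C -- and this only shows why re-checking
the method table after coercion matters:]

```python
class Num:
    # plays the role of an object whose tp_as_number slot is filled
    def nb_xor(self, other):
        return "xor-result"

class Plain:
    # no nb_xor: plays the role of tp_as_number == NULL
    pass

def coerce_pair(v, w):
    # pretend coercion replaces v with a non-numeric object
    return Plain(), w

def binary_xor(v, w):
    v, w = coerce_pair(v, w)
    # the lookup must happen *after* coercion, on the (new) v
    f = getattr(type(v), "nb_xor", None)
    if f is None:
        return NotImplemented
    return f(v, w)
```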

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Wed Aug 23 22:58:30 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 23 Aug 2000 17:58:30 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEJIHBAA.tim_one@email.msn.com>
References: <006601c00d3b$610f5440$f2a6b5d4@hagrid>
 <LNBBLJKPBEHFEDALKOLCAEJIHBAA.tim_one@email.msn.com>
Message-ID: <14756.18694.812840.428389@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Sounds like a more than reasonable idea to me; heck, AFAIK, until you
 > mentioned you thought it was on your plate, we didn't think it was on
 > *anyone's* plate.  It simply "came up" on its own at the PythonLabs mtg
 > yesterday (which I misidentified as "Monday" in an earlier post).
...
 > Jeremy, please update PEP 200 accordingly (after you volunteer to do the
 > os.popen2 etc bit for Unix(tm) <wink>).

  Note that Guido asked me to do this, and I've updated the SF Task
Manager with the appropriate information.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From guido@beopen.com  Thu Aug 24 00:08:13 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 18:08:13 -0500
Subject: [Python-Dev] The Python 1.6 License Explained
Message-ID: <200008232308.SAA02986@cj20424-a.reston1.va.home.com>

[Also posted to c.l.py]

With BeOpen's help, CNRI has prepared a FAQ about the new license
which should answer those questions.  The official URL for the Python
1.6 license FAQ is http://www.python.org/1.6/license_faq.html (soon on
a mirror site near you), but I'm also appending it here.

We expect that we will be able to issue the final 1.6 release very
soon.  We're also working hard on the first beta release of Python
2.0, slated for September 4; the final 2.0 release should be ready in
October.  See http://www.pythonlabs.com/tech/python2.html for
up-to-date 2.0 information.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

Python 1.6 License FAQ

This FAQ addresses questions concerning the CNRI Open Source License
and its impact on past and future Python releases. The text below has
been approved for posting on the Python website and newsgroup by
CNRI's president, Dr. Robert E. Kahn.

    1. The old Python license from CWI worked well for almost 10
    years. Why a new license for Python 1.6?

      CNRI claims copyright in Python code and documentation from
      releases 1.3 through 1.6 inclusive.  However, for a number of
      technical reasons, CNRI never formally licensed this work for
      Internet download, although it did permit Guido to share the
      results with the Python community. As none of this work was
      published either, there were no CNRI copyright notices placed on
      these Python releases prior to 1.6. A CNRI copyright notice will
      appear on the official release of Python 1.6. The CNRI license
      was created to clarify for all users that CNRI's intent is to
      enable Python extensions to be developed in an extremely open
      form, in the best interests of the Python community.

    2. Why isn't the new CNRI license as short and simple as the CWI
    license? Are there any issues with it?

      A license is a legally binding document, and the CNRI Open
      Source License is -- according to CNRI -- as simple as they were
      able to make it at this time while still maintaining a balance
      between the need for access and other use of Python with CNRI's
      rights.

    3. Are you saying that the CWI license did not protect our rights?

      CNRI has held copyright and other rights to the code but never
      codified them into a CNRI-blessed license prior to 1.6. The CNRI
      Open Source License is a binding contract between CNRI and
      Python 1.6's users and, unlike the CWI statement, cannot be
      revoked except for a material breach of its terms.  So this
      provides a licensing certainty to Python users that never really
      existed before.

    4. What is CNRI's position on prior Python releases, e.g. Python
    1.5.2?

      Releases of Python prior to 1.6 were shared with the community
      without a formal license from CNRI.  The CWI Copyright Notice
      and Permissions statement (which was included with Python
      releases prior to 1.6b1), as well as the combined CWI-CNRI
      disclaimer, were required to be included as a condition for
      using the prior Python software. CNRI does not intend to require
      users of prior versions of Python to upgrade to 1.6 unless they
      voluntarily choose to do so.

    5. OK, on to the new license. Is it an Open Source license?

      Yes. The board of the Open Source Initiative certified the CNRI
      Open Source License as being fully Open Source compliant.

    6. Has it been approved by the Python Consortium?

      Yes, the Python Consortium members approved the new CNRI Open
      Source License at a meeting of the Python Consortium on Friday,
      July 21, 2000 in Monterey, California.

    7. Is it compatible with the GNU General Public License (GPL)?

      Legal counsel for both CNRI and BeOpen.com believe that it is
      fully compatible with the GPL. However, the Free Software
      Foundation attorney and Richard Stallman believe there may be
      one incompatibility, i.e., the CNRI License specifies a legal
      venue to interpret its License while the GPL is silent on the
      issue of jurisdiction. Resolution of this issue is being
      pursued.

    8. So that means it has a GPL-like "copyleft" provision?

      No. "GPL-compatible" means that code licensed under the terms of
      the CNRI Open Source License may be combined with GPLed
      code. The CNRI license imposes fewer restrictions than does the
      GPL.  There is no "copyleft" provision in the CNRI Open Source
      License.

    9. So it supports proprietary ("closed source") use of Python too?

      Yes, provided you abide by the terms of the CNRI license and
      also include the CWI Copyright Notice and Permissions Statement.

   10. I have some questions about those! First, about the "click to
   accept" business. What if I have a derivative work that has no GUI?

      As the text says, "COPYING, INSTALLING OR OTHERWISE USING THE
      SOFTWARE" also constitutes agreement to the terms of the
      license, so there is no requirement to use the click to accept
      button if that is not appropriate. CNRI prefers to offer the
      software via the Internet by first presenting the License and
      having a prospective user click an Accept button. Others may
      offer it in different forms (e.g.  CD-ROM) and thus clicking the
      Accept button is one means but not the only one.

   11. Virginia is one of the few states to have adopted the Uniform
   Computer Information Transactions Act, and paragraph 7 requires
   that the license be interpreted under Virginia law.  Is the "click
   clause" a way to invoke UCITA?

      CNRI needs a body of law to define what its License means, and,
      since its headquarters are in Virginia, Virginia law is a
      logical choice. The adoption of UCITA in Virginia was not a
      motivating factor. If CNRI didn't require that its License be
      interpreted under Virginia law, then anyone could interpret the
      license under very different laws than the ones under which it
      is intended to be interpreted. In particular in a jurisdiction
      that does not recognize general disclaimers of liability (such
      as in CNRI license's paragraphs 4 and 5).

   12. Suppose I embed Python in an application such that the user
   neither knows nor cares about the existence of Python. Does the
   install process have to inform my app's users about the CNRI
   license anyway?

      No, the license does not specify this. For example, in addition
      to including the License text in the License file of a program
      (or in the installer as well), you could just include a
      reference to it in the Readme file.  There is also no need to
      include the full License text in the program (the License
      provides for an alternative reference using the specified handle
      citation). Usage of the software amounts to license acceptance.

   13. In paragraph 2, does "provided, however, that CNRI's License
   Agreement is retained in Python 1.6 beta 1, alone or in any
   derivative version prepared by Licensee" mean that I can make and
   retain a derivative version of the license instead?

      The above statement applies to derivative versions of Python 1.6
      beta 1. You cannot revise the CNRI License. You must retain the
      CNRI License (or its defined reference to it)
      verbatim. However, you can make derivative works and license
      them as a whole under a different but compatible license.

   14. Since I have to retain the CNRI license in my derivative work,
   doesn't that mean my work must be released under exactly the same
   terms as Python?

      No. Paragraph 1 explicitly names Python 1.6 beta 1 as the only
      software covered by the CNRI license.  Since it doesn't name
      your derivative work, your derivative work is not bound by the
      license (except to the extent that it binds you to meet the
      requirements with respect to your use of Python 1.6). You are,
      of course, free to add your own license distributing your
      derivative work under terms similar to the CNRI Open Source
      License, but you are not required to do so.

      In other words, you cannot change the terms under which CNRI
      licenses Python 1.6, and must retain the CNRI License Agreement
      to make that clear, but you can (via adding your own license)
      set your own terms for your derivative works. Note that there is
      no requirement to distribute the Python source code either, if
      this does not make sense for your application.

   15.Does that include, for example, releasing my derivative work
   under the GPL?

      Yes, but you must retain the CNRI License Agreement in your
      work, and it will continue to apply to the Python 1.6 beta 1
      portion of your work (as is made explicit in paragraph 1 of the
      CNRI License).

   16.With regard to paragraph 3, what does "make available to the
   public" mean? If I embed Python in an application and make it
   available for download on the Internet, does that fit the meaning
   of this clause?

      Making the application generally available for download on the
      Internet would be making it available to the public.

   17.In paragraph 3, what does "indicate in any such work the nature
   of the modifications made to Python 1.6 beta 1" mean? Do you mean I
   must publish a patch? A textual description? If a description, how
   detailed must it be? For example, is "Assorted speedups"
   sufficient? Or "Ported to new architecture"? What if I merely add a
   new Python module, or C extension module? Does that constitute "a
   modification" too? What if I just use the freeze tool to change the
   way the distribution is packaged? Or change the layout of files and
   directories from the way CNRI ships them? Or change some file names
   to match my operating system's restrictions?  What if I merely use
   the documentation, as a basis for a brand new implementation of
   Python?

      This license clause is in discussion right now. CNRI has stated
      that the intent is just to have people provide a very high level
      summary of changes, e.g. includes new features X, Y and Z. There
      is no requirement for a specific level of detail. Work is in
      progress to clarify the intent of this clause and make the
      expected standard clearer. CNRI has already indicated
      that whatever has been done in the past to indicate changes in
      Python releases would be sufficient.

   18.In paragraph 6, is automatic termination of the license upon
   material breach immediate?

      Yes. CNRI preferred to give the users a 60 day period to cure
      any deficiencies, but this was deemed incompatible with the GPL
      and CNRI reluctantly agreed to use the automatic termination
      language instead.

   19.Many licenses allow a 30 to 60 day period during which breaches
   can be corrected.

      Immediate termination is actually required for GPL
      compatibility, as the GPL terminates immediately upon a material
      breach. However, there is little you can do to breach the
      license based on usage of the code, since almost any usage is
      allowed by the license. You can breach it by not including the
      appropriate License information or by misusing CNRI's name and
      logo - to give two examples. As indicated above, CNRI actually
      preferred a 60 day cure period but GPL-compatibility required
      otherwise. In practice, the immediate termination clause is
      likely to have no substantive effect. Since breaches are simple
      to cure, most will have no substantive liability associated with
      them. CNRI can take legal steps to prevent egregious and
      persistent offenders from relicensing the code, but this is a
      step they will not take cavalierly.

   20.What if people already downloaded a million copies of my
   derivative work before CNRI informs me my license has been
   terminated? What am I supposed to do then? Contact every one of
   them and tell them to download a new copy? I won't even know who
   they are!

      This is really up to the party that chooses to enforce such
      licensing. With the cure period removed for compliance with the
      GPL, CNRI is under no obligation to inform you of a
      termination. If you repair any such breach then you are in
      conformance with the License. Enforcement of the CNRI License is
      up to CNRI. Again, there are very few ways to violate the
      license.

   21.Well, I'm not even sure what "material breach" means. What's an
   example?

      This is a well-defined legal term. Very few examples of breaches
      can be given, because the CNRI license imposes very few
      requirements on you. A clear example is if you violate the
      requirement in paragraph 2 to retain CNRI's License Agreement
      (or their defined reference to it) in derivative works.  So
      simply retain the agreement, and you'll have no problem with
      that. Also, if you don't misuse CNRI's name and logo you'll be
      fine.

   22.OK, I'll retain the License Agreement in my derivative works.
   Does that mean my users and I then enter into this license
   agreement too?

      Yes, with CNRI but not with each other. As explained in
      paragraph 1, the license is between CNRI and whoever is using
      Python 1.6 beta 1.

   23.So you mean that everyone who uses my derivative work is
   entering into a contract with CNRI?

      With respect to the Python 1.6 beta 1 portion of your work,
      yes. This is what assures their right to use the Python 1.6 beta
      1 portion of your work (which is licensed by CNRI, not by you),
      regardless of whatever other restrictions you may impose in
      your license.

   24.In paragraph 7, is the name "Python" a "CNRI trademark or trade
   name"?

      CNRI has certain trademark rights based on its use of the name
      Python. CNRI has begun discussing an orderly transition of the
      www.python.org site with Guido and the trademark matters will be
      addressed in that context.

   25.Will the license change for Python 2.0?

      BeOpen.com, which is leading future Python development, will make
      that determination at the appropriate time. Throughout the
      licensing process, BeOpen.com will be working to keep things as
      simple and as compatible with existing licenses as
      possible. BeOpen.com will add its copyright notice to Python but
      understands the complexities of licensing and so will work to
      avoid adding any further confusion on any of these issues. This
      is why BeOpen.com and CNRI are working together now to finalize
      a license.

   26.What about the copyrights? Will CNRI assign its copyright on
   Python to BeOpen.com or to Guido? If you say you want to clarify
   the legal status of the code, establishing a single copyright
   holder would go a long way toward achieving that!

      There is no need for a single copyright holder. Most composite
      works involve licensing of rights from parties that hold the
      rights to others that wish to make use of them. CNRI will retain
      copyright to its work on Python. CNRI has also worked to get wet
      signatures for major contributions to Python which assign rights
      to it, and email agreements to use minor contributions, so that
      it can license the bulk of the Python system for the public
      good. CNRI also worked with Guido van Rossum and CWI to clarify
      the legal status with respect to permissions for Python 1.2 and
      earlier versions.

August 23, 2000


From guido@beopen.com  Thu Aug 24 00:25:57 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 18:25:57 -0500
Subject: [Python-Dev] Re: anyone tried Python 2.0 with Tk 8.3.2?
In-Reply-To: Your message of "Wed, 23 Aug 2000 23:59:50 +0200."
 <000d01c00d4d$78a81c60$f2a6b5d4@hagrid>
References: <000d01c00d4d$78a81c60$f2a6b5d4@hagrid>
Message-ID: <200008232325.SAA03130@cj20424-a.reston1.va.home.com>

> > doesn't work too well for me -- Tkinter._test() tends to hang
> > when I press quit (not every time, though).  the only way to
> > shut down the process is to reboot.
> 
> hmm.  it looks like it's more likely to hang if the program
> uses unicode strings.
> 
>     Tkinter._test() hangs about 2 times out of three
> 
>     same goes for a simple test program that passes a
>     unicode string constant (containing Latin-1 chars)
>     to a Label
> 
>     the same test program using a Latin-1 string (which,
>     I suppose, is converted to Unicode inside Tk) hangs
>     in about 1/3 of the runs.
> 
>     the same test program with a pure ASCII string
>     never hangs...
> 
> confusing.

Try going back to Tk 8.2.

We had this problem with Tk 8.3.1 in Python 1.6a1; for a2, I went back
to 8.2.x (the latest).  Then for 1.6b1 I noticed that 8.3.2 was out
and after a light test it appeared to be fine, so I switched to
8.3.2.  But I've seen this too, and maybe 8.3 still isn't stable
enough.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Thu Aug 24 00:28:03 2000
From: guido@beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 18:28:03 -0500
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: Your message of "Wed, 23 Aug 2000 23:53:45 +0200."
 <20000823235345.C7566@xs4all.nl>
References: <20000823235345.C7566@xs4all.nl>
Message-ID: <200008232328.SAA03141@cj20424-a.reston1.va.home.com>

> While re-writing the PyNumber_InPlace*() functions in augmented assignment
> to something Guido and I agree on should be the Right Way, I found something
> that *might* be a bug. But I'm not sure.
> 
> The PyNumber_*() methods for binary operations (found in abstract.c) have
> the following construct:
> 
>         if (v->ob_type->tp_as_number != NULL) {
>                 PyObject *x = NULL;
>                 PyObject * (*f)(PyObject *, PyObject *);
>                 if (PyNumber_Coerce(&v, &w) != 0)
>                         return NULL;
>                 if ((f = v->ob_type->tp_as_number->nb_xor) != NULL)
>                         x = (*f)(v, w);
>                 Py_DECREF(v);
>                 Py_DECREF(w);
>                 if (f != NULL)
>                         return x;
>         }
> 
> (This is after a check if either argument is an instance object, so both are
> C objects here.) Now, I'm not sure how coercion is supposed to work, but I
> see one problem here: 'v' can be changed by PyNumber_Coerce(), and the new
> object's tp_as_number pointer could be NULL. I bet it's pretty unlikely that
> (numeric) coercion of a numeric object and an unspecified object turns up a
> non-numeric object, but I don't see anything guaranteeing it won't, either.
> 
> Is this a non-issue, or should I bother with adding the extra check in the
> current binary operations (and the new inplace ones) ?

I think this currently can't happen because coercions never return
non-numeric objects, but it sounds like a good sanity check to add.

Please check this in as a separate patch (not as part of the huge
augmented assignment patch).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From martin@loewis.home.cs.tu-berlin.de  Thu Aug 24 00:09:41 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 24 Aug 2000 01:09:41 +0200
Subject: [Python-Dev] Re: anyone tried Python 2.0 with Tk 8.3.2?
Message-ID: <200008232309.BAA01070@loewis.home.cs.tu-berlin.de>

> hmm.  it looks like it's more likely to hang if the program
> uses unicode strings.

Are you sure it hangs? It may just take a lot of time to determine
which font is best to display the strings.

Of course, if it is not done after an hour or so, it probably hangs...
Alternatively, a debugger could tell what it is actually doing.

Regards,
Martin


From thomas@xs4all.net  Thu Aug 24 00:15:20 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 01:15:20 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: <200008232328.SAA03141@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 23, 2000 at 06:28:03PM -0500
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com>
Message-ID: <20000824011519.D7566@xs4all.nl>

On Wed, Aug 23, 2000 at 06:28:03PM -0500, Guido van Rossum wrote:

> > Now, I'm not sure how coercion is supposed to work, but I see one
> > problem here: 'v' can be changed by PyNumber_Coerce(), and the new
> > object's tp_as_number pointer could be NULL. I bet it's pretty unlikely
> > that (numeric) coercion of a numeric object and an unspecified object
> > turns up a non-numeric object, but I don't see anything guaranteeing it
> > won't, either.

> I think this currently can't happen because coercions never return
> non-numeric objects, but it sounds like a good sanity check to add.

> Please check this in as a separate patch (not as part of the huge
> augmented assignment patch).

Alright, checking it in after 'make test' finishes. I'm also removing some
redundant PyInstance_Check() calls in PyNumber_Multiply: the first thing in
that function is a BINOP call, which expands to

        if (PyInstance_Check(v) || PyInstance_Check(w)) \
                return PyInstance_DoBinOp(v, w, opname, ropname, thisfunc)

So after the BINOP call, neither argument can be an instance, anyway.


Also, I'll take this opportunity to explain what I'm doing with the
PyNumber_InPlace* functions, for those who are interested. The comment I'm
placing in the code should be enough information:

/* The in-place operators are defined to fall back to the 'normal',
   non in-place operations, if the in-place methods are not in place, and to
   take class instances into account. This is how it is supposed to work:

   - If the left-hand-side object (the first argument) is an
     instance object, let PyInstance_DoInPlaceOp() handle it.  Pass the
     non in-place variant of the function as callback, because it will only
     be used if any kind of coercion has been done, and if an object has
     been coerced, it's a new object and shouldn't be modified in-place.

   - Otherwise, if the object has the appropriate struct members, and they
     are filled, call that function and return the result. No coercion is
     done on the arguments; the left-hand object is the one the operation is
     performed on, and it's up to the function to deal with the right-hand
     object.

   - Otherwise, if the second argument is an Instance, let
     PyInstance_DoBinOp() handle it, but not in-place. Again, pass the
     non in-place function as callback.

   - Otherwise, both arguments are C objects. Try to coerce them and call
     the ordinary (not in-place) function-pointer from the type struct.
     
   - Otherwise, we are out of options: raise a type error.

   */
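
The same fall-back behaviour is visible at the Python level today: when a
type defines no in-place method, augmented assignment falls back to the
ordinary binary operation and rebinds the name to the new object. A minimal
sketch (the Vec class here is a made-up illustration, not part of the patch):

```python
class Vec:
    """Toy type that defines only the ordinary (non in-place) operator."""
    def __init__(self, xs):
        self.xs = list(xs)
    def __add__(self, other):
        # Ordinary addition: always returns a brand-new object.
        return Vec(self.xs + other.xs)

v = Vec([1, 2])
alias = v
v += Vec([3])          # no __iadd__ defined, so this falls back to __add__
assert v.xs == [1, 2, 3]
assert v is not alias  # the fallback rebound the name to a new object
assert alias.xs == [1, 2]
```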

If anyone sees room for unexpected behaviour under these rules, let me know
and you'll get an XS4ALL shirt! (Sorry, only ones I can offer ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From DavidA@ActiveState.com  Thu Aug 24 01:25:55 2000
From: DavidA@ActiveState.com (David Ascher)
Date: Wed, 23 Aug 2000 17:25:55 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] [Announce] ActivePython 1.6 beta release (fwd)
Message-ID: <Pine.WNT.4.21.0008231725340.272-100000@loom>

It is my pleasure to announce the availability of the beta release of
ActivePython 1.6, build 100.

This binary distribution, based on Python 1.6b1, is available from
ActiveState's website at:

    http://www.ActiveState.com/Products/ActivePython/

ActiveState is committed to making Python easy to install and use on all
major platforms. ActivePython combines the convenience of swift
installation with commonly used modules, providing you with a total
package to meet your Python needs. Additionally, for Windows users,
ActivePython provides a suite of Windows tools, developed by Mark Hammond.

ActivePython is provided in convenient binary form for Windows, Linux and
Solaris under a variety of installation packages, available at:

    http://www.ActiveState.com/Products/ActivePython/Download.html

For support information, mailing list subscriptions and archives, a bug
reporting system, and fee-based technical support, please go to

    http://www.ActiveState.com/Products/ActivePython/

Please send us feedback regarding this release, either through the mailing
list or through direct email to ActivePython-feedback@ActiveState.com.

ActivePython is free, and redistribution of ActivePython within your
organization is allowed.  The ActivePython license is available at
http://www.activestate.com/Products/ActivePython/License_Agreement.html
and in the software packages.

We look forward to your comments and to making ActivePython suit your
Python needs in future releases.

Thank you,

-- David Ascher & the ActivePython team
   ActiveState Tool Corporation


From tim_one@email.msn.com  Thu Aug 24 04:39:43 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 23:39:43 -0400
Subject: [Python-Dev] [PEP 223]  Change the Meaning of \x Escapes
Message-ID: <LNBBLJKPBEHFEDALKOLCOEKFHBAA.tim_one@email.msn.com>

An HTML version of the attached can be viewed at

    http://python.sourceforge.net/peps/pep-0223.html

This will be adopted for 2.0 unless there's an uproar.  Note that it *does*
have potential for breaking existing code -- although no real-life instance
of incompatibility has yet been reported.  This is explained in detail in
the PEP; check your code now.

although-if-i-were-you-i-wouldn't-bother<0.5-wink>-ly y'rs  - tim


PEP: 223
Title: Change the Meaning of \x Escapes
Version: $Revision: 1.4 $
Author: tpeters@beopen.com (Tim Peters)
Status: Active
Type: Standards Track
Python-Version: 2.0
Created: 20-Aug-2000
Post-History: 23-Aug-2000


Abstract

    Change \x escapes, in both 8-bit and Unicode strings, to consume
    exactly the two hex digits following.  The proposal views this as
    correcting an original design flaw, leading to clearer expression
    in all flavors of string, a cleaner Unicode story, better
    compatibility with Perl regular expressions, and with minimal risk
    to existing code.


Syntax

    The syntax of \x escapes, in all flavors of non-raw strings, becomes

        \xhh

    where h is a hex digit (0-9, a-f, A-F).  The exact syntax in 1.5.2 is
    not clearly specified in the Reference Manual; it says

        \xhh...

    implying "two or more" hex digits, but one-digit forms are also
    accepted by the 1.5.2 compiler, and a plain \x is "expanded" to
    itself (i.e., a backslash followed by the letter x).  It's unclear
    whether the Reference Manual intended either of the 1-digit or
    0-digit behaviors.


Semantics

    In an 8-bit non-raw string,
        \xij
    expands to the character
        chr(int(ij, 16))
    Note that this is the same as in 1.6 and before.

    In a Unicode string,
        \xij
    acts the same as
        \u00ij
    i.e. it expands to the obvious Latin-1 character from the initial
    segment of the Unicode space.

    An \x not followed by at least two hex digits is a compile-time error,
    specifically ValueError in 8-bit strings, and UnicodeError (a subclass
    of ValueError) in Unicode strings.  Note that if an \x is followed by
    more than two hex digits, only the first two are "consumed".  In 1.6
    and before all but the *last* two were silently ignored.


Example

    In 1.5.2:

        >>> "\x123465"  # same as "\x65"
        'e'
        >>> "\x65"
        'e'
        >>> "\x1"
        '\001'
        >>> "\x\x"
        '\\x\\x'
        >>>

    In 2.0:

        >>> "\x123465" # \x12 -> \022, "3465" left alone
        '\0223465'
        >>> "\x65"
        'e'
        >>> "\x1"
        [ValueError is raised]
        >>> "\x\x"
        [ValueError is raised]
        >>>
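
    The proposed 2.0 rules can be checked mechanically in any
    interpreter that implements them (modern Pythons do):

```python
# \x consumes exactly the two hex digits that follow it.
assert "\x123465" == "\x12" + "3465"
assert len("\x123465") == 5   # one escaped byte plus four literal chars
assert "\x65" == "e"
# The expansion matches the stated semantics: chr(int(ij, 16)).
assert "\x41" == chr(int("41", 16)) == "A"
```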


History and Rationale

    \x escapes were introduced in C as a way to specify variable-width
    character encodings.  Exactly which encodings those were, and how many
    hex digits they required, was left up to each implementation.  The
    language simply stated that \x "consumed" *all* hex digits following,
    and left the meaning up to each implementation.  So, in effect, \x in C
    is a standard hook to supply platform-defined behavior.

    Because Python explicitly aims at platform independence, the \x escape
    in Python (up to and including 1.6) has been treated the same way
    across all platforms:  all *except* the last two hex digits were
    silently ignored.  So the only actual use for \x escapes in Python was
    to specify a single byte using hex notation.

    Larry Wall appears to have realized that this was the only real use for
    \x escapes in a platform-independent language, as the proposed rule for
    Python 2.0 is in fact what Perl has done from the start (although you
    need to run in Perl -w mode to get warned about \x escapes with fewer
    than 2 hex digits following -- it's clearly more Pythonic to insist on
    2 all the time).

    When Unicode strings were introduced to Python, \x was generalized so
    as to ignore all but the last *four* hex digits in Unicode strings.
    This caused a technical difficulty for the new regular expression
    engine:  SRE tries very hard to allow mixing 8-bit and Unicode
    patterns and strings in intuitive ways, and it no longer had any way
    to guess what, for example, r"\x123456" should mean as a pattern:  is
    it asking to match the 8-bit character \x56 or the Unicode character
    \u3456?

    There are hacky ways to guess, but it doesn't end there.  The ISO C99
    standard also introduces 8-digit \U12345678 escapes to cover the entire
    ISO 10646 character space, and it's also desired that Python 2 support
    that from the start.  But then what are \x escapes supposed to mean?
    Do they ignore all but the last *eight* hex digits then?  And if
    fewer than 8 follow in a Unicode string, all but the last 4?  And if
    fewer than 4, all but the last 2?

    This was getting messier by the minute, and the proposal cuts the
    Gordian knot by making \x simpler instead of more complicated.  Note
    that the 4-digit generalization to \xijkl in Unicode strings was also
    redundant, because it meant exactly the same thing as \uijkl in Unicode
    strings.  It's more Pythonic to have just one obvious way to specify a
    Unicode character via hex notation.


Development and Discussion

    The proposal was worked out among Guido van Rossum, Fredrik Lundh and
    Tim Peters in email.  It was subsequently explained and discussed on
    Python-Dev under subject "Go \x yourself", starting 2000-08-03.
    Response was overwhelmingly positive; no objections were raised.


Backward Compatibility

    Changing the meaning of \x escapes does carry risk of breaking existing
    code, although no instances of incompatibility have yet been discovered.
    The risk is believed to be minimal.

    Tim Peters verified that, except for pieces of the standard test suite
    deliberately provoking end cases, there are no instances of \xabcdef...
    with fewer or more than 2 hex digits following, in either the Python
    CVS development tree, or in assorted Python packages sitting on his
    machine.

    It's unlikely there are any with fewer than 2, because the Reference
    Manual implied they weren't legal (although this is debatable!).  If
    there are any with more than 2, Guido is ready to argue they were buggy
    anyway <0.9 wink>.

    Guido reported that the O'Reilly Python books *already* document that
    Python works the proposed way, likely due to their Perl editing
    heritage (as above, Perl worked (very close to) the proposed way from
    its start).

    Finn Bock reported that what JPython does with \x escapes is
    unpredictable today.  This proposal gives a clear meaning that can be
    consistently and easily implemented across all Python implementations.


Effects on Other Tools

    Believed to be none.  The candidates for breakage would mostly be
    parsing tools, but the author knows of none that worry about the
    internal structure of Python strings beyond the approximation "when
    there's a backslash, swallow the next character".  Tim Peters checked
    python-mode.el, the std tokenize.py and pyclbr.py, and the IDLE syntax
    coloring subsystem, and believes there's no need to change any of
    them.  Tools like tabnanny.py and checkappend.py inherit their immunity
    from tokenize.py.


Reference Implementation

    The code changes are so simple that a separate patch will not be
    produced.
    Fredrik Lundh is writing the code, is an expert in the area, and will
    simply check the changes in before 2.0b1 is released.


BDFL Pronouncements

    Yes, ValueError, not SyntaxError.  "Problems with literal
    interpretations traditionally raise 'runtime' exceptions rather than
    syntax errors."


Copyright

    This document has been placed in the public domain.




From guido@beopen.com  Thu Aug 24 06:34:15 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 00:34:15 -0500
Subject: [Python-Dev] [PEP 223] Change the Meaning of \x Escapes
In-Reply-To: Your message of "Wed, 23 Aug 2000 23:39:43 -0400."
 <LNBBLJKPBEHFEDALKOLCOEKFHBAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCOEKFHBAA.tim_one@email.msn.com>
Message-ID: <200008240534.AAA00885@cj20424-a.reston1.va.home.com>

> An HTML version of the attached can be viewed at
> 
>     http://python.sourceforge.net/peps/pep-0223.html

Nice PEP!

> Effects on Other Tools
> 
>     Believed to be none.  [...]

I believe that Fredrik also needs to fix SRE's interpretation of \xhh.
Unless he's already done that.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From tim_one@email.msn.com  Thu Aug 24 06:31:04 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 24 Aug 2000 01:31:04 -0400
Subject: [Python-Dev] [PEP 223] Change the Meaning of \x Escapes
In-Reply-To: <200008240534.AAA00885@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEKNHBAA.tim_one@email.msn.com>

[Guido]
> Nice PEP!

Thanks!  I thought the kids could stand a simple example of what you'd like
to read <wink>.

> I believe that Fredrik also needs to fix SRE's interpretation of \xhh.
> Unless he's already done that.

I'm sure he's acutely aware of that, since that's how this started!  And
he's implementing \x in strings too.  I knew you wouldn't read it to the end
<0.9 wink>.

put-the-refman-stuff-briefly-at-the-front-and-save-the-blather-for-
    the-end-ly y'rs  - tim




From ping@lfw.org  Thu Aug 24 10:14:12 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Thu, 24 Aug 2000 04:14:12 -0500 (CDT)
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import
 something as'
In-Reply-To: <200008231622.LAA02275@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008240353290.10936-100000@server1.lfw.org>

On Wed, 23 Aug 2000, Guido van Rossum wrote:
> [Ping]
> > Looks potentially useful to me.  If nothing else, it's certainly
> > easier to explain than any other behaviour i could think of, since
> > assignment is already well-understood.
> 
> KISS suggests not to add it.  We had a brief discussion about this at
> our 2.0 planning meeting and nobody there thought it would be worth
> it, and several of us felt it would be asking for trouble.

What i'm trying to say is that it's *easier* to explain "import as"
with Thomas' enhancement than without it.

The current explanation of "import <x> as <y>" is something like

    Find and import the module named <x> and assign it to <y>
    in the normal way you do assignment, except <y> has to be
    a pure name.

Thomas' suggestion lifts the restriction and makes the explanation
simpler than it would have been:

    Find and import the module named <x> and assign it to <y>
    in the normal way you do assignment.

"The normal way you do assignment" is shorthand for "decide
whether to assign to the local or global namespace depending on
whether <y> has been assigned to in the current scope, unless
<y> has been declared global with a 'global' statement" -- and
that applies in any case.  Luckily, it's a concept that has
been explained before and which Python programmers already
need to understand anyway.

The net effect is essentially a direct translation to

    <y> = __import__("<x>")
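
For a plain (non-dotted) module name, that translation can be demonstrated
directly; the standard string module is just a stand-in here:

```python
# "import string as s" binds the same object as the explicit call:
s = __import__("string")
import string
assert s is string
assert s.digits == "0123456789"
```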

> > "import foo.bar as spam" makes me uncomfortable because:
> > 
> >     (a) It's not clear whether spam should get foo or foo.bar, as
> >         evidenced by the discussion between Gordon and Thomas.
> 
> As far as I recall that conversation, it's just that Thomas (more or
> less accidentally) implemented what was easiest from the
> implementation's point of view without thinking about what it should
> mean.  *Of course* it should mean what I said if it's allowed.  Even
> Thomas agrees to that now.

Careful:

    import foo.bar          "import the package named foo and its submodule bar,
                             then put *foo* into the current namespace"
    import foo.bar as spam  "import the package named foo and its submodule bar,
                             then put *bar* into the current namespace, as spam"

Only this case causes import to import a *different* object just because
you used "as".

    import foo              "import the module named foo, then put foo into
                             the current namespace"
    import foo as spam      "import the module named foo, then put foo into
                             the current namespace, as spam"

The above, and all the other forms of "import ... as", put the *same*
object into the current namespace as they would have done, without the
"as" clause.

> >     (b) There's a straightforward and unambiguous way to express
> >         this already: "from foo import bar as spam".
> 
> Without syntax coloring that looks like word soup to me.
> 
>   import foo.bar as spam
> 
> uses fewer words to say the same clearer.

But then:

        from foo import bar as spam    # give me bar, but name it spam
        import foo.bar as spam         # give me bar, but name it spam

are two ways to say the same thing -- but only if bar is a module.
If bar happens to be some other kind of symbol, the first works but
the second doesn't!

Not so without "as spam":

        from foo import bar            # give me bar
        import foo.bar                 # give me foo
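
The asymmetry is easy to verify: for a dotted name, __import__ hands back
the *top-level* package, which is what the plain import statement binds.
A quick sketch, using the standard os.path as a stand-in for foo.bar:

```python
import os
# "import os.path" binds the name 'os' (the top-level package) ...
top = __import__("os.path")
assert top is os
# ... while "from os import path" binds the submodule itself.
from os import path as spam
assert spam is os.path
```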

> > That is, would
> > 
> >     import foo.bar as spam
> > 
> > define just spam or both foo and spam?
> 
> Aargh!  Just spam, of course!

I apologize if this is annoying you.  I hope you see the inconsistency
that i'm trying to point out, though.  If you see it and decide that
it's okay to live with the inconsistency, that's okay with me.


-- ?!ng



From thomas@xs4all.net  Thu Aug 24 11:18:58 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 12:18:58 +0200
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import something as'
In-Reply-To: <Pine.LNX.4.10.10008240353290.10936-100000@server1.lfw.org>; from ping@lfw.org on Thu, Aug 24, 2000 at 04:14:12AM -0500
References: <200008231622.LAA02275@cj20424-a.reston1.va.home.com> <Pine.LNX.4.10.10008240353290.10936-100000@server1.lfw.org>
Message-ID: <20000824121858.E7566@xs4all.nl>

On Thu, Aug 24, 2000 at 04:14:12AM -0500, Ka-Ping Yee wrote:

> The current explanation of "import <x> as <y>" is something like

>     Find and import the module named <x> and assign it to <y>
>     in the normal way you do assignment, except <y> has to be
>     a pure name.

> Thomas' suggestion lifts the restriction and makes the explanation
> simpler than it would have been:

>     Find and import the module named <x> and assign it to <y>
>     in the normal way you do assignment.

> "The normal way you do assignment" is shorthand for "decide
> whether to assign to the local or global namespace depending on
> whether <y> has been assigned to in the current scope, unless
> <y> has been declared global with a 'global' statement" -- and
> that applies in any case.  Luckily, it's a concept that has
> been explained before and which Python programmers already
> need to understand anyway.

This is not true. The *current* situation already does the local/global
namespace trick, except that 'import ..' *is* a local assignment, so the
resulting name is always local (unless there is a "global" statement.)

My patch wouldn't change that one bit. It would only expand the allowable
expressions in the 'as' clause: a normal name-binding assignment (like
now), a subscription assignment, a slice assignment, or an
attribute assignment. In other words, all types of assignment.

> The net effect is essentially a direct translation to

>     <y> = __import__("<x>")

Exactly :)
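As an aside, how exact that translation is depends on a detail of `__import__`: given a dotted name, it returns the *top-level* package unless a non-empty `fromlist` is passed. A quick sketch against the stdlib, using `os`/`os.path` in place of the hypothetical foo/bar:

```python
import os

# __import__ on a dotted name returns the *top-level* package...
top = __import__("os.path")
assert top is os

# ...so binding the submodule itself takes a non-empty fromlist:
sub = __import__("os.path", fromlist=["path"])
assert sub is os.path
```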

> Careful:

>     import foo.bar          "import the package named foo and its
>                              submodule bar, then put *foo* into the
>                              current namespace"

Wrong. What it does is: import the package named foo and its submodule bar,
and make it so you can access foo.bar via the name 'foo.bar'. That this has
to put 'foo' in the local namespace is a side issue :-) And when seen like
that,

>     import foo.bar as spam  "import the package named foo and its
>                              submodule bar, then put *bar* into the
>                              current namespace, as spam"

Becomes obvious as well.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal@lemburg.com  Thu Aug 24 12:22:32 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 24 Aug 2000 13:22:32 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com> <20000824011519.D7566@xs4all.nl>
Message-ID: <39A50578.C08B9F14@lemburg.com>

Thomas Wouters wrote:
> 
> On Wed, Aug 23, 2000 at 06:28:03PM -0500, Guido van Rossum wrote:
> 
> > > Now, I'm not sure how coercion is supposed to work, but I see one
> > > problem here: 'v' can be changed by PyNumber_Coerce(), and the new
> > > object's tp_as_number pointer could be NULL. I bet it's pretty unlikely
> > > that (numeric) coercion of a numeric object and an unspecified object
> > > turns up a non-numeric object, but I don't see anything guaranteeing it
> > > won't, either.
> 
> > I think this currently can't happen because coercions never return
> > non-numeric objects, but it sounds like a good sanity check to add.
> 
> > Please check this in as a separate patch (not as part of the huge
> > augmented assignment patch).
> 
> Alright, checking it in after 'make test' finishes. I'm also removing some
> redundant PyInstance_Check() calls in PyNumber_Multiply: the first thing in
> that function is a BINOP call, which expands to
> 
>         if (PyInstance_Check(v) || PyInstance_Check(w)) \
>                 return PyInstance_DoBinOp(v, w, opname, ropname, thisfunc)
> 
> So after the BINOP call, neither argument can be an instance, anyway.
> 
> Also, I'll take this opportunity to explain what I'm doing with the
> PyNumber_InPlace* functions, for those that are interested. The comment I'm
> placing in the code should be enough information:
> 
> /* The in-place operators are defined to fall back to the 'normal',
>    non in-place operations, if the in-place methods are not in place, and to
>    take class instances into account. This is how it is supposed to work:
> 
>    - If the left-hand-side object (the first argument) is an
>      instance object, let PyInstance_DoInPlaceOp() handle it.  Pass the
>      non in-place variant of the function as callback, because it will only
>      be used if any kind of coercion has been done, and if an object has
>      been coerced, it's a new object and shouldn't be modified in-place.
> 
>    - Otherwise, if the object has the appropriate struct members, and they
>      are filled, call that function and return the result. No coercion is
>      done on the arguments; the left-hand object is the one the operation is
>      performed on, and it's up to the function to deal with the right-hand
>      object.
> 
>    - Otherwise, if the second argument is an Instance, let
>      PyInstance_DoBinOp() handle it, but not in-place. Again, pass the
>      non in-place function as callback.
> 
>    - Otherwise, both arguments are C objects. Try to coerce them and call
>      the ordinary (not in-place) function-pointer from the type struct.
> 
>    - Otherwise, we are out of options: raise a type error.
> 
>    */
> 
> If anyone sees room for unexpected behaviour under these rules, let me know
> and you'll get an XS4ALL shirt! (Sorry, only ones I can offer ;)

I just hope that with all these new operators you haven't
closed the door on switching to argument-based handling of
coercion.

One of these days (probably for 2.1), I would like to write up the
proposal I made on my Python Pages about a new coercion mechanism
as a PEP. The idea behind it is to use centralized coercion only
as a fall-back in case the arguments can't handle the
operation with the given type combination.

To implement this, all builtin types will have to be changed
to support mixed-type argument slot functions (this ability will
be signalled to the interpreter using a type flag).

More info on the proposal page at:

  http://starship.python.net/crew/lemburg/CoercionProposal.html

Is this still possible under the new code you've added?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Thu Aug 24 12:37:28 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 24 Aug 2000 13:37:28 +0200
Subject: Patch 100899 [Unicode compression] (was RE: [Python-Dev] 2.0 Release
 Plans)
References: <14756.4118.865603.363166@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com> <20000823205920.A7566@xs4all.nl>
Message-ID: <39A508F8.44C921D4@lemburg.com>

Thomas Wouters wrote:
> 
> On Wed, Aug 23, 2000 at 02:32:20PM -0400, Tim Peters wrote:
> > [Jeremy Hylton]
> > > I would like to see some compression in the release, but agree that it
> > > is not an essential optimization.  People have talked about it for a
> > > couple of months, and we haven't found someone to work on it because
> > > at various times pirx and /F said they were working on it.
> > >
> > > If we don't hear from /F by tomorrow promising he will finish it before
> > > the beta release, let's postpone it.
> 
> > There was an *awful* lot of whining about the size increase without this
> > optimization, and the current situation violates the "no compiler warnings!"
> > rule too (at least under MSVC 6).
> 
> For the record, you can't compile unicodedatabase.c with g++ because of its
> size: g++ complains that the switch is too large to compile. Under gcc it
> compiles, but only by trying really, really hard, and I don't know how it
> performs under other versions of gcc (in particular more heavily optimizing
> ones -- it might run into other limits in those situations.)

Are you sure this is still true with the latest CVS tree version?

I split the unicodedatabase.c static array into chunks of
4096 entries each -- that should really be manageable by all
compilers.

But perhaps you are talking about the switch in unicodectype.c 
(there are no large switches in unicodedatabase.c)? In that
case, Jack Jansen has added a macro switch which breaks that
switch into multiple parts too (see the top of that file).

It should be no problem adding a few more platforms to the list
of platforms which have this switch defined by default (currently
Macs and MS Win64).

I see no problem with taking the load off Fredrik and postponing
the patch to 2.1.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From guido@beopen.com  Thu Aug 24 15:00:56 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 09:00:56 -0500
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: Your message of "Thu, 24 Aug 2000 13:22:32 +0200."
 <39A50578.C08B9F14@lemburg.com>
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com> <20000824011519.D7566@xs4all.nl>
 <39A50578.C08B9F14@lemburg.com>
Message-ID: <200008241400.JAA01806@cj20424-a.reston1.va.home.com>

> I just hope that with all these new operators you haven't
> closed the door for switching to argument based handling of
> coercion.

Far from it!  Actually, the inplace operators won't do any coercions
when the left argument supports the inplace version, and otherwise
exactly the same rules apply as for the non-inplace version.  (I
believe this isn't in the patch yet, but it will be when Thomas checks
it in.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From thomas@xs4all.net  Thu Aug 24 14:14:55 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 15:14:55 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: <200008241400.JAA01806@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 24, 2000 at 09:00:56AM -0500
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com> <20000824011519.D7566@xs4all.nl> <39A50578.C08B9F14@lemburg.com> <200008241400.JAA01806@cj20424-a.reston1.va.home.com>
Message-ID: <20000824151455.F7566@xs4all.nl>

On Thu, Aug 24, 2000 at 09:00:56AM -0500, Guido van Rossum wrote:
> > I just hope that with all these new operators you haven't
> > closed the door for switching to argument based handling of
> > coercion.

> Far from it!  Actually, the inplace operators won't do any coercions
> when the left argument supports the inplace version, and otherwise
> exactly the same rules apply as for the non-inplace version.  (I
> believe this isn't in the patch yet, but it will be when Thomas checks
> it in.)

Exactly. (Actually, I'm again re-working the patch: If I do it the way I
intended to, you'd sometimes get the 'non in-place' error messages, instead
of the in-place ones. But the result will be the same.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Thu Aug 24 16:52:35 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 24 Aug 2000 11:52:35 -0400 (EDT)
Subject: [Python-Dev] Need help with SF bug #112558
Message-ID: <14757.17603.237768.174359@cj42289-a.reston1.va.home.com>

  I'd like some help with fixing a bug in dictobject.c.  The bug is on
SourceForge as #112558, and my attempted fix is SourceForge patch
#101277.
  The original bug is that exceptions raised by an object's __cmp__()
during dictionary lookup are not cleared, and can be propagated during
a subsequent lookup attempt.  I've made more detailed comments at
SourceForge at the patch:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470

  Thanks for any suggestions!
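(For anyone reproducing the class of bug: a key whose comparison raises. The class below is hypothetical, and in a fixed interpreter the exception propagates cleanly from the lookup rather than leaking into a later one.)

```python
class RaisingKey:
    # Hypothetical key type: a constant hash forces a key comparison on
    # lookup, and the comparison raises.
    def __hash__(self):
        return 1
    def __eq__(self, other):
        raise ValueError("comparison failed during lookup")

d = {RaisingKey(): "x"}
try:
    d[RaisingKey()]          # lookup compares keys; __eq__ raises
except ValueError:
    pass                     # the exception escapes the lookup
```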


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From mal@lemburg.com  Thu Aug 24 17:53:35 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 24 Aug 2000 18:53:35 +0200
Subject: [Python-Dev] Need help with SF bug #112558
References: <14757.17603.237768.174359@cj42289-a.reston1.va.home.com>
Message-ID: <39A5530F.FF2DA5C4@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
>   I'd like some help with fixing a bug in dictobject.c.  The bug is on
> SourceForge as #112558, and my attempted fix is SourceForge patch
> #101277.
>   The original bug is that exceptions raised by an object's __cmp__()
> during dictionary lookup are not cleared, and can be propagated during
> a subsequent lookup attempt.  I've made more detailed comments at
> SourceForge at the patch:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470
> 
>   Thanks for any suggestions!

Here are some:

* Please be very careful when patching this area of the interpreter:
  it is *very* performance sensitive.

* I'd remove the cmp variable and do a PyErr_Occurred() directly
  in all cases where PyObject_Compare() returns != 0.

* Exceptions during dict lookups are rare. I'm not sure about
  failing lookups... Vladimir?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From trentm@ActiveState.com  Thu Aug 24 18:46:27 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Thu, 24 Aug 2000 10:46:27 -0700
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
Message-ID: <20000824104627.C15992@ActiveState.com>

Hey all,

I recently checked in the Monterey stuff (patch
http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101249&group_id=5470
) but the checkin did not show up on python-checkins and the comment and
status change to "Closed" did not show up on python-patches. My checkin was
about a full day ago.

Is this a potential SourceForge bug? The delay can't be *that* long.

Regards,
Trent

-- 
Trent Mick
TrentM@ActiveState.com


From fdrake@beopen.com  Thu Aug 24 19:39:01 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 24 Aug 2000 14:39:01 -0400 (EDT)
Subject: [Python-Dev] CVS patch fixer?
Message-ID: <14757.27589.366614.231055@cj42289-a.reston1.va.home.com>

  Someone (don't remember who) posted a Perl script to either this
list or the patches list, perhaps a month or so ago(?), which could
massage a CVS-generated patch to make it easier to apply.
  Can anyone provide a copy of this, or a link to it?
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From thomas@xs4all.net  Thu Aug 24 20:50:53 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 21:50:53 +0200
Subject: [Python-Dev] Augmented assignment
Message-ID: <20000824215053.G7566@xs4all.nl>

I've finished rewriting the PyNumber_InPlace*() calls in the augmented
assignment patch and am about to check the entire thing in. I'll be checking
it in in parts, with the grammar/compile/ceval things last, but you might
get some weird errors in the next hour or so, depending on my link to
sourceforge. (I'm doing some last minute checks before checking it in ;)

Part of it will be docs, but not terribly much yet. (I'm still working on
those, though, and I have a bit over a week before I leave on vacation, so I
think I can finish them for the most part.) I'm also checking in a test
case, and some modifications to the std library: support for += in UserList,
UserDict, UserString, and rfc822.AddressList. Reviewers are more than
welcome, though I realize how large a patch it is. (Boy, do I realize that!)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@beopen.com  Thu Aug 24 22:45:53 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 16:45:53 -0500
Subject: [Python-Dev] Augmented assignment
In-Reply-To: Your message of "Thu, 24 Aug 2000 21:50:53 +0200."
 <20000824215053.G7566@xs4all.nl>
References: <20000824215053.G7566@xs4all.nl>
Message-ID: <200008242145.QAA01306@cj20424-a.reston1.va.home.com>

Congratulations, Thomas!  Megathanks for carrying this proposal to a
happy ending.  I'm looking forward to using the new feature!

Nits: Lib/symbol.py and Lib/token.py need to be regenerated and
checked in; (see the comments at the top of the file).

Also, tokenizer.py probably needs to have the new tokens += etc. added
manually.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From thomas@xs4all.net  Thu Aug 24 22:09:49 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 23:09:49 +0200
Subject: [Python-Dev] Augmented assignment
In-Reply-To: <200008242145.QAA01306@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 24, 2000 at 04:45:53PM -0500
References: <20000824215053.G7566@xs4all.nl> <200008242145.QAA01306@cj20424-a.reston1.va.home.com>
Message-ID: <20000824230949.O4798@xs4all.nl>

On Thu, Aug 24, 2000 at 04:45:53PM -0500, Guido van Rossum wrote:

> Nits: Lib/symbol.py and Lib/token.py need to be regenerated and
> checked in; (see the comments at the top of the file).

Checking them in now.

> Also, tokenizer.py probably needs to have the new tokens += etc. added
> manually.

Okay. I'm not entirely sure how to do this, but I *think* this does it:
replace

Operator = group('\+', '\-', '\*\*', '\*', '\^', '~', '/', '%', '&', '\|',
                 '<<', '>>', '==', '<=', '<>', '!=', '>=', '=', '<', '>')

with

Operator = group('\+=', '\-=', '\*=', '%=', '/=', '\*\*=', '&=', '\|=',
                 '\^=', '>>=', '<<=', '\+', '\-', '\*\*', '\*', '\^', '~',
                 '/', '%', '&', '\|', '<<', '>>', '==', '<=', '<>', '!=',
                 '>=', '=', '<', '>')

Placing the augmented-assignment operators at the end doesn't work, but this
seems to do the trick. However, I can't really test this module, just check
its output. It seems okay, but I would appreciate either an 'okay' or a
more extensive test before checking it in. No, I can't start IDLE right now,
I'm working over a 33k6 leased line and my home machine doesn't have an
augmented Python yet :-)
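The ordering constraint comes from the fact that Python's regex alternation is leftmost-first, not longest-match: a bare '\+' listed before '\+=' always wins. A small check with the `re` module:

```python
import re

# Alternation picks the first matching branch, not the longest one,
# so '+=' must appear before '+' in the Operator pattern.
augmented_last = re.compile(r"\+|\+=")
augmented_first = re.compile(r"\+=|\+")

assert augmented_last.match("+=").group() == "+"    # stops at '+': wrong
assert augmented_first.match("+=").group() == "+="  # matches '+=': right
```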

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From jack@oratrix.nl  Thu Aug 24 22:35:38 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Thu, 24 Aug 2000 23:35:38 +0200
Subject: [Python-Dev] sre and regexp behave badly under low-memory conditions
Message-ID: <20000824213543.5D902D71F9@oratrix.oratrix.nl>

Both regexp and sre don't behave well under low-memory conditions.

I noticed this because test_longexp basically ate all my memory (sigh, 
I think I'll finally have to give up my private memory allocator and
take the 15% performance hit, until I find the time to dig into
Vladimir's stuff) so the rest of the regression tests ran under very
tight memory conditions.

test_re wasn't so bad, the only problem was that it crashed with a
"NULL return without an exception". test_regexp was worse, it crashed
my machine.

If someone feels the urge, maybe they could run the test suite on Unix
with a sufficiently low memory limit.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++


From jeremy@beopen.com  Thu Aug 24 23:17:56 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 24 Aug 2000 18:17:56 -0400 (EDT)
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
In-Reply-To: <20000824104627.C15992@ActiveState.com>
References: <20000824104627.C15992@ActiveState.com>
Message-ID: <14757.40724.86552.609923@bitdiddle.concentric.net>

>>>>> "TM" == Trent Mick <trentm@ActiveState.com> writes:

  TM> Hey all,

  TM> I recently checked in the Monterey stuff (patch
  TM> http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101249&group_id=5470
  TM> ) but the checkin did not show up on python-checkins and the
  TM> comment and status change to "Closed" did not show up on
  TM> python-patches. My checkin was about a full day ago.

  TM> Is this a potential SourceForge bug? The delay can't be *that*
  TM> long.

Weird.  I haven't even received the message quoted above.  There's
something very weird going on.

I have not seen a checkin message for a while, though I have made a
few checkins myself.  It looks like the problem I'm seeing here is
somewhere between python.org and beopen.com, because the messages are in
the archive.

The problem you are seeing is different.  The most recent checkin
message from you is dated Aug. 16.  Could it be a problem with your
local mail?  The message would be sent from your account.  Perhaps
there is more info in your system's mail log.

Jeremy



From skip@mojam.com (Skip Montanaro)  Thu Aug 24 23:05:20 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Thu, 24 Aug 2000 17:05:20 -0500 (CDT)
Subject: [Python-Dev] Check your "Accepted" patches
Message-ID: <14757.39968.498536.643301@beluga.mojam.com>

There are 8 patches with status "Accepted".  They are assigned to akuchling,
bwarsaw, jhylton, fdrake, ping and prescod.  I had not been paying attention
to that category and then saw this in the Open Items of PEP 0200:

    Get all patches out of Accepted.

I checked and found one of mine there.

Skip



From trentm@ActiveState.com  Thu Aug 24 23:32:55 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Thu, 24 Aug 2000 15:32:55 -0700
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
In-Reply-To: <14757.40724.86552.609923@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 24, 2000 at 06:17:56PM -0400
References: <20000824104627.C15992@ActiveState.com> <14757.40724.86552.609923@bitdiddle.concentric.net>
Message-ID: <20000824153255.B27016@ActiveState.com>

On Thu, Aug 24, 2000 at 06:17:56PM -0400, Jeremy Hylton wrote:
> >>>>> "TM" == Trent Mick <trentm@ActiveState.com> writes:
>   TM> I recently checked in the Monterey stuff (patch
>   TM> http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101249&group_id=5470
>   TM> ) but the checkin did not show up on python-checkins and the
>   TM> comment and status change to "Closed" did not show up on
>   TM> python-patches. My checkin was about a full day ago.
> 
> I have not seen a checkin message for a while, though I have made a
> few checkins myself.  It looks like the problem I'm seeing here is
> with between python.org and beopen.com, because the messages are in
> the archive.
> 
> The problem you are seeing is different.  The most recent checkin
> message from you is dated Aug. 16.  Could it be a problem with your
> local mail?  The message would be sent from you account.  Perhaps

The cvs checkin message is made from my local machine?! Really? I thought
that would be on the server side. Our email *is* a little backed up here but
I don't think *that* backed up.

In any case, that does not explain why patches@python.org did not get a mail
regarding my update of the patch on SourceForge. *Two* emails have gone
astray here.

I am really not so curious that I want to hunt it down. Just a heads up for
people.

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From jeremy@beopen.com  Thu Aug 24 23:44:27 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 24 Aug 2000 18:44:27 -0400 (EDT)
Subject: [Python-Dev] two tests fail
Message-ID: <14757.42315.528801.142803@bitdiddle.concentric.net>

After the augmented assignment checkin (yay!), I see two failing
tests: test_augassign and test_parser.  Do you see the same problem?

Jeremy


From thomas@xs4all.net  Thu Aug 24 23:50:35 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 00:50:35 +0200
Subject: [Python-Dev] Re: two tests fail
In-Reply-To: <14757.42315.528801.142803@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 24, 2000 at 06:44:27PM -0400
References: <14757.42315.528801.142803@bitdiddle.concentric.net>
Message-ID: <20000825005035.P4798@xs4all.nl>

On Thu, Aug 24, 2000 at 06:44:27PM -0400, Jeremy Hylton wrote:
> After the augmented assignment checkin (yay!), I see two failing
> tests: test_augassign and test_parser.  Do you see the same problem?

Hm, neither is failing, for me, in a tree that has no differences with the
CVS tree according to CVS itself. I'll see if I can reproduce it by
using a different tree, just to be sure.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From jeremy@beopen.com  Thu Aug 24 23:56:15 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 24 Aug 2000 18:56:15 -0400 (EDT)
Subject: [Python-Dev] Re: two tests fail
In-Reply-To: <20000825005035.P4798@xs4all.nl>
References: <14757.42315.528801.142803@bitdiddle.concentric.net>
 <20000825005035.P4798@xs4all.nl>
Message-ID: <14757.43023.497909.568824@bitdiddle.concentric.net>

Oops.  My mistake.  I hadn't rebuilt the parser.

Jeremy


From thomas@xs4all.net  Thu Aug 24 23:53:18 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 00:53:18 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib UserString.py,1.5,1.6
In-Reply-To: <200008242147.OAA05606@slayer.i.sourceforge.net>; from nowonder@users.sourceforge.net on Thu, Aug 24, 2000 at 02:47:36PM -0700
References: <200008242147.OAA05606@slayer.i.sourceforge.net>
Message-ID: <20000825005318.H7566@xs4all.nl>

On Thu, Aug 24, 2000 at 02:47:36PM -0700, Peter Schneider-Kamp wrote:
> Update of /cvsroot/python/python/dist/src/Lib
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv5582
> 
> Modified Files:
> 	UserString.py 
> Log Message:
> 
> simple typo that makes regression test test_userstring fail

WTF? Hmm. I was pretty damned sure I'd fixed that one. I saw it two
times, fixed it in two trees at least, but apparently not the one I committed
:P I'll get some sleep, soon :P

> ***************
> *** 56,60 ****
>           elif isinstance(other, StringType) or isinstance(other, UnicodeType):
>               self.data += other
> !         else
>               self.data += str(other)
>           return self
> --- 56,60 ----
>           elif isinstance(other, StringType) or isinstance(other, UnicodeType):
>               self.data += other
> !         else:
>               self.data += str(other)
>           return self


-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Fri Aug 25 00:03:49 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 01:03:49 +0200
Subject: [Python-Dev] Re: two tests fail
In-Reply-To: <14757.43023.497909.568824@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 24, 2000 at 06:56:15PM -0400
References: <14757.42315.528801.142803@bitdiddle.concentric.net> <20000825005035.P4798@xs4all.nl> <14757.43023.497909.568824@bitdiddle.concentric.net>
Message-ID: <20000825010349.Q4798@xs4all.nl>

On Thu, Aug 24, 2000 at 06:56:15PM -0400, Jeremy Hylton wrote:

> Oops.  My mistake.  I hadn't rebuilt the parser.

Well, you were on to something, of course. The parsermodule will have to be
modified to accept augmented assignment as well. (Or at least, so I assume.)
The test just doesn't test that part yet ;-) Fred, do you want me to do
that? I'm not sure on the parsermodule internals, but maybe if you can give
me some pointers I can work it out.

(The same goes for Tools/compiler/compiler, by the way, which I think also
needs to be taught list comprehensions.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From ping@lfw.org  Fri Aug 25 00:38:02 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Thu, 24 Aug 2000 19:38:02 -0400 (EDT)
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import
 something as'
In-Reply-To: <20000824121858.E7566@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10008241935250.1061-100000@skuld.lfw.org>

On Thu, 24 Aug 2000, Thomas Wouters wrote:
> >     import foo.bar          "import the package named foo and its
> >                              submodule bar, then put *foo* into the
> >                              current namespace"
> 
> Wrong. What it does is: import the package named foo and its submodule bar,
> and make it so you can access foo.bar via the name 'foo.bar'. That this has
> to put 'foo' in the local namespace is a side issue

I understand now.  Sorry for my thickheadedness.  Yes, when i look
at it as "please give this to me as foo.bar", it makes much more sense.

Apologies, Guido.  That's two brain-farts in a day or so.  :(


-- ?!ng



From fdrake@beopen.com  Fri Aug 25 00:36:54 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 24 Aug 2000 19:36:54 -0400 (EDT)
Subject: [Python-Dev] two tests fail
In-Reply-To: <14757.42315.528801.142803@bitdiddle.concentric.net>
References: <14757.42315.528801.142803@bitdiddle.concentric.net>
Message-ID: <14757.45462.717663.782865@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > After the augmented assignment checkin (yay!), I see two failing
 > tests: test_augassign and test_parser.  Do you see the same problem?

  I'll be taking care of the parser module update tonight (late) or
tomorrow morning.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From akuchlin@mems-exchange.org  Fri Aug 25 02:32:47 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 24 Aug 2000 21:32:47 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules pyexpat.c,2.12,2.13
In-Reply-To: <200008242157.OAA06909@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Thu, Aug 24, 2000 at 02:57:46PM -0700
References: <200008242157.OAA06909@slayer.i.sourceforge.net>
Message-ID: <20000824213247.A2318@newcnri.cnri.reston.va.us>

On Thu, Aug 24, 2000 at 02:57:46PM -0700, Fred L. Drake wrote:
>Remove the Py_FatalError() from initpyexpat(); the Guido has decreed
>that this is not appropriate.

So what is going to catch errors while initializing a module?  Or is
PyErr_Occurred() called after a module's init*() function?

--amk


From MarkH@ActiveState.com  Fri Aug 25 02:56:10 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 25 Aug 2000 11:56:10 +1000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules pyexpat.c,2.12,2.13
In-Reply-To: <20000824213247.A2318@newcnri.cnri.reston.va.us>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCEHADGAA.MarkH@ActiveState.com>

Andrew writes:

> On Thu, Aug 24, 2000 at 02:57:46PM -0700, Fred L. Drake wrote:
> >Remove the Py_FatalError() from initpyexpat(); the Guido has decreed
> >that this is not appropriate.
>
> So what is going to catch errors while initializing a module?  Or is
> PyErr_Occurred() called after a module's init*() function?

Yes!  All errors are handled correctly (as of somewhere in the 1.5 family,
I believe).

Note that Py_FatalError() is _evil_ - it can make your program die without
a chance to see any error message or other diagnostic.  It should be
avoided if at all possible.

Mark.



From guido@beopen.com  Fri Aug 25 05:11:54 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 23:11:54 -0500
Subject: [Python-Dev] sre and regexp behave badly under low-memory conditions
In-Reply-To: Your message of "Thu, 24 Aug 2000 23:35:38 +0200."
 <20000824213543.5D902D71F9@oratrix.oratrix.nl>
References: <20000824213543.5D902D71F9@oratrix.oratrix.nl>
Message-ID: <200008250411.XAA08797@cj20424-a.reston1.va.home.com>

> test_re wasn't so bad, the only problem was that it crashed with a
> "NULL return without an exception". test_regexp was worse, it crashed
> my machine.

That's regex, right?  regexp was the *really* old regular expression
module we once had.

Anyway, I don't care about regex, it's old.

The sre code needs to be robustified, but it's not a high priority for
me.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Fri Aug 25 05:19:39 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 23:19:39 -0500
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
In-Reply-To: Your message of "Thu, 24 Aug 2000 15:32:55 MST."
 <20000824153255.B27016@ActiveState.com>
References: <20000824104627.C15992@ActiveState.com> <14757.40724.86552.609923@bitdiddle.concentric.net>
 <20000824153255.B27016@ActiveState.com>
Message-ID: <200008250419.XAA08826@cj20424-a.reston1.va.home.com>

> In any case, that does not explain why patches@python.org did not send a
> mail regarding my update of the patch on SourceForge. *Two* emails have
> gone astray here.

This is compensated for, though, by the patch and bug managers, which
often send me two or three copies of the email for each change to an
entry.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido@beopen.com  Fri Aug 25 06:58:15 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 00:58:15 -0500
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: Your message of "Thu, 24 Aug 2000 14:41:55 EST."
Message-ID: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>

Here's a patch that Tim & I believe should solve the thread+fork
problem properly.  I'll try to explain it briefly.

I'm not checking this in yet because I need more eyeballs, and because
I don't actually have a test to prove that I've fixed the problem.
However, our theory is very hopeful.

(1) BACKGROUND: A Python lock may be released by a different thread
than the one that acquired it, and it may be acquired by the same thread
multiple times.  A pthread mutex must always be unlocked by the same
thread that locked it, and can't be locked more than once.  So, a
Python lock can't be built out of a simple pthread mutex; instead, a
Python lock is built out of a "locked" flag and a <condition variable,
mutex> pair.  The mutex is locked for at most a few cycles, to protect
the flag.  This design is Tim's (while still at KSR).
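
In modern Python terms, the design can be sketched with a threading.Condition
(which bundles a condition variable and a mutex); the class name FlagLock is
hypothetical, and this is an illustration of the scheme, not the C code itself:

```python
import threading

class FlagLock:
    """A "locked" flag guarded by a <condition variable, mutex> pair."""
    def __init__(self):
        self._cond = threading.Condition()  # owns the protecting mutex
        self._locked = False

    def acquire(self):
        with self._cond:                # mutex held only briefly
            while self._locked:
                self._cond.wait()       # block until the flag clears
            self._locked = True

    def release(self):                  # legal from *any* thread
        with self._cond:
            self._locked = False
            self._cond.notify()

lock = FlagLock()
lock.acquire()
releaser = threading.Thread(target=lock.release)
releaser.start()
releaser.join()
lock.acquire()   # succeeds: a different thread released it
lock.release()
```

Note that release() works from any thread because only the flag, not the
underlying mutex, records the lock state.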

(2) PROBLEM: If you fork while another thread holds a mutex, that
mutex will never be released, because only the forking thread survives
in the child.  The LinuxThreads manual recommends using
pthread_atfork() to acquire all locks in locking order before the
fork, and release them afterwards.  A problem with Tim's design here
is that even if the forking thread has Python's global interpreter
lock, another thread trying to acquire the lock may still hold the
mutex at the time of the fork, causing it to be held forever in the
child.  Charles has posted an effective hack that allocates a new
global interpreter lock in the child, but this doesn't solve the
problem for other locks.
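
As an aside, today's CPython exposes the same prepare/parent/child hook idea
directly via os.register_at_fork() (3.7+, Unix only); this small sketch just
demonstrates the callback ordering pthread_atfork() gives you:

```python
import os

events = []
# Mirror pthread_atfork(prepare, parent, child) with Python-level hooks.
os.register_at_fork(
    before=lambda: events.append("before"),
    after_in_parent=lambda: events.append("parent"),
    after_in_child=lambda: events.append("child"),
)
pid = os.fork()
if pid == 0:
    # Child: "before" ran pre-fork, "child" ran post-fork in this process.
    os._exit(0 if events == ["before", "child"] else 1)
status = os.waitpid(pid, 0)[1]
assert events == ["before", "parent"]
assert os.waitstatus_to_exitcode(status) == 0
```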

(3) BRAINWAVE: If we use a single mutex shared by all locks, instead
of a mutex per lock, we can lock this mutex around the fork and thus
prevent any other thread from locking it.  This is okay because, while
a condition variable always needs a mutex to go with it, there's no
rule that the same mutex can't be shared by many condition variables.
The code below implements this.
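
(The same "many condition variables, one mutex" rule can be seen in today's
threading module, where a Condition accepts an existing lock; this is only an
illustration of the principle, not the C patch:)

```python
import threading

# One mutex shared by many condition variables: threading.Condition
# takes an existing lock, so several conditions can share it.
shared_mutex = threading.Lock()
cond_a = threading.Condition(shared_mutex)
cond_b = threading.Condition(shared_mutex)

# Acquiring either condition locks the single shared mutex.
cond_a.acquire()
assert shared_mutex.locked()
cond_a.release()
cond_b.acquire()
assert shared_mutex.locked()
cond_b.release()
```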

(4) MORE WORK: (a) The PyThread API also defines semaphores, which may
have a similar problem.  But I'm not aware of any use of these (I'm
not quite sure why semaphore support was added), so I haven't patched
these.  (b) The thread_pth.h file defines locks in the same way; there
may be others too.  I haven't touched these.

(5) TESTING: Charles Waldman posted this code to reproduce the
problem.  Unfortunately I haven't had much success with it; it seems
to hang even when I apply Charles' patch.

    import thread
    import os, sys
    import time

    def doit(name):
        while 1:
            if os.fork()==0:
                print name, 'forked', os.getpid()
                os._exit(0)
            r = os.wait()

    for x in range(50):
        name = 't%s'%x
        print 'starting', name
        thread.start_new_thread(doit, (name,))

    time.sleep(300)

Here's the patch:

*** Python/thread_pthread.h	2000/08/23 21:33:05	2.29
--- Python/thread_pthread.h	2000/08/25 04:29:43
***************
*** 84,101 ****
   * and a <condition, mutex> pair.  In general, if the bit can be acquired
   * instantly, it is, else the pair is used to block the thread until the
   * bit is cleared.     9 May 1994 tim@ksr.com
   */
  
  typedef struct {
  	char             locked; /* 0=unlocked, 1=locked */
  	/* a <cond, mutex> pair to handle an acquire of a locked lock */
  	pthread_cond_t   lock_released;
- 	pthread_mutex_t  mut;
  } pthread_lock;
  
  #define CHECK_STATUS(name)  if (status != 0) { perror(name); error = 1; }
  
  /*
   * Initialization.
   */
  
--- 84,125 ----
   * and a <condition, mutex> pair.  In general, if the bit can be acquired
   * instantly, it is, else the pair is used to block the thread until the
   * bit is cleared.     9 May 1994 tim@ksr.com
+  *
+  * MODIFICATION: use a single mutex shared by all locks.
+  * This should make it easier to cope with fork() while threads exist.
+  * 24 Aug 2000 {guido,tpeters}@beopen.com
   */
  
  typedef struct {
  	char             locked; /* 0=unlocked, 1=locked */
  	/* a <cond, mutex> pair to handle an acquire of a locked lock */
  	pthread_cond_t   lock_released;
  } pthread_lock;
  
+ static pthread_mutex_t locking_mutex = PTHREAD_MUTEX_INITIALIZER;
+ 
  #define CHECK_STATUS(name)  if (status != 0) { perror(name); error = 1; }
  
  /*
+  * Callbacks for pthread_atfork().
+  */
+ 
+ static void prefork_callback()
+ {
+ 	pthread_mutex_lock(&locking_mutex);
+ }
+ 
+ static void parent_callback()
+ {
+ 	pthread_mutex_unlock(&locking_mutex);
+ }
+ 
+ static void child_callback()
+ {
+ 	pthread_mutex_unlock(&locking_mutex);
+ }
+ 
+ /*
   * Initialization.
   */
  
***************
*** 113,118 ****
--- 137,144 ----
  	pthread_t thread1;
  	pthread_create(&thread1, NULL, (void *) _noop, &dummy);
  	pthread_join(thread1, NULL);
+ 	/* XXX Is the following supported here? */
+ 	pthread_atfork(&prefork_callback, &parent_callback, &child_callback);
  }
  
  #else /* !_HAVE_BSDI */
***************
*** 123,128 ****
--- 149,156 ----
  #if defined(_AIX) && defined(__GNUC__)
  	pthread_init();
  #endif
+ 	/* XXX Is the following supported everywhere? */
+ 	pthread_atfork(&prefork_callback, &parent_callback, &child_callback);
  }
  
  #endif /* !_HAVE_BSDI */
***************
*** 260,269 ****
  	if (lock) {
  		lock->locked = 0;
  
- 		status = pthread_mutex_init(&lock->mut,
- 					    pthread_mutexattr_default);
- 		CHECK_STATUS("pthread_mutex_init");
- 
  		status = pthread_cond_init(&lock->lock_released,
  					   pthread_condattr_default);
  		CHECK_STATUS("pthread_cond_init");
--- 288,293 ----
***************
*** 286,294 ****
  
  	dprintf(("PyThread_free_lock(%p) called\n", lock));
  
- 	status = pthread_mutex_destroy( &thelock->mut );
- 	CHECK_STATUS("pthread_mutex_destroy");
- 
  	status = pthread_cond_destroy( &thelock->lock_released );
  	CHECK_STATUS("pthread_cond_destroy");
  
--- 310,315 ----
***************
*** 304,314 ****
  
  	dprintf(("PyThread_acquire_lock(%p, %d) called\n", lock, waitflag));
  
! 	status = pthread_mutex_lock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_lock[1]");
  	success = thelock->locked == 0;
  	if (success) thelock->locked = 1;
! 	status = pthread_mutex_unlock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_unlock[1]");
  
  	if ( !success && waitflag ) {
--- 325,335 ----
  
  	dprintf(("PyThread_acquire_lock(%p, %d) called\n", lock, waitflag));
  
! 	status = pthread_mutex_lock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_lock[1]");
  	success = thelock->locked == 0;
  	if (success) thelock->locked = 1;
! 	status = pthread_mutex_unlock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_unlock[1]");
  
  	if ( !success && waitflag ) {
***************
*** 316,330 ****
  
  		/* mut must be locked by me -- part of the condition
  		 * protocol */
! 		status = pthread_mutex_lock( &thelock->mut );
  		CHECK_STATUS("pthread_mutex_lock[2]");
  		while ( thelock->locked ) {
  			status = pthread_cond_wait(&thelock->lock_released,
! 						   &thelock->mut);
  			CHECK_STATUS("pthread_cond_wait");
  		}
  		thelock->locked = 1;
! 		status = pthread_mutex_unlock( &thelock->mut );
  		CHECK_STATUS("pthread_mutex_unlock[2]");
  		success = 1;
  	}
--- 337,351 ----
  
  		/* mut must be locked by me -- part of the condition
  		 * protocol */
! 		status = pthread_mutex_lock( &locking_mutex );
  		CHECK_STATUS("pthread_mutex_lock[2]");
  		while ( thelock->locked ) {
  			status = pthread_cond_wait(&thelock->lock_released,
! 						   &locking_mutex);
  			CHECK_STATUS("pthread_cond_wait");
  		}
  		thelock->locked = 1;
! 		status = pthread_mutex_unlock( &locking_mutex );
  		CHECK_STATUS("pthread_mutex_unlock[2]");
  		success = 1;
  	}
***************
*** 341,352 ****
  
  	dprintf(("PyThread_release_lock(%p) called\n", lock));
  
! 	status = pthread_mutex_lock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_lock[3]");
  
  	thelock->locked = 0;
  
! 	status = pthread_mutex_unlock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_unlock[3]");
  
  	/* wake up someone (anyone, if any) waiting on the lock */
--- 362,373 ----
  
  	dprintf(("PyThread_release_lock(%p) called\n", lock));
  
! 	status = pthread_mutex_lock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_lock[3]");
  
  	thelock->locked = 0;
  
! 	status = pthread_mutex_unlock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_unlock[3]");
  
  	/* wake up someone (anyone, if any) waiting on the lock */

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From DavidA@ActiveState.com  Fri Aug 25 06:07:02 2000
From: DavidA@ActiveState.com (David Ascher)
Date: Thu, 24 Aug 2000 22:07:02 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.WNT.4.21.0008242203060.1060-100000@cr469175-a>

On Fri, 25 Aug 2000, Guido van Rossum wrote:

> (4) MORE WORK: (a) The PyThread API also defines semaphores, which may
> have a similar problem.  But I'm not aware of any use of these (I'm
> not quite sure why semaphore support was added), so I haven't patched
> these. 

IIRC, we had a discussion a while back about semaphore support in the
PyThread API and agreed that they were not implemented on enough platforms
to be a useful part of the PyThread API.  I can't find it right now, alas.

--david



From MarkH@ActiveState.com  Fri Aug 25 06:16:56 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 25 Aug 2000 15:16:56 +1000
Subject: [Python-Dev] Strange compiler crash in debug builds.
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>

Something strange is happening in my Windows Debug builds (fresh CVS tree)

If you remove "urllib.pyc", and execute 'python_d -c "import urllib"',
Python dies after printing the message:

FATAL: node type 305, required 311

It also happens for a number of other files (compileall.py will show you
:-)

Further analysis shows this deep in the compiler, and triggered by this
macro in node.h:

---
/* Assert that the type of a node is what we expect */
#ifndef Py_DEBUG
#define REQ(n, type) { /*pass*/ ; }
#else
#define REQ(n, type) \
	{ if (TYPE(n) != (type)) { \
	    fprintf(stderr, "FATAL: node type %d, required %d\n", \
		    TYPE(n), type); \
	    abort(); \
	} }
#endif
---

Is this pointing to a deeper problem, or is the assertion incorrect?

Does the Linux community ever run with Py_DEBUG defined?  I couldn't even
find a simple way to turn it on to confirm it also exists on Linux...

Any ideas?

Mark.



From thomas@xs4all.net  Fri Aug 25 06:23:52 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 07:23:52 +0200
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 25, 2000 at 12:58:15AM -0500
References: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>
Message-ID: <20000825072351.I7566@xs4all.nl>

On Fri, Aug 25, 2000 at 12:58:15AM -0500, Guido van Rossum wrote:

> + 	/* XXX Is the following supported here? */
> + 	pthread_atfork(&prefork_callback, &parent_callback, &child_callback);
>   }
>   
>   #else /* !_HAVE_BSDI */

To answer that question: yes. BSDI from 3.0 onward has pthread_atfork(),
though threads remain unusable until BSDI 4.1 (because of a bug in libc
where pause() stops listening to signals when compiling for threads). I
haven't actually tested this patch yet, just gave it a once-over ;) I will
test it on all types of machines we have, though.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Fri Aug 25 06:24:12 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 01:24:12 -0400 (EDT)
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
Message-ID: <14758.764.705006.937500@cj42289-a.reston1.va.home.com>

Mark Hammond writes:
 > Is this pointing to a deeper problem, or is the assertion incorrect?

  I expect that there's an incorrect assertion that was fine until one
of the recent grammar changes; the augmented assignment patch is
highly suspect given that it's the most recent.  Look for problems
handling expr_stmt nodes.

 > Does the Linux community ever run with Py_DEBUG defined?  I couldn't even
 > find a simple way to turn it on to confirm it also exists on Linux...

  I don't think I've ever used it, either on Linux or any other Unix.
We should definitely have an easy way to turn it on!  Probably at
configure time would be good.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From thomas@xs4all.net  Fri Aug 25 06:29:53 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 07:29:53 +0200
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 25, 2000 at 03:16:56PM +1000
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
Message-ID: <20000825072953.J7566@xs4all.nl>

On Fri, Aug 25, 2000 at 03:16:56PM +1000, Mark Hammond wrote:

> Something strange is happening in my Windows Debug builds (fresh CVS tree)

> If you remove "urllib.pyc", and execute 'python_d -c "import urllib"',
> Python dies after printing the message:
> 
> FATAL: node type 305, required 311
> 
> It also happens for a number of other files (compileall.py will show you
> :-)

> Further analysis shows this deep in the compiler, and triggered by this
> macro in node.h:

> #define REQ(n, type) \
> 	{ if (TYPE(n) != (type)) { \
> 	    fprintf(stderr, "FATAL: node type %d, required %d\n", \
> 		    TYPE(n), type); \
> 	    abort(); \
> 	} }

> Is this pointing to a deeper problem, or is the assertion incorrect?

At first sight, I would say "yes, the assertion is wrong". That doesn't mean
it shouldn't be fixed ! It's probably caused by augmented assignment or list
comprehensions, though I have used both with Py_DEBUG enabled a few times,
so I don't know for sure. I'm compiling with debug right now, to inspect
this, though.

Another thing that might cause it is an out-of-date graminit.h file
somewhere. The one in the CVS tree is up to date, but maybe you have a copy
stashed somewhere ?

> Does the Linux community ever run with Py_DEBUG defined?  I couldn't even
> find a simple way to turn it on to confirm it also exists on Linux...

There's undoubtedly a good way, but I usually just chicken out and add
'#define Py_DEBUG 1' at the bottom of config.h ;) That also makes sure I
don't keep it around too long, as config.h gets regenerated often enough :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Fri Aug 25 06:44:41 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 07:44:41 +0200
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <20000825072953.J7566@xs4all.nl>; from thomas@xs4all.net on Fri, Aug 25, 2000 at 07:29:53AM +0200
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com> <20000825072953.J7566@xs4all.nl>
Message-ID: <20000825074440.K7566@xs4all.nl>

On Fri, Aug 25, 2000 at 07:29:53AM +0200, Thomas Wouters wrote:
> On Fri, Aug 25, 2000 at 03:16:56PM +1000, Mark Hammond wrote:

> > FATAL: node type 305, required 311

> > Is this pointing to a deeper problem, or is the assertion incorrect?
> 
> At first sight, I would say "yes, the assertion is wrong". That doesn't mean
> it shouldn't be fixed ! It's probably caused by augmented assignment or list
> comprehensions, 

Actually, it was a combination of removing UNPACK_LIST and adding
list comprehensions. I just checked in a fix for this. Can you confirm that
this fixes it for the windows build, too ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Fri Aug 25 06:44:11 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 01:44:11 -0400
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMENNHBAA.tim_one@email.msn.com>

[Guido]
> ...
> (1) BACKGROUND: A Python lock may be released by a different thread
> than who aqcuired it, and it may be acquired by the same thread
> multiple times.  A pthread mutex must always be unlocked by the same
> thread that locked it, and can't be locked more than once.

The business about "multiple times" may be misleading, as it makes Windows
geeks think of reentrant locks.  The Python lock is not reentrant.  Instead,
it's perfectly OK for a thread that has acquired a Python lock to *try* to
acquire it again (but is not OK for a thread that has locked a pthread mutex
to try to lock it again):  the acquire attempt simply blocks until *another*
thread releases the Python lock.  By "Python lock" here I mean at the Python
C API level, and as exposed by the thread module; the threading module
exposes fancier locks (including reentrant locks).
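
(Tim's distinction maps directly onto today's threading module, where Lock is
the non-reentrant kind and RLock the reentrant one; a small illustration,
using blocking=False so the second acquire reports failure instead of
blocking:)

```python
import threading

# A plain Lock is not reentrant: a second acquire by the owning
# thread does not succeed (it would block; non-blocking shows False).
lock = threading.Lock()
lock.acquire()
assert lock.acquire(blocking=False) is False
lock.release()

# An RLock *is* reentrant: the owner may acquire it again.
rlock = threading.RLock()
rlock.acquire()
assert rlock.acquire(blocking=False) is True
rlock.release()
rlock.release()
```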

> So, a Python lock can't be built out of a simple pthread mutex; instead,
> a Python lock is built out of a "locked" flag and a <condition variable,
> mutex> pair.  The mutex is locked for at most a few cycles, to protect
> the flag.  This design is Tim's (while still at KSR).

At that time, a pthread mutex was generally implemented as a pure spin lock,
so it was important to hold a pthread mutex for as short a span as possible
(and, indeed, the code never holds a pthread mutex for longer than across 2
simple C stmts).

> ...
> (3) BRAINWAVE: If we use a single mutex shared by all locks, instead
> of a mutex per lock, we can lock this mutex around the fork and thus
> prevent any other thread from locking it.  This is okay because, while
> a condition variable always needs a mutex to go with it, there's no
> rule that the same mutex can't be shared by many condition variables.
> The code below implements this.

Before people panic <wink>, note that this is "an issue" only for those
thread_xxx.h implementations such that fork() is supported *and* the child
process nukes threads in the child, leaving its mutexes and the data they
protect in an insane state.  They're the ones creating problems, so they're
the ones that pay.

> (4) MORE WORK: (a) The PyThread API also defines semaphores, which may
> have a similar problem.  But I'm not aware of any use of these (I'm
> not quite sure why semaphore support was added), so I haven't patched
> these.

I'm almost certain we all agreed (spurred by David Ascher) to get rid of the
semaphore implementations a while back.

> (b) The thread_pth.h file define locks in the same way; there
> may be others too.  I haven't touched these.

(c) While the scheme protects mutexes from going nuts in the child, that
doesn't necessarily imply that the data mutexes *protect* won't go nuts.
For example, this *may* not be enough to prevent insanity in import.c:  if
another thread is doing imports at the time a fork() occurs,
import_lock_level could be left at an arbitrarily high value in import.c.
But the thread doing the import has gone away in the child, so can't restore
import_lock_level to a sane value there.  I'm not convinced that matters in
this specific case, just saying we've got some tedious headwork to review
all the cases.

> (5) TESTING: Charles Waldman posted this code to reproduce the
> problem.  Unfortunately I haven't had much success with it; it seems
> to hang even when I apply Charles' patch.

What about when you apply *your* patch?

>     import thread
>     import os, sys
>     import time
>
>     def doit(name):
> 	while 1:
> 	    if os.fork()==0:
> 		print name, 'forked', os.getpid()
> 		os._exit(0)
> 	    r = os.wait()
>
>     for x in range(50):
> 	name = 't%s'%x
> 	print 'starting', name
> 	thread.start_new_thread(doit, (name,))
>
>     time.sleep(300)
>
> Here's the patch:

> ...
> + static pthread_mutex_t locking_mutex = PTHREAD_MUTEX_INITIALIZER;

Anyone know whether this gimmick is supported by all pthreads
implementations?

> ...
> + 	/* XXX Is the following supported here? */
> + 	pthread_atfork(&prefork_callback, &parent_callback,
> &child_callback);

I expect we need some autoconf stuff for that, right?

Thanks for writing this up!  Even more thanks for thinking of it <wink>.




From MarkH@ActiveState.com  Fri Aug 25 06:55:42 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Fri, 25 Aug 2000 15:55:42 +1000
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <20000825074440.K7566@xs4all.nl>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEHJDGAA.MarkH@ActiveState.com>

> Actually, it was a combination of removing UNPACK_LIST and adding
> list comprehensions. I just checked in a fix for this. Can you 
> confirm that
> this fixes it for the windows build, too ?

It does - thank you!

Mark.



From tim_one@email.msn.com  Fri Aug 25 09:08:23 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 04:08:23 -0400
Subject: [Python-Dev] RE: Passwords after CVS commands
In-Reply-To: <PGECLPOBGNBNKHNAGIJHAEAECEAA.andy@reportlab.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEOBHBAA.tim_one@email.msn.com>

The latest version of Andy Robinson's excellent instructions for setting up
a cmdline CVS using SSH under Windows are now available:

    http://python.sourceforge.net/winssh.txt

This is also linked to from the Python-at-SourceForge FAQ:

    http://python.sourceforge.net/sf-faq.html

where it replaces the former "let's try to pretend Windows is Unix(tm)"
mish-mash.  Riaan Booysen cracked the secret of how to get the Windows
ssh-keygen to actually generate keys (ha!  don't think I can't hear you
Unix(tm) weenies laughing <wink>), and that's the main change from the last
version of these instructions I posted here.  I added a lot of words to
Riaan's, admonishing you not to leave the passphrase empty, but so
unconvincingly I bet you won't heed my professional advice.

and-not-revealing-whether-i-did-ly y'rs  - tim




From thomas@xs4all.net  Fri Aug 25 12:16:20 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 13:16:20 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0203.txt,1.11,1.12
In-Reply-To: <200008251111.EAA13270@slayer.i.sourceforge.net>; from twouters@users.sourceforge.net on Fri, Aug 25, 2000 at 04:11:31AM -0700
References: <200008251111.EAA13270@slayer.i.sourceforge.net>
Message-ID: <20000825131620.B16377@xs4all.nl>

On Fri, Aug 25, 2000 at 04:11:31AM -0700, Thomas Wouters wrote:

> !     [XXX so I am accepting this, but I'm a bit worried about the
> !     argument coercion.  For x+=y, if x supports augmented assignment,
> !     y should only be cast to x's type, not the other way around!]

Oh, note that I chose not to do *any* coercion, if x supports the in-place
operation. I'm not sure how valuable coercion would be, here, at least not
in its current form. (Isn't coercion mostly used by integer types ? And
aren't they immutable ? If an in-place method wants to have its argument
coerced, it should do so itself, just like with direct method calls.)
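
(In today's Python this is exactly how __iadd__ behaves: the in-place method
receives the right-hand operand as-is and decides for itself what to do with
it. The Accum class here is a made-up illustration:)

```python
class Accum:
    """Mutable type whose in-place add accepts any argument uncoerced."""
    def __init__(self):
        self.items = []

    def __iadd__(self, other):
        # No automatic coercion: the method sees `other` unchanged
        # and handles it however it likes.
        self.items.append(other)
        return self

a = Accum()
a += 3       # int and str are passed through untouched
a += "x"
assert a.items == [3, "x"]
```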

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal@lemburg.com  Fri Aug 25 13:04:27 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 14:04:27 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
Message-ID: <39A660CB.7661E20E@lemburg.com>

I've asked this question before: when are we going to see
comp.lang.python.announce back online ?

I know that everyone is busy with getting the betas ready,
but looking at www.python.org I find that the "latest"
special announcement is dated 22-Mar-2000. People will get
the false idea that Python isn't moving anywhere... at least
not in the spirit of OSS' "release early and often".

Could someone please summarize what needs to be done to
post a message to comp.lang.python.announce without taking
the path via the official (currently defunct) moderator ?

I've had a look at the c.l.p.a postings and the only special
header they include is the "Approved: fleck@informatik.uni-bonn.de"
header.

If this is all it takes to post to a moderated newsgroup,
fixing Mailman to do the trick should be really simple.
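
(As a sketch of how little is involved, not Mailman's actual code: building
the outgoing post just means adding one extra header. The addresses and
subject here are hypothetical:)

```python
from email.message import EmailMessage

# A moderated-newsgroup post only needs an "Approved:" header added
# before it is handed to the news server.
msg = EmailMessage()
msg["From"] = "announce@example.org"            # hypothetical sender
msg["Newsgroups"] = "comp.lang.python.announce"
msg["Subject"] = "Example announcement"
msg["Approved"] = "fleck@informatik.uni-bonn.de"
msg.set_content("Announcement body goes here.")
```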

I'm willing to help here to get this done *before* the Python
2.0beta1 announcement.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Fri Aug 25 13:14:20 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 14:14:20 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: <39A660CB.7661E20E@lemburg.com>; from mal@lemburg.com on Fri, Aug 25, 2000 at 02:04:27PM +0200
References: <39A660CB.7661E20E@lemburg.com>
Message-ID: <20000825141420.C16377@xs4all.nl>

On Fri, Aug 25, 2000 at 02:04:27PM +0200, M.-A. Lemburg wrote:

> I've asked this question before: when are we going to see
> comp.lang.python.announce back online ?

Barry is working on this, by modifying Mailman to play moderator (via the
normal list-admin's post-approval mechanism.) As I'm sure he'll tell you
himself, when he wakes up ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From just@letterror.com  Fri Aug 25 14:25:02 2000
From: just@letterror.com (Just van Rossum)
Date: Fri, 25 Aug 2000 14:25:02 +0100
Subject: [Python-Dev] (214)
Message-ID: <l03102805b5cc22d9c375@[193.78.237.177]>

(Just to make sure you guys know; there's currently a thread in c.l.py
about the new 2.0 features. Not a *single* person stood up to defend PEP
214: no one seems to like it.)

Just




From mal@lemburg.com  Fri Aug 25 13:17:41 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 14:17:41 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com> <20000825141420.C16377@xs4all.nl>
Message-ID: <39A663E5.A1E85044@lemburg.com>

Thomas Wouters wrote:
> 
> On Fri, Aug 25, 2000 at 02:04:27PM +0200, M.-A. Lemburg wrote:
> 
> > I've asked this question before: when are we going to see
> > comp.lang.python.announce back online ?
> 
> Barry is working on this, by modifying Mailman to play moderator (via the
> normal list-admin's post-approval mechanism.) As I'm sure he'll tell you
> himself, when he wakes up ;)

This sounds like an awful lot of work... wouldn't a quick hack
as an intermediate solution suffice for the moment (it needn't
even go into any public Mailman release -- just the Mailman
installation at python.org which handles the announcement
list).

Ok, I'll wait for Barry to wake up ;-) ... <ringring>
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From guido@beopen.com  Fri Aug 25 14:30:40 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 08:30:40 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0203.txt,1.11,1.12
In-Reply-To: Your message of "Fri, 25 Aug 2000 13:16:20 +0200."
 <20000825131620.B16377@xs4all.nl>
References: <200008251111.EAA13270@slayer.i.sourceforge.net>
 <20000825131620.B16377@xs4all.nl>
Message-ID: <200008251330.IAA19481@cj20424-a.reston1.va.home.com>

> Oh, note that I chose not to do *any* coercion, if x supports the in-place
> operation. I'm not sure how valuable coercion would be, here, at least not
> in its current form. (Isn't coercion mostly used by integer types ? And
> aren't they immutable ? If an in-place method wants to have its argument
> coerced, it should do so itself, just like with direct method calls.)

All agreed!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Fri Aug 25 14:34:44 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 08:34:44 -0500
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: Your message of "Fri, 25 Aug 2000 14:04:27 +0200."
 <39A660CB.7661E20E@lemburg.com>
References: <39A660CB.7661E20E@lemburg.com>
Message-ID: <200008251334.IAA19600@cj20424-a.reston1.va.home.com>

> I've asked this question before: when are we going to see
> comp.lang.python.announce back online ?
> 
> I know that everyone is busy with getting the betas ready,
> but looking at www.python.org I find that the "latest"
> special announcement is dated 22-Mar-2000. People will get
> the false idea that Python isn't moving anywhere... at least
> not in the spirit of OSS' "release early and often".
> 
> Could someone please summarize what needs to be done to
> post a message to comp.lang.python.announce without taking
> the path via the official (currently defunct) moderator ?
> 
> I've had a look at the c.l.p.a postings and the only special
> header they include is the "Approved: fleck@informatik.uni-bonn.de"
> header.
> 
> If this is all it takes to post to a moderated newsgroup,
> fixing Mailman to do the trick should be really simple.
> 
> I'm willing to help here to get this done *before* the Python
> 2.0beta1 announcement.

Coincidence!  Barry just wrote the necessary hacks that allow a
Mailman list to be used to moderate a newsgroup, and installed them in
python.org.  He's testing the setup today and I expect that we'll be
able to solicit for moderators tonight!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Fri Aug 25 13:47:06 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 14:47:06 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com> <200008251334.IAA19600@cj20424-a.reston1.va.home.com>
Message-ID: <39A66ACA.F638215A@lemburg.com>

Guido van Rossum wrote:
> 
> > I've asked this question before: when are we going to see
> > comp.lang.python.announce back online ?
> >
> > I know that everyone is busy with getting the betas ready,
> > but looking at www.python.org I find that the "latest"
> > special announcement is dated 22-Mar-2000. People will get
> > the false idea that Python isn't moving anywhere... at least
> > not in the spirit of OSS' "release early and often".
> >
> > Could someone please summarize what needs to be done to
> > post a message to comp.lang.python.announce without taking
> > the path via the official (currently defunct) moderator ?
> >
> > I've had a look at the c.l.p.a postings and the only special
> > header they include is the "Approved: fleck@informatik.uni-bonn.de"
> > header.
> >
> > If this is all it takes to post to a moderated newsgroup,
> > fixing Mailman to do the trick should be really simple.
> >
> > I'm willing to help here to get this done *before* the Python
> > 2.0beta1 announcement.
> 
> Coincidence!  Barry just wrote the necessary hacks that allow a
> Mailman list to be used to moderate a newsgroup, and installed them in
> python.org.  He's testing the setup today and I expect that we'll be
> able to solicit for moderators tonight!

Way cool :-) Thanks.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jeremy@beopen.com  Fri Aug 25 14:17:17 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 09:17:17 -0400 (EDT)
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
Message-ID: <14758.29149.992343.502526@bitdiddle.concentric.net>

>>>>> "MH" == Mark Hammond <MarkH@ActiveState.com> writes:

  MH> Does the Linux community ever run with Py_DEBUG defined?  I
  MH> couldn't even find a simple way to turn it on to confirm it also
  MH> exists on Linux...

I build a separate version of Python using make OPT="-Wall -DPy_DEBUG"

On Linux, the sre test fails.  Do you see the same problem on Windows?

Jeremy


From tim_one@email.msn.com  Fri Aug 25 14:24:40 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 09:24:40 -0400
Subject: [Python-Dev] (214)
In-Reply-To: <l03102805b5cc22d9c375@[193.78.237.177]>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>

[Just van Rossum]
> (Just to make sure you guys know; there's currently a thread in c.l.py
> about the new 2.0 features. Not a *single* person stood up to defend
> PEP 214: no one seems to like it.)

But that's not true!  I defended it <wink>.  Alas (or "thank God!",
depending on how you look at it), I sent my "In praise of" post to the
mailing list and apparently the list->news gateway dropped it on the floor.

It most reminds me of the introduction of class.__private names.  Except I
don't think *anyone* was a fan of those besides your brother (I was neutral,
but we had a long & quite fun Devil's Advocate debate anyway), and the
opposition was far more strident than it's yet gotten on PEP 214.  I liked
__private names a lot after I used them, and, as I said in my unseen post,
having used the new print gimmick several times "for real" now I don't ever
want to go back.

The people most opposed seem to be those who worked hard to learn about
sys.__stdout__ and exactly why they need a try/finally block <0.9 wink>.
Some of the Python-Dev'ers have objected too, but much more quietly --
principled objections always get lost in the noise.

doubting-that-python's-future-hangs-in-the-balance-ly y'rs  - tim




From Moshe Zadka <moshez@math.huji.ac.il>  Fri Aug 25 14:48:26 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Fri, 25 Aug 2000 16:48:26 +0300 (IDT)
Subject: [Python-Dev] Tasks
Message-ID: <Pine.GSO.4.10.10008251642490.12206-100000@sundial>

This is a summary of problems I found with the task page:

Tasks which I was sure were complete
------------------------------------
17336 -- Add augmented assignments -- marked 80%. Thomas?
17346 -- Add poll() to selectmodule -- marked 50%. Andrew?

Duplicate tasks
---------------
17923 seems to be a duplicate of 17922

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From fdrake@beopen.com  Fri Aug 25 14:51:14 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 09:51:14 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: <200008251344.GAA16623@slayer.i.sourceforge.net>
References: <200008251344.GAA16623@slayer.i.sourceforge.net>
Message-ID: <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > + Accepted and in progress
...
 > +     * Support for opcode arguments > 2**16 - Charles Waldman
 > +       SF Patch 100893

  I checked this in 23 Aug.

 > +     * Range literals - Thomas Wouters
 > +       SF Patch 100902

  I thought this was done as well.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From thomas@xs4all.net  Fri Aug 25 14:53:34 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 15:53:34 +0200
Subject: [Python-Dev] Tasks
In-Reply-To: <Pine.GSO.4.10.10008251642490.12206-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 25, 2000 at 04:48:26PM +0300
References: <Pine.GSO.4.10.10008251642490.12206-100000@sundial>
Message-ID: <20000825155334.D16377@xs4all.nl>

On Fri, Aug 25, 2000 at 04:48:26PM +0300, Moshe Zadka wrote:

> Tasks which I was sure were complete
> ------------------------------------
> 17336 -- Add augmented assignments -- marked 80%. Thomas?

It isn't complete. It's missing documentation. I'm done with meetings today
(*yay!*) so I'm in the process of updating all that, as well as working on
it :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Fri Aug 25 14:57:53 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 15:57:53 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Fri, Aug 25, 2000 at 09:51:14AM -0400
References: <200008251344.GAA16623@slayer.i.sourceforge.net> <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>
Message-ID: <20000825155752.E16377@xs4all.nl>

On Fri, Aug 25, 2000 at 09:51:14AM -0400, Fred L. Drake, Jr. wrote:
>  > +     * Range literals - Thomas Wouters
>  > +       SF Patch 100902

>   I thought this was done as well.

No, it just hasn't been touched in a while :) I need to finish up the PEP
(move the Open Issues to "BDFL Pronouncements", and include said
pronouncements) and sync the patch with the CVS tree. Oh, and it needs to be
accepted, too ;) Tim claims he's going to review it.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From jeremy@beopen.com  Fri Aug 25 15:03:16 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 10:03:16 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>
References: <200008251344.GAA16623@slayer.i.sourceforge.net>
 <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>
Message-ID: <14758.31908.552647.739111@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake@beopen.com> writes:

  FLD> Jeremy Hylton writes:
  >> + Accepted and in progress
  FLD> ...
  >> + * Support for opcode arguments > 2**16 - Charles Waldman
  >> + SF Patch 100893

  FLD>   I checked this in 23 Aug.

Ok.

  >> + * Range literals - Thomas Wouters
  >> + SF Patch 100902

  FLD>   I thought this was done as well.

There's still an open patch for it.

Jeremy


From mal@lemburg.com  Fri Aug 25 15:06:57 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 16:06:57 +0200
Subject: [Python-Dev] (214)
References: <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>
Message-ID: <39A67D81.FD56F2C7@lemburg.com>

Tim Peters wrote:
> 
> [Just van Rossum]
> > (Just to make sure you guys know; there's currently a thread in c.l.py
> > about the new 2.0 features. Not a *single* person stood up to defend
> > PEP 214: no one seems to like it.)
> 
> But that's not true!  I defended it <wink>. 

Count me in on that one too... it's just great for adding a few
quick debugging prints into the program.

The only thing I find non-Pythonesque is that an operator
is used. I would have opted for something like:

	print on <stream> x,y,z

instead of

	print >> <stream> x,y,z

But I really don't mind since I don't use "print" in production
code for anything other than debugging anyway :-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jeremy@beopen.com  Fri Aug 25 15:26:15 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 10:26:15 -0400 (EDT)
Subject: [Python-Dev] compiling with SSL support on Windows
Message-ID: <14758.33287.507315.396536@bitdiddle.concentric.net>

https://sourceforge.net/bugs/?func=detailbug&bug_id=110683&group_id=5470

We have a bug report about compilation problems in the socketmodule on
Windows when using SSL support.  Is there any Windows user with
OpenSSL who can look into this problem?

Jeremy


From guido@beopen.com  Fri Aug 25 16:24:03 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 10:24:03 -0500
Subject: [Python-Dev] (214)
In-Reply-To: Your message of "Fri, 25 Aug 2000 09:24:40 -0400."
 <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>
Message-ID: <200008251524.KAA19935@cj20424-a.reston1.va.home.com>

I've just posted a long response to the whole thread in c.l.py, and
added the essence (a long new section titled "More Justification by
the BDFL") of it to the PEP.  See
http://python.sourceforge.net/peps/pep-0214.html

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
	


From guido@beopen.com  Fri Aug 25 16:32:57 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 10:32:57 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: Your message of "Fri, 25 Aug 2000 09:51:14 -0400."
 <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>
References: <200008251344.GAA16623@slayer.i.sourceforge.net>
 <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>
Message-ID: <200008251532.KAA20007@cj20424-a.reston1.va.home.com>

>  > +     * Range literals - Thomas Wouters
>  > +       SF Patch 100902
> 
>   I thought this was done as well.

No:

$ ./python
Python 2.0b1 (#79, Aug 25 2000, 08:31:47)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> [1:10]
  File "<stdin>", line 1
    [1:10]
      ^
SyntaxError: invalid syntax
>>> 
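(For reference, a sketch of what the proposed literal would be shorthand for once accepted; today it can only be written with the range() builtin, wrapped in list() here for portability:)

```python
# The proposed range literal [1:10] would be shorthand for the list
# that range(1, 10) builds.
assert list(range(1, 10)) == [1, 2, 3, 4, 5, 6, 7, 8, 9]
```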

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jack@oratrix.nl  Fri Aug 25 15:48:24 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Fri, 25 Aug 2000 16:48:24 +0200
Subject: [Python-Dev] sre and regexp behave badly under low-memory conditions
In-Reply-To: Message by Guido van Rossum <guido@beopen.com> ,
 Thu, 24 Aug 2000 23:11:54 -0500 , <200008250411.XAA08797@cj20424-a.reston1.va.home.com>
Message-ID: <20000825144829.CB29FD71F9@oratrix.oratrix.nl>

Recently, Guido van Rossum <guido@beopen.com> said:
> > test_re wasn't so bad, the only problem was that it crashed with a
> > "NULL return without an exception". test_regexp was worse, it crashed
> > my machine.
> 
> That's regex, right?  regexp was the *really* old regular expression
> module we once had.
> 
> Anyway, I don't care about regex, it's old.
> 
> The sre code needs to be robustified, but it's not a high priority for
> me.

Ok, fine with me.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 


From mal@lemburg.com  Fri Aug 25 16:05:38 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 17:05:38 +0200
Subject: [Python-Dev] [PEP 224] Attribute Docstrings
Message-ID: <39A68B42.4E3F8A3D@lemburg.com>

An HTML version of the attached can be viewed at

    http://python.sourceforge.net/peps/pep-0224.html

Even though the implementation won't go into Python 2.0, it
is worthwhile discussing this now, since adding these attribute
docstrings to existing code already works: Python simply ignores
them. What remains is figuring out a way to make use of them and
this is what the proposal is all about...

--

PEP: 224
Title: Attribute Docstrings
Version: $Revision: 1.2 $
Author: mal@lemburg.com (Marc-Andre Lemburg)
Status: Draft
Type: Standards Track
Python-Version: 2.1
Created: 23-Aug-2000
Post-History:


Introduction

    This PEP describes the "attribute docstring" proposal for Python
    2.0.  This PEP tracks the status and ownership of this feature.
    It contains a description of the feature and outlines changes
    necessary to support the feature.  The CVS revision history of
    this file contains the definitive historical record.


Rationale

    This PEP proposes a small addition to the way Python currently
    handles docstrings embedded in Python code.

    Python currently only handles the case of docstrings which appear
    directly after a class definition, a function definition or as
    first string literal in a module.  The string literals are added
    to the objects in question under the __doc__ attribute and are
    from then on available for introspection tools which can extract
    the contained information for help, debugging and documentation
    purposes.

    Docstrings appearing in locations other than the ones mentioned
    are simply ignored and don't result in any code generation.

    Here is an example:

        class C:
            "class C doc-string"

            a = 1
            "attribute C.a doc-string (1)"

            b = 2
            "attribute C.b doc-string (2)"

    The docstrings (1) and (2) are currently being ignored by the
    Python byte code compiler, but could obviously be put to good use
    for documenting the named assignments that precede them.
    
    This PEP proposes to put these cases to use as well, by defining
    semantics for adding their content to the objects in which they
    appear, under new generated attribute names.

    The original idea behind this approach which also inspired the
    above example was to enable inline documentation of class
    attributes, which can currently only be documented in the class's
    docstring or using comments which are not available for
    introspection.


Implementation

    Docstrings are handled by the byte code compiler as expressions.
    The current implementation special cases the few locations
    mentioned above to make use of these expressions, but otherwise
    ignores the strings completely.

    To enable use of these docstrings for documenting named
    assignments (which is the natural way of defining e.g. class
    attributes), the compiler will have to keep track of the last
    assigned name and then use this name to assign the content of the
    docstring to an attribute of the containing object by means of
    storing it as a constant which is then added to the object's
    namespace during object construction time.

    In order to preserve features like inheritance and hiding of
    Python's special attributes (ones with leading and trailing double
    underscores), a special name mangling has to be applied which
    uniquely identifies the docstring as belonging to the name
    assignment and allows finding the docstring later on by inspecting
    the namespace.

    The following name mangling scheme achieves all of the above:

        __doc__<attributename>__

    To keep track of the last assigned name, the byte code compiler
    stores this name in a variable of the compiling structure.  This
    variable defaults to NULL.  When it sees a docstring, it then
    checks the variable and uses the name as basis for the above name
    mangling to produce an implicit assignment of the docstring to the
    mangled name.  It then resets the variable to NULL to avoid
    duplicate assignments.

    If the variable does not point to a name (i.e. is NULL), no
    assignments are made.  These will continue to be ignored like
    before.  All classical docstrings fall under this case, so no
    duplicate assignments are done.

    In the above example this would result in the following new class
    attributes to be created:

        C.__doc__a__ == "attribute C.a doc-string (1)"
        C.__doc__b__ == "attribute C.b doc-string (2)"

    A patch to the current CVS version of Python 2.0 which implements
    the above is available on SourceForge at [1].
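    Under the mangling scheme above, looking a docstring back up is a
    one-liner.  A minimal sketch (the attr_doc helper name is
    hypothetical, and the mangled attribute is assigned by hand here,
    since no released compiler implements the proposal):

```python
class C:
    "class C doc-string"

    a = 1
    # What the proposed compiler would generate for the docstring
    # following "a = 1":
    __doc__a__ = "attribute C.a doc-string (1)"

def attr_doc(obj, name):
    # Hypothetical helper: fetch the mangled attribute docstring, if any.
    # Note the mangled name ends in two underscores, so it is exempt
    # from class-private name mangling.
    return getattr(obj, "__doc__%s__" % name, None)

assert attr_doc(C, "a") == "attribute C.a doc-string (1)"
assert attr_doc(C, "b") is None
```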


Caveats of the Implementation
    
    Since the implementation does not reset the compiling structure
    variable when processing a non-expression, e.g. a function
    definition, the last assigned name remains active until either the
    next assignment or the next occurrence of a docstring.

    This can lead to cases where the docstring and assignment may be
    separated by other expressions:

        class C:
            "C doc string"

            b = 2

            def x(self):
                "C.x doc string"
                y = 3
                return 1

            "b's doc string"

    Since the definition of method "x" currently does not reset the
    used assignment name variable, it is still valid when the compiler
    reaches the docstring "b's doc string" and thus assigns the string
    to __doc__b__.

    A possible solution to this problem would be resetting the name
    variable for all non-expression nodes.

    
Copyright

    This document has been placed in the Public Domain.


References

    [1] http://sourceforge.net/patch/?func=detailpatch&patch_id=101264&group_id=5470

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bwarsaw@beopen.com  Fri Aug 25 16:12:34 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 11:12:34 -0400 (EDT)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
Message-ID: <14758.36066.49304.190172@anthem.concentric.net>

>>>>> "M" == M  <mal@lemburg.com> writes:

    M> I've asked this question before: when are we going to see
    M> comp.lang.python.announce back online ?

    M> If this is all it takes to post to a moderated newsgroup,
    M> fixing Mailman to do the trick should be really simple.

    M> I'm willing to help here to get this done *before* the Python
    M> 2.0beta1 announcement.

MAL, you must be reading my mind!

I've actually been working on some unofficial patches to Mailman that
will let list admins moderate a moderated newsgroup.  The technical
details are described in a recent post to mailman-developers[1].

I'm testing it out right now.  I first installed this on starship, but
there's no nntp server that starship can post to, so I've since moved
the list to python.org.  However, I'm still having some problems with
the upstream feed, or at least I haven't seen approved messages
appearing on deja or my ISP's server.  I'm not exactly sure why; could
just be propagation delays.

Anyway, if anybody does see my test messages show up in the newsgroup
(not the gatewayed mailing list -- sorry David), please let me know.

-Barry

[1] http://www.python.org/pipermail/mailman-developers/2000-August/005388.html


From bwarsaw@beopen.com  Fri Aug 25 16:16:30 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 11:16:30 -0400 (EDT)
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
 <20000825141420.C16377@xs4all.nl>
 <39A663E5.A1E85044@lemburg.com>
Message-ID: <14758.36302.521877.833943@anthem.concentric.net>

>>>>> "M" == M  <mal@lemburg.com> writes:

    M> This sounds like an awful lot of work... wouldn't a quick hack
    M> as an intermediate solution suffice for the moment (it needn't
    M> even go into any public Mailman release -- just the Mailman
    M> installation at python.org which handles the announcement
    M> list).

Naw, it's actually the least amount of work, since all the mechanism
is already there.  You just need to add a flag and another hold
criteria.  It's unofficial because I'm in feature freeze.

    M> Ok, I'll wait for Barry to wake up ;-) ... <ringring>

Who says I'm awake?  Don't you know I'm a very effective sleep hacker?
I'm also an effective sleep gardener and sometimes the urge to snore
and plant takes over.  You should see my cucumbers!

the-only-time-in-the-last-year-i've-been-truly-awake-was-when-i
jammed-with-eric-at-ipc8-ly y'rs,
-Barry


From bwarsaw@beopen.com  Fri Aug 25 16:21:43 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 11:21:43 -0400 (EDT)
Subject: [Python-Dev] (214)
References: <l03102805b5cc22d9c375@[193.78.237.177]>
 <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>
Message-ID: <14758.36615.589212.75065@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one@email.msn.com> writes:

    TP> But that's not true!  I defended it <wink>.  Alas (or "thank
    TP> God!", depending on how you look at it), I sent my "In praise
    TP> of" post to the mailing list and apparently the list->news
    TP> gateway dropped it on the floor.

Can other people confirm that list->news is broken?  If so, then that
would explain my c.l.py.a moderation problems.  I know that my
approved test message showed up on CNRI's internal news server because
at least one list member of the c.l.py.a gateway got it, but I haven't
seen it upstream of CNRI.  I'll contact their admins and let them know
the upstream feed could be broken.

-Barry


From tim_one@email.msn.com  Fri Aug 25 16:34:47 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 11:34:47 -0400
Subject: [Python-Dev] (214)
In-Reply-To: <14758.36615.589212.75065@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEPFHBAA.tim_one@email.msn.com>

[Barry]
> Can other people confirm that list->news is broken?

I don't believe that it is (e.g., several of my c.l.py list mailings today
have already shown up on my ISP's news server).

The post in question was mailed

    Thu 8/24/00 3:15 AM (EDT)

Aahz (a fellow mailing-list devotee) noted on c.l.py that it had never shown
up on the newsgroup, and after poking around I couldn't find it anywhere
either.

> ...
> I'll contact their admins and let them know the upstream feed could
> be broken.

Well, you can *always* let them know that <wink>.




From thomas@xs4all.net  Fri Aug 25 16:36:50 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 17:36:50 +0200
Subject: [Python-Dev] (214)
In-Reply-To: <14758.36615.589212.75065@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 25, 2000 at 11:21:43AM -0400
References: <l03102805b5cc22d9c375@[193.78.237.177]> <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> <14758.36615.589212.75065@anthem.concentric.net>
Message-ID: <20000825173650.G16377@xs4all.nl>

On Fri, Aug 25, 2000 at 11:21:43AM -0400, Barry A. Warsaw wrote:

> Can other people confirm that list->news is broken? 

No, not really. I can confirm that not all messages make it to the
newsgroup: I can't find Tim's posting on PEP 214 anywhere on comp.lang.py.
(and our new super-newsserver definitely keeps the postings around long
enough, so I should be able to see it, and I did get it through
python-list!)

However, I *can* find some of my python-list submissions from earlier today,
so it hasn't completely gone to meet its maker, either.

I can also confirm that python-dev itself seems to be missing some messages.
I occasionally see messages quoted which I haven't seen myself, and I've
seen others complain that they haven't seen my messages, as quoted in other
mailings. Not more than a handful in the last week or two, though, and they
*could* be attributed to dementia.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal@lemburg.com  Fri Aug 25 16:39:06 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 17:39:06 +0200
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com> <14758.36066.49304.190172@anthem.concentric.net>
Message-ID: <39A6931A.5B396D26@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal@lemburg.com> writes:
> 
>     M> I've asked this question before: when are we going to see
>     M> comp.lang.python.announce back online ?
> 
>     M> If this is all it takes to post to a moderated newsgroup,
>     M> fixing Mailman to do the trick should be really simple.
> 
>     M> I'm willing to help here to get this done *before* the Python
>     M> 2.0beta1 announcement.
> 
> MAL, you must be reading my mind!
> 
> I've actually been working on some unofficial patches to Mailman that
> will let list admins moderate a moderated newsgroup.  The technical
> details are described in a recent post to mailman-developers[1].

Cool... :-)
 
> I'm testing it out right now.  I first installed this on starship, but
> there's no nntp server that starship can post to, so I've since moved
> the list to python.org.  However, I'm still having some problems with
> the upstream feed, or at least I haven't seen approved messages
> appearing on deja or my ISP's server.  I'm not exactly sure why; could
> just be propagation delays.
> 
> Anyway, if anybody does see my test messages show up in the newsgroup
> (not the gatewayed mailing list -- sorry David), please let me know.

Nothing has appeared at my ISP yet. Looking at the mailing list
archives, the postings don't have the Approved: header (but
perhaps it's just the archive which doesn't include it).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From bwarsaw@beopen.com  Fri Aug 25 17:20:59 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 12:20:59 -0400 (EDT)
Subject: [Python-Dev] (214)
References: <l03102805b5cc22d9c375@[193.78.237.177]>
 <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>
 <14758.36615.589212.75065@anthem.concentric.net>
 <20000825173650.G16377@xs4all.nl>
Message-ID: <14758.40171.159233.521885@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    >> Can other people confirm that list->news is broken?

    TW> No, not really. I can confirm that not all messages make it to
    TW> the newsgroup: I can't find Tim's posting on PEP 214 anywhere
    TW> on comp.lang.py.  (and our new super-newsserver definitely
    TW> keeps the postings around long enough, so I should be able to
    TW> see it, and I did get it through python-list!)

    TW> However, I *can* find some of my python-list submissions from
    TW> earlier today, so it hasn't completely gone to meet its maker,
    TW> either.

    TW> I can also confirm that python-dev itself seems to be missing
    TW> some messages.  I occasionally see messages quoted which I
    TW> haven't seen myself, and I've seen others complain that they
    TW> haven't seen my messages, as quoted in other mailings. Not
    TW> more than a handful in the last week or two, though, and they
    TW> *could* be attributed to dementia.

I found Tim's message in the archives, so I'm curious whether those
missing python-dev messages are also in the archives?  If so, that's a
good indication that Mailman is working, so the problem is upstream
from there.  I'm also not seeing any errors in the log files that
would indicate a Mailman problem.

I have seen some weird behavior from Postfix on that machine:
occasionally messages to my python.org addr, which should be forwarded
to my beopen.com addr just don't get forwarded.  They get dropped in
my spool file.  I have no idea why, and the mail logs don't give a
clue.  I don't know if any of that is related, although I did just
upgrade Postfix to the latest revision.  And there are about 3k
messages sitting in Postfix's queue waiting to go out though.

Sigh.  I really don't want to spend the next week debugging this
stuff. ;/

-Barry


From bwarsaw@beopen.com  Fri Aug 25 17:22:05 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 12:22:05 -0400 (EDT)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
 <14758.36066.49304.190172@anthem.concentric.net>
 <39A6931A.5B396D26@lemburg.com>
Message-ID: <14758.40237.49311.811744@anthem.concentric.net>

>>>>> "M" == M  <mal@lemburg.com> writes:

    M> Nothing has appeared at my ISP yet. Looking at the mailing list
    M> archives, the postings don't have the Approved: header (but
    M> perhaps it's just the archive which doesn't include it).

Correct.  They're stripped out of the archives.  My re-homed nntpd
test worked all the way through, though, so one more test and we're
home free.

-Barry


From thomas@xs4all.net  Fri Aug 25 17:32:24 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 18:32:24 +0200
Subject: [Python-Dev] (214)
In-Reply-To: <14758.40171.159233.521885@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 25, 2000 at 12:20:59PM -0400
References: <l03102805b5cc22d9c375@[193.78.237.177]> <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> <14758.36615.589212.75065@anthem.concentric.net> <20000825173650.G16377@xs4all.nl> <14758.40171.159233.521885@anthem.concentric.net>
Message-ID: <20000825183224.N15110@xs4all.nl>

On Fri, Aug 25, 2000 at 12:20:59PM -0400, Barry A. Warsaw wrote:

> I found Tim's message in the archives, so I'm curious whether those
> missing python-dev messages are also in the archives?  If so, that's a
> good indication that Mailman is working, so the problem is upstream
> from there.  I'm also not seeing any errors in the log files that
> would indicate a Mailman problem.

Well, I saw one message from Guido, where he was replying to someone who was
replying to Mark. Guido claimed he hadn't seen that original message
(Mark's), though I am certain I did see it. The recollections on missing
messages on my part are much more vague, though, so it *still* could be
attributed to dementia (of people, MUA's or MTA's ;)

I'll keep a closer eye on it, though.

> I have seen some weird behavior from Postfix on that machine:
> occasionally messages to my python.org addr, which should be forwarded
> to my beopen.com addr just don't get forwarded.  They get dropped in
> my spool file.  I have no idea why, and the mail logs don't give a
> clue.  I don't know if any of that is related, although I did just
> upgrade Postfix to the latest revision.  And there are about 3k
> messages sitting in Postfix's queue waiting to go out though.

Sendmail, baby! <duck> We're currently running postfix on a single machine
(www.hal2001.org, which also does the Mailman for it) mostly because our
current Sendmail setup has one huge advantage: it works. And it works fine.
We just don't want to change the sendmail rules or fiddle with our
mailertable-setup, but it works! :-)

> Sigh.  I really don't want to spend the next week debugging this
> stuff. ;/

So don't. Do what any proper developer would do: proclaim there isn't enough
info (there isn't, unless you can find the thread I'm talking about, above.
I'll see if I can locate it for you, since I think I saved the entire thread
with 'must check this' in the back of my head) and don't fix it until it
happens again. I do not think this is Mailman related, though it might be
python.org-mailman related (as in, the postfix or the link on that machine,
or something.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@beopen.com  Fri Aug 25 18:39:41 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 12:39:41 -0500
Subject: [Python-Dev] (214)
In-Reply-To: Your message of "Fri, 25 Aug 2000 12:20:59 -0400."
 <14758.40171.159233.521885@anthem.concentric.net>
References: <l03102805b5cc22d9c375@[193.78.237.177]> <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> <14758.36615.589212.75065@anthem.concentric.net> <20000825173650.G16377@xs4all.nl>
 <14758.40171.159233.521885@anthem.concentric.net>
Message-ID: <200008251739.MAA20815@cj20424-a.reston1.va.home.com>

> Sigh.  I really don't want to spend the next week debugging this
> stuff. ;/

Please don't.  This happened to me before, and eventually everything
came through -- sometimes with days delay.  So it's just slowness.

There's a new machine waiting for us at VA Linux.  I'll ask Kahn again
to speed up the transition.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From DavidA@ActiveState.com  Fri Aug 25 17:50:47 2000
From: DavidA@ActiveState.com (David Ascher)
Date: Fri, 25 Aug 2000 09:50:47 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: <14758.36302.521877.833943@anthem.concentric.net>
Message-ID: <Pine.WNT.4.21.0008250949150.816-100000@loom>

> the-only-time-in-the-last-year-i've-been-truly-awake-was-when-i
> jammed-with-eric-at-ipc8-ly y'rs,

And that was really good!  You should do it more often!

Let's make sure we organize a jam session in advance for ipc9 -- that way
we can get more folks to bring instruments, berries, sugar, bread, butter,
etc.

i-don't-jam-i-listen-ly y'rs,

--david




From bwarsaw@beopen.com  Fri Aug 25 17:56:12 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 12:56:12 -0400 (EDT)
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <14758.36302.521877.833943@anthem.concentric.net>
 <Pine.WNT.4.21.0008250949150.816-100000@loom>
Message-ID: <14758.42284.829235.406950@anthem.concentric.net>

>>>>> "DA" == David Ascher <DavidA@ActiveState.com> writes:

    DA> And that was really good!  You should do it more often!

Thanks!

    DA> Let's make sure we organize a jam session in advance for ipc9
    DA> -- that way we can get more folks to bring instruments,
    DA> berries, sugar, bread, butter, etc.

    DA> i-don't-jam-i-listen-ly y'rs,

Okay, so who's gonna webcast IPC9? :)

-B


From bwarsaw@beopen.com  Fri Aug 25 18:05:22 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 13:05:22 -0400 (EDT)
Subject: [Python-Dev] The resurrection of comp.lang.python.announce
Message-ID: <14758.42834.289193.548978@anthem.concentric.net>

Well, after nearly 6 months of inactivity, I'm very happy to say that
comp.lang.python.announce is being revived.  It will now be moderated
by a team of volunteers (see below) using a Mailman mailing list.
Details about comp.lang.python.announce, and its mailing list gateway
python-announce-list@python.org can be found at

   http://www.python.org/psa/MailingLists.html#clpa

Posting guidelines can be found at

   ftp://rtfm.mit.edu/pub/usenet/comp.lang.python.announce/python-newsgroup-faq

This message also serves as a call for moderators.  I am looking for 5
experienced Python folks who would like to team-moderate the
newsgroup.  It is a big plus if you've moderated newsgroups before.

If you are interested in volunteering, please email me directly.  Once
I've chosen the current crop of moderators, I'll give you instructions
on how to do it.  Don't worry if you don't get chosen this time
around; I'm sure we'll have some rotation in the moderators' ranks as
time goes on.

Cheers,
-Barry


From guido@beopen.com  Fri Aug 25 19:12:28 2000
From: guido@beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 13:12:28 -0500
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: Your message of "Fri, 25 Aug 2000 09:50:47 MST."
 <Pine.WNT.4.21.0008250949150.816-100000@loom>
References: <Pine.WNT.4.21.0008250949150.816-100000@loom>
Message-ID: <200008251812.NAA21141@cj20424-a.reston1.va.home.com>

> And that was really good!  You should do it more often!

Agreed!

> Let's make sure we organize a jam session in advance for ipc9 -- that way
> we can get more folks to bring instruments, berries, sugar, bread, butter,
> etc.

This sounds much more fun (and more Pythonic) than a geeks-with-guns
event! :-)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jeremy@beopen.com  Fri Aug 25 18:25:13 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 13:25:13 -0400 (EDT)
Subject: [Python-Dev] warning in initpyexpat
Message-ID: <14758.44025.333241.758233@bitdiddle.concentric.net>

gcc -Wall is complaining about possible use of errors_module without
initialization in the initpyexpat function.  Here's the offending code:

    sys_modules = PySys_GetObject("modules");
    {
        PyObject *errmod_name = PyString_FromString("pyexpat.errors");

        if (errmod_name != NULL) {
            errors_module = PyDict_GetItem(d, errmod_name);
            if (errors_module == NULL) {
                errors_module = PyModule_New("pyexpat.errors");
                if (errors_module != NULL) {
                    PyDict_SetItemString(d, "errors", errors_module);
                    PyDict_SetItem(sys_modules, errmod_name, errors_module);
                }
            }
            Py_DECREF(errmod_name);
            if (errors_module == NULL)
                /* Don't core dump later! */
                return;
        }
    }
    errors_dict = PyModule_GetDict(errors_module);

It is indeed the case that errors_module can be used without
initialization.  If PyString_FromString("pyexpat.errors") fails, you
ignore the error and will immediately call PyModule_GetDict with an
uninitialized variable.

You ought to check for the error condition and bail cleanly, rather
than ignoring it and failing somewhere else.
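The fix Jeremy asks for amounts to a get-or-create with an early bail-out. A rough Python-level sketch of that intended logic (the function and the "demo_parent.errors" name are illustrative stand-ins, not the actual C implementation):

```python
import sys
import types

def get_or_create_errors_module(parent_dict):
    # Stand-in for "pyexpat.errors"; a hypothetical name is used so the
    # sketch doesn't collide with the real stdlib module.
    name = "demo_parent.errors"
    errors_module = sys.modules.get(name)
    if errors_module is None:
        errors_module = types.ModuleType(name)
        # Register in both the parent's namespace and sys.modules,
        # mirroring the PyDict_SetItemString / PyDict_SetItem calls.
        parent_dict["errors"] = errors_module
        sys.modules[name] = errors_module
    # On any failure the C code must return immediately here, instead of
    # falling through to PyModule_GetDict with the variable unset.
    return errors_module

d = {}
mod = get_or_create_errors_module(d)
```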

I also wonder why the code that does this check is in its own set of
curly braces; thus, the post to python-dev to discuss the style issue.
Why did you do this?  Is it approved Python style?  It looks cluttered
to me.

Jeremy



From fdrake@beopen.com  Fri Aug 25 18:36:53 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 13:36:53 -0400 (EDT)
Subject: [Python-Dev] Re: warning in initpyexpat
In-Reply-To: <14758.44025.333241.758233@bitdiddle.concentric.net>
References: <14758.44025.333241.758233@bitdiddle.concentric.net>
Message-ID: <14758.44725.345785.430141@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > It is indeed the case that errors_module can be used without
 > initialization.  If PyString_FromString("pyexpat.errors") fails, you
 > ignore the error and will immediately call PyModule_GetDict with an
 > uninitialized variable.

  I'll fix that.

 > I also wonder why the code that does this check is in its own set of
 > curly braces; thus, the post to python-dev to discuss the style issue.
 > Why did you do this?  Is it approved Python style?  It looks cluttered
 > to me.

  I don't like it either.  ;)  I just wanted a temporary variable, but
I can declare that at the top of initpyexpat().  This will be
corrected as well.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From gward@mems-exchange.org  Fri Aug 25 19:16:24 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Fri, 25 Aug 2000 14:16:24 -0400
Subject: [Python-Dev] If you thought there were too many PEPs...
Message-ID: <20000825141623.G17277@ludwig.cnri.reston.va.us>

...yow: the Perl community is really going overboard in proposing
enhancements:

[from the Perl "daily" news]
>   [3] Perl 6 RFCs Top 150 Mark; New Perl 6 Lists Added [Links]
> 
>         The number of [4]Perl 6 RFCs hit 161 today. The 100th RFC was
>         [5]Embed full URI support into Perl by Nathan Wiger, allowing
>         URIs like "file:///local/etc/script.conf" to be passed to builtin
>         file functions and operators. The 150th was [6]Extend regex
>         syntax to provide for return of a hash of matched subpatterns by
>         Kevin Walker, and the latest, 161, is [7]OO Integration/Migration
>         Path by Matt Youell.
> 
>         New [8]Perl 6 mailing lists include perl6-language- sublists
>         objects, datetime, errors, data, and regex. perl6-bootstrap is
>         being closed, and perl6-meta is taking its place (the subscriber
>         list will not be transferred).
[...]
>    3. http://www.news.perl.org/perl-news.cgi?item=967225716%7C10542
>    4. http://dev.perl.org/rfc/
>    5. http://dev.perl.org/rfc/100.pod
>    6. http://dev.perl.org/rfc/150.pod
>    7. http://dev.perl.org/rfc/161.pod
>    8. http://dev.perl.org/lists.shtml

-- 
Greg Ward - software developer                gward@mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367


From gvwilson@nevex.com  Fri Aug 25 19:30:53 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Fri, 25 Aug 2000 14:30:53 -0400 (EDT)
Subject: [Python-Dev] Re: If you thought there were too many PEPs...
In-Reply-To: <20000825141623.G17277@ludwig.cnri.reston.va.us>
Message-ID: <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>

> On Fri, 25 Aug 2000, Greg Ward wrote:
> >         The number of [4]Perl 6 RFCs hit 161 today...
> >         New [8]Perl 6 mailing lists include perl6-language- sublists
> >         objects, datetime, errors, data, and regex. perl6-bootstrap is
> >         being closed, and perl6-meta is taking its place (the subscriber
> >         list will not be transferred).

I've heard from several different sources that when Guy Steele Jr was
hired by Sun to help define the Java language standard, his first proposal
was that the length of the standard be fixed --- anyone who wanted to add
a new feature had to identify an existing feature that would be removed
from the language to make room.  Everyone said, "That's so cool --- but of
course we can't do it..."

Think how much simpler Java would be today if...

;-)

Greg



From effbot@telia.com  Fri Aug 25 20:11:16 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 25 Aug 2000 21:11:16 +0200
Subject: [Python-Dev] Re: If you thought there were too many PEPs...
References: <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>
Message-ID: <01b701c00ec8$3f47ebe0$f2a6b5d4@hagrid>

greg wrote:
> I've heard from several different sources that when Guy Steele Jr was
> hired by Sun to help define the Java language standard, his first proposal
> was that the length of the standard be fixed.

    "C. A. R. Hoare has suggested that as a rule of
    thumb a language is too complicated if it can't
    be described precisely and readably in fifty
    pages. The Modula-3 committee elevated this to a
    design principle: we gave ourselves a
    "complexity budget" of fifty pages, and chose
    the most useful features that we could
    accommodate within this budget. In the end, we
    were over budget by six lines plus the syntax
    equations. This policy is a bit arbitrary, but
    there are so many good ideas in programming
    language design that some kind of arbitrary
    budget seems necessary to keep a language from
    getting too complicated."

    from "Modula-3: Language definition"
    http://research.compaq.com/SRC/m3defn/html/complete.html

</F>



From akuchlin@mems-exchange.org  Fri Aug 25 20:05:10 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 25 Aug 2000 15:05:10 -0400
Subject: [Python-Dev] Re: If you thought there were too many PEPs...
In-Reply-To: <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>; from gvwilson@nevex.com on Fri, Aug 25, 2000 at 02:30:53PM -0400
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>
Message-ID: <20000825150510.A22028@kronos.cnri.reston.va.us>

On Fri, Aug 25, 2000 at 02:30:53PM -0400, Greg Wilson wrote:
>was that the length of the standard be fixed --- anyone who wanted to add
>a new feature had to identify an existing feature that would be removed
>from the language to make room.  Everyone said, "That's so cool --- but of

Something similar was done with Modula-3, as GvR is probably well
aware; one of the goals was to keep the language spec less than 50
pages.  In the end I think it winds up being a bit larger, but it was
good discipline anyway.

--amk


From jeremy@beopen.com  Fri Aug 25 21:44:44 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 16:44:44 -0400 (EDT)
Subject: [Python-Dev] Python 1.6 bug fix strategy
Message-ID: <14758.55996.11900.114220@bitdiddle.concentric.net>

We have gotten several bug reports recently based on 1.6b1.  What
plans, if any, are there to fix these bugs before the 1.6 final
release?  We clearly need to fix them for 2.0b1, but I don't know
about 1.6 final.

Among the bugs are 111403 and 11860, which cause core dumps.  The
former is an obvious bug and has a fairly clear fix.

Jeremy

PS Will 1.6 final be released before 2.0b1?




From tim_one@email.msn.com  Sat Aug 26 00:16:00 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 19:16:00 -0400
Subject: [Python-Dev] Python 1.6 bug fix strategy
In-Reply-To: <14758.55996.11900.114220@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEBBHCAA.tim_one@email.msn.com>

[Jeremy Hylton]
> We have gotten several bug reports recently based on 1.6b1.  What
> plans, if any, are there to fix these bugs before the 1.6 final
> release?

My understanding is that 1.6final is done, except for plugging in a license;
i.e., too late even for bugfixes.  If true, "Fixed in 2.0" will soon be a
popular response to all sorts of things -- unless CNRI intends to do its own
work on 1.6.




From MarkH@ActiveState.com  Sat Aug 26 00:57:48 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Sat, 26 Aug 2000 09:57:48 +1000
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <14758.29149.992343.502526@bitdiddle.concentric.net>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEKADGAA.MarkH@ActiveState.com>

[Jeremy]

> On Linux, the sre test fails.  Do you see the same problem on Windows?

Not with either debug or release builds.

Mark.



From skip@mojam.com (Skip Montanaro)  Sat Aug 26 01:08:52 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Fri, 25 Aug 2000 19:08:52 -0500 (CDT)
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEKADGAA.MarkH@ActiveState.com>
References: <14758.29149.992343.502526@bitdiddle.concentric.net>
 <ECEPKNMJLHAPFFJHDOJBEEKADGAA.MarkH@ActiveState.com>
Message-ID: <14759.2708.62485.72631@beluga.mojam.com>

    Mark> [Jeremy]
    >> On Linux, the sre test fails.  Do you see the same problem on Windows?

    Mark> Not with either debug or release builds.

Nor I on Mandrake Linux.

Skip



From cgw@fnal.gov  Sat Aug 26 01:34:23 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Fri, 25 Aug 2000 19:34:23 -0500 (CDT)
Subject: [Python-Dev] Compilation failure, current CVS
Message-ID: <14759.4239.276417.473973@buffalo.fnal.gov>

Just a heads-up - I suspect this is a trivial problem, but I don't
have time to investigate right now ("real life").

Linux buffalo.fnal.gov 2.2.16 #31 SMP
gcc version 2.95.2 19991024 (release)

After cvs update and make distclean, I get this error:

make[1]: Entering directory `/usr/local/src/Python-CVS/python/dist/src/Python'
gcc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c errors.c -o errors.o
errors.c:368: arguments given to macro `PyErr_BadInternalCall'
make[1]: *** [errors.o] Error 1



From cgw@fnal.gov  Sat Aug 26 02:23:08 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Fri, 25 Aug 2000 20:23:08 -0500 (CDT)
Subject: [Python-Dev] CVS weirdness (was:  Compilation failure, current CVS)
In-Reply-To: <14759.4239.276417.473973@buffalo.fnal.gov>
References: <14759.4239.276417.473973@buffalo.fnal.gov>
Message-ID: <14759.7164.55022.134730@buffalo.fnal.gov>

I blurted out:

 > After cvs update and make distclean, I get this error:
 > 
 > make[1]: Entering directory `/usr/local/src/Python-CVS/python/dist/src/Python'
 > gcc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c errors.c -o errors.o
 > errors.c:368: arguments given to macro `PyErr_BadInternalCall'
 > make[1]: *** [errors.o] Error 1

There is (no surprise) no problem with Python; but there *is* some
problem with me or my setup or some tool I use or the CVS server.  cvs
update -dAP fixed my problems.  This is the second time I've gotten
these sticky CVS date tags which I never meant to set.

Sorry-for-the-false-alarm-ly yr's,
			     -C



From tim_one@email.msn.com  Sat Aug 26 03:12:11 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 22:12:11 -0400
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
Message-ID: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>

Somebody recently added DL_IMPORT macros to two module init functions that
already used their names in DL_EXPORT macros (pyexpat.c and parsermodule.c).
On Windows, that yields the result I (naively?) expected:  compiler warnings
about inconsistent linkage declarations.

This is your basic Undocumented X-Platform Macro Hell, and I suppose the
Windows build should be #define'ing USE_DL_EXPORT for these subprojects
anyway (?), but if I don't hear a good reason for *why* both macros are used
on the same name in the same file, I'll be irresistibly tempted to just
delete the new DL_IMPORT lines.  That is, why would we *ever* use DL_IMPORT
on the name of a module init function?  They only exist to be exported.

baffled-in-reston-ly y'rs  - tim




From fdrake@beopen.com  Sat Aug 26 03:49:30 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 22:49:30 -0400 (EDT)
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>
Message-ID: <14759.12346.778540.252012@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Somebody recently added DL_IMPORT macros to two module init functions that
 > already used their names in DL_EXPORT macros (pyexpat.c and parsermodule.c).

  That was me.

 > On Windows, that yields the result I (naively?) expected:  compiler warnings
 > about inconsistent linkage declarations.

  Ouch.

 > This is your basic Undocumented X-Platform Macro Hell, and I suppose the
 > Windows build should be #define'ing USE_DL_EXPORT for these subprojects
 > anyway (?), but if I don't hear a good reason for *why* both macros are used
 > on the same name in the same file, I'll be irresistibly tempted to just
 > delete the new DL_IMPORT lines.  That is, why would we *ever* use DL_IMPORT
 > on the name of a module init function?  They only exist to be exported.

  Here's how I arrived at it, but apparently this doesn't make sense,
because Windows has too many linkage options.  ;)
  Compiling with gcc using the -Wmissing-prototypes option causes a
warning to be printed if there isn't a prototype at all:

cj42289-a(.../linux-beowolf/Modules); gcc -fpic  -g -ansi -Wall -Wmissing-prototypes  -O2 -I../../Include -I.. -DHAVE_CONFIG_H -c ../../Modules/parsermodule.c
../../Modules/parsermodule.c:2852: warning: no previous prototype for `initparser'

  I used the DL_IMPORT since that's how all the prototypes in the
Python headers are set up.  I can either change these to "normal"
prototypes (no DL_xxPORT macros), DL_EXPORT prototypes, or remove the
prototypes completely, and we'll just have to ignore the warning.
  If you can write a few sentences explaining each of these macros and
when they should be used, I'll make sure they land in the
documentation.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From MarkH@ActiveState.com  Sat Aug 26 05:06:40 2000
From: MarkH@ActiveState.com (Mark Hammond)
Date: Sat, 26 Aug 2000 14:06:40 +1000
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBOEKHDGAA.MarkH@ActiveState.com>

> This is your basic Undocumented X-Platform Macro Hell, and I suppose the
> Windows build should be #define'ing USE_DL_EXPORT for these subprojects
> anyway (?), but if I don't hear a good reason for *why* both
> macros are used

This is a mess that should be cleaned up.

I take some blame for DL_IMPORT :-(  Originally (and still, as far as I can
tell), DL_IMPORT really means "Python symbol visible outside the core" -
ie, any symbol a dynamic module or embedded application may ever need
(documented, or not :-)

The "import" part of DL_IMPORT is supposed to be from the _clients_ POV.
These apps/extensions are importing these definitions.

This is clearly a poor choice of names, IMO, as the macro USE_DL_EXPORT
changes the meaning from import to export, which is clearly confusing.


DL_EXPORT, on the other hand, seems to have grown while I wasn't looking :-)
As far as I can tell:
* It is used in ways where the implication is clearly "export this symbol
always".
* It is used for extension modules, whether they are builtin or not (eg,
"array" etc. use it).
* It behaves differently under Windows than under BeOS, at least.  BeOS
unconditionally defines it as an exported symbol.  Windows only defines it
when building the core.  Extension modules attempting to use this macro to
export them do not work - eg, "winsound.c" uses DL_EXPORT, but is still
forced to add "export:initwinsound" to the linker to get the symbol public.

The ironic thing is, that in Windows at least, DL_EXPORT is working the
exact opposite of how we want it - when it is used for functions built into
the core (eg, builtin modules), these symbols do _not_ need to be
exported, but where it is used on extension modules, it fails to make them
public.

So, as you guessed, we have the situation that we have 2 macros that given
their names, are completely misleading :-(

I think that we should make the following change (carefully, of course :-)

* DL_IMPORT -> PYTHON_API
* DL_EXPORT -> PYTHON_MODULE_INIT.

Obviously, the names are up for grabs, but we should change the macros to
what they really _mean_, and getting the correct behaviour shouldn't be a
problem.  I don't see any real cross-platform issues, as long as the macro
reflects what it actually means!

Shall I check in the large number of files affected now?

Over-the-release-manager's-dead-body<wink> ly,

Mark.



From fdrake@beopen.com  Sat Aug 26 06:40:01 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Sat, 26 Aug 2000 01:40:01 -0400 (EDT)
Subject: [Python-Dev] New dictionaries patch on SF
Message-ID: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>

  I've been playing with dictionaries lately trying to stamp out a
bug:

http://sourceforge.net/bugs/?func=detailbug&bug_id=112558&group_id=5470

  It looks like any fix that really works risks a fair bit of
performance, and that's not good.  My best-effort fix so far is on
SourceForge:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470

but doesn't quite work, according to Guido (I've not yet received
instructions from him about how to reproduce the observed failure).
  None the less, performance is an issue for dictionaries, so I came
up with the idea to use a specialized version for string keys.  When I
saw how few of the dictionaries created by the regression test ever
had anything else, I tried to simply make all dictionaries the
specialized variety (they can degrade themselves as needed).  What I
found was that just over 2% of the dictionaries created by running the
regression test ever held any non-string keys; this may be very
different for "real" programs, but I'm curious about how different.
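The specialize-then-degrade idea can be sketched as a toy class in present-day Python (this only illustrates the strategy, not the actual C patch):

```python
class StringOptimizedDict:
    """Toy model: assume string keys and take a string-specialized fast
    path until the first non-string key forces degradation to the
    general lookup."""

    def __init__(self):
        self._data = {}
        self._strings_only = True  # degrade flag

    def __setitem__(self, key, value):
        if self._strings_only and not isinstance(key, str):
            self._strings_only = False  # degrade: general path from now on
        self._data[key] = value

    def __getitem__(self, key):
        # A real implementation would branch to a cheaper string-only
        # hash/compare while self._strings_only holds; here both paths
        # happen to be the same.
        return self._data[key]

d = StringOptimizedDict()
d["name"] = "value"     # still specialized
d[42] = "degrades it"   # first non-string key: degrade
```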
  I've also done *no* performance testing on my patch for this yet,
and don't expect it to be a big boost without something like the bug
fix I mentioned above, but I could be wrong.  If anyone would like to
play with the idea, I've posted my current patch at:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101309&group_id=5470

  Enjoy!  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From fleck@triton.informatik.uni-bonn.de  Sat Aug 26 09:14:11 2000
From: fleck@triton.informatik.uni-bonn.de (Markus Fleck)
Date: Sat, 26 Aug 2000 10:14:11 +0200 (MET DST)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
In-Reply-To: <39A660CB.7661E20E@lemburg.com> from "M.-A. Lemburg" at Aug 25, 2000 02:04:27 PM
Message-ID: <200008260814.KAA06267@hera.informatik.uni-bonn.de>

M.-A. Lemburg:
> Could someone please summarize what needs to be done to
> post a message to comp.lang.python.announce without taking
> the path via the official (currently defunct) moderator ?

I'm not really defunct, I'm just not posting any announcements
because I'm not receiving them any more. ;-)))

> I've had a look at the c.l.p.a postings and the only special
> header they include is the "Approved: fleck@informatik.uni-bonn.de"
> header.

Basically, that's all it takes to post to a "moderated" newsgroup.
(Talking about a case of "security by obscurity" here... :-/)
Actually, the string following the "Approved: " may even be random...
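In other words, a minimal "moderated" post is just an ordinary article with one extra header. A sketch (the addresses and subject are placeholders; `nntplib.NNTP(...).post()` would actually transmit it to a news server):

```python
# Build a Usenet article for a moderated group.  The only thing that
# makes it "approved" is the presence of the Approved: header -- the
# server does not verify the string that follows it.
article = "\r\n".join([
    "From: poster@example.org",
    "Newsgroups: comp.lang.python.announce",
    "Subject: ANN: example package 1.0",
    "Approved: anything-at-all@example.org",  # contents are not checked
    "",
    "Body of the announcement.",
])
```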

In principle, I do have the time again to do daily moderation of incoming
postings for c.l.py.a. Unfortunately, I currently lack the infrastructure
(i.e. the moderation program), which went down together with the old
starship. I was basically waiting for a version of Mailman that could be
used to post to moderated newsgroups. (I should probably have been more
vocal about that, or even should have started hacking Mailman myself... I
*did* start to write something that would grab new announcements daily from
Parnassus and post them to c.l.py.a, and I may even come to finish this in
September, but that doesn't substitute for a "real" moderation tool for
user-supplied postings. Also, it would probably be a lot easier for
Parnassus postings to be built directly from the Parnassus database, instead
of from its [generated] HTML pages - the Parnassus author intended to supply
such functionality, but I haven't heard from him yet, either.)

So what's needed now? Primarily, a Mailman installation that can post to
moderated newsgroups (and maybe also do the mail2list gatewaying for
c.l.py.a), and a mail alias that forwards mail for
python-announce@python.org to that Mailman address. Some "daily digest"
generator for Parnassus announcements would be nice to have, too, but
that can only come once the other two things work.

Anyway, thanks for bringing this up again - it puts c.l.py.a at the
top of my to-do list again (where it should be, of course ;-).

Yours,
Markus.


From tim_one@email.msn.com  Sat Aug 26 09:14:48 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 04:14:48 -0400
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
In-Reply-To: <14759.12346.778540.252012@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOECBHCAA.tim_one@email.msn.com>

[Tim, gripes about someone putting module init function names in
 both DL_IMPORT and DL_EXPORT macros]

[Fred Drake]
> That was me.

My IRC chat buddy Fred?  Well, can't get mad at *you*!

>> On Windows, that yields the result I (naively?) expected:
>> compiler warnings about inconsistent linkage declarations.

> Ouch.

Despite that-- as MarkH said later --these macros are as damnably confusing
as original sin, that one says "IMPORT" and the other "EXPORT" *may* have
been cause to guess they might not play well together when applied to a
single name.

> ...
>   Compiling with gcc using the -Wmissing-prototypes option causes a
> warning to be printed if there isn't a prototype at all:

Understood, and your goal is laudable.  I have a question, though:  *all*
module init functions use DL_EXPORT today, and just a few days ago *none* of
them used DL_IMPORT inside the file too.  So how come gcc only warned about
two modules?  Or does it actually warn about all of them, and you snuck this
change into pyexpat and parsermodule while primarily doing other things to
them?

> I can either change these to "normal" prototypes (no DL_xxPORT macros),
> DL_EXPORT prototypes,

I already checked that one in.

> or remove the prototypes completely, and we'll just have to ignore
> the warning.

No way.  "No warnings" is non-negotiable with me -- but since I no longer
get any warnings, I can pretend not to know that you get them under gcc
<wink>.

>   If you can write a few sentences explaining each of these macros and
> when they should be used, I'll make sure they land in the
> documentation.  ;)

I can't -- that's why I posted for help.  The design is currently
incomprehensible; e.g., from the PC config.h:

#ifdef USE_DL_IMPORT
#define DL_IMPORT(RTYPE) __declspec(dllimport) RTYPE
#endif
#ifdef USE_DL_EXPORT
#define DL_IMPORT(RTYPE) __declspec(dllexport) RTYPE
#define DL_EXPORT(RTYPE) __declspec(dllexport) RTYPE
#endif

So if you say "use import", the import macro does set up an import, but the
export macro is left undefined (turns out it's later set to an identity
expansion in Python.h, in that case).  But if you say "use export", both
import(!) and export macros are set up to do an export.  It's apparently
illegal to say "use both", but that has to be deduced from the compiler
error that *would* result from redefining the import macro in an
incompatible way.  And if you say neither, the trail snakes back to an
earlier blob of code, where "use import" is magically defined whenever "use
export" *isn't* -- but only if MS_NO_COREDLL is *not* defined.  And the test
of MS_NO_COREDLL is immediately preceded by the comment

    ... MS_NO_COREDLL (do not test this macro)

That covered one of the (I think) four sections in the now 750-line PC
config file that defines these things.  By the time I look at another config
file, my brain is gone.

MarkH is right:  we have to figure what these things are actually trying to
*accomplish*, then gut the code and spell whatever that is in a clear way.
Or, failing that, at least a documented way <wink>.




From tim_one@email.msn.com  Sat Aug 26 09:25:11 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 04:25:11 -0400
Subject: [Python-Dev] Fixing test_poll.py for me just broke it for you
Message-ID: <LNBBLJKPBEHFEDALKOLCKECCHCAA.tim_one@email.msn.com>

Here's the checkin comment.  See test/README for an expanded explanation if
the following isn't clear:


Another new test using "from test.test_support import ...", causing
subtle breakage on Windows (the test is skipped here, but the TestSkipped
exception wasn't recognized as such, because of duplicate copies of
test_support got loaded; so the test looks like a failure under Windows
instead of a skip).
Repaired the import, but

        THIS TEST *WILL* FAIL ON OTHER SYSTEMS NOW!

Again due to the duplicate copies of test_support, the checked-in
"expected output" file actually contains verbose-mode output.  I can't
generate the *correct* non-verbose output on my system.  So, somebody
please do that.
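The underlying failure mode can be reproduced in a self-contained sketch (using present-day importlib machinery rather than the 2.0 test harness): load the same source file under two module names and you get two distinct `TestSkipped` classes, so an `except` clause naming one misses instances of the other.

```python
import importlib.util
import os
import tempfile

# A stand-in for test_support.py, containing only the exception class.
src = "class TestSkipped(Exception):\n    pass\n"

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "support_mod.py")
    with open(path, "w") as f:
        f.write(src)

    def load_as(name):
        # Load the same file as a fresh module object under `name`.
        spec = importlib.util.spec_from_file_location(name, path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        return mod

    copy_a = load_as("test_support")       # e.g. imported bare
    copy_b = load_as("test.test_support")  # e.g. imported via the package

    # Same source, two class objects: an instance of one is not an
    # instance of the other, so the except clause fails to match.
    caught = False
    try:
        raise copy_a.TestSkipped("skip me")
    except copy_b.TestSkipped:
        caught = True
    except Exception:
        pass  # falls through to the generic handler instead
```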




From mal@lemburg.com  Sat Aug 26 09:31:05 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 26 Aug 2000 10:31:05 +0200
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <200008260814.KAA06267@hera.informatik.uni-bonn.de>
Message-ID: <39A78048.DA793307@lemburg.com>

Markus Fleck wrote:
> 
> M.-A. Lemburg:
> > I've had a look at the c.l.p.a postings and the only special
> > header they include is the "Approved: fleck@informatik.uni-bonn.de"
> > header.
> 
> Basically, that's all it takes to post to a "moderated" newsgroup.
> (Talking about a case of "security by obscurity" here... :-/)
> Actually, the string following the "Approved: " may even be random...

Wow, so much for spam protection.
 
> In principle, I do have the time again to do daily moderation of incoming
> postings for c.l.py.a. Unfortunately, I currently lack the infrastructure
> (i.e. the moderation program), which went down together with the old
> starship. I was basically waiting for a version of Mailman that could be
> used to post to moderated newsgroups. (I should probably have been more
> vocal about that, or even should have started hacking Mailman myself... I
> *did* start to write something that would grab new announcements daily from
> Parnassus and post them to c.l.py.a, and I may even come to finish this in
> September, but that doesn't substitute for a "real" moderation tool for
> user-supplied postings. Also, it would probably be a lot easier for
> Parnassus postings to be built directly from the Parnassus database, instead
> from its [generated] HTML pages - the Parnassus author intended to supply
> such functionality, but I didn't hear from him yet, either.)
> 
> So what's needed now? Primarily, a Mailman installation that can post to
> moderated newsgroups (and maybe also do the mail2list gatewaying for
> c.l.py.a), and a mail alias that forwards mail for
> python-announce@python.org to that Mailman address. Some "daily digest"
> generator for Parnassus announcements would be nice to have, too, but
> that can only come once the other two things work.
> 
> Anyway, thanks for bringing this up again - it puts c.l.py.a at the
> top of my to-do list again (where it should be, of course ;-).

Barry has just installed a Mailman patch that allows gatewaying
to a moderated newsgroup.

He's also looking for volunteers to do the moderation. I guess
you should apply by sending Barry a private mail (see the
announcement on c.l.p.a ;-).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Sat Aug 26 10:56:20 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 26 Aug 2000 11:56:20 +0200
Subject: [Python-Dev] New dictionaries patch on SF
References: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>
Message-ID: <39A79444.D701EF84@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
>   I've been playing with dictionaries lately trying to stamp out a
> bug:
> 
> http://sourceforge.net/bugs/?func=detailbug&bug_id=112558&group_id=5470
> 
>   It looks like any fix that really works risks a fair bit of
> performance, and that's not good.  My best-effort fix so far is on
> SourceForge:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470
> 
> but doesn't quite work, according to Guido (I've not yet received
> instructions from him about how to reproduce the observed failure).

The solution to all this is not easy, since dictionaries can
effectively also be used *after* interpreter finalization (no
thread state). The current PyErr_* APIs all rely on having the
thread state available, so the dictionary implementation would
have to add an extra check for the thread state.

All this will considerably slow down the interpreter, and only to
solve a rare problem... perhaps we should re-enable
passing back exceptions via PyDict_GetItem() instead ?!
This will slow down the interpreter too, but it'll at least
not cause the troubles with hacking the dictionary implementation
to handle exceptions during compares.

>   None the less, performance is an issue for dictionaries, so I came
> up with the idea to use a specialized version for string keys.  When I
> saw how few of the dictionaries created by the regression test ever
> had anything else, I tried to simply make all dictionaries the
> specialized variety (they can degrade themselves as needed).  What I
> found was that just over 2% of the dictionaries created by running the
> regression test ever held any non-string keys; this may be very
> different for "real" programs, but I'm curious about how different.
>   I've also done *no* performance testing on my patch for this yet,
> and don't expect it to be a big boost without something like the bug
> fix I mentioned above, but I could be wrong.  If anyone would like to
> play with the idea, I've posted my current patch at:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101309&group_id=5470

I very much like the idea of having a customizable lookup
method for builtin dicts.

This would allow using more specific lookup function for
different tasks (it would even be possible switching the
lookup functions at run-time via a new dict method), e.g.
one could think of optimizing string lookups using a
predefined set of slots or by assuring that the stored
keys map 1-1 by using an additional hash value modifier
which is automatically tuned to assure this feature. This
would probably greatly speed up lookups for both successful and
failing searches.

We could also add special lookup functions for keys
which are known not to raise exceptions during compares
(which is probably what motivated your patch, right ?)
and then fall back to a complicated and slow variant
for the general case.
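A rough sketch of that specialize-then-degrade idea, expressed in present-day Python rather than C (the class and attribute names here are invented for illustration; the real patch works inside the C dict implementation):

```python
# Hypothetical sketch: a dict that starts on a fast path assuming all
# keys are strings, and "degrades" itself to the general path the first
# time a non-string key appears.
class StringOptimizedDict(dict):
    def __init__(self):
        super().__init__()
        self._all_strings = True        # fast-path flag

    def __setitem__(self, key, value):
        if self._all_strings and not isinstance(key, str):
            self._all_strings = False   # degrade: general lookup from now on
        super().__setitem__(key, value)

d = StringOptimizedDict()
d["name"] = 1
assert d._all_strings               # still on the string fast path
d[42] = 2                           # first non-string key degrades the dict
assert not d._all_strings
```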

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From moshez@math.huji.ac.il  Sat Aug 26 11:01:40 2000
From: moshez@math.huji.ac.il (Moshe Zadka)
Date: Sat, 26 Aug 2000 13:01:40 +0300 (IDT)
Subject: [Python-Dev] Fixing test_poll.py for me just broke it for you
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKECCHCAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008261301090.20214-100000@sundial>

On Sat, 26 Aug 2000, Tim Peters wrote:

> Again due to the duplicate copies of test_support, the checked-in
> "expected output" file actually contains verbose-mode output.  I can't
> generate the *correct* non-verbose output on my system.  So, somebody
> please do that.

Done.

--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From thomas@xs4all.net  Sat Aug 26 11:27:48 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sat, 26 Aug 2000 12:27:48 +0200
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
In-Reply-To: <39A78048.DA793307@lemburg.com>; from mal@lemburg.com on Sat, Aug 26, 2000 at 10:31:05AM +0200
References: <200008260814.KAA06267@hera.informatik.uni-bonn.de> <39A78048.DA793307@lemburg.com>
Message-ID: <20000826122748.M16377@xs4all.nl>

On Sat, Aug 26, 2000 at 10:31:05AM +0200, M.-A. Lemburg wrote:
> Markus Fleck wrote:
> > > I've had a look at the c.l.p.a postings and the only special
> > > header they include is the "Approved: fleck@informatik.uni-bonn.de"
> > > header.

> > Basically, that's all it takes to post to a "moderated" newsgroup.
> > (Talking about a case of "security by obscurity" here... :-/)
> > Actually, the string following the "Approved: " may even be random...

Yes, it can be completely random. We're talking about USENET here, it wasn't
designed for complicated procedures :-)

> Wow, so much for spam protection.

Well, we have a couple of 'moderated' lists locally, and I haven't, in 5
years, seen anyone fake an Approved: header. Of course, the penalty of doing
so would be severe, but we haven't even had to warn anyone, either, so how
could they know that ? :)

I also think most news-administrators are quite uhm, strict, in that kind of
thing. If any of our clients were found faking Approved: headers, they'd get
a not-very-friendly warning. If they do it a second time, they lose their
account. The news administrators I talked with at SANE2000 (sysadmin
conference) definitely shared the same attitude. This isn't email, with
arbitrary headers and open relays and such, this is usenet, where you have
to have a fair bit of clue to keep your newsserver up and running :)

And up to now, spammers have been either too dumb or too smart to figure out
how to post to moderated newsgroups... I hope that if anyone ever does, the
punishment will be severe enough to scare away the rest ;P

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@beopen.com  Sat Aug 26 12:48:59 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sat, 26 Aug 2000 06:48:59 -0500
Subject: [Python-Dev] Python 1.6 bug fix strategy
In-Reply-To: Your message of "Fri, 25 Aug 2000 19:16:00 -0400."
 <LNBBLJKPBEHFEDALKOLCOEBBHCAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCOEBBHCAA.tim_one@email.msn.com>
Message-ID: <200008261148.GAA07398@cj20424-a.reston1.va.home.com>

> [Jeremy Hylton]
> > We have gotten several bug reports recently based on 1.6b1.  What
> > plans, if any, are there to fix these bugs before the 1.6 final
> > release?
> 
> My understanding is that 1.6final is done, except for plugging in a license;
> i.e., too late even for bugfixes.  If true, "Fixed in 2.0" will soon be a
> popular response to all sorts of things -- unless CNRI intends to do its own
> work on 1.6.

Applying the fix for writelines is easy, and I'll take care of it.

The other patch that jeremy mentioned
(http://sourceforge.net/bugs/?group_id=5470&func=detailbug&bug_id=111403)
has no fix that I know of, is not easily reproduced, and was only
spotted in embedded code, so it might be the submitter's fault.
Without a reproducible test case it's unlikely to get fixed, so I'll
let that one go.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From skip@mojam.com  Sat Aug 26 16:11:12 2000
From: skip@mojam.com (Skip Montanaro)
Date: Sat, 26 Aug 2000 10:11:12 -0500 (CDT)
Subject: [Python-Dev] Is Python moving too fast? (was Re: Is python commercializationazing? ...)
In-Reply-To: <8o8101020mk@news1.newsguy.com>
References: <Pine.GSO.4.10.10008251845380.13902-100000@sundial>
 <8o66m9$cmn$1@slb3.atl.mindspring.net>
 <slrn8qdfq2.2ko.thor@localhost.localdomain>
 <39A6B447.3AFC880E@seebelow.org>
 <8o8101020mk@news1.newsguy.com>
Message-ID: <14759.56848.238001.346327@beluga.mojam.com>

    Alex> When I told people that the 1.5.2 release I was using, the latest
    Alex> one, had been 100% stable for over a year, I saw lights of wistful
    Alex> desire lighting in their eyes (at least as soon as they understood
    Alex> that here, for once, 'stable' did NOT mean 'dead':-)....  Oh well,
    Alex> it was nice while it lasted; now, the perception of Python will
    Alex> switch back from "magically stable and sound beyond ordinary
    Alex> mortals' parameters" to "quite ready to change core language for
    Alex> the sake of a marginal and debatable minor gain", i.e., "just
    Alex> another neat thing off the net".

I began using Python in early 1994, probably around version 1.0.1.  In the
intervening 6+ years, Python has had what I consider to be five significant
releases: 1.1 (10/11/94), 1.2 (4/10/95), 1.3 (10/8/95), 1.4 (10/25/96) and
1.5 (12/31/97).  (1.5.1 was released 4/13/98 and 1.5.2 was released
4/13/99).  So, while it's been a bit over a year since 1.5.2 was released,
Python really hasn't changed much in over 2.5 years. Guido and his core team
have been very good at maintaining backward compatibility while improving
language features and performance and keeping the language accessible to new
users.

We are now in the midst of several significant changes to the Python
development environment.  From my perspective as a friendly outsider, here's
what I see:

    1.  For the first time in its 10+ year history, the language actually
        has a team of programmers led by Guido whose full-time job is to
        work on the language.  To the best of my knowledge, Guido's work at
        CNRI and CWI focused on other stuff, to which Python was applied as
        one of the tools.  The same observation can be made about the rest
        of the core PythonLabs team: Tim, Barry, Fred & Jeremy.  All had
        other duties at their previous positions.  Python was an important
        tool in what they did, but it wasn't what they got measured by in
        yearly performance reviews.

    2.  For the first time in its history, a secondary development team has
        surfaced in a highly visible and productive way, thanks to the
        migration to the SourceForge CVS repository.  Many of those people
        have been adding new ideas and code to the language all along, but
        the channel between their ideas and the core distribution was a very
        narrow one.  In the past, only the people at CNRI (and before that,
        CWI) could make direct changes to the source code repository.  In
        fact, I believe Guido used to be the sole filter of every new
        contribution to the tree.  Everything had to pass his eyeballs at
        some point.  That was a natural rate limiter on the pace of change,
        but I believe it probably also filtered out some very good ideas.

	While the SourceForge tools aren't perfect, their patch manager and
	bug tracking system, coupled with the externally accessible CVS
	repository, make it much easier for people to submit changes and for
	developers to manage those changes.  At the moment, browsing the
	patch manager with all options set to "any" shows 22 patches,
	submitted by 11 different people, which have been assigned to 9
	different people (there is a lot of overlap between the gang of 9 and
	the gang of 11).  That amount of parallelism in the development just
	wasn't possible before.

    3.  Python is now housed in a company formed to foster open source
        software development.  I won't pretend I understand all the
        implications of that move beyond the obvious reasons stated in item
        one, but there is bound to be some desire by BeOpen to put their
        stamp on the language.  I believe that there are key changes to the
        language that would not have made it into 2.0 had the license
        wrangling between CNRI and BeOpen not dragged out as long as it did.
        Those of us involved as active developers took advantage of that
        lull.  (I say "we", because I was a part of that.  I pushed Greg
        Ewing's original list comprehensions prototype along when the
        opportunity arose.)

    4.  Python's user and programmer base has grown dramatically in the past
        several years.  While it's not possible to actually measure the size
        of the user community, you can get an idea of its growth by looking
        at the increase in list traffic.  Taking a peek at the posting
        numbers at

            http://www.egroups.com/group/python-list

        is instructive.  In January of 1994 there were 76 posts to the list.
        In January of 2000 that number grew to 2678.  (That's with much less
        relative participation today by the core developers than in 1994.)

        In January of 1994 I believe the python-list@cwi.nl (with a possible
        Usenet gateway) was the only available discussion forum about
        Python.  Egroups lists 45 Python-related lists today (I took their
        word for it - they may stretch things a bit).  There are at least
        three (maybe four) distinct dialects of the language as well, not to
        mention the significant growth in supported platforms in the past
        six years.

All this adds up to a system that is due for some significant change.  Those
of us currently involved are still getting used to the new system, so
perhaps things are moving a bit faster than if we were completely familiar
with this environment.  Many of the things that are new in 2.0 have been
proposed on the list off and on for a long time.  Unicode support, list
comprehensions, augmented assignment and extensions to the print statement
come to mind.  They are not new ideas tossed in with a beer chaser (like
"<blink>").  From the traffic on python-dev about Unicode support, I believe
it was the most challenging thing to add to the language.  By comparison,
the other three items I mentioned above were relatively simple concepts to
grasp and implement.

All these ideas were proposed to the community in the past, but have only
recently gained their own voice (so to speak) with the restructuring of the
development environment and growth in the base of active developers.

This broadening of the channel between the development community and the CVS
repository will obviously take some getting used to.  Once 2.0 is out, I
don't expect this (relatively) furious pace to continue.

-- 
Skip Montanaro (skip@mojam.com)
http://www.mojam.com/
http://www.musi-cal.com/

[Completely unrelated aside: I've never voiced an opinion - pro or con -
about the new print syntax, either on python-list or python-dev.  This will
be my only observation.

I have used the following print statement format for several years when I
wanted to insert some temporary debugging statements that I knew I would
later remove or comment out:

    print ">>", this, that, and, the, other, stuff

because it would make it easier to locate them with a text editor.  (Right
shift, while a very useful construct, is hardly common in my programming.)
Now, I'm happy to say, I will no longer have to quote the ">>" and it will
be easier to get the output to go to sys.stderr...]
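A sketch of that redirection in today's spelling, where the extended print statement became the print function and the target is a keyword argument:

```python
import io
import sys

# The 2.0 statement "print >> sys.stderr, this, that" in today's spelling:
print("debug:", "this", "that", file=sys.stderr)

# The same mechanism captured in a buffer, to show exactly what is written:
buf = io.StringIO()
print("debug:", "this", "that", file=buf)
assert buf.getvalue() == "debug: this that\n"
```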


From effbot@telia.com  Sat Aug 26 16:31:54 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sat, 26 Aug 2000 17:31:54 +0200
Subject: [Python-Dev] Bug #112265: Tkinter seems to treat everything as Latin 1
Message-ID: <001801c00f72$c72d5860$f2a6b5d4@hagrid>

summary: Tkinter passes 8-bit strings to Tk without any
preprocessing.  Tk itself expects UTF-8, but passes bogus
UTF-8 data right through...  or in other words, Tkinter
treats any 8-bit string that doesn't contain valid UTF-8
as an ISO Latin 1 string...

:::

maybe Tkinter should raise a UnicodeError instead (just
like string comparisons etc).  example:

    w = Label(text="<cp1250 string>")
    UnicodeError: ASCII decoding error: ordinal not in range(128)

this will break existing code, but I think that's better than
confusing the hell out of anyone working on a non-Latin-1
platform...

+0 from myself -- there's no way we can get a +1 solution
(source encoding) into 2.0 without delaying the release...

:::

for some more background, see the bug report below, and
my followup.

</F>

---

Summary: Impossible to get Win32 default font
encoding in widgets

Details: I did not manage to obtain correct font
encoding in widgets on Win32 (NT Workstation,
Polish version, default encoding cp1250). All cp1250
Polish characters were displayed incorrectly. I think,
all characters that do not belong to Latin-1 will be
displayed incorrectly. Regarding Python1.6b1, I
checked the Tcl/Tk installation (8.3.2). The pure
Tcl/Tk programs DO display characters in cp1250
correctly.

As far as I know, the Tcl interpreter works with
UTF-8 encoded strings. Does Python1.6b1 really
know about it?

---

Follow-Ups:

Date: 2000-Aug-26 08:04
By: effbot

Comment:
this is really a "how do I", rather than a bug
report ;-)

:::

In 1.6 and beyond, Python's default 8-bit
encoding is plain ASCII.  this encoding is only
used when you're using 8-bit strings in "unicode
contexts" -- for example, if you compare an
8-bit string to a unicode string, or pass it to
a subsystem designed to use unicode strings.

If you pass an 8-bit string containing
characters outside the ASCII range to a function
expecting a unicode string, the result is
undefined (it usually results in an exception,
but some subsystems may have other ideas).

Finally, Tkinter now supports Unicode.  In fact,
it assumes that all strings passed to it are
Unicode.  When using 8-bit strings, it's only
safe to use plain ASCII.

Tkinter currently doesn't raise exceptions for
8-bit strings with non-ASCII characters, but it
probably should.  Otherwise, Tk will attempt to
parse the string as an UTF-8 string, and if that
fails, it assumes ISO-8859-1.

:::

Anyway, to write portable code using characters
outside the ASCII character set, you should use
unicode strings.

in your case, you can use:

   s = unicode("<a cp1250 string>", "cp1250")

to get the platform's default encoding, you can do:

   import locale
   language, encoding = locale.getdefaultlocale()

where encoding should be "cp1250" on your box.
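The same recipe in present-day Python, where str is already Unicode and the 1.6-era unicode(s, "cp1250") call becomes bytes.decode("cp1250") (a sketch; the sample word is my own choice):

```python
# A Polish word: ó is in Latin-1, but ł and ź are cp1250-specific,
# exactly the characters the bug report says were displayed wrongly.
text = "łódź"
raw = text.encode("cp1250")      # the bytes a cp1250 Windows box would hold
assert raw.decode("cp1250") == text

# Decoding those bytes with Python's strict ASCII default fails, which is
# the error discussed above:
try:
    raw.decode("ascii")
    failed = False
except UnicodeDecodeError:
    failed = True
```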

:::

The reason this works under Tcl/Tk is that Tcl
assumes that your source code uses the
platform's default encoding, and converts things
to Unicode (not necessarily UTF-8) for you under
the hood.  Python 2.1 will hopefully support
*explicit* source encodings, but 1.6/2.0
doesn't.

-------------------------------------------------------

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=112265&group_id=5470



From effbot@telia.com  Sat Aug 26 16:43:38 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sat, 26 Aug 2000 17:43:38 +0200
Subject: [Python-Dev] Bug #112265: Tkinter seems to treat everything as Latin 1
References: <001801c00f72$c72d5860$f2a6b5d4@hagrid>
Message-ID: <002401c00f74$6896a520$f2a6b5d4@hagrid>

>     UnicodeError: ASCII decoding error: ordinal not in range(128)

btw, what the heck is an "ordinal"?

(let's see: it's probably not "a book of rites for the ordination of
deacons, priests, and bishops".  how about an "ordinal number"?
that is, "a number designating the place (as first, second, or third)
occupied by an item in an ordered sequence".  hmm.  does this
mean that I cannot use strings longer than 128 characters?  but
this string was only 12 characters long.  wait, there's another
definition here: "a number assigned to an ordered set that de-
signates both the order of its elements and its cardinal number".
hmm.  what's a "cardinal"?  "a high ecclesiastical official of the
Roman Catholic Church who ranks next below the pope and is
appointed by him to assist him as a member of the college of
cardinals"?  ... oh, here it is: "a number (as 1, 5, 15) that is
used in simple counting and that indicates how many elements
there are in an assemblage".  "assemblage"?)

:::

wouldn't "character" be easier to grok for mere mortals?

...and isn't "range(128)" overly cute?

:::

how about:

UnicodeError: ASCII decoding error: character not in range 0-127

</F>



From tim_one@email.msn.com  Sat Aug 26 21:45:27 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 16:45:27 -0400
Subject: [Python-Dev] test_gettext fails on Windows
Message-ID: <LNBBLJKPBEHFEDALKOLCOEDCHCAA.tim_one@email.msn.com>

Don't know whether this is unique to Win98.

test test_gettext failed -- Writing: 'mullusk', expected: 'bacon\012T'

Here's -v output:

test_gettext
installing gettext
calling bindtextdomain with localedir .
.
None
gettext
gettext
mullusk
.py 1.1
 Throatwobble
nudge nudge
mullusk
.py 1.1
 Throatwobble
nudge nudge
mullusk
.py 1.1
 Throatwobble
nudge nudge
mullusk
.py 1.1
 Throatwobble
nudge nudge
This module provides internationalization and localization
support for your Python programs by providing an interface to the GNU
gettext message catalog library.
nudge nudge
1
nudge nudge

Has almost nothing in common with the expected output!




From tim_one@email.msn.com  Sat Aug 26 21:59:42 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 16:59:42 -0400
Subject: [Python-Dev] test_gettext fails on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEDCHCAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEDFHCAA.tim_one@email.msn.com>

> ...
> Has almost nothing in common with the expected output!

OK, I understand this now:  the setup function opens a binary file for
writing but neglected to *say* it was binary in the "open".  Huge no-no for
portability.  About to check in the fix.
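A small sketch of the pitfall in present-day Python, simulating Windows text mode with an explicit newline= so it behaves the same on any platform:

```python
import io

# Text mode on Windows translates "\n" to "\r\n" on write; newline="\r\n"
# forces that same translation everywhere so the demo is deterministic.
buf = io.BytesIO()
text_layer = io.TextIOWrapper(buf, newline="\r\n")
text_layer.write("expected-output\n")
text_layer.flush()
assert buf.getvalue() == b"expected-output\r\n"   # bytes silently rewritten

# Binary mode writes bytes verbatim, which is what a test data file needs:
raw = io.BytesIO()
raw.write(b"expected-output\n")
assert raw.getvalue() == b"expected-output\n"
```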





From thomas@xs4all.net  Sat Aug 26 22:12:31 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sat, 26 Aug 2000 23:12:31 +0200
Subject: [Python-Dev] cPickle
Message-ID: <20000826231231.P16377@xs4all.nl>

I just noticed that test_cpickle makes Python crash (with a segmentation
fault) when there is no copy_reg. The funny bit is this:

centurion:~ > ./python Lib/test/regrtest.py test_cpickle
test_cpickle
test test_cpickle skipped --  No module named copy_reg
1 test skipped: test_cpickle

centurion:~ > ./python Lib/test/regrtest.py test_cookie test_cpickle
test_cookie
test test_cookie skipped --  No module named copy_reg
test_cpickle
Segmentation fault (core dumped)

I suspect there is a bug in the import code, in the case of failed imports. 

Holmes-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim_one@email.msn.com  Sat Aug 26 22:14:37 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 17:14:37 -0400
Subject: [Python-Dev] Bug #112265: Tkinter seems to treat everything as Latin 1
In-Reply-To: <002401c00f74$6896a520$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEDFHCAA.tim_one@email.msn.com>

>>     UnicodeError: ASCII decoding error: ordinal not in range(128)

> btw, what the heck is an "ordinal"?

It's a technical term <wink>.  But it's used consistently in Python, e.g.,
that's where the name of the builtin ord function comes from!

>>> print ord.__doc__
ord(c) -> integer

Return the integer ordinal of a one character string.
>>>

> ...
> how about an "ordinal number"?  that is, "a number designating the
> place (as first, second, or third) occupied by an item in an
> ordered sequence".

Exactly.  Each character has an arbitrary but fixed position in an arbitrary
but ordered sequence of all characters.  This isn't hard.
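Concretely, with chr as the inverse mapping:

```python
# ord() maps a one-character string to its ordinal -- its fixed position
# in the character set; chr() maps the ordinal back to the character.
assert ord("A") == 65
assert chr(65) == "A"

# The check behind "ordinal not in range(128)": 7-bit ASCII covers
# ordinals 0..127, and accented characters fall outside it.
assert ord("A") < 128
assert ord("é") == 233      # Latin-1 ordinal, not representable in ASCII
```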

> wouldn't "character" be easier to grok for mere mortals?

Doubt it -- they're already confused about the need to distinguish between a
character and its encoding, and the *character* is most certainly not "in"
or "out" of any range of integers.

> ...and isn't "range(128)" overly cute?

Yes.

> UnicodeError: ASCII decoding error: character not in range 0-127

As above, it makes no sense.  How about compromising on

> UnicodeError: ASCII decoding error: ord(character) > 127

?




From tim_one@email.msn.com  Sun Aug 27 10:57:42 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 27 Aug 2000 05:57:42 -0400
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: <20000825141623.G17277@ludwig.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com>

[Greg Ward]
> ...yow: the Perl community is really going overboard in proposing
> enhancements:
> ...
>    4. http://dev.perl.org/rfc/

Following that URL is highly recommended!  There's a real burst of
creativity blooming there, and everyone weary of repeated Python debates
should find it refreshing to discover exactly the same arguments going on
over there (lazy lists, curried functions, less syntax, more syntax, less
explicit, more explicit, go away this isn't stinking LISP, ya but maybe it
oughta be, yadda yadda yadda).  Except the *terms* of the debate are
inverted in so many ways!  For example, this is my favorite Killer Appeal to
Principle so far:

    Perl is really hard for a machine to parse.  *Deliberately*.  If
    you think it shouldn't be, you're missing something.

Certainly a good antidote to Python inbreeding <wink>.

Compared to our PEPs, the Perl RFCs are more a collection of wishlists --
implementation details are often sketchy, or even ignored.  But they're in a
brainstorming mode, so I believe that's both expected & encouraged now.

I was surprised by how often Python gets mentioned, and sometimes by how
confusedly.  For example, in the Perl Coroutines RFC:

    Unlike coroutines as defined by Knuth, and implemented in languages
    such as Simula or Python, perl does not have an explicit "resume"
    call for invoking coroutines.

Mistake -- or Guido's time machine <wink>?

Those who hate Python PEP 214 should check out Perl RFC 39, which proposes
to introduce

    ">" LIST "<"

as a synonym for

    "print" LIST

My favorite example:

    perl -e '><><' # cat(1)

while, of course

    ><;

prints the current value of $_.

I happen to like Perl enough that I enjoy this stuff.  You may wish to take
a different lesson from it <wink>.

whichever-it's-a-mistake-to-ignore-people-having-fun-ly y'rs  - tim




From tim_one@email.msn.com  Sun Aug 27 12:13:35 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 27 Aug 2000 07:13:35 -0400
Subject: [Python-Dev] Is Python moving too fast? (was Re: Is python commercializationazing? ...)
In-Reply-To: <14759.56848.238001.346327@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEFHCAA.tim_one@email.msn.com>

[Skip Montanaro]
> I began using Python in early 1994, probably around version 1.0.1.

And it's always good to hear a newcomer's perspective <wink>.  Seriously, it
was a wonderful sane sketch of what's been happening lately.  Some glosses:

> ...
> From my perspective as a friendly outsider, ...

Nobody fall for that ingratiating ploy:  Skip's a Python Developer at
SourceForge too.  And glad to have him there!

> ...
>     3.  Python is now housed in a company formed to foster open source
>         software development.  I won't pretend I understand all the
>         implications of that move ... but there is bound to be some
>         desire by BeOpen to put their stamp on the language.

There is more desire on BeOpen's part-- at least at first --to just pay our
salaries.  Many people have asked for language or library changes or
enhancements in the past based on demanding real-world needs, but until very
recently the only possible response was "huh -- send in a patch, and maybe
Guido will check it in".  Despite the high approachability of Python's
implementation, often that's just too much of a task for the people seeking
solutions.  But if they want it enough to pay for it, or aren't even sure
exactly what they need, they can hire us to do it now (shameless plug:
mailto:pythonlabs-info@beopen.com).  I doubt there's any team better
qualified, and while I've been a paid prostitute my whole career, you can
still trust Guido to keep us honest <wink>.  For example, that's how
Python's Unicode features got developed (although at CNRI).

> I believe that there are key changes to the language that would not
> have made it into 2.0 had the license wrangling between CNRI and
> BeOpen not dragged out as long as it did.

Absolutely.  You may <snort> have missed some of the endless posts on this
topic:  we were *going* to release 2.0b1 on July 1st.  I was at Guido's
house late the night before, everything was cooking, and we were mere hours
away from uploading the 2.0b1 tarball for release.  Then CNRI pulled the
plug in an email, and we've been trying to get it back into the outlet ever
since.  When it became clear that things weren't going to settle at once,
and that we needed to produce a 1.6 release too with *only* the stuff
developed under CNRI's tenure, that left us twiddling our thumbs.  There
were a pile of cool (but, as you said later, old!) ideas Guido wanted to get
in anyway, so he opened the door.  Had things turned out as we *hoped*, they
would have gone into 2.1 instead, and that's all there was to that.

> ...
> All this adds up to a system that is due for some significant change.

Sure does.  But it's working great so far, so don't jinx it by questioning
*anything* <wink>.

> ...
> Once 2.0 is out, I don't expect this (relatively) furious pace to
> continue.

I suspect it will continue-- maybe even accelerate --but *shift*.  We're
fast running out of *any* feasible (before P3K) "core language" idea that
Guido has ever had a liking for, so I expect the core language changes to
slow waaaaay down again.  The libraries may be a different story, though.
For example, there are lots of GUIs out there, and Tk isn't everyone's
favorite yet remains especially favored in the std distribution; Python is
being used in new areas where it's currently harder to use than it should be
(e.g., deeply embedded systems); some of the web-related modules could
certainly stand a major boost in consistency, functionality and ease-of-use;
and fill in the blank _________.  There are infrastructure issues too, like
what to do on top of Distutils to make it at least as useful as CPAN.  Etc
etc etc ... there's a *ton* of stuff to be done beyond fiddling with the
language per se.  I won't be happy until there's a Python in every toaster
<wink>.

although-*perhaps*-light-bulbs-don't-really-need-it-ly y'rs  - tim




From thomas@xs4all.net  Sun Aug 27 12:42:28 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 27 Aug 2000 13:42:28 +0200
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sun, Aug 27, 2000 at 05:57:42AM -0400
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com>
Message-ID: <20000827134228.A500@xs4all.nl>

On Sun, Aug 27, 2000 at 05:57:42AM -0400, Tim Peters wrote:
> [Greg Ward]
> > ...yow: the Perl community is really going overboard in proposing
> > enhancements:
> > ...
> >    4. http://dev.perl.org/rfc/

> Following that URL is highly recommended!

Indeed. Thanx for pointing it out again (and Greg, too), I've had a barrel
of laughs (and good impressions, both) already :)

> I was surprised by how often Python gets mentioned, and sometimes by how
> confusedly.

Well, 'python' is mentioned explicitly 12 times, in 7 different RFCs.
There'll be some implicit ones, of course, but it's not as many as I would
have expected, based on how many times I hear my perl-hugging colleague
comment on how cool a particular Python feature is ;)

> For example, in the Perl Coroutines RFC:
> 
>     Unlike coroutines as defined by Knuth, and implemented in laguages
>     such as Simula or Python, perl does not have an explicit "resume"
>     call for invoking coroutines.
> 
> Mistake -- or Guido's time machine <wink>?

Neither. Someone else's time machine, as the URL given in the references
section shows: they're not talking about coroutines in the core, but as an
'add-on'. And not necessarily as stackless, either; there are a couple of
implementations.

(Other than that I don't like the Perl coroutine proposal: I think
single-process coroutines make a lot more sense, though I can see why they
are arguing for such an 'i/o-based' model.)

My personal favorite, up to now, is RFC 28: Perl should stay Perl. Anyone
upset by the new print statement should definitely read it ;) The other RFCs
going "don't change *that*" are good too, showing that not everyone is
losing themselves in wishes ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From effbot@telia.com  Sun Aug 27 16:20:08 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Sun, 27 Aug 2000 17:20:08 +0200
Subject: [Python-Dev] If you thought there were too many PEPs...
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com> <20000827134228.A500@xs4all.nl>
Message-ID: <000901c0103a$4a48b380$f2a6b5d4@hagrid>

thomas wrote:
> My personal favorite, up to now, is RFC 28: Perl should stay Perl.

number 29 is also a good one: don't ever add an alias
for "unlink" (written by someone who has never ever
read the POSIX or ANSI C standards ;-)

:::

btw, Python's remove/unlink implementation is slightly
broken -- they both map to unlink, but that's not the
right way to do it:

from SUSv2:

    int remove(const char *path);

    If path does not name a directory, remove(path)
    is equivalent to unlink(path). 

    If path names a directory, remove(path) is equi-
    valent to rmdir(path). 

should I fix this?
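
For illustration, the SUSv2 semantics can be sketched in a few lines of
Python (a hypothetical helper; `os.remove` itself is the function under
discussion here):

```python
import os

def susv2_remove(path):
    # SUSv2: remove(path) is equivalent to rmdir(path) for a
    # directory, and to unlink(path) for anything else.
    if os.path.isdir(path):
        os.rmdir(path)
    else:
        os.unlink(path)
```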

</F>



From guido@beopen.com  Sun Aug 27 19:28:46 2000
From: guido@beopen.com (Guido van Rossum)
Date: Sun, 27 Aug 2000 13:28:46 -0500
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: Your message of "Sun, 27 Aug 2000 17:20:08 +0200."
 <000901c0103a$4a48b380$f2a6b5d4@hagrid>
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com> <20000827134228.A500@xs4all.nl>
 <000901c0103a$4a48b380$f2a6b5d4@hagrid>
Message-ID: <200008271828.NAA14847@cj20424-a.reston1.va.home.com>

> btw, Python's remove/unlink implementation is slightly
> broken -- they both map to unlink, but that's not the
> right way to do it:
> 
> from SUSv2:
> 
>     int remove(const char *path);
> 
>     If path does not name a directory, remove(path)
>     is equivalent to unlink(path). 
> 
>     If path names a directory, remove(path) is equi-
>     valent to rmdir(path). 
> 
> should I fix this?

That's a new one -- didn't exist when I learned Unix.

I guess we can fix this in 2.1.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From dgoodger@bigfoot.com  Sun Aug 27 20:27:22 2000
From: dgoodger@bigfoot.com (David Goodger)
Date: Sun, 27 Aug 2000 15:27:22 -0400
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <39A68B42.4E3F8A3D@lemburg.com>
References: <39A68B42.4E3F8A3D@lemburg.com>
Message-ID: <B5CEE3D9.81F2%dgoodger@bigfoot.com>

Some comments:

1. I think the idea of attribute docstrings is a great one. It would assist
in the auto-documenting of code immeasurably.

2. I second Frank Niessink (frankn=nuws@cs.vu.nl), who wrote:

> wouldn't the naming
> scheme <attributename>.__doc__ be a better one?
> 
> So if:
> 
> class C:
>   a = 1
>   """Description of a."""
> 
> then:
> 
> C.a.__doc__ == "Description of a."

'C.a.__doc__' is far more natural and Pythonic than 'C.__doc__a__'. The
latter would also require ugly tricks to access.

3. However, what would happen to C.a.__doc__ (or C.__doc__a__ for that
matter) when attribute 'a' is reassigned? For example:

    class C:
        a = 1  # class attribute, default value for instance attribute
        """Description of a."""

        def __init__(self, arg=None):
            if arg is not None:
                self.a = arg  # instance attribute
            self.b = []
            """Description of b."""

    instance = C(2)

What would instance.a.__doc__ (instance.__doc__a__) be? Would the __doc__ be
wiped out by the reassignment, or magically remain unless overridden?

4. How about instance attributes that are never class attributes? Like
'instance.b' in the example above?

5. Since docstrings "belong" to the attribute preceding them, wouldn't it
be more Pythonic to write:

    class C:
        a = 1
            """Description of a."""

? (In case of mail viewer problems, each line above is indented relative to
the one before.) This emphasizes the relationship between the docstring and
the attribute. Of course, such an approach may entail a more complicated
modification to the Python source, but it would also be more complete, IMHO.

6. Instead of mangling names, how about an alternative approach? Each class,
instance, module, and function gets a single special name (call it
'__docs__' for now), a dictionary of attribute-name to docstring mappings.
__docs__ would be the docstring equivalent to __dict__. These dictionary
entries would not be affected by reassignment unless a new docstring is
specified. So, in the example from (3) above, we would have:

    >>> instance.__docs__
    {'b': 'Description of b.'}
    >>> C.__docs__
    {'a': 'Description of a.'}

Just as there is a built-in function 'dir' to apply inheritance rules to
instance.__dict__, there would have to be a function 'docs' to apply
inheritance to instance.__docs__:

    >>> docs(instance)
    {'a': 'Description of a.', 'b': 'Description of b.'}

There are repercussions here. A module containing the example from (3) above
would have a __docs__ dictionary containing mappings for docstrings for each
top-level class and function defined, in addition to docstrings for each
global variable.
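
The proposed 'docs' function could be sketched roughly as follows (purely
an illustration of the inheritance rule above, not an implementation
proposal; the name 'docs' is the one suggested in point 6):

```python
def docs(obj):
    # Merge __docs__ dictionaries from base classes first, so that
    # entries defined closer to the instance override inherited ones.
    result = {}
    for klass in reversed(type(obj).__mro__):
        result.update(vars(klass).get('__docs__', {}))
    result.update(vars(obj).get('__docs__', {}))
    return result
```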


In conclusion, although this proposal has great promise, it still needs
work. If it's to be done at all, better to do it right.

This could be the first true test of the PEP system; getting input from the
Python user community as well as the core PythonLabs and Python-Dev groups.
Other PEPs have been either after-the-fact or, in the case of those features
approved for inclusion in Python 2.0, too rushed for a significant
discussion.

-- 
David Goodger    dgoodger@bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)



From thomas@xs4all.net  Mon Aug 28 00:16:24 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 01:16:24 +0200
Subject: [Python-Dev] Python keywords
Message-ID: <20000828011624.E500@xs4all.nl>

--3MwIy2ne0vdjdPXF
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline


Mark, (and the rest of python-dev)

There was a thread here a few weeks ago (or so, I seem to have misplaced
that particular thread :P) about using Python keywords as identifiers in
some cases. You needed that ability for .NET-Python, where the specs say any
identifier should be possible as methods and attributes, and there were some
comments on the list on how to do that (by Guido, for one.)

Well, the attached patch sort-of does that. I tried making it a bit nicer,
but that involved editing all places that currently use the NAME-type node,
and most of those don't advertise that they're doing that :-S The attached
patch is in no way nice, but it does work:

>>> class X:
...     def print(self, x):
...             print "printing", x
... 
>>> x = X()
>>> x.print(1)
printing 1
>>> x.print
<method X.print of X instance at 0x8207fc4>
>>> x.assert = 1
>>>

However, it also allows this at the top level, currently:
>>> def print(x):
...     print "printing", x
... 

which results in some unexpected behaviour:
>>> print(1)
1
>>> globals()['print'](1)
printing 1

But when combining it with modules, it does work as expected, of course:

# printer.py:
def print(x, y):
        print "printing", x, "and", y
#

>>> import printer
>>> printer.print
<function print at 0x824120c>
>>> printer.print(1, 2)
printing 1 and 2

Another plus-side of this particular method is that it's simple and
straightforward, if a bit maintenance-intensive :-) But the big question is:
is this enough for what you need ? Or do you need the ability to use
keywords in *all* identifiers, including variable names and such ? Because
that is quite a bit harder ;-P

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!

--3MwIy2ne0vdjdPXF
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="grammar.patch"

Index: Grammar/Grammar
===================================================================
RCS file: /cvsroot/python/python/dist/src/Grammar/Grammar,v
retrieving revision 1.41
diff -c -r1.41 Grammar
*** Grammar/Grammar	2000/08/24 20:11:30	1.41
--- Grammar/Grammar	2000/08/27 23:15:53
***************
*** 19,24 ****
--- 19,28 ----
  #diagram:output\textwidth 20.04cm\oddsidemargin  0.0cm\evensidemargin 0.0cm
  #diagram:rules
  
+ # for reference: everything allowed in a 'def' or trailer expression.
+ # (I might have missed one or two ;)
+ # ( NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | 'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | 'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | 'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally')
+ 
  # Start symbols for the grammar:
  #	single_input is a single interactive statement;
  #	file_input is a module or sequence of commands read from an input file;
***************
*** 28,34 ****
  file_input: (NEWLINE | stmt)* ENDMARKER
  eval_input: testlist NEWLINE* ENDMARKER
  
! funcdef: 'def' NAME parameters ':' suite
  parameters: '(' [varargslist] ')'
  varargslist: (fpdef ['=' test] ',')* ('*' NAME [',' '**' NAME] | '**' NAME) | fpdef ['=' test] (',' fpdef ['=' test])* [',']
  fpdef: NAME | '(' fplist ')'
--- 32,38 ----
  file_input: (NEWLINE | stmt)* ENDMARKER
  eval_input: testlist NEWLINE* ENDMARKER
  
! funcdef: 'def' ( NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | 'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | 'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | 'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally') parameters ':' suite
  parameters: '(' [varargslist] ')'
  varargslist: (fpdef ['=' test] ',')* ('*' NAME [',' '**' NAME] | '**' NAME) | fpdef ['=' test] (',' fpdef ['=' test])* [',']
  fpdef: NAME | '(' fplist ')'
***************
*** 87,93 ****
  atom: '(' [testlist] ')' | '[' [listmaker] ']' | '{' [dictmaker] '}' | '`' testlist '`' | NAME | NUMBER | STRING+
  listmaker: test ( list_for | (',' test)* [','] )
  lambdef: 'lambda' [varargslist] ':' test
! trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
  subscriptlist: subscript (',' subscript)* [',']
  subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
  sliceop: ':' [test]
--- 91,97 ----
  atom: '(' [testlist] ')' | '[' [listmaker] ']' | '{' [dictmaker] '}' | '`' testlist '`' | NAME | NUMBER | STRING+
  listmaker: test ( list_for | (',' test)* [','] )
  lambdef: 'lambda' [varargslist] ':' test
! trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' ( NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | 'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | 'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | 'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally')
  subscriptlist: subscript (',' subscript)* [',']
  subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
  sliceop: ':' [test]

--3MwIy2ne0vdjdPXF--


From greg@cosc.canterbury.ac.nz  Mon Aug 28 04:16:35 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 28 Aug 2000 15:16:35 +1200 (NZST)
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <B5CEE3D9.81F2%dgoodger@bigfoot.com>
Message-ID: <200008280316.PAA16831@s454.cosc.canterbury.ac.nz>

David Goodger <dgoodger@bigfoot.com>:

> 6. Instead of mangling names, how about an alternative approach? Each class,
> instance, module, and function gets a single special name (call it
> '__docs__' for now), a dictionary of attribute-name to docstring
> mappings.

Good idea!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From moshez@math.huji.ac.il  Mon Aug 28 07:30:23 2000
From: moshez@math.huji.ac.il (Moshe Zadka)
Date: Mon, 28 Aug 2000 09:30:23 +0300 (IDT)
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: <000901c0103a$4a48b380$f2a6b5d4@hagrid>
Message-ID: <Pine.GSO.4.10.10008280930000.5796-100000@sundial>

On Sun, 27 Aug 2000, Fredrik Lundh wrote:

> btw, Python's remove/unlink implementation is slightly
> broken -- they both map to unlink, but that's not the
> right way to do it:
> 
> from SUSv2:
> 
>     int remove(const char *path);
> 
>     If path does not name a directory, remove(path)
>     is equivalent to unlink(path). 
> 
>     If path names a directory, remove(path) is equi-
>     valent to rmdir(path). 
> 
> should I fix this?

1. Yes.
2. After the feature freeze.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From tanzer@swing.co.at  Mon Aug 28 07:32:17 2000
From: tanzer@swing.co.at (Christian Tanzer)
Date: Mon, 28 Aug 2000 08:32:17 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: Your message of "Fri, 25 Aug 2000 17:05:38 +0200."
 <39A68B42.4E3F8A3D@lemburg.com>
Message-ID: <m13TISv-000wcEC@swing.co.at>

"M.-A. Lemburg" <mal@lemburg.com> wrote:

>     This PEP proposes a small addition to the way Python currently
>     handles docstrings embedded in Python code.
(snip)
>     Here is an example:
>
>         class C:
>             "class C doc-string"
>
>             a = 1
>             "attribute C.a doc-string (1)"
>
>             b = 2
>             "attribute C.b doc-string (2)"
>
>     The docstrings (1) and (2) are currently being ignored by the
>     Python byte code compiler, but could obviously be put to good use
>     for documenting the named assignments that precede them.
>

>     This PEP proposes to also make use of these cases by proposing
>     semantics for adding their content to the objects in which they
>     appear under new generated attribute names.

Great proposal. This would make docstrings even more useful.

>     In order to preserve features like inheritance and hiding of
>     Python's special attributes (ones with leading and trailing double
>     underscores), a special name mangling has to be applied which
>     uniquely identifies the docstring as belonging to the name
>     assignment and allows finding the docstring later on by inspecting
>     the namespace.
>
>     The following name mangling scheme achieves all of the above:
>
>         __doc__<attributename>__

IMHO, David Goodger's (<dgoodger@bigfoot.com>) idea of using a
__docs__ dictionary is a better solution:

- It provides all docstrings for the attributes of an object in a
  single place.

  * Handy in interactive mode.
  * This simplifies the generation of documentation considerably.

- It is easier to explain in the documentation

>     To keep track of the last assigned name, the byte code compiler
>     stores this name in a variable of the compiling structure.  This
>     variable defaults to NULL.  When it sees a docstring, it then
>     checks the variable and uses the name as basis for the above name
>     mangling to produce an implicit assignment of the docstring to the
>     mangled name.  It then resets the variable to NULL to avoid
>     duplicate assignments.

Normally, Python concatenates adjacent strings. It doesn't do this
with docstrings. I think Python's behavior would be more consistent
if docstrings were concatenated like any other strings.

>     Since the implementation does not reset the compiling structure
>     variable when processing a non-expression, e.g. a function
>     definition, the last assigned name remains active until either the
>     next assignment or the next occurrence of a docstring.
>
>     This can lead to cases where the docstring and assignment may be
>     separated by other expressions:
>
>         class C:
>             "C doc string"
>
>             b = 2
>
>             def x(self):
>                 "C.x doc string"
>                 y = 3
>                 return 1
>
>             "b's doc string"
>

>     Since the definition of method "x" currently does not reset the
>     used assignment name variable, it is still valid when the compiler
>     reaches the docstring "b's doc string" and thus assigns the string
>     to __doc__b__.

This is rather surprising behavior. Does this mean that a string in
the middle of a function definition would be interpreted as the
docstring of the function?

For instance,

    def spam():
        a = 3
        "Is this spam's docstring? (not in 1.5.2)"
        return 1

Anyway, the behavior of Python should be the same for all kinds of
docstrings.


>     A possible solution to this problem would be resetting the name
>     variable for all non-expression nodes.

IMHO, David Goodger's proposal of indenting the docstring relative to the
attribute it refers to is a better solution.

If that requires too many changes to the parser, the name variable
should be reset for all statement nodes.

Hoping-to-use-attribute-docstrings-soon ly,
Christian

-- 
Christian Tanzer                                         tanzer@swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92



From mal@lemburg.com  Mon Aug 28 09:28:16 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 10:28:16 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <39A68B42.4E3F8A3D@lemburg.com> <B5CEE3D9.81F2%dgoodger@bigfoot.com>
Message-ID: <39AA22A0.D533598A@lemburg.com>

[Note: Please CC: all messages on this thread to me directly as I
 am the PEP maintainer. If you don't, then I might not read your
 comments.]

David Goodger wrote:
> 
> Some comments:
> 
> 1. I think the idea of attribute docstrings is a great one. It would assist
> in the auto-documenting of code immeasurably.

Agreed ;-)
 
> 2. I second Frank Niessink (frankn=nuws@cs.vu.nl), who wrote:
> 
> > wouldn't the naming
> > scheme <attributename>.__doc__ be a better one?
> >
> > So if:
> >
> > class C:
> >   a = 1
> >   """Description of a."""
> >
> > then:
> >
> > C.a.__doc__ == "Description of a."
> 
> 'C.a.__doc__' is far more natural and Pythonic than 'C.__doc__a__'. The
> latter would also require ugly tricks to access.

This doesn't work, since Python objects cannot have arbitrary
attributes. Also, I wouldn't want to modify attribute objects indirectly
from the outside as the above implies.

I don't really see the argument of __doc__a__ being hard to
access: these attributes are meant for tools to use, not
humans ;-), and these tools can easily construct the right
lookup names by scanning the dir(obj) and then testing for
the various __doc__xxx__ strings.
 
> 3. However, what would happen to C.a.__doc__ (or C.__doc__a__ for that
> matter) when attribute 'a' is reassigned? For example:
> 
>     class C:
>         a = 1  # class attribute, default value for instance attribute
>         """Description of a."""
> 
>         def __init__(self, arg=None):
>             if arg is not None:
>                 self.a = arg  # instance attribute
>             self.b = []
>             """Description of b."""
> 
>     instance = C(2)
> 
> What would instance.a.__doc__ (instance.__doc__a__) be? Would the __doc__ be
> wiped out by the reassignment, or magically remain unless overridden?

See above. This won't work.
 
> 4. How about instance attributes that are never class attributes? Like
> 'instance.b' in the example above?

I don't get the point... doc strings should always be considered
constant and thus be defined in the class/module definition.
 
> 5. Since docstrings "belong" to the attribute preceeding them, wouldn't it
> be more Pythonic to write:
> 
>     class C:
>         a = 1
>             """Description of a."""
> 
> ? (In case of mail viewer problems, each line above is indented relative to
> the one before.) This emphasizes the relationship between the docstring and
> the attribute. Of course, such an approach may entail a more complicated
> modification to the Python source, but also more complete IMHO.

Note that Python's indented blocks are always preceded by a line
ending in a colon. The above idea would break this.

> 6. Instead of mangling names, how about an alternative approach? Each class,
> instance, module, and function gets a single special name (call it
> '__docs__' for now), a dictionary of attribute-name to docstring mappings.
> __docs__ would be the docstring equivalent to __dict__. These dictionary
> entries would not be affected by reassignment unless a new docstring is
> specified. So, in the example from (3) above, we would have:
> 
>     >>> instance.__docs__
>     {'b': 'Description of b.'}
>     >>> C.__docs__
>     {'a': 'Description of a.'}
> 
> Just as there is a built-in function 'dir' to apply Inheritance rules to
> instance.__dict__, there would have to be a function 'docs' to apply
> inheritance to instance.__docs__:
> 
>     >>> docs(instance)
>     {'a': 'Description of a.', 'b': 'Description of b.'}
> 
> There are repercussions here. A module containing the example from (3) above
> would have a __docs__ dictionary containing mappings for docstrings for each
> top-level class and function defined, in addition to docstrings for each
> global variable.

This would not work well together with class inheritance.
 
> In conclusion, although this proposal has great promise, it still needs
> work. If it's to be done at all, better to do it right.
> 
> This could be the first true test of the PEP system; getting input from the
> Python user community as well as the core PythonLabs and Python-Dev groups.
> Other PEPs have been either after-the-fact or, in the case of those features
> approved for inclusion in Python 2.0, too rushed for a significant
> discussion.

We'll see whether this "global" approach is a good one ;-)
In any case, I think it'll give more awareness of the PEP
system.

Thanks for the comments,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Mon Aug 28 09:55:15 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 10:55:15 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13TISv-000wcEC@swing.co.at>
Message-ID: <39AA28F3.1968E27@lemburg.com>

Christian Tanzer wrote:
> 
> "M.-A. Lemburg" <mal@lemburg.com> wrote:
> 
> >     This PEP proposes a small addition to the way Python currently
> >     handles docstrings embedded in Python code.
> (snip)
> >     Here is an example:
> >
> >         class C:
> >             "class C doc-string"
> >
> >             a = 1
> >             "attribute C.a doc-string (1)"
> >
> >             b = 2
> >             "attribute C.b doc-string (2)"
> >
> >     The docstrings (1) and (2) are currently being ignored by the
> >     Python byte code compiler, but could obviously be put to good use
> >     for documenting the named assignments that precede them.
> >
> >     This PEP proposes to also make use of these cases by proposing
> >     semantics for adding their content to the objects in which they
> >     appear under new generated attribute names.
> 
> Great proposal. This would make docstrings even more useful.

Right :-)
 
> >     In order to preserve features like inheritance and hiding of
> >     Python's special attributes (ones with leading and trailing double
> >     underscores), a special name mangling has to be applied which
> >     uniquely identifies the docstring as belonging to the name
> >     assignment and allows finding the docstring later on by inspecting
> >     the namespace.
> >
> >     The following name mangling scheme achieves all of the above:
> >
> >         __doc__<attributename>__
> 
> IMHO, David Goodger's (<dgoodger@bigfoot.com>) idea of using a
> __docs__ dictionary is a better solution:
> 
> - It provides all docstrings for the attributes of an object in a
>   single place.
> 
>   * Handy in interactive mode.
>   * This simplifies the generation of documentation considerably.
> 
> - It is easier to explain in the documentation

The downside is that it doesn't work well together with
class inheritance: docstrings of the above form can
be overridden or inherited just like any other class
attribute.
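
Since the mangled names are ordinary class attributes, that behavior can
be shown directly (a sketch under the PEP's proposed scheme; the docstring
assignments are written out by hand here instead of being generated by the
compiler):

```python
class Base:
    a = 1
    __doc__a__ = "Base's doc-string for a"      # mangled name, plain class attribute

class Inheriting(Base):
    pass                                        # inherits a and its docstring

class Overriding(Base):
    a = 2
    __doc__a__ = "Overriding's doc-string for a"

# Ordinary attribute lookup applies to the mangled names:
assert Inheriting.__doc__a__ == "Base's doc-string for a"
assert Overriding.__doc__a__ == "Overriding's doc-string for a"
```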
 
> >     To keep track of the last assigned name, the byte code compiler
> >     stores this name in a variable of the compiling structure.  This
> >     variable defaults to NULL.  When it sees a docstring, it then
> >     checks the variable and uses the name as basis for the above name
> >     mangling to produce an implicit assignment of the docstring to the
> >     mangled name.  It then resets the variable to NULL to avoid
> >     duplicate assignments.
> 
> Normally, Python concatenates adjacent strings. It doesn't do this
> with docstrings. I think Python's behavior would be more consistent
> if docstrings were concatenated like any other strings.

Huh ? It does...

>>> class C:
...     "first line"\
...     "second line"
... 
>>> C.__doc__
'first linesecond line'

And the same works for the attribute doc strings too.

> >     Since the implementation does not reset the compiling structure
> >     variable when processing a non-expression, e.g. a function
> >     definition, the last assigned name remains active until either the
> >     next assignment or the next occurrence of a docstring.
> >
> >     This can lead to cases where the docstring and assignment may be
> >     separated by other expressions:
> >
> >         class C:
> >             "C doc string"
> >
> >             b = 2
> >
> >             def x(self):
> >                 "C.x doc string"
> >                 y = 3
> >                 return 1
> >
> >             "b's doc string"
> >
> >     Since the definition of method "x" currently does not reset the
> >     used assignment name variable, it is still valid when the compiler
> >     reaches the docstring "b's doc string" and thus assigns the string
> >     to __doc__b__.
> 
> This is rather surprising behavior. Does this mean that a string in
> the middle of a function definition would be interpreted as the
> docstring of the function?

No, since at the beginning of the function the name variable
is set to NULL.
 
> For instance,
> 
>     def spam():
>         a = 3
>         "Is this spam's docstring? (not in 1.5.2)"
>         return 1
> 
> Anyway, the behavior of Python should be the same for all kinds of
> docstrings.
> 
> >     A possible solution to this problem would be resetting the name
> >     variable for all non-expression nodes.
> 
> IMHO, David Goodger's proposal of indenting the docstring relative to the
> attribute it refers to is a better solution.
> 
> If that requires too many changes to the parser, the name variable
> should be reset for all statement nodes.

See my other mail: indenting is only allowed for blocks of
code and these are usually started with a colon -- doesn't
work here.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Mon Aug 28 09:58:34 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 10:58:34 +0200
Subject: [Python-Dev] Python keywords
References: <20000828011624.E500@xs4all.nl>
Message-ID: <39AA29BA.73EA9FB3@lemburg.com>

Thomas Wouters wrote:
> 
> Mark, (and the rest of python-dev)
> 
> There was a thread here a few weeks ago (or so, I seem to have misplaced
> that particular thread :P) about using Python keywords as identifiers in
> some cases. You needed that ability for .NET-Python, where the specs say any
> identifier should be possible as methods and attributes, and there were some
> comments on the list on how to do that (by Guido, for one.)

Are you sure you want to confuse Python source code readers by
making keywords usable as identifiers?

What would happen to Python's simple-to-parse grammar -- would
syntax highlighting still be as simple as it is now?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From guido@beopen.com  Mon Aug 28 11:54:13 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 05:54:13 -0500
Subject: [Python-Dev] Python keywords
In-Reply-To: Your message of "Mon, 28 Aug 2000 01:16:24 +0200."
 <20000828011624.E500@xs4all.nl>
References: <20000828011624.E500@xs4all.nl>
Message-ID: <200008281054.FAA22728@cj20424-a.reston1.va.home.com>

[Thomas Wouters]
> There was a thread here a few weeks ago (or so, I seem to have misplaced
> that particular thread :P) about using Python keywords as identifiers in
> some cases. You needed that ability for .NET-Python, where the specs say any
> identifier should be possible as methods and attributes, and there were some
> comments on the list on how to do that (by Guido, for one.)
> 
> Well, the attached patch sort-of does that. I tried making it a bit nicer,
> but that involved editing all places that currently use the NAME-type node,
> and most of those don't advertise that they're doing that :-S The attached
> patch is in no way nice, but it does work:
> 
> >>> class X:
> ...     def print(self, x):
> ...             print "printing", x
> ... 
> >>> x = X()
> >>> x.print(1)
> printing 1
> >>> x.print
> <method X.print of X instance at 0x8207fc4>
> >>> x.assert = 1
> >>>
> 
> However, it also allows this at the top level, currently:
> >>> def print(x):
> ...     print "printing", x
> ... 

Initially I thought this would be fine, but on second thought I'm not
so sure.  To a newbie who doesn't know all the keywords, this would be
confusing:

  >>> def try(): # my first function
  ...     print "hello"
  ...
  >>> try()
    File "<stdin>", line 1
      try()
	 ^
  SyntaxError: invalid syntax
  >>>

I don't know how best to fix this -- using different syntax for 'def'
inside a class than outside would require a complete rewrite of the
grammar, which is not a good idea.  Perhaps a 2nd pass compile-time
check would be sufficient.
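The second-pass check suggested here could be sketched in Python itself using the standard keyword module; the helper name and the in_class flag are invented for illustration, since a real check would walk the parse tree in compile.c:

```python
import keyword

# Rough sketch of a second-pass check: reject keyword-named functions
# except where they are defined inside a class body (illustrative only).
def check_def_name(name, in_class):
    if keyword.iskeyword(name) and not in_class:
        raise SyntaxError(
            "cannot use keyword %r as a top-level function name" % name)
    return name
```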

> which results in some unexpected behaviour:
> >>> print(1)
> 1
> >>> globals()['print'](1)
> printing 1
> 
> But when combining it with modules, it does work as expected, of course:
> 
> # printer.py:
> def print(x, y):
>         print "printing", x, "and", y
> #
> 
> >>> import printer
> >>> printer.print
> <function print at 0x824120c>
> >>> printer.print(1, 2)
> printing 1 and 2
> 
> Another plus-side of this particular method is that it's simple and
> straightforward, if a bit maintenance-intensive :-) But the big question is:
> is this enough for what you need ? Or do you need the ability to use
> keywords in *all* identifiers, including variable names and such ? Because
> that is quite a bit harder ;-P

I believe that one other thing is needed: keyword parameters (only in
calls, not in definitions).  Also, I think you missed a few reserved
words, e.g. 'and', 'or'.  See Lib/keyword.py!

A comment on the patch: wouldn't it be *much* better to change the
grammar to introduce a new nonterminal, e.g. unres_name, as follows:

unres_name: NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | \
  'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | \
  'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | \
  'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally'

and use this elsewhere in the rules:

funcdef: 'def' unres_name parameters ':' suite
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' unres_name

Then you'd have to fix compile.c of course, but only in two places (I
think?).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Mon Aug 28 12:16:18 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 06:16:18 -0500
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: Your message of "Mon, 28 Aug 2000 09:30:23 +0300."
 <Pine.GSO.4.10.10008280930000.5796-100000@sundial>
References: <Pine.GSO.4.10.10008280930000.5796-100000@sundial>
Message-ID: <200008281116.GAA22841@cj20424-a.reston1.va.home.com>

> > from SUSv2:
> > 
> >     int remove(const char *path);
> > 
> >     If path does not name a directory, remove(path)
> >     is equivalent to unlink(path). 
> > 
> >     If path names a directory, remove(path) is equi-
> >     valent to rmdir(path). 
> > 
> > should I fix this?
> 
> 1. Yes.
> 2. After the feature freeze.

Agreed.  Note that the correct fix is to use remove() if it exists and
emulate it if it doesn't.

On Windows, I believe remove() exists, but probably not with the above
semantics, so it should be emulated.
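The emulation can be sketched directly from the SUSv2 wording quoted above; the function name is invented for illustration, and the real fix would of course live at the C level:

```python
import os

# remove(path) per SUSv2: rmdir() for directories, unlink() otherwise.
def remove_any(path):
    if os.path.isdir(path):
        os.rmdir(path)    # equivalent to rmdir(path) for a directory
    else:
        os.unlink(path)   # equivalent to unlink(path) otherwise
```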

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Mon Aug 28 13:33:59 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 14:33:59 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
Message-ID: <39AA5C37.2F1846B3@lemburg.com>

I've been tossing some ideas around w/r to adding pragma style
declarations to Python and would like to hear what you think
about these:

1. Embed pragma declarations in comments:

	#pragma: name = value

   Problem: comments are removed by the tokenizer, yet the compiler
   will have to make use of them, so some logic would be needed
   to carry them along.

2. Reusing a Python keyword to build a new form of statement:

	def name = value

   Problem: not sure whether the compiler and grammar could handle
   this.

   The nice thing about this kind of declaration is that it would
   generate a node which the compiler could actively use. Furthermore,
   scoping would come for free. This one is my favourite.

3. Add a new keyword:

	decl name = value

   Problem: possible code breakage.

This is only a question regarding the syntax of these meta-information
declarations. The semantics remain to be solved in a different
discussion.
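Option 1 could be pictured as a small pre-pass over the source text; the comment format and the helper name below are only illustrative, not part of the proposal:

```python
import re

# Pull "#pragma: name = value" comments out of source text before (or
# alongside) tokenization -- a sketch of option 1 above.
PRAGMA_RE = re.compile(r"#pragma:\s*(\w+)\s*=\s*(.+?)\s*$", re.MULTILINE)

def find_pragmas(source):
    return dict(PRAGMA_RE.findall(source))
```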

Comments ?

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Mon Aug 28 13:38:13 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 14:38:13 +0200
Subject: [Python-Dev] Python keywords
In-Reply-To: <200008281054.FAA22728@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Mon, Aug 28, 2000 at 05:54:13AM -0500
References: <20000828011624.E500@xs4all.nl> <200008281054.FAA22728@cj20424-a.reston1.va.home.com>
Message-ID: <20000828143813.F500@xs4all.nl>

On Mon, Aug 28, 2000 at 05:54:13AM -0500, Guido van Rossum wrote:

> > However, it also allows this at the top level, currently:
> > >>> def print(x):
> > ...     print "printing", x
> > ... 

> Initially I thought this would be fine, but on second thought I'm not
> so sure.  To a newbie who doesn't know all the keywords, this would be
> confusing:
> 
>   >>> def try(): # my first function
>   ...     print "hello"
>   ...
>   >>> try()
>     File "<stdin>", line 1
>       try()
> 	 ^
>   SyntaxError: invalid syntax
>   >>>
> 
> I don't know how best to fix this -- using different syntax for 'def'
> inside a class than outside would require a complete rewrite of the
> grammar, which is not a good idea.  Perhaps a 2nd pass compile-time
> check would be sufficient.

Hmm. I'm not really sure. I think it's nice to be able to use
'object.print', and it would be, well, inconsistent, not to allow
'module.print' (or module.exec, for that matter), but I realize how
confusing it can be.

Perhaps generate a warning ? :-P

> I believe that one other thing is needed: keyword parameters (only in
> calls, not in definitions).  Also, I think you missed a few reserved
> words, e.g. 'and', 'or'.  See Lib/keyword.py!

Ahh, yes. I knew there had to be a list of keywords, but I was too tired to
go hunt for it last night ;) 

> A comment on the patch: wouldn't it be *much* better to change the
> grammar to introduce a new nonterminal, e.g. unres_name, as follows:

> unres_name: NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | \
>   'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | \
>   'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | \
>   'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally'

> and use this elsewhere in the rules:

> funcdef: 'def' unres_name parameters ':' suite
> trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' unres_name

> Then you'd have to fix compile.c of course, but only in two places (I
> think?).

I tried this before, a week or two ago, but it was too much of a pain. The
nodes get tossed around no end, and tracking down where they are STR()'d and
TYPE()'d is, well, annoying ;P I tried to hack around it by making STR() and
CHILD() do some magic, but it didn't quite work. I kind of gave up and
decided it had to be done in the metagrammar, which drove me insane last
night ;-) and then decided to 'prototype' it first.

Then again, maybe I missed something. I might try it again. It would
definitely be the better solution ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From DavidA@ActiveState.com  Thu Aug 24 01:25:55 2000
From: DavidA@ActiveState.com (David Ascher)
Date: Wed, 23 Aug 2000 17:25:55 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] [Announce] ActivePython 1.6 beta release (fwd)
Message-ID: <Pine.WNT.4.21.0008231725340.272-100000@loom>

It is my pleasure to announce the availability of the beta release of
ActivePython 1.6, build 100.

This binary distribution, based on Python 1.6b1, is available from
ActiveState's website at:

    http://www.ActiveState.com/Products/ActivePython/

ActiveState is committed to making Python easy to install and use on all
major platforms. ActivePython contains the convenience of swift
installation, coupled with commonly used modules, providing you with a
total package to meet your Python needs. Additionally, for Windows users,
ActivePython provides a suite of Windows tools, developed by Mark Hammond.

ActivePython is provided in convenient binary form for Windows, Linux and
Solaris under a variety of installation packages, available at:

    http://www.ActiveState.com/Products/ActivePython/Download.html

For support information, mailing list subscriptions and archives, a bug
reporting system, and fee-based technical support, please go to

    http://www.ActiveState.com/Products/ActivePython/

Please send us feedback regarding this release, either through the mailing
list or through direct email to ActivePython-feedback@ActiveState.com.

ActivePython is free, and redistribution of ActivePython within your
organization is allowed.  The ActivePython license is available at
http://www.activestate.com/Products/ActivePython/License_Agreement.html
and in the software packages.

We look forward to your comments and to making ActivePython suit your
Python needs in future releases.

Thank you,

-- David Ascher & the ActivePython team
   ActiveState Tool Corporation


From nhodgson@bigpond.net.au  Mon Aug 28 15:22:50 2000
From: nhodgson@bigpond.net.au (Neil Hodgson)
Date: Tue, 29 Aug 2000 00:22:50 +1000
Subject: [Python-Dev] Python identifiers - was: Python keywords
References: <20000828011624.E500@xs4all.nl>
Message-ID: <019601c010fb$731007c0$8119fea9@neil>

   As well as .NET requiring a mechanism for accessing externally defined
identifiers which clash with Python keywords, it would be good to allow
access to identifiers containing non-ASCII characters. This is allowed in
.NET. C# copies the Java convention of allowing \u escapes in identifiers as
well as character/string literals.

   Has there been any thought to allowing this in Python? The benefit of
this convention over encoding the file in UTF-8 or an 8 bit character set is
that it is ASCII safe and can be manipulated correctly by common tools. My
interest in this is in the possibility of extending Scintilla and PythonWin
to directly understand this sequence, showing the correct glyph rather than
the \u sequence.
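The display side of the C#/Java convention Neil describes can be sketched at the tool level, decoding \uXXXX escapes to the real character; this illustrates only the rendering step, not a change to the Python tokenizer:

```python
import re

# Decode \uXXXX escapes in an identifier to the character they name,
# as an editor like Scintilla might do before displaying the glyph.
def decode_identifier(name):
    return re.sub(r"\\u([0-9a-fA-F]{4})",
                  lambda m: chr(int(m.group(1), 16)), name)
```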

   Neil



From bwarsaw@beopen.com  Mon Aug 28 14:44:45 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 28 Aug 2000 09:44:45 -0400 (EDT)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
 <200008260814.KAA06267@hera.informatik.uni-bonn.de>
Message-ID: <14762.27853.159285.488297@anthem.concentric.net>

>>>>> "MF" == Markus Fleck <fleck@triton.informatik.uni-bonn.de> writes:

    MF> In principle, I do have the time again to do daily moderation
    MF> of incoming postings for c.l.py.a. Unfortunately, I currently
    MF> lack the infrastructure (i.e. the moderation program), which
    MF> went down together with the old starship. I was basically
    MF> waiting for a version of Mailman that could be used to post to
    MF> moderated newsgroups. (I should probably have been more vocal
    MF> about that, or even should have started hacking Mailman
    MF> myself...

All this is in place now.
    
    MF> I *did* start to write something that would grab new
    MF> announcements daily from Parnassus and post them to c.l.py.a,
    MF> and I may even come to finish this in September, but that
    MF> doesn't substitute for a "real" moderation tool for
    MF> user-supplied postings. Also, it would probably be a lot
    MF> easier for Parnassus postings to be built directly from the
    MF> Parnassus database, instead from its [generated] HTML pages -
    MF> the Parnassus author intended to supply such functionality,
    MF> but I didn't hear from him yet, either.)

I think that would be a cool thing to work on.  As I mentioned to
Markus in private email, it would be great if the Parnassus->news tool
added the special c.l.py.a footer so that automated scripts on the
/other/ end could pull the messages off the newsgroup, search for the
footer, and post them to web pages, etc.

    MF> So what's needed now? Primarily, a Mailman installation that
    MF> can post to moderated newsgroups (and maybe also do the
    MF> mail2list gatewaying for c.l.py.a), and a mail alias that
    MF> forwards mail for python-announce@python.org to that Mailman
    MF> address. Some "daily digest" generator for Parnassus
    MF> announcements would be nice to have, too, but that can only
    MF> come once the other two things work.

All this is in place, as MAL said.  Markus, if you'd like to be a
moderator, email me and I'd be happy to add you.

And let's start encouraging people to post to c.l.py.a and
python-announce@Python.org again!

-Barry



From bwarsaw@beopen.com  Mon Aug 28 16:01:24 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Mon, 28 Aug 2000 11:01:24 -0400 (EDT)
Subject: [Python-Dev] New dictionaries patch on SF
References: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>
Message-ID: <14762.32452.579356.483473@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake@beopen.com> writes:

    Fred> None the less, performance is an issue for dictionaries, so
    Fred> I came up with the idea to use a specialized version for
    Fred> string keys.

Note that JPython does something similar for dictionaries that are
used for namespaces.  See PyStringMap.java.

-Barry


From fdrake@beopen.com  Mon Aug 28 16:19:44 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 28 Aug 2000 11:19:44 -0400 (EDT)
Subject: [Python-Dev] New dictionaries patch on SF
In-Reply-To: <14762.32452.579356.483473@anthem.concentric.net>
References: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>
 <14762.32452.579356.483473@anthem.concentric.net>
Message-ID: <14762.33552.622374.428515@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > Note that JPython does something similar for dictionaries that are
 > used for namespaces.  See PyStringMap.java.

  The difference is that there are no code changes outside
dictobject.c to make this useful for my proposal -- there isn't a new
object type involved.  The PyStringMap class is actually a different
implementation (which I did dig into a bit at one point, to create
versions that weren't bound to JPython).
  My modified dictionary objects are just dictionary objects that
auto-degrade themselves as soon as a non-string key is looked up
(including while setting values).  But the approach and rationale are
very similar.
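The auto-degrading behaviour Fred describes can be modelled in pure Python; the class and attribute names here are invented, since the real patch changes dictobject.c rather than adding a new type:

```python
class StrKeyDict:
    """Toy model: a string-keyed fast path that permanently degrades
    to a general mapping on the first non-string key (illustrative)."""

    def __init__(self):
        self._fast = {}      # string-only table: cheap hash/compare path
        self._slow = None    # general table, created once degraded

    def _table(self, key):
        if self._slow is None and not isinstance(key, str):
            self._slow = dict(self._fast)   # degrade: copy entries over
            self._fast = None
        return self._fast if self._slow is None else self._slow

    def __setitem__(self, key, value):
        self._table(key)[key] = value

    def __getitem__(self, key):
        return self._table(key)[key]
```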


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From guido@beopen.com  Mon Aug 28 18:09:30 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 12:09:30 -0500
Subject: [Python-Dev] Pragma-style declaration syntax
In-Reply-To: Your message of "Mon, 28 Aug 2000 14:33:59 +0200."
 <39AA5C37.2F1846B3@lemburg.com>
References: <39AA5C37.2F1846B3@lemburg.com>
Message-ID: <200008281709.MAA24142@cj20424-a.reston1.va.home.com>

> I've been tossing some ideas around w/r to adding pragma style
> declarations to Python and would like to hear what you think
> about these:
> 
> 1. Embed pragma declarations in comments:
> 
> 	#pragma: name = value
> 
>    Problem: comments are removed by the tokenizer, yet the compiler
>    will have to make use of them, so some logic would be needed
>    to carry them along.
> 
> 2. Reusing a Python keyword to build a new form of statement:
> 
> 	def name = value
> 
>    Problem: not sure whether the compiler and grammar could handle
>    this.
> 
>    The nice thing about this kind of declaration is that it would
>    generate a node which the compiler could actively use. Furthermore,
>    scoping would come for free. This one is my favourite.
> 
> 3. Add a new keyword:
> 
> 	decl name = value
> 
>    Problem: possible code breakage.
> 
> This is only a question regarding the syntax of these meta-information
> declarations. The semantics remain to be solved in a different
> discussion.

I say add a new reserved word pragma and accept the consequences.  The
other solutions are just too ugly.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From jeremy@beopen.com  Mon Aug 28 17:36:33 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Mon, 28 Aug 2000 12:36:33 -0400 (EDT)
Subject: [Python-Dev] need help with build on HP-UX
Message-ID: <14762.38161.405971.414152@bitdiddle.concentric.net>

We have a bug report for Python 1.5.2 that says building with threads
enabled causes a core dump when the interpreter is started.

#110650:
http://sourceforge.net/bugs/?func=detailbug&bug_id=110650&group_id=5470

I don't have access to an HP-UX box on which to test this problem.  If
anyone does, could they verify whether the problem exists with the
current code?

Jeremy


From nathan@islanddata.com  Mon Aug 28 17:51:24 2000
From: nathan@islanddata.com (Nathan Clegg)
Date: Mon, 28 Aug 2000 09:51:24 -0700 (PDT)
Subject: [Python-Dev] RE: need help with build on HP-UX
In-Reply-To: <14762.38161.405971.414152@bitdiddle.concentric.net>
Message-ID: <XFMail.20000828095124.nathan@islanddata.com>

I can't say for current code, but I ran into this problem with 1.5.2.  I
resolved it by installing pthreads instead of HP's native threads.  Should
this be a prerequisite?



On 28-Aug-2000 Jeremy Hylton wrote:
> We have a bug report for Python 1.5.2 that says building with threads
> enabled causes a core dump when the interpreter is started.
> 
>#110650:
> http://sourceforge.net/bugs/?func=detailbug&bug_id=110650&group_id=5470
> 
> I don't have access to an HP-UX box on which to test this problem.  If
> anyone does, could they verify whether the problem exists with the
> current code?
> 
> Jeremy
> 
> -- 
> http://www.python.org/mailman/listinfo/python-list



----------------------------------
Nathan Clegg
 nathan@islanddata.com




From guido@beopen.com  Mon Aug 28 19:34:55 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 13:34:55 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: Your message of "Mon, 28 Aug 2000 10:20:08 MST."
 <200008281720.KAA09138@slayer.i.sourceforge.net>
References: <200008281720.KAA09138@slayer.i.sourceforge.net>
Message-ID: <200008281834.NAA24777@cj20424-a.reston1.va.home.com>

How about popen4?  Or is that Windows specific?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Mon Aug 28 18:36:06 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 19:36:06 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <39A68B42.4E3F8A3D@lemburg.com> <B5CEE3D9.81F2%dgoodger@bigfoot.com> <39AA22A0.D533598A@lemburg.com> <200008281515.IAA27799@netcom.com>
Message-ID: <39AAA306.2CBD5383@lemburg.com>

Aahz Maruch wrote:
> 
> [p&e]
> 
> In article <39AA22A0.D533598A@lemburg.com>,
> M.-A. Lemburg <mal@lemburg.com> wrote:
> >
> >>     >>> docs(instance)
> >>     {'a': 'Description of a.', 'b': 'Description of b.'}
> >>
> >> There are repercussions here. A module containing the example from (3) above
> >> would have a __docs__ dictionary containing mappings for docstrings for each
> >> top-level class and function defined, in addition to docstrings for each
> >> global variable.
> >
> >This would not work well together with class inheritance.
> 
> Could you provide an example explaining this?  Using a dict *seems* like
> a good idea to me, too.

class A:
    " Base class for database "

    x = "???"
    " name of the database; override in subclasses ! "

    y = 1
    " run in auto-commit ? "

class D(A):

    x = "mydb"
    """ name of the attached database; note that this must support
        transactions 
    """

This will give you:

A.__doc__x__ == " name of the database; override in subclasses ! "
A.__doc__y__ == " run in auto-commit ? "
D.__doc__x__ == """ name of the attached database; note that this must support
        transactions 
    """
D.__doc__y__ == " run in auto-commit ? "

There's no way you are going to achieve this using dictionaries.

Note: You can always build dictionaries of docstring by using
the existing Python introspection features. This PEP is
meant to provide the data -- not the extraction tools.
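The extraction alluded to here could be sketched as a walk over the class hierarchy, assuming the PEP 224 style __doc__<name>__ attributes exist; standard Python does not create them, so the test below sets them by hand:

```python
# Collect __doc__<name>__ attributes through the MRO, so a subclass
# inherits its base's entries unless it overrides them (illustrative).
def attribute_docs(cls):
    docs = {}
    for klass in reversed(cls.__mro__):
        for attr, value in vars(klass).items():
            if (attr.startswith("__doc__") and attr.endswith("__")
                    and attr != "__doc__"):
                docs[attr[7:-2]] = value
    return docs
```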

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From gvwilson@nevex.com  Mon Aug 28 18:43:29 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Mon, 28 Aug 2000 13:43:29 -0400 (EDT)
Subject: [Python-Dev] Pragma-style declaration syntax
In-Reply-To: <200008281709.MAA24142@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>

> > Marc-Andre Lemburg:
> > 1. Embed pragma declarations in comments:
> > 	#pragma: name = value
> > 
> > 2. Reusing a Python keyword to build a new form of statement:
> > 	def name = value
> > 
> > 3. Add a new keyword:
> > 	decl name = value

> Guido van Rossum:
> I say add a new reserved word pragma and accept the consequences.  
> The other solutions are just too ugly.

Greg Wilson:
Will pragma values be available at run-time, e.g. in a special
module-level dictionary variable '__pragma__', so that:

    pragma "encoding" = "UTF8"
    pragma "division" = "fractional"

has the same effect as:

    __pragma__["encoding"] = "UTF8"
    __pragma__["division"] = "fractional"

If that's the case, would it be better to use the dictionary syntax?  Or
does the special form simplify pragma detection so much as to justify
adding new syntax?

Also, what's the effect of putting a pragma in the middle of a file,
rather than at the top?  Does 'import' respect pragmas, or are they
per-file?  I've seen Fortran files that start with 20 lines of:

    C$VENDOR PROPERTY DEFAULT

to disable any settings that might be in effect when the file is included
in another, just so that the author of the include'd file could be sure of
the semantics of the code he was writing.

Thanks,

Greg



From fdrake@beopen.com  Mon Aug 28 19:00:43 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 28 Aug 2000 14:00:43 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: <200008281834.NAA24777@cj20424-a.reston1.va.home.com>
References: <200008281720.KAA09138@slayer.i.sourceforge.net>
 <200008281834.NAA24777@cj20424-a.reston1.va.home.com>
Message-ID: <14762.43211.814471.424886@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > How about popen4?  Or is that Windows specific?

  Haven't written it yet.  It's a little different from just wrappers
around popen2 module functions.  The Popen3 class doesn't support it
yet.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From skip@mojam.com (Skip Montanaro)  Mon Aug 28 19:06:49 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Mon, 28 Aug 2000 13:06:49 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: <200008281834.NAA24777@cj20424-a.reston1.va.home.com>
References: <200008281720.KAA09138@slayer.i.sourceforge.net>
 <200008281834.NAA24777@cj20424-a.reston1.va.home.com>
Message-ID: <14762.43577.780248.889686@beluga.mojam.com>

    Guido> How about popen4?  Or is that Windows specific?

This is going to sound really dumb, but for all N where N >= 2, how many
popenN routines are there?  Do they represent a subclass of rabbits?  Until
the thread about Windows and os.popen2 started, I, living in a dream world
where my view of libc approximated 4.2BSD, wasn't even aware any popenN
routines existed.  In fact, on my Mandrake box that seems to still be the
case:

    % man -k popen
    popen, pclose (3)    - process I/O
    % nm -a /usr/lib/libc.a | egrep popen
    iopopen.o:
    00000188 T _IO_new_popen
    00000188 W _IO_popen
    00000000 a iopopen.c
    00000188 T popen

In fact, the os module documentation only describes popen, not popenN.

Where'd all these other popen variants come from?  Where can I find them
documented online?

Skip


From fdrake@beopen.com  Mon Aug 28 19:22:27 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Mon, 28 Aug 2000 14:22:27 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: <14762.43577.780248.889686@beluga.mojam.com>
References: <200008281720.KAA09138@slayer.i.sourceforge.net>
 <200008281834.NAA24777@cj20424-a.reston1.va.home.com>
 <14762.43577.780248.889686@beluga.mojam.com>
Message-ID: <14762.44515.597067.695634@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:
 > In fact, the os module documentation only describes popen, not popenN.

  This will be fixed.

 > Where'd all these other popen variants come from?  Where can I find them
 > documented online?

  In the popen2 module docs, there are descriptions for popen2() and
popen3().  popen4() is new from the Windows world.
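The popenN family can be pictured in terms of today's subprocess module (which did not exist at the time); each variant differs only in which of the child's streams it hands back. These helpers are illustrative, not the actual popen2-module implementation:

```python
import subprocess

def popen2_like(cmd):
    # popen2(cmd) -> (child_stdout, child_stdin)
    p = subprocess.Popen(cmd, shell=True, text=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    return p.stdout, p.stdin

def popen4_like(cmd):
    # popen4(cmd) -> (child_stdout_and_stderr, child_stdin)
    p = subprocess.Popen(cmd, shell=True, text=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    return p.stdout, p.stdin
```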


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From mal@lemburg.com  Mon Aug 28 19:57:26 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 20:57:26 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
References: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>
Message-ID: <39AAB616.460FA0A8@lemburg.com>

Greg Wilson wrote:
> 
> > > Marc-Andre Lemburg:
> > > 1. Embed pragma declarations in comments:
> > >     #pragma: name = value
> > >
> > > 2. Reusing a Python keyword to build a new form of statement:
> > >     def name = value
> > >
> > > 3. Add a new keyword:
> > >     decl name = value
> 
> > Guido van Rossum:
> > I say add a new reserved word pragma and accept the consequences.
> > The other solutions are just too ugly.
> 
> Greg Wilson:
> Will pragma values be available at run-time, e.g. in a special
> module-level dictionary variable '__pragma__', so that:
> 
>     pragma "encoding" = "UTF8"
>     pragma "division" = "fractional"
> 
> has the same effect as:
> 
>     __pragma__["encoding"] = "UTF8"
>     __pragma__["division"] = "fractional"
> 
> If that's the case, would it be better to use the dictionary syntax?  Or
> does the special form simplify pragma detection so much as to justify
> adding new syntax?

Pragmas tell the compiler to make certain assumptions about the
scope they appear in. It may be useful to have their values available
in a __pragma__ dict too, but only for introspection purposes and
then only for objects which support the attribute.

If we were to use a convention such as your proposed dictionary
assignment for these purposes, the compiler would have to treat
these assignments in special ways. Adding a new reserved word is
much cleaner.

> Also, what's the effect of putting a pragma in the middle of a file,
> rather than at the top?  Does 'import' respect pragmas, or are they
> per-file?  I've seen Fortran files that start with 20 lines of:
> 
>     C$VENDOR PROPERTY DEFAULT
> 
> to disable any settings that might be in effect when the file is included
> in another, just so that the author of the include'd file could be sure of
> the semantics of the code he was writing.

The compiler will see the pragma definition as soon as it reaches
it during compilation. All subsequent compilation (up to where
the compilation block ends, i.e. up to module, function and class
boundaries) will be influenced by the setting.

This is in line with all other declarations in Python, e.g. those
of global variables, functions and classes.

Imports do not affect pragmas since pragmas are a compile
time thing.

Here are some possible applications of pragmas (just to toss in
a few ideas):

# Cause global lookups to be cached in function's locals for future
# reuse.
pragma globals = 'constant'

# Cause all Unicode literals in the current scope to be
# interpreted as UTF-8.
pragma encoding = 'utf-8'

# Use -OO style optimizations
pragma optimization = 2

# Default division mode
pragma division = 'float'

The basic syntax in the above examples is:

	"pragma" NAME "=" (NUMBER | STRING+)

It has to be that simple to allow the compiler to use the information
at compilation time.
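The proposed form can be checked mechanically; the regular expression below is only an illustration of its shape, not how a real grammar rule would be written:

```python
import re

# "pragma" NAME "=" (NUMBER | STRING+), where STRING+ is one or more
# adjacent quoted literals -- a sketch of the syntax, nothing more.
PRAGMA_STMT = re.compile(
    r"^pragma\s+([A-Za-z_]\w*)\s*=\s*"
    r"(\d+|(?:'[^']*'|\"[^\"]*\")(?:\s*(?:'[^']*'|\"[^\"]*\"))*)\s*$")

def parse_pragma(line):
    m = PRAGMA_STMT.match(line)
    if not m:
        raise SyntaxError("not a valid pragma statement")
    return m.group(1), m.group(2)
```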

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From skip@mojam.com (Skip Montanaro)  Mon Aug 28 20:17:47 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Mon, 28 Aug 2000 14:17:47 -0500 (CDT)
Subject: [Python-Dev] Pragma-style declaration syntax
In-Reply-To: <39AAB616.460FA0A8@lemburg.com>
References: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>
 <39AAB616.460FA0A8@lemburg.com>
Message-ID: <14762.47835.129388.512169@beluga.mojam.com>

    MAL> Here are some possible applications of pragmas (just to toss in
    MAL> a few ideas):

    MAL> # Cause global lookups to be cached in function's locals for future
    MAL> # reuse.
    MAL> pragma globals = 'constant'

    MAL> # Cause all Unicode literals in the current scope to be
    MAL> # interpreted as UTF-8.
    MAL> pragma encoding = 'utf-8'

    MAL> # Use -OO style optimizations
    MAL> pragma optimization = 2

    MAL> # Default division mode
    MAL> pragma division = 'float'

Marc-Andre,

My interpretation of the word "pragma" (and I think a probably common
interpretation) is that it is a "hint to the compiler" which the compiler
can ignore if it chooses.  See

    http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?query=pragma

Your use of the word suggests that you propose to implement something more
akin to a "directive", that is, something the compiler is not free to
ignore.  Ignoring the pragma in the first and third examples will likely
only make the program run slower.  Ignoring the second or fourth pragmas
would likely result in incorrect compilation of the source.

Whatever you come up with, I think the distinction between hint and
directive will have to be made clear in the documentation.

Skip



From tanzer@swing.co.at  Mon Aug 28 17:27:44 2000
From: tanzer@swing.co.at (Christian Tanzer)
Date: Mon, 28 Aug 2000 18:27:44 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: Your message of "Mon, 28 Aug 2000 10:55:15 +0200."
 <39AA28F3.1968E27@lemburg.com>
Message-ID: <m13TRlA-000wcEC@swing.co.at>

"M.-A. Lemburg" <mal@lemburg.com> wrote:

> > IMHO, David Goodger's (<dgoodger@bigfoot.com>) idea of using a
> > __docs__ dictionary is a better solution:
> >
> > - It provides all docstrings for the attributes of an object in a
> >   single place.
> >
> >   * Handy in interactive mode.
> >   * This simplifies the generation of documentation considerably.
> >
> > - It is easier to explain in the documentation
>
> The downside is that it doesn't work well together with
> class inheritance: docstrings of the above form can
> be overridden or inherited just like any other class
> attribute.

Yep. That's why David also proposed a `doc' function combining the
`__docs__' of a class with all its ancestor's __docs__.

> > Normally, Python concatenates adjacent strings. It doesn't do this
> > with docstrings. I think Python's behavior would be more consistent
> > if docstrings were concatenated like any other strings.
>
> Huh ? It does...
>
> >>> class C:
> ...     "first line"\
> ...     "second line"
> ...
> >>> C.__doc__
> 'first linesecond line'
>
> And the same works for the attribute doc strings too.

Surprise. I tried it this morning. Didn't use a backslash, though. And
almost overlooked it now.

> > >             b = 2
> > >
> > >             def x(self):
> > >                 "C.x doc string"
> > >                 y = 3
> > >                 return 1
> > >
> > >             "b's doc string"
> > >
> > >     Since the definition of method "x" currently does not reset the
> > >     used assignment name variable, it is still valid when the compiler
> > >     reaches the docstring "b's doc string" and thus assigns the string
> > >     to __doc__b__.
> >
> > This is rather surprising behavior. Does this mean that a string in
> > the middle of a function definition would be interpreted as the
> > docstring of the function?
>
> No, since at the beginning of the function the name variable
> is set to NULL.

Fine. Could the attribute docstrings follow the same pattern, then?

> > >     A possible solution to this problem would be resetting the name
> > >     variable for all non-expression nodes.
> >
> > IMHO, David Goodger's proposal of indenting the docstring relative to the
> > attribute it refers to is a better solution.
> >
> > If that requires too many changes to the parser, the name variable
> > should be reset for all statement nodes.
>
> See my other mail: indenting is only allowed for blocks of
> code and these are usually started with a colon -- doesn't
> work here.

Too bad.

It's-still-a-great-addition-to-Python ly,
Christian

-- 
Christian Tanzer                                         tanzer@swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92



From mal@lemburg.com  Mon Aug 28 20:29:04 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 21:29:04 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
References: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>
 <39AAB616.460FA0A8@lemburg.com> <14762.47835.129388.512169@beluga.mojam.com>
Message-ID: <39AABD80.5089AEAF@lemburg.com>

Skip Montanaro wrote:
> 
>     MAL> Here are some possible applications of pragmas (just to toss in
>     MAL> a few ideas):
> 
>     MAL> # Cause global lookups to be cached in function's locals for future
>     MAL> # reuse.
>     MAL> pragma globals = 'constant'
> 
>     MAL> # Cause all Unicode literals in the current scope to be
>     MAL> # interpreted as UTF-8.
>     MAL> pragma encoding = 'utf-8'
> 
>     MAL> # Use -OO style optimizations
>     MAL> pragma optimization = 2
> 
>     MAL> # Default division mode
>     MAL> pragma division = 'float'
> 
> Marc-Andre,
> 
> My interpretation of the word "pragma" (and, I think, a common
> interpretation) is that it is a "hint to the compiler" which the compiler
> can ignore if it chooses.  See
> 
>     http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?query=pragma
> 
> Your use of the word suggests that you propose to implement something more
> akin to a "directive", that is, something the compiler is not free to
> ignore.  Ignoring the pragma in the first and third examples will likely
> only make the program run slower.  Ignoring the second or fourth pragmas
> would likely result in incorrect compilation of the source.
> 
> Whatever you come up with, I think the distinction between hint and
> directive will have to be made clear in the documentation.

True, I see the pragma statement as a directive. Perhaps it's not
the best name after all -- but then it is unlikely to be in use as
an identifier in existing Python programs, so perhaps we just need
to make it clear in the documentation that some pragma statements
carry important information, not only hints.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Mon Aug 28 20:35:58 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 21:35:58 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13TRlA-000wcEC@swing.co.at>
Message-ID: <39AABF1E.171BFD00@lemburg.com>

Christian Tanzer wrote:
> 
> "M.-A. Lemburg" <mal@lemburg.com> wrote:
> 
> > > IMHO, David Goodger's (<dgoodger@bigfoot.com>) idea of using a
> > > __docs__ dictionary is a better solution:
> > >
> > > - It provides all docstrings for the attributes of an object in a
> > >   single place.
> > >
> > >   * Handy in interactive mode.
> > >   * This simplifies the generation of documentation considerably.
> > >
> > > - It is easier to explain in the documentation
> >
> > The downside is that it doesn't work well together with
> > class inheritance: docstrings of the above form can
> > be overridden or inherited just like any other class
> > attribute.
> 
> Yep. That's why David also proposed a `doc' function combining the
> `__docs__' of a class with all its ancestor's __docs__.

The same can be done for __doc__<attrname>__ style attributes:
a helper function would just need to look at dir(Class) and then
extract the attribute doc strings it finds. It could also do
a DFS search to find a complete API description of the class
by emulating attribute lookup and combine method and attribute
docstrings to produce some nice online documentation output.
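Such a helper is indeed only a few lines. A minimal sketch, assuming the
PEP 224 `__doc__<attr>__` naming scheme (the attributes would be set by
the compiler under the PEP; the function name here is made up):

```python
def gather_attribute_docs(klass, docs=None):
    """Collect __doc__<attr>__ docstrings from a class and its bases.

    Depth-first: bases are visited before the class itself, so a
    docstring defined closer to `klass` overrides an inherited one.
    """
    if docs is None:
        docs = {}
    for base in klass.__bases__:
        gather_attribute_docs(base, docs)
    prefix = '__doc__'
    for name, value in vars(klass).items():
        # match '__doc__<attr>__' but not plain '__doc__'
        if name.startswith(prefix) and name.endswith('__') and name != prefix:
            docs[name[len(prefix):-2]] = value
    return docs
```

A DFS that emulates full attribute lookup, as suggested above, would
follow the same shape, just collecting methods and attributes alongside
the docstrings.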
 
> > > Normally, Python concatenates adjacent strings. It doesn't do this
> > > with docstrings. I think Python's behavior would be more consistent
> > > if docstrings were concatenated like any other strings.
> >
> > Huh ? It does...
> >
> > >>> class C:
> > ...     "first line"\
> > ...     "second line"
> > ...
> > >>> C.__doc__
> > 'first linesecond line'
> >
> > And the same works for the attribute doc strings too.
> 
> Surprise. I tried it this morning. Didn't use a backslash, though. And almost
> overlooked it now.

You could also wrap the doc string in parentheses or use a
triple-quoted string.
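Both spellings are easy to check interactively; a small sketch (class
names are purely illustrative):

```python
class C:
    ("Adjacent string literals are concatenated at compile time, "
     "so parentheses give a multi-line docstring without backslashes.")

class D:
    """A triple-quoted string
    works as well, newlines included."""
```

Either way the result lands in `__doc__`, because the compiler only cares
that the first statement in the block is a single string constant.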
 
> > > >             b = 2
> > > >
> > > >             def x(self):
> > > >                 "C.x doc string"
> > > >                 y = 3
> > > >                 return 1
> > > >
> > > >             "b's doc string"
> > > >
> > > >     Since the definition of method "x" currently does not reset the
> > > >     used assignment name variable, it is still valid when the compiler
> > > >     reaches the docstring "b's doc string" and thus assigns the string
> > > >     to __doc__b__.
> > >
> > > This is rather surprising behavior. Does this mean that a string in
> > > the middle of a function definition would be interpreted as the
> > > docstring of the function?
> >
> > No, since at the beginning of the function the name variable
> > is set to NULL.
> 
> Fine. Could the attribute docstrings follow the same pattern, then?

They could and probably should by resetting the variable
after all constructs which do not assign attributes.
 
> > > >     A possible solution to this problem would be resetting the name
> > > >     variable for all non-expression nodes.
> > >
> > > IMHO, David Goodger's proposal of indenting the docstring relative to the
> > > attribute it refers to is a better solution.
> > >
> > > If that requires too many changes to the parser, the name variable
> > > should be reset for all statement nodes.
> >
> > See my other mail: indenting is only allowed for blocks of
> > code and these are usually started with a colon -- doesn't
> > work here.
> 
> Too bad.
> 
> It's-still-a-great-addition-to-Python ly,
> Christian

Me thinks so too ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From guido@beopen.com  Mon Aug 28 22:59:36 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 16:59:36 -0500
Subject: [Python-Dev] Lukewarm about range literals
Message-ID: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>

I chatted with some PythonLabs folks this morning and nobody had any
real enthusiasm for range literals.  I notice that:

  for i in [:100]: print i

looks a bit too much like line noise.  I remember that Randy Pausch
once mentioned that a typical newbie will read this as:

  for i in 100 print i

and they will have a heck of a time to reconstruct the punctuation,
with all sorts of errors lurking, e.g.:

  for i in [100]: print i
  for i in [100:]: print i
  for i in :[100]: print i

Is there anyone who wants to champion this?

Sorry, Thomas!  I'm not doing this to waste your time!  It honestly
only occurred to me this morning, after Tim mentioned he was at most
lukewarm about it...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From thomas@xs4all.net  Mon Aug 28 22:06:31 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 23:06:31 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Mon, Aug 28, 2000 at 04:59:36PM -0500
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
Message-ID: <20000828230630.I500@xs4all.nl>

On Mon, Aug 28, 2000 at 04:59:36PM -0500, Guido van Rossum wrote:

> Sorry, Thomas!  I'm not doing this to waste your time!  It honestly
> only occurred to me this morning, after Tim mentioned he was at most
> lukewarm about it...

Heh, no problem. It was good practice, and if you remember (or search your
mail archive) I was only lukewarm about it, too, back when you asked me to
write it! And I've been modulating between 'lukewarm' and 'stone-cold', in
between generators, tuple-ranges that look like hardware addresses,
non-int ranges and what not.

Less-docs-to-write-if-noone-champions-this-then-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@beopen.com  Mon Aug 28 23:30:10 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 17:30:10 -0500
Subject: [Python-Dev] Python 2.0 License Discussion Mailing List Created
Message-ID: <200008282230.RAA30148@cj20424-a.reston1.va.home.com>

Now that the CNRI license issues are nearly settled, BeOpen.com needs
to put its own license on Python 2.0 (as a derivative work of CNRI's
Python 1.6) too.  We want an open discussion about the new license
with the Python community, and have established a mailing list for
this purpose.  To participate, go to

   http://mailman.beopen.com/mailman/listinfo/license-py20

and follow the instructions for subscribing.  The mailing list is
unmoderated, open to all, and archived
(at http://mailman.beopen.com/pipermail/license-py20/).

Your questions, concerns and suggestions are welcome!

Our initial thoughts are to use a slight adaptation of the CNRI
license for Python 1.6, adding an "or GPL" clause, meaning that Python
2.0 can be redistributed under the Python 2.0 license or under the GPL
(like Perl can be redistributed under the Artistic license or under
the GPL).

Note that I don't want this list to degenerate into flaming about the
CNRI license (except as it pertains directly to the 2.0 license) --
there's little we can do about the CNRI license, and it has been
beaten to death on comp.lang.python.

In case you're in the dark about the CNRI license, please refer to
http://www.python.org/1.6/download.html for the license text and to
http://www.python.org/1.6/license_faq.html for a list of frequently
asked questions about the license and CNRI's answers.

Note that we're planning to release the first beta release of Python
2.0 on September 4 -- however we can change the license for the final
release.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From skip@mojam.com  Mon Aug 28 22:46:19 2000
From: skip@mojam.com (Skip Montanaro)
Date: Mon, 28 Aug 2000 16:46:19 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
Message-ID: <14762.56747.826063.269390@beluga.mojam.com>

    Guido> I notice that:

    Guido>   for i in [:100]: print i

    Guido> looks a bit too much like line noise.  I remember that Randy
    Guido> Pausch once mentioned that a typical newbie will read this as:

    Guido>   for i in 100 print i

Just tossing out a couple ideas here.  I don't see either mentioned in the
current version of the PEP.

    1. Would it help readability if there were no optional elements in range
       literals?  That way you'd have to write

	for i in [0:100]: print i

    2. Would it be more visually obvious to use ellipsis notation to
       separate the start and end indices?

        >>> for i in [0...100]: print i
	0
	1
	...
	99

	>>> for i in [0...100:2]: print i
	0
	2
	...
	98

I don't know if either is possible syntactically.

Skip


From thomas@xs4all.net  Mon Aug 28 22:55:36 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 23:55:36 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14762.56747.826063.269390@beluga.mojam.com>; from skip@mojam.com on Mon, Aug 28, 2000 at 04:46:19PM -0500
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com> <14762.56747.826063.269390@beluga.mojam.com>
Message-ID: <20000828235536.J500@xs4all.nl>

On Mon, Aug 28, 2000 at 04:46:19PM -0500, Skip Montanaro wrote:

> I don't know if either is possible syntactically.

They are perfectly possible (in fact, more easily so than the current
solution, if it hadn't already been written). I like the ellipsis syntax
myself, but mostly because I have *no* use for ellipses, currently. It's
also reminiscent of the range-creating '..' syntax I learned in MOO, a
long time ago ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gvwilson@nevex.com  Mon Aug 28 23:04:41 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Mon, 28 Aug 2000 18:04:41 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000828235536.J500@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>

> Thomas Wouters wrote:
> They are perfectly possible (in fact, more easily so than the current
> solution, if it hadn't already been written). I like the ellipsis
> syntax myself, but mostly because I have *no* use for ellipses,
> currently. It's also reminiscent of the range-creating '..' syntax I
> learned in MOO, a long time ago ;)

I would vote -1 on [0...100:10] --- even range(0, 100, 10) reads better,
IMHO.  I understand Guido et al's objections to:

    for i in [:100]:

but in my experience, students coming to Python from other languages seem
to expect to be able to say "do this N times" very simply.  Even:

    for i in range(100):

raises eyebrows.  I know it's all syntactic sugar, but it comes up in the
first hour of every course I've taught...

Thanks,

Greg



From nowonder@nowonder.de  Tue Aug 29 01:41:57 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Tue, 29 Aug 2000 00:41:57 +0000
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
Message-ID: <39AB06D5.BD99855@nowonder.de>

Greg Wilson wrote:
> 
> I would vote -1 on [0...100:10] --- even range(0, 100, 10) reads better,

I don't like [0...100] either. It just looks bad.
But I really *do* like [0..100] (maybe that's Pascal being my first
serious language).

That said, I prefer almost any form of range literals over the current
situation. range(0,100) has no meaning to me (maybe because English is
not my mother tongue), but [0..100] looks like "from 0 to 100"
(although one might expect len([1..100]) == 100).

> but in my experience, students coming to Python from other languages seem
> to expect to be able to say "do this N times" very simply.  Even:
> 
>     for i in range(100):
> 
> raises eyebrows.  I know it's all syntactic sugar, but it comes up in the
> first hour of every course I've taught...

I fully agree on that one, although I think range(N) to
iterate N times isn't as bad as range(len(SEQUENCE)) to
iterate over the indices of a sequence.

not-voting---but-you-might-be-able-to-guess-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From cgw@fnal.gov  Mon Aug 28 23:47:30 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Mon, 28 Aug 2000 17:47:30 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
Message-ID: <14762.60418.53633.223999@buffalo.fnal.gov>

I guess I'm in the minority here because I kind of like the range
literal syntax.

Guido van Rossum writes:
 > I notice that:
 > 
 >   for i in [:100]: print i
 > 
 > looks a bit too much like line noise.  I remember that Randy Pausch
 > once mentioned that a typical newbie will read this as:
 > 
 >   for i in 100 print i

When I was a complete Python newbie (back around 1994) I thought that
the syntax

l2 = l1[:]

for copying lists looked pretty mysterious and weird.  But after
spending some time programming Python I've come to think that the
slice syntax is perfectly natural.  Should constructs be banned from
the language simply because they might confuse newbies?  I don't think
so.

I for one like Thomas' range literals.  They fit very naturally into
the existing Python concept of slices.

 > and they will have a heck of a time to reconstruct the punctuation,
 > with all sorts of errors lurking, e.g.:
 > 
 >   for i in [100]: print i
 >   for i in [100:]: print i
 >   for i in :[100]: print i

This argument seems a bit weak to me; you could take just about any
Python expression and mess up the punctuation with misplaced colons.

 > Is there anyone who wants to champion this?

I don't know about "championing" it but I'll give it a +1, if that
counts for anything.



From gvwilson@nevex.com  Tue Aug 29 00:02:29 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Mon, 28 Aug 2000 19:02:29 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14762.60418.53633.223999@buffalo.fnal.gov>
Message-ID: <Pine.LNX.4.10.10008281901370.12053-100000@akbar.nevex.com>

> Charles wrote:
> When I was a complete Python newbie (back around 1994) I thought that
> the syntax
> 
> l2 = l1[:]
> 
> for copying lists looked pretty mysterious and weird.  But after
> spending some time programming Python I've come to think that the
> slice syntax is perfectly natural.  Should constructs be banned from
> the language simply because they might confuse newbies?

Greg writes:
Well, it *is* the reason we switched from Perl to Python in our software
engineering course...

Greg



From guido@beopen.com  Tue Aug 29 01:33:01 2000
From: guido@beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 19:33:01 -0500
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: Your message of "Mon, 28 Aug 2000 19:02:29 -0400."
 <Pine.LNX.4.10.10008281901370.12053-100000@akbar.nevex.com>
References: <Pine.LNX.4.10.10008281901370.12053-100000@akbar.nevex.com>
Message-ID: <200008290033.TAA30757@cj20424-a.reston1.va.home.com>

> > Charles wrote:
> > When I was a complete Python newbie (back around 1994) I thought that
> > the syntax
> > 
> > l2 = l1[:]
> > 
> > for copying lists looked pretty mysterious and weird.  But after
> > spending some time programming Python I've come to think that the
> > slice syntax is perfectly natural.  Should constructs be banned from
> > the language simply because they might confuse newbies?
> 
> Greg writes:
> Well, it *is* the reason we switched from Perl to Python in our software
> engineering course...

And the original proposal for range literals also came from the
Numeric corner of the world (I believe Paul Dubois first suggested it
to me).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From tim_one@email.msn.com  Tue Aug 29 02:51:33 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Mon, 28 Aug 2000 21:51:33 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000828230630.I500@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>

Just brain-dumping here:

Thomas did an excellent job on the patch!  It's clean & crisp and, I think,
bulletproof.  Just want that to be clear.

As the reviewer, I spent about 2 hours playing with it, trying it out in my
code.  And I simply liked it less the more I used it; e.g.,

for i in [:len(a)]:
    a[i] += 1

struck me as clumsier and uglier than

for i in range(len(a)):
    a[i] += 1

at once-- which I expected due to the novelty --but didn't grow on me at
*all*.  Which is saying something, since I'm the world's longest-standing
fan of "for i indexing a" <wink>; i.e., I'm *no* fan of the range(len(...))
business, and this seems even worse.  Despite that I should know 100x better
at all levels, I kept finding myself trying to write stuff like

for i in [:a]:  # or [len(a)] a couple times, even [a:] once
    a[i] += 1

Charles likes slices.  Me too!  I *love* them.  But as a standalone notation
(i.e., not as a subscript), part of the glory of slicing breaks down:  for
the list a, a[:] makes good sense, but when *iterating* over a,  it's
suddenly [:len(a)] because there's no context to supply a correct upper
bound.

For 2.0, the question is solely yes-or-no on this specific notation.  If it
goes in, it will never go away.  I was +0 at first, at best -0 now.  It does
nothing for me I can't do just as easily-- and I think more clearly --with
range.  The kinds of "extensions"/variations mentioned in the PEP make me
shiver, too.

Post 2.0, who knows.  I'm not convinced Python actually needs another
arithmetic-progression *list* notation.  If it does, I've always been fond
of Haskell's range literals (but note that they include the endpoint):

Prelude> [1..10]
[1,2,3,4,5,6,7,8,9,10]
Prelude> [1, 3 .. 10]
[1,3,5,7,9]
Prelude> [10, 9 .. 1]
[10,9,8,7,6,5,4,3,2,1]
Prelude> [10, 7 .. -5]
[10,7,4,1,-2,-5]
Prelude>

Of course Haskell is profoundly lazy too, so "infinite" literals are just as
normal there:

Prelude> take 5 [1, 100 ..]
[1,100,199,298,397]
Prelude> take 5 [3, 2 ..]
[3,2,1,0,-1]

It's often easier to just list the first two terms than to figure out the
*last* term and name the stride.  I like notations that let me chuckle "hey,
you're the computer, *you* figure out the silly details" <wink>.
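The finite forms above are easy to model in Python for anyone who wants
to play with the idea; a hypothetical sketch (name and signature are
made up here, not anything proposed in the thread):

```python
def haskell_range(first, second=None, last=None):
    """Model Haskell's [a, b .. c]: the first two terms fix the
    stride, and the endpoint is included when the stride hits it
    exactly.  Omitting the second term gives [a .. c] (stride 1).
    """
    if second is None:
        second = first + 1
    step = second - first
    if step == 0:
        raise ValueError("stride must be nonzero")
    # range() excludes its stop, so nudge it one unit past `last`
    stop = last + 1 if step > 0 else last - 1
    return list(range(first, stop, step))
```

The lazy, infinite forms are a different story; they would need some
kind of generator, which is a separate discussion.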




From dgoodger@bigfoot.com  Tue Aug 29 04:05:41 2000
From: dgoodger@bigfoot.com (David Goodger)
Date: Mon, 28 Aug 2000 23:05:41 -0400
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <39AABF1E.171BFD00@lemburg.com>
References: <m13TRlA-000wcEC@swing.co.at><39AABF1E.171BFD00@lemburg.com>
Message-ID: <B5D0A0C4.82E1%dgoodger@bigfoot.com>

on 2000-08-28 15:35, M.-A. Lemburg (mal@lemburg.com) wrote:

> Christian Tanzer wrote:
>> 
>> "M.-A. Lemburg" <mal@lemburg.com> wrote:
>> 
>>>> IMHO, David Goodger's (<dgoodger@bigfoot.com>) idea of using a
>>>> __docs__ dictionary is a better solution:
>>>> 
>>>> - It provides all docstrings for the attributes of an object in a
>>>> single place.
>>>> 
>>>> * Handy in interactive mode.
>>>> * This simplifies the generation of documentation considerably.
>>>> 
>>>> - It is easier to explain in the documentation
>>> 
>>> The downside is that it doesn't work well together with
>>> class inheritance: docstrings of the above form can
>>> be overridden or inherited just like any other class
>>> attribute.
>> 
>> Yep. That's why David also proposed a `doc' function combining the
>> `__docs__' of a class with all its ancestor's __docs__.
> 
> The same can be done for __doc__<attrname>__ style attributes:
> a helper function would just need to look at dir(Class) and then
> extract the attribute doc strings it finds. It could also do
> a DFS search to find a complete API description of the class
> by emulating attribute lookup and combine method and attribute
> docstrings to produce some nice online documentation output.

Using dir(Class) wouldn't find any inherited attributes of the class. A
depth-first search would be required for any use of attribute docstrings.

From the Python library docs:

    dir ([object]) 

    ... The list is not necessarily complete; e.g., for classes,
    attributes defined in base classes are not included, and for
    class instances, methods are not included. ...

This can easily be verified by a quick interactive session:

    >>> class C:
    ...     x = 1
    ... 
    >>> class D(C):
    ...     y = 2
    ... 
    >>> D.__dict__
    {'__module__': '__main__', '__doc__': None, 'y': 2}
    >>> C.__dict__
    {'__doc__': None, '__module__': '__main__', 'x': 1}
    >>> dir(D)
    ['__doc__', '__module__', 'y']
    >>> i = D()
    >>> i.__dict__
    {}
    >>> dir(i)
    []

So there are no entries in i's __dict__. And yet:

    >>> i.x
    1
    >>> i.y
    2

The advantage of the __doc__attribute__ name-mangling scheme (over __docs__
dictionaries) would be that the attribute docstrings would be accessible
from subclasses and class instances. But since "these attributes are meant
for tools to use, not humans," this is not an issue.

Just to *find* all attribute names, in order to extract the docstrings, you
would *have* to go through a depth-first search of all base classes. Since
you're doing that anyway, why not collect docstrings as you collect
attributes? There would be no penalty. In fact, such an optimized function
could be written and included in the standard distribution.

A perfectly good model exists in __dict__ and dir(). Why not imitate it?
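A merging function in the spirit of that model is short; a sketch under
the assumption that each class carries a (hypothetical) __docs__
dictionary, filled in by hand here for lack of compiler support:

```python
def doc(klass):
    """Merge the __docs__ dictionaries of a class and its bases,
    mirroring attribute lookup: an entry defined closer to `klass`
    shadows the inherited one.
    """
    merged = {}
    for base in klass.__bases__:
        merged.update(doc(base))
    merged.update(vars(klass).get('__docs__', {}))
    return merged
```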

on 2000-08-28 04:28, M.-A. Lemburg (mal@lemburg.com) wrote:
> This would not work well together with class inheritance.

It seems to me that it would work *exactly* as does class inheritance,
cleanly and elegantly. The __doc__attribute__ name-mangling scheme strikes
me as un-Pythonic, to be honest.

Let me restate: I think the idea of attribute docstring is great. It brings
a truly Pythonic, powerful auto-documentation system (a la POD or JavaDoc)
closer. And I'm willing to help!

-- 
David Goodger    dgoodger@bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)




From cgw@fnal.gov  Tue Aug 29 05:38:41 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Mon, 28 Aug 2000 23:38:41 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
References: <20000828230630.I500@xs4all.nl>
 <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <14763.15953.563107.722452@buffalo.fnal.gov>

Tim Peters writes:

 > As the reviewer, I spent about 2 hours playing with it, trying it out in my
 > code.  And I simply liked it less the more I used it

That's 2 hours more than I (and probably most other people) spent
trying it out.

 > For 2.0, the question is solely yes-or-no on this specific notation.  If it
 > goes in, it will never go away.

This strikes me as an extremely strong argument.  If the advantages
aren't really all that clear, then adopting this syntax for range
literals now removes the possibility to come up with a better way at a
later date ("opportunity cost", as the economists say).

The Haskell examples you shared are pretty neat.

FWIW, I retract my earlier +1.



From ping@lfw.org  Tue Aug 29 06:09:39 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 00:09:39 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>

On Mon, 28 Aug 2000, Tim Peters wrote:
> Post 2.0, who knows.  I'm not convinced Python actually needs another
> arithmetic-progression *list* notation.  If it does, I've always been fond
> of Haskell's range literals (but note that they include the endpoint):
> 
> Prelude> [1..10]
> [1,2,3,4,5,6,7,8,9,10]
> Prelude> [1, 3 .. 10]
> [1,3,5,7,9]
> Prelude> [10, 9 .. 1]
> [10,9,8,7,6,5,4,3,2,1]
> Prelude> [10, 7 .. -5]
> [10,7,4,1,-2,-5]

I think these examples are beautiful.  There is no reason why we couldn't
fit something like this into Python.  Imagine this:

    - The ".." operator produces a tuple (or generator) of integers.
      It should probably have precedence just above "in".
    
    - "a .. b", where a and b are integers, produces the sequence
      of integers (a, a+1, a+2, ..., b).

    - If the left argument is a tuple of two integers, as in
      "a, b .. c", then we get the sequence of integers from
      a to c with step b-a, up to and including c if c-a happens
      to be a multiple of b-a (exactly as in Haskell).

And, optionally:

    - The "..!" operator produces a tuple (or generator) of integers.
      It functions exactly like the ".." operator except that the
      resulting sequence does not include the endpoint.  (If you read
      "a .. b" as "go from a up to b", then read "a ..! b" as "go from
      a up to, but not including b".)

If this operator existed, we could then write:

    for i in 2, 4 .. 20:
        print i

    for i in 1 .. 10:
        print i*i

    for i in 0 ..! len(a):
        a[i] += 1

...and these would all do the obvious things.
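Modelled as a plain function (no new syntax; the name is purely
illustrative), the proposed semantics would be:

```python
def dotdot(a, b, inclusive=True):
    """Model the proposed operators: dotdot(a, b) plays `a .. b`
    (endpoint included), dotdot(a, b, inclusive=False) plays
    `a ..! b`, and a 2-tuple for `a` supplies the stride, as in
    `a, b .. c`.
    """
    if isinstance(a, tuple):
        first, second = a
        step = second - first
    else:
        first, step = a, 1
    if inclusive:
        # nudge the stop one unit past b so range() can include it
        stop = b + 1 if step > 0 else b - 1
    else:
        stop = b
    return list(range(first, stop, step))
```

The three loops then read `for i in dotdot((2, 4), 20)`,
`for i in dotdot(1, 10)` and `for i in dotdot(0, len(a), inclusive=False)`.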


-- ?!ng




From greg@cosc.canterbury.ac.nz  Tue Aug 29 06:04:05 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 29 Aug 2000 17:04:05 +1200 (NZST)
Subject: [Python-Dev] Python 2.0 License Discussion Mailing List Created
In-Reply-To: <200008282230.RAA30148@cj20424-a.reston1.va.home.com>
Message-ID: <200008290504.RAA17003@s454.cosc.canterbury.ac.nz>

> meaning that Python
> 2.0 can be redistributed under the Python 2.0 license or under the
> GPL

Are you sure that's possible? Doesn't the CNRI license
require that its terms be passed on to users of derivative
works? If so, a user of Python 2.0 couldn't just remove the
CNRI license and replace it with the GPL.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Tue Aug 29 06:17:38 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 29 Aug 2000 17:17:38 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>
Message-ID: <200008290517.RAA17013@s454.cosc.canterbury.ac.nz>

Ka-Ping Yee <ping@lfw.org>:

>    for i in 1 .. 10:
>        print i*i

That looks quite nice to me!

>    for i in 0 ..! len(a):
>        a[i] += 1

And that looks quite ugly. Couldn't it just as well be

    for i in 0 .. len(a)-1:
        a[i] += 1

and be vastly clearer?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+



From bwarsaw@beopen.com  Tue Aug 29 06:30:31 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Tue, 29 Aug 2000 01:30:31 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>
 <200008290517.RAA17013@s454.cosc.canterbury.ac.nz>
Message-ID: <14763.19063.973751.122546@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg@cosc.canterbury.ac.nz> writes:

    GE> Ka-Ping Yee <ping@lfw.org>:

    >> for i in 1 .. 10: print i*i

    GE> That looks quite nice to me!

Indeed.

    >> for i in 0 ..! len(a): a[i] += 1

    GE> And that looks quite ugly. Couldn't it just as well be

    |     for i in 0 .. len(a)-1:
    |         a[i] += 1

    GE> and be vastly clearer?

I agree.  While I read 1 ..! 10 as "from one to not 10" that doesn't
exactly tell me what the sequence /does/ run to. ;)

-Barry


From effbot@telia.com  Tue Aug 29 08:09:02 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 09:09:02 +0200
Subject: [Python-Dev] Lukewarm about range literals
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <01c501c01188$a21232e0$766940d5@hagrid>

tim peters wrote:
> Charles likes slices.  Me too!  I *love* them.  But as a standalone notation
> (i.e., not as a subscript), part of the glory of slicing breaks down:  for
> the list a, a[:] makes good sense, but when *iterating* over a,  it's
> suddenly [:len(a)] because there's no context to supply a correct upper
> bound.

agreed.  ranges and slices are two different things.  giving
them the same syntax is a lousy idea.

> Post 2.0, who knows.  I'm not convinced Python actually needs another
> arithmetic-progression *list* notation.  If it does, I've always been fond
> of Haskell's range literals (but note that they include the endpoint):
> 
> Prelude> [1..10]
> [1,2,3,4,5,6,7,8,9,10]
> Prelude> [1, 3 .. 10]
> [1,3,5,7,9]

isn't that taken from SETL?

(the more I look at SETL, the more Pythonic it looks.  not too
bad for something that was designed in the late sixties ;-)

talking about SETL, now that the range literals are gone, how
about revisiting an old proposal:

    "...personally, I prefer their "tuple former" syntax over the the
    current PEP202 proposal:

        [expression : iterator]

        [n : n in range(100)]
        [(x**2, x) : x in range(1, 6)]
        [a : a in y if a > 5]

    (all examples are slightly pythonified; most notably, they
    use "if" instead of "|" or "st" (such that))

    the expression can be omitted if it's the same thing as the
    loop variable, *and* there's at least one "if" clause:

        [a in y if a > 5]

    also note that their "for-in" statement can take qualifiers:

        for a in y if a > 5:
            ...
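
For comparison, those SETL tuple formers come out as follows in the PEP 202
syntax that was actually adopted (y here is a stand-in sequence):

```python
y = range(20)

# SETL: [n : n in range(100)]
a = [n for n in range(100)]

# SETL: [(x**2, x) : x in range(1, 6)]
b = [(x**2, x) for x in range(1, 6)]

# SETL: [a : a in y if a > 5]
c = [v for v in y if v > 5]

print(b)
```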

</F>



From tanzer@swing.co.at  Tue Aug 29 07:42:17 2000
From: tanzer@swing.co.at (Christian Tanzer)
Date: Tue, 29 Aug 2000 08:42:17 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: Your message of "Mon, 28 Aug 2000 21:35:58 +0200."
 <39AABF1E.171BFD00@lemburg.com>
Message-ID: <m13Tf69-000wcDC@swing.co.at>

"M.-A. Lemburg" <mal@lemburg.com> wrote:

> > > > IMHO, David Goodger's (<dgoodger@bigfoot.com>) idea of using a
> > > > __docs__ dictionary is a better solution:
(snip)
> > > The downside is that it doesn't work well together with
> > > class inheritance: docstrings of the above form can
> > > be overridden or inherited just like any other class
> > > attribute.
> >

> > Yep. That's why David also proposed a `doc' function combining the
> > `__docs__' of a class with all its ancestor's __docs__.
>

> The same can be done for __doc__<attrname>__ style attributes:
> a helper function would just need to look at dir(Class) and then
> extract the attribute doc strings it finds. It could also do
> a DFS search to find a complete API description of the class
> by emulating attribute lookup and combine method and attribute
> docstrings to produce some nice online documentation output.

Of course, one can get at all docstrings by using `dir'. But it is a
pain and slow as hell. And nothing one would use in interactive mode.

As Python already handles the analogous case for `__dict__' and
`getattr', it seems to be just a SMOP to do it for `__docs__', too.


> > > > Normally, Python concatenates adjacent strings. It doesn't do this
> > > > with docstrings. I think Python's behavior would be more consistent
> > > > if docstrings were concatenated like any other strings.
> > >
> > > Huh ? It does...
> > >
> > > >>> class C:
> > > ...     "first line"\
> > > ...     "second line"
> > > ...
> > > >>> C.__doc__
> > > 'first linesecond line'
> > >
> > > And the same works for the attribute doc strings too.
> >

> > Surprise. I tried it this morning. Didn't use a backslash, though. And almost
> > overlooked it now.
> > overlooked it now.
>

> You could also wrap the doc string in parenthesis or use a triple
> quote string.

Wrapping a docstring in parentheses doesn't work in 1.5.2:

Python 1.5.2 (#5, Jan  4 2000, 11:37:02)  [GCC 2.7.2.1] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> class C:
...   ("first line"
...    "second line")
...
>>> C.__doc__
>>>


Triple quoted strings work -- that's what I'm constantly using. The
downside is, that the docstrings either contain spurious white space
or it messes up the layout of the code (if you start subsequent lines
in the first column).
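
As it turned out, the 1.5.2 behaviour shown above was a compiler limitation:
in later CPython releases, docstring detection operates on the parsed
expression, where parentheses leave no trace, so the parenthesized form does
yield a (concatenated) docstring there:

```python
class C:
    ("first line"
     "second line")

# In modern CPython the parenthesized string is still recognized
# as the class docstring.
print(C.__doc__)
```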

--

Christian Tanzer                                         tanzer@swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92



From mal@lemburg.com  Tue Aug 29 10:00:49 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 11:00:49 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13TRlA-000wcEC@swing.co.at> <39AABF1E.171BFD00@lemburg.com> <B5D0A0C4.82E1%dgoodger@bigfoot.com>
Message-ID: <39AB7BC1.5670ACDC@lemburg.com>

David Goodger wrote:
> 
> on 2000-08-28 15:35, M.-A. Lemburg (mal@lemburg.com) wrote:
> 
> > Christian Tanzer wrote:
> >>
> >> "M.-A. Lemburg" <mal@lemburg.com> wrote:
> >>
> >>>> IMHO, David Goodger's (<dgoodger@bigfoot.com>) idea of using a
> >>>> __docs__ dictionary is a better solution:
> >>>>
> >>>> - It provides all docstrings for the attributes of an object in a
> >>>> single place.
> >>>>
> >>>> * Handy in interactive mode.
> >>>> * This simplifies the generation of documentation considerably.
> >>>>
> >>>> - It is easier to explain in the documentation
> >>>
> >>> The downside is that it doesn't work well together with
> >>> class inheritance: docstrings of the above form can
> >>> be overridden or inherited just like any other class
> >>> attribute.
> >>
> >> Yep. That's why David also proposed a `doc' function combining the
> >> `__docs__' of a class with all its ancestor's __docs__.
> >
> > The same can be done for __doc__<attrname>__ style attributes:
> > a helper function would just need to look at dir(Class) and then
> > extract the attribute doc strings it finds. It could also do
> > a DFS search to find a complete API description of the class
> > by emulating attribute lookup and combine method and attribute
> > docstrings to produce some nice online documentation output.
> 
> Using dir(Class) wouldn't find any inherited attributes of the class. A
> depth-first search would be required for any use of attribute docstrings.

Uhm, yes... that's what I wrote in the last paragraph.

> The advantage of the __doc__attribute__ name-mangling scheme (over __docs__
> dictionaries) would be that the attribute docstrings would be accessible
> from subclasses and class instances. But since "these attributes are meant
> for tools to use, not humans," this is not an issue.

I understand that you would rather like a "frozen" version
of the class docs, but this simply doesn't work out for
the common case of mixin classes and classes which are built
at runtime.

The name mangling is meant for internal use and just to give
the beast a name ;-) 

Doc tools can then take whatever action
they find necessary and apply the needed lookup, formatting
and content extraction. They might even add a frozen __docs__
attribute to classes which are known not to change after
creation.

I use such a function which I call freeze() to optimize many
static classes in my applications: the function scans all
available attributes in the inheritance tree and adds them
directly to the class in question. This gives some noticeable
speedups for deeply nested class structures or ones which
use many mixin classes.

> Just to *find* all attribute names, in order to extract the docstrings, you
> would *have* to go through a depth-first search of all base classes. Since
> you're doing that anyway, why not collect docstrings as you collect
> attributes? There would be no penalty. In fact, such an optimized function
> could be written and included in the standard distribution.
> 
> A perfectly good model exists in __dict__ and dir(). Why not imitate it?

Sure, but let's do that in a doc() utility function.

I want to keep the implementation of this PEP clean and simple.
All meta-logic should be applied by external helpers.
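
Such an external helper could look roughly like this; the __doc_<name>__
spelling is an assumption for illustration, and the depth-first walk over
__bases__ is the attribute-lookup emulation described above:

```python
def doc(klass):
    """Depth-first collection of attribute docstrings stored under a
    mangled __doc_<name>__ scheme (the exact spelling is an assumption
    for illustration).  Entries found first -- i.e. in the most
    derived class -- win, mimicking attribute lookup."""
    collected = {}

    def visit(c):
        for name, value in vars(c).items():
            if (name.startswith('__doc_') and name.endswith('__')
                    and name != '__doc__'):
                collected.setdefault(name[len('__doc_'):-2], value)
        for base in c.__bases__:
            visit(base)

    visit(klass)
    return collected

class Base:
    __doc_count__ = "number of widgets"
    __doc_name__ = "widget label"

class Derived(Base):
    __doc_count__ = "number of gadgets"  # shadows the inherited doc

print(doc(Derived))
```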

> on 2000-08-28 04:28, M.-A. Lemburg (mal@lemburg.com) wrote:
> > This would not work well together with class inheritance.
> 
> It seems to me that it would work *exactly* as does class inheritance,
> cleanly and elegantly.

Right, and that's why I'm proposing to use attributes for the
docstrings as well: the docstrings will then behave just like
the attributes they describe.

> The __doc__attribute__ name-mangling scheme strikes
> me as un-Pythonic, to be honest.

It may look a bit strange, but it's certainly not un-Pythonic:
just look at private name mangling or the many __xxx__ hooks
which Python uses.
 
> Let me restate: I think the idea of attribute docstring is great. It brings
> a truly Pythonic, powerful auto-documentation system (a la POD or JavaDoc)
> closer. And I'm willing to help!

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Tue Aug 29 10:41:15 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 11:41:15 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13Tf69-000wcDC@swing.co.at>
Message-ID: <39AB853B.217402A2@lemburg.com>

Christian Tanzer wrote:
> 
> > > > >>> class C:
> > > > ...     "first line"\
> > > > ...     "second line"
> > > > ...
> > > > >>> C.__doc__
> > > > 'first linesecond line'
> > > >
> > > > And the same works for the attribute doc strings too.
> > >
> > > Surprise. I tried it this morning. Didn't use a backslash, though. And almost
> > > overlooked it now.
> >
> > You could also wrap the doc string in parenthesis or use a triple
> > quote string.
> 
> Wrapping a docstring in parentheses doesn't work in 1.5.2:
> 
> Python 1.5.2 (#5, Jan  4 2000, 11:37:02)  [GCC 2.7.2.1] on linux2
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> >>> class C:
> ...   ("first line"
> ...    "second line")
> ...
> >>> C.__doc__
> >>>

Hmm, looks like you're right... the parentheses probably only
work for "if" and function calls. This works:

function("firstline"
	 "secondline")

> Triple quoted strings work -- that's what I'm constantly using. The
> downside is, that the docstrings either contain spurious white space
> or it messes up the layout of the code (if you start subsequent lines
> in the first column).

Just a question of how smart your doc string extraction
tools are. Have a look at hack.py:

	http://starship.python.net/~lemburg/hack.py

and its docs() API:

>>> class C:
...     """ first line
...         second line
...         third line
...     """
... 
>>> import hack 
>>> hack.docs(C)
Class  :
    first line
    second line
    third line
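
The whitespace cleanup that hack.py's docs() performs here is essentially
what the standard library later shipped as inspect.cleandoc():

```python
import inspect

class C:
    """ first line
        second line
        third line
    """

# cleandoc strips the common leading indentation and any
# leading/trailing blank lines from the docstring.
print(inspect.cleandoc(C.__doc__))
```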

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jack@oratrix.nl  Tue Aug 29 10:44:30 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Tue, 29 Aug 2000 11:44:30 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
In-Reply-To: Message by "M.-A. Lemburg" <mal@lemburg.com> ,
 Mon, 28 Aug 2000 20:57:26 +0200 , <39AAB616.460FA0A8@lemburg.com>
Message-ID: <20000829094431.AA5DB303181@snelboot.oratrix.nl>

> The basic syntax in the above examples is:
> 
> 	"pragma" NAME "=" (NUMBER | STRING+)
> 
> It has to be that simple to allow the compiler to use the information
> at compilation time.

Can we have a bit more syntax, so other packages that inspect the source 
(freeze and friends come to mind) can also use the pragma scheme?

Something like
	"pragma" NAME ("." NAME)+ "=" (NUMBER | STRING+)
should allow freeze to use something like

pragma freeze.exclude = "win32ui, sunaudiodev, linuxaudiodev"

which would be ignored by the compiler but interpreted by freeze.
And, if they're stored in the __pragma__ dictionary too, as was suggested 
here, you can also add pragmas specific for class browsers, debuggers and such.
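
A rough sketch of a line-level matcher for this dotted form (using ("." NAME)*
rather than + so the undotted pragmas from the quoted proposal still match;
the regex and function are illustrative only):

```python
import re

# "pragma" NAME ("." NAME)* "=" value
PRAGMA = re.compile(
    r'^\s*pragma\s+(?P<name>\w+(?:\.\w+)*)\s*=\s*(?P<value>.+?)\s*$')

def parse_pragma(line):
    """Return (dotted-name, raw-value) for a pragma line, else None."""
    m = PRAGMA.match(line)
    return (m.group('name'), m.group('value')) if m else None
```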
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From tim_one@email.msn.com  Tue Aug 29 10:45:24 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 29 Aug 2000 05:45:24 -0400
Subject: Actually about PEP 202 (listcomps), not (was RE: [Python-Dev] Lukewarm about range literals)
In-Reply-To: <01c501c01188$a21232e0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com>

[/F]
> agreed.  ranges and slices are two different things.  giving
> them the same syntax is a lousy idea.

Don't know about *that*, but it doesn't appear to work as well as was hoped.

[Tim]
>> Post 2.0, who knows.  I'm not convinced Python actually needs
>> another arithmetic-progression *list* notation.  If it does, I've
>> always been fond of Haskell's range literals (but note that they
>> include the endpoint):
>>
>> Prelude> [1..10]
>> [1,2,3,4,5,6,7,8,9,10]
>> Prelude> [1, 3 .. 10]
>> [1,3,5,7,9]

> isn't that taken from SETL?

Sure looks like it to me.  The Haskell designers explicitly credited SETL
for list comprehensions, but I don't know that they do for this gimmick too.
Of course Haskell's "infinite" list builders weren't in SETL, and, indeed,
expressions like [1..] are pretty common in Haskell programs.  One of the
prettiest programs ever in any language ever:

primes = sieve [2..]
         where sieve (x:xs) = x :
                              sieve [n | n <- xs, n `mod` x /= 0]

which defines the list of all primes.
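
Generators reached Python only later (2.2/2.3), but the Haskell one-liner
transcribes into them fairly directly; this is a sketch of the same idea,
not an efficient sieve:

```python
from itertools import count, islice

def sieve(xs):
    # x : sieve [n | n <- xs, n `mod` x /= 0]
    x = next(xs)
    yield x
    yield from sieve(n for n in xs if n % x != 0)

primes = sieve(count(2))
print(list(islice(primes, 10)))
```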

> (the more I look at SETL, the more Pythonic it looks.  not too
> bad for something that was designed in the late sixties ;-)

It was way ahead of its time.  Still is!  Check out its general loop
construct, though -- now *that's* a kitchen sink.  Guido mentioned that
ABC's Lambert Meertens spent a year's sabbatical at NYU when SETL was in its
heyday, and I figure that's where ABC got quantifiers in boolean expressions
(if each x in list has p(x); if no x in list has p(x); if some x in list has
p(x)).  Have always wondered why Python didn't have that too; I ask that
every year, but so far Guido has never answered it <wink>.
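
Those ABC quantifiers did eventually land in Python, years after this
thread, as the builtins all() and any():

```python
xs = [2, 4, 6]
p = lambda x: x % 2 == 0

assert all(p(x) for x in xs)           # if each x in list has p(x)
assert not any(p(x) for x in [1, 3])   # if no x in list has p(x)
assert any(p(x) for x in [1, 2])       # if some x in list has p(x)
```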

> talking about SETL, now that the range literals are gone, how
> about revisiting an old proposal:
>
>     "...personally, I prefer their "tuple former" syntax over the
>     current PEP202 proposal:
>
>         [expression : iterator]
>
>         [n : n in range(100)]
>         [(x**2, x) : x in range(1, 6)]
>         [a : a in y if a > 5]
>
>     (all examples are slightly pythonified; most notably, they
>     use "if" instead of "|" or "st" (such that))
>
>     the expression can be omitted if it's the same thing as the
>     loop variable, *and* there's at least one "if" clause:
>
>         [a in y if a > 5]
>
>     also note that their "for-in" statement can take qualifiers:
>
>         for a in y if a > 5:
>             ...

You left off the last sentence from the first time you posted this:

>     is there any special reason why we cannot use colon instead
>     of "for"?

Guido then said we couldn't use a colon because that would make [x : y] too
hard to parse, because range literals were of the same form.  Thomas went on
to point out that it's worse than that, it's truly ambiguous.

Now I expect you prefaced this with "now that the range literals are gone"
expecting that everyone would just remember all that <wink>.  Whether they
did or not, now they should.

I counted two replies beyond those.  One from Peter Schneider-Kamp was
really selling another variant.  The other from Marc-Andre Lemburg argued
that while the shorthand is convenient for mathematicians, "I doubt that
CP4E users get the grasp of this".

Did I miss anything?

Since Guido didn't chime in again, I assumed he was happy with how things
stood.  I further assume he picked on a grammar technicality to begin with
because that's the way he usually shoots down a proposal he doesn't want to
argue about -- "no new keywords" has served him extremely well that way
<wink>.  That is, I doubt that "now that the range literals are gone" (if
indeed they are!) will make any difference to him, and with the release one
week away he'd have to get real excited real fast.

I haven't said anything about it, but I'm with Marc-Andre on this:  sets
were *extremely* heavily used in SETL, and brevity in their expression was a
great virtue there because of it.  listcomps won't be that heavily used in
Python, and I think it's downright Pythonic to leave them wordy in order to
*discourage* fat hairy listcomp expressions.  They've been checked in for
quite a while now, and I like them fine as they are in practice.

I've also got emails like this one in pvt:

    The current explanation "[for and if clauses] nest in the same way
    for loops and if statements nest now." is pretty clear and easy to
    remember.

That's important too, because despite pockets of hysteria to the contrary on
c.l.py, this is still Python.  When I first saw your first example:

     [n : n in range(100)]

I immediately read "n in range(100)" as a true/false expression, because
that's what it *is* in 1.6 unless immediately preceded by "for".  The
current syntax preserves that.  Saving two characters (":" vs "for") isn't
worth it in Python.  The vertical bar *would* be "worth it" to me, because
that's what's used in SETL, Haskell *and* common mathematical practice for
"such that".  Alas, as Guido is sure to point out, that's too hard to parse
<0.9 wink>.

consider-it-channeled-unless-he-thunders-back-ly y'rs  - tim




From mal@lemburg.com  Tue Aug 29 11:40:11 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 12:40:11 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
References: <20000829094431.AA5DB303181@snelboot.oratrix.nl>
Message-ID: <39AB930B.F34673AB@lemburg.com>

Jack Jansen wrote:
> 
> > The basic syntax in the above examples is:
> >
> >       "pragma" NAME "=" (NUMBER | STRING+)
> >
> > It has to be that simple to allow the compiler to use the information
> > at compilation time.
> 
> Can we have a bit more syntax, so other packages that inspect the source
> (freeze and friends come to mind) can also use the pragma scheme?
> 
> Something like
>         "pragma" NAME ("." NAME)+ "=" (NUMBER | STRING+)
> should allow freeze to use something like
> 
> pragma freeze.exclude = "win32ui, sunaudiodev, linuxaudiodev"
> 
> which would be ignored by the compiler but interpreted by freeze.
> And, if they're stored in the __pragma__ dictionary too, as was suggested
> here, you can also add pragmas specific for class browsers, debuggers and such.

Hmm, freeze_exclude would have also done the trick.

The only thing that will have to be assured is that the
arguments are readily available at compile time. Adding
a dot shouldn't hurt ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Tue Aug 29 12:02:14 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 13:02:14 +0200
Subject: Actually about PEP 202 (listcomps), not (was RE: [Python-Dev] Lukewarm about range literals)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Tue, Aug 29, 2000 at 05:45:24AM -0400
References: <01c501c01188$a21232e0$766940d5@hagrid> <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com>
Message-ID: <20000829130214.L500@xs4all.nl>

On Tue, Aug 29, 2000 at 05:45:24AM -0400, Tim Peters wrote:

> Saving two characters (":" vs "for") isn't worth it in Python.  The
> vertical bar *would* be "worth it" to me, because that's what's used in
> SETL, Haskell *and* common mathematical practice for "such that".  Alas,
> as Guido is sure to point out, that's too hard to parse

It's impossible to parse, of course, unless you require the parentheses
around the expression preceding it :)

[ (n) | n in range(100) if n%2 ]

I-keep-writing-'where'-instead-of-'if'-in-those-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gvwilson@nevex.com  Tue Aug 29 12:28:58 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 07:28:58 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008290517.RAA17013@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>

> > Ka-Ping Yee <ping@lfw.org>:
> >    for i in 1 .. 10:
> >        print i*i
> >    for i in 0 ..! len(a):
> >        a[i] += 1

Greg Wilson writes:

The problem with using ellipsis is that there's no obvious way to include
a stride --- how do you hit every second (or n'th) element, rather than
every element?  I'd rather stick to range() than adopt:

    for i in [1..10:5]

Thanks,
Greg

BTW, I understand from side conversations that adding a 'keys()' method to
sequences, so that arbitrary collections could be iterated over using:

    for i in S.keys():
        print i, S[i]

was considered and rejected.  If anyone knows why, I'd be grateful for a
recap.
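
For the sequence half of this, the idiom Python later standardized is
enumerate() (added in 2.3), which yields index/item pairs without grafting
a keys() method onto sequences:

```python
S = ['a', 'b', 'c']
for i, item in enumerate(S):
    print(i, item)
```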




From guido@beopen.com  Tue Aug 29 13:36:38 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 07:36:38 -0500
Subject: [Python-Dev] Python 2.0 License Discussion Mailing List Created
In-Reply-To: Your message of "Tue, 29 Aug 2000 17:04:05 +1200."
 <200008290504.RAA17003@s454.cosc.canterbury.ac.nz>
References: <200008290504.RAA17003@s454.cosc.canterbury.ac.nz>
Message-ID: <200008291236.HAA32070@cj20424-a.reston1.va.home.com>

[Greg Ewing]
> > meaning that Python
> > 2.0 can be redistributed under the Python 2.0 license or under the
> > GPL
> 
> Are you sure that's possible? Doesn't the CNRI license
> require that its terms be passed on to users of derivative
> works? If so, a user of Python 2.0 couldn't just remove the
> CNRI license and replace it with the GPL.

I don't know the answer to this, but Bob Weiner, BeOpen's CTO, claims
that according to BeOpen's lawyer this is okay.  I'll ask him about
it.

I'll post his answer (when I get it) on the license-py20 list.  I
encourage you to subscribe and repost this question there for the
archives!

(There were some early glitches with the list address, but they have
been fixed.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Tue Aug 29 13:41:53 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 07:41:53 -0500
Subject: [Python-Dev] SETL (was: Lukewarm about range literals)
In-Reply-To: Your message of "Tue, 29 Aug 2000 09:09:02 +0200."
 <01c501c01188$a21232e0$766940d5@hagrid>
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
 <01c501c01188$a21232e0$766940d5@hagrid>
Message-ID: <200008291241.HAA32136@cj20424-a.reston1.va.home.com>

> isn't that taken from SETL?
> 
> (the more I look at SETL, the more Pythonic it looks.  not too
> bad for something that was designed in the late sixties ;-)

You've got it backwards: Python's predecessor, ABC, was inspired by
SETL -- Lambert Meertens spent a year with the SETL group at NYU
before coming up with the final ABC design!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From nowonder@nowonder.de  Tue Aug 29 14:41:30 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Tue, 29 Aug 2000 13:41:30 +0000
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>
Message-ID: <39ABBD8A.B9B3136@nowonder.de>

Greg Wilson wrote:
> 
> BTW, I understand from side conversations that adding a 'keys()' method to
> sequences, so that arbitrary collections could be iterated over using:
> 
>     for i in S.keys():
>         print i, S[i]
> 
> was considered and rejected.  If anyone knows why, I'd be grateful for a
> recap.

If I remember correctly, it was rejected because adding
keys(), items() etc. methods to sequences would make all
objects (in this case sequences and mappings) look the same.

More accurate information from:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101178&group_id=5470

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From fredrik@pythonware.com  Tue Aug 29 12:49:17 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 13:49:17 +0200
Subject: Actually about PEP 202 (listcomps), not (was RE: [Python-Dev] Lukewarm about range literals)
References: <01c501c01188$a21232e0$766940d5@hagrid> <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com> <20000829130214.L500@xs4all.nl>
Message-ID: <01ae01c011af$2a3550a0$0900a8c0@SPIFF>

thomas wrote:
> > Saving two characters (":" vs "for") isn't worth it in Python.  The
> > vertical bar *would* be "worth it" to me, because that's what's used in
> > SETL, Haskell *and* common mathematical practice for "such that".  Alas,
> > as Guido is sure to point out, that's too hard to parse
>
> It's impossible to parse, of course, unless you require the parentheses
> around the expression preceding it :)
>
> [ (n) | n in range(100) if n%2 ]

I'm pretty sure Tim meant "|" instead of "if".  the SETL syntax is:

    [ n : n in range(100) | n%2 ]

(that is, ":" instead of for, and "|" or "st" instead of "if".  and yes,
they have nice range literals too, so don't take that "range" too
literal ;-)

in SETL, that can also be abbreviated to:

    [ n in range(100) | n%2 ]

which, of course, is a perfectly valid (though slightly obscure)
python expression...
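
In the PEP 202 syntax that shipped, both SETL forms above come out the
same way:

```python
# [ n : n in range(100) | n%2 ]  and its abbreviated form,
# as a Python list comprehension:
odds = [n for n in range(100) if n % 2]
print(odds[:5])
```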

</F>



From guido@beopen.com  Tue Aug 29 13:53:32 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 07:53:32 -0500
Subject: [Python-Dev] SETL (was: Lukewarm about range literals)
In-Reply-To: Your message of "Tue, 29 Aug 2000 07:41:53 EST."
 <200008291241.HAA32136@cj20424-a.reston1.va.home.com>
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <01c501c01188$a21232e0$766940d5@hagrid>
 <200008291241.HAA32136@cj20424-a.reston1.va.home.com>
Message-ID: <200008291253.HAA32332@cj20424-a.reston1.va.home.com>

> It was way ahead of its time.  Still is!  Check out its general loop
> construct, though -- now *that's* a kitchen sink.  Guido mentioned that
> ABC's Lambert Meertens spent a year's sabbatical at NYU when SETL was in its
> heyday, and I figure that's where ABC got quantifiers in boolean expressions
> (if each x in list has p(x); if no x in list has p(x); if some x in list has
> p(x)).  Have always wondered why Python didn't have that too; I ask that
> every year, but so far Guido has never answered it <wink>.

I don't recall you asking me that even *once* before now.  Proof,
please?

Anyway, the answer is that I saw diminishing returns from adding more
keywords and syntax.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From skip@mojam.com (Skip Montanaro)  Tue Aug 29 15:46:23 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Tue, 29 Aug 2000 09:46:23 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <39AB06D5.BD99855@nowonder.de>
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
 <39AB06D5.BD99855@nowonder.de>
Message-ID: <14763.52415.747655.334938@beluga.mojam.com>

    Peter> I don't like [0...100] either. It just looks bad.  But I really
    Peter> *do* like [0..100] (maybe that's Pascal being my first serious
    Peter> language).

Which was why I proposed "...".  It's sort of like "..", but has the
advantage of already being a recognized token.  I doubt there would be much
problem adding ".." as a token either.

What we really want I think is something that evokes the following in the
mind of the reader

    for i from START to END incrementing by STEP:

without gobbling up all those keywords.  That might be one of the following:

    for i in [START..END,STEP]:
    for i in [START:END:STEP]:
    for i in [START..END:STEP]:

I'm sure there are other possibilities, but given the constraints of putting
the range literal in square brackets and not allowing a comma as the first
separator, the choices seem limited.
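
For contrast, what any of those hypothetical literals would buy is, in the
range() spelling we already have, one explicit +1 to make the endpoint
inclusive:

```python
START, END, STEP = 1, 9, 2

# [START..END:STEP] with an inclusive END would be:
stepped = list(range(START, END + 1, STEP))
print(stepped)
```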

Perhaps it will just have to wait until Py3K when a little more grammar
fiddling is possible.

Skip


From thomas@xs4all.net  Tue Aug 29 15:52:21 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 16:52:21 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52415.747655.334938@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 29, 2000 at 09:46:23AM -0500
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com> <39AB06D5.BD99855@nowonder.de> <14763.52415.747655.334938@beluga.mojam.com>
Message-ID: <20000829165221.N500@xs4all.nl>

On Tue, Aug 29, 2000 at 09:46:23AM -0500, Skip Montanaro wrote:

> Which was why I proposed "...".  It's sort of like "..", but has the
> advantage of already being a recognized token.  I doubt there would be much
> problem adding ".." as a token either.

"..." is not a token, it's three tokens:

subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]

So adding ".." should be no problem.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gvwilson@nevex.com  Tue Aug 29 15:55:34 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 10:55:34 -0400 (EDT)
Subject: [Python-Dev] pragmas as callbacks
In-Reply-To: <39AAB616.460FA0A8@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008291023160.21280-100000@akbar.nevex.com>

If a mechanism for providing meta-information about code is going to be
added to Python, then I would like it to be flexible enough for developers
to define/add their own.  It's just like allowing developers to extend the
type system with new classes, rather than handing them a fixed set of
built-in types and saying, "Good luck".  (Most commercial Fortran
compilers take the second approach, by providing a bunch of inflexible,
vendor-specific pragmas.  It's a nightmare...)

I think that pragmas are essentially callbacks into the interpreter.  When
I put:

    pragma encoding = "UTF-16"

I am telling the interpreter to execute its 'setEncoding()' method right
away.

So, why not present pragmas in that way?  I.e., why not expose the Python
interpreter as a callable object while the source is being parsed and
compiled?  I think that:

    __python__.setEncoding("UTF-16")

is readable, and can be extended in lots of well-structured ways by
exposing exactly as much of the interpreter as is deemed safe. Arguments
could be restricted to constants, or built-in operations on constants, to
start with, without compromising future extensibility.

Greg




From skip@mojam.com  Tue Aug 29 15:55:49 2000
From: skip@mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 09:55:49 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
References: <20000828230630.I500@xs4all.nl>
 <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <14763.52981.603640.415652@beluga.mojam.com>

One of the original arguments for range literals as I recall was that
indexing of loops could get more efficient.  The compiler would know that
[0:100:2] represents a series of integers and could conceivably generate
more efficient loop indexing code (and so could Python2C and other compilers
that generated C code).  This argument doesn't seem to be showing up here at
all.  Does it carry no weight in the face of the relative inscrutability of
the syntax?

Skip


From cgw@fnal.gov  Tue Aug 29 16:29:20 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 10:29:20 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000829165221.N500@xs4all.nl>
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
 <39AB06D5.BD99855@nowonder.de>
 <14763.52415.747655.334938@beluga.mojam.com>
 <20000829165221.N500@xs4all.nl>
Message-ID: <14763.54992.458188.483296@buffalo.fnal.gov>

Thomas Wouters writes:
 > On Tue, Aug 29, 2000 at 09:46:23AM -0500, Skip Montanaro wrote:
 > 
 > > Which was why I proposed "...".  It's sort of like "..", but has the
 > > advantage of already being a recognized token.  I doubt there would be much
 > > problem adding ".." as a token either.
 > 
 > "..." is not a token, it's three tokens:
 > 
 > subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
 > 
 > So adding ".." should be no problem.

I have another idea.  I don't think it's been discussed previously,
but I came late to this party.  Sorry if this is old hat.


How about a:b to indicate the range starting at a and ending with b-1?

I claim that this syntax is already implicit in Python.

Think about the following:  if S is a sequence and i an index,

     S[i]

means the pairing of the sequence S with the index i.  Sequences and
indices are `dual' in the sense that pairing them together yields a
value.  I am amused by the fact that in the C language, 

     S[i] = *(S+i) = *(i+S) = i[S] 

which really shows this duality.

Now we already have

     S[a:b]

to denote the slice operation, but this can also be described as the
pairing of S with the range literal a:b

According to this view, the square brackets indicate the pairing or
mapping operation itself; they are not part of the range literal, and
they shouldn't be part of the range-literal syntax.  Thinking about
this gets confused by the additional use of `[' for list construction.
If you take them away, you could even defend having 1:5 create an
xrange-like object rather than a list.

I think this also shows why [a:b] is *not* the natural syntax for a
range literal.

This is beautifully symmetric to me - 1..3 looks like it should be a
closed interval (including the endpoints), but it's very natural and
Pythonic that a:b is semi-open: the existing "slice invariance" 

     S[a:b] + S[b:c] = S[a:c] 

could be expressed as 

     a:b + b:c = a:c

which is very attractive to me, but of course there are problems.
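The invariant is easy to check exhaustively with today's slices (a sketch; the bare a:b literal itself remains hypothetical):

```python
# Check S[a:b] + S[b:c] == S[a:c] for ordinary slices; the bare
# a:b range-literal form is hypothetical.
S = list("abcdefgh")
for a in range(len(S)):
    for b in range(a, len(S)):
        for c in range(b, len(S)):
            assert S[a:b] + S[b:c] == S[a:c]

print(S[1:4] + S[4:6])  # ['b', 'c', 'd', 'e', 'f']
```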


The syntax Tim disfavored:

     for i in [:len(a)]:

now becomes

     for i in 0:len(a):  
     #do not allow elided endpoints outside of a [ context

which doesn't look so bad to me, but is probably ambiguous.  Hmmm,
could this possibly work or is it too much of a collision with the use
of `:' to indicate block structure?

Tim - I agree that the Haskell prime-number printing program is indeed
one of the prettiest programs ever.  Thanks for posting it.

Hold-off-on-range-literals-for-2.0-ly yr's,
				-C



From mal@lemburg.com  Tue Aug 29 16:37:39 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 17:37:39 +0200
Subject: [Python-Dev] pragmas as callbacks
References: <Pine.LNX.4.10.10008291023160.21280-100000@akbar.nevex.com>
Message-ID: <39ABD8C3.DABAAA6B@lemburg.com>

Greg Wilson wrote:
> 
> If a mechanism for providing meta-information about code is going to be
> added to Python, then I would like it to be flexible enough for developers
> to define/add their own.  It's just like allowing developers to extend the
> type system with new classes, rather than handing them a fixed set of
> built-in types and saying, "Good luck".  (Most commercial Fortran
> compilers take the second approach, by providing a bunch of inflexible,
> vendor-specific pragmas.  It's a nightmare...)

I don't think that Python will move in that direction. pragmas are
really only meant to add some form of meta-information to a Python
source file which would otherwise have to be passed to the compiler
in order to produce correct output. It's merely a way of defining
compile time flags for Python modules which allow more flexible
compilation.

Other tools might also make use of these pragmas, e.g. freeze,
to allow inspection of a module without having to execute it.

> I think that pragmas are essentially callbacks into the interpreter.  When
> I put:
> 
>     pragma encoding = "UTF-16"
> 
> I am telling the interpreter to execute its 'setEncoding()' method right
> away.

pragmas have a different target: they tell the compiler (or some
other non-executing tool) to make a certain assumption about the
code it is currently busy compiling.

The compiler is not expected to execute any Python code when it
sees a pragma; it will only set a few internal variables according
to the values stated in the pragma (or simply ignore it if the
pragma uses an unknown key) and then proceed with compiling.
 
> So, why not present pragmas in that way?  I.e., why not expose the Python
> interpreter as a callable object while the source is being parsed and
> compiled?  I think that:
> 
>     __python__.setEncoding("UTF-16")
> 
> is readable, and can be extended in lots of well-structured ways by
> exposing exactly as much of the interpreter as is deemed safe. Arguments
> could be restricted to constants, or built-in operations on constants, to
> start with, without compromising future extensibility.

The natural place for these APIs would be the sys module... 
no need for an extra __python__ module or object.

I'd rather not add complicated semantics to pragmas -- they
should be able to set flags, but not much more.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From Moshe Zadka <moshez@math.huji.ac.il>  Tue Aug 29 16:40:39 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Tue, 29 Aug 2000 18:40:39 +0300 (IDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.54992.458188.483296@buffalo.fnal.gov>
Message-ID: <Pine.GSO.4.10.10008291838220.13338-100000@sundial>

On Tue, 29 Aug 2000, Charles G Waldman wrote:

> I have another idea.  I don't think it's been discussed previously,
> but I came late to this party.  Sorry if this is old hat.
> 
> How about a:b to indicate the range starting at a and ending with b-1?

I think it's nice. I'm not sure I like it yet, but it's an interesting
idea. Someone's gonna yell ": is ambiguous". Well, you know how, when
you know Python, you go around telling people "() don't create tuples,
commas do" and feeling all wonderful? Well, we can do the same with
ranges <wink>.

(:)-ly y'rs, Z.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From guido@beopen.com  Tue Aug 29 17:37:40 2000
From: guido@beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 11:37:40 -0500
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: Your message of "Tue, 29 Aug 2000 10:29:20 EST."
 <14763.54992.458188.483296@buffalo.fnal.gov>
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com> <39AB06D5.BD99855@nowonder.de> <14763.52415.747655.334938@beluga.mojam.com> <20000829165221.N500@xs4all.nl>
 <14763.54992.458188.483296@buffalo.fnal.gov>
Message-ID: <200008291637.LAA04186@cj20424-a.reston1.va.home.com>

> How about a:b to indicate the range starting at a and ending with b-1?

I believe this is what the Nummies originally suggested.

> which doesn't look so bad to me, but is probably ambiguous.  Hmmm,
> could this possibly work or is it too much of a collision with the use
> of `:' to indicate block structure?

Alas, it could never work.  Look at this:

  for i in a:b:c

Does it mean

  for i in (a:b) : c

or

  for i in a: (b:c)

?

So we're back to requiring *some* form of parentheses.

I'm postponing this discussion until after Python 2.0 final is
released -- the feature freeze is real!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From gvwilson@nevex.com  Tue Aug 29 16:54:23 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 11:54:23 -0400 (EDT)
Subject: [Python-Dev] pragmas as callbacks
In-Reply-To: <39ABD8C3.DABAAA6B@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008291149040.23391-100000@akbar.nevex.com>

> Marc-Andre Lemburg wrote:
> I'd rather not add complicated semantics to pragmas -- they should be
> able to set flags, but not much more.

Greg Wilson writes:

That's probably what every Fortran compiler vendor said at first --- "just
a couple of on/off flags".  Then it was, "Set numeric values (like the
debugging level)".  A full-blown HPF compiler's pragmas are now a complete
programming language, so that you can (for example) specify how to
partition one array based on the partitioning in another.

Same thing happened with the C preprocessor --- more and more directives
crept in over time.  And the Microsoft C++ compiler.  And I'm sure this
list's readers could come up with dozens of more examples.

Pragmas are a way to give instructions to the interpreter; when you let
people give something instructions, you're letting them program it, and I
think it's best to design your mechanism from the start to support that.

Greg "oh no, not another parallelization directive" Wilson




From skip@mojam.com  Tue Aug 29 17:19:27 2000
From: skip@mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 11:19:27 -0500 (CDT)
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
Message-ID: <14763.57999.57444.678054@beluga.mojam.com>

Don't know if this should concern us in preparation for 2.0b1 release, but
the following came across c.l.py this morning.  

FYI.

Skip

------- start of forwarded message (RFC 934 encapsulation) -------
X-Digest: Python-list digest, Vol 1 #3344 - 13 msgs
Message: 11
Newsgroups: comp.lang.python
Organization: Concentric Internet Services
Lines: 41
Message-ID: <39ABD9A1.A8ECDEC8@faxnet.com>
NNTP-Posting-Host: 208.36.195.178
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Path: news!uunet!ffx.uu.net!newsfeed.mathworks.com!feeder.via.net!newshub2.rdc1.sfba.home.com!news.home.com!newsfeed.concentric.net!global-news-master
Xref: news comp.lang.python:110026
Precedence: bulk
List-Id: General discussion list for the Python programming language <python-list.python.org>
From: Jon LaCour <jal@faxnet.com>
Sender: python-list-admin@python.org
To: python-list@python.org
Subject: Python Problem - Important!
Date: 29 Aug 2000 15:41:00 GMT
Reply-To: jal@faxnet.com

I am beginning development of a very large web application, and I would
like to use Python (no, Zope is not an option).  PyApache seems to be my
best bet, but there is a MASSIVE problem with Python/PyApache that
prevents it from being even marginally useful to me, and to most major
software companies.

My product requires database access, and the database module that I use
for connecting depends on a library called mxDateTime.  This is a very
robust library that is in use all over our system (it has never given us
problems).  Yet, when I use PyApache to connect to a database, I have
major issues.

I have seen this same problem posted to this newsgroup and to the
PyApache mailing list several times from over a year ago, and it appears
to be unresolved.  The essential problem is this: the second time a
module is loaded, Python has cleaned up its dictionaries in its cleanup
mechanism, and does not allow a re-init.  With mxDateTime this gives an
error:

    "TypeError:  call of non-function (type None)"

Essentially, this is a major problem in either the Python internals, or
in PyApache.  After tracing the previous discussions on this issue, it
appears that this is a Python problem.  I am very serious when I say
that this problem *must* be resolved before Python can be taken
seriously for use in web applications, especially when Zope is not an
option.  I require the use of Apache's security features, and several
other Apache extensions.  If anyone knows how to resolve this issue, or
can even point out a way that I can resolve this *myself* I would love
to hear it.

This is the single stumbling block standing in the way of my company
converting almost entirely to Python development, and I am hoping that
python developers will take this bug and smash it quickly.

Thanks in advance, please cc: all responses to my email address at
jal@faxnet.com.

Jonathan LaCour
Developer, VertiSoft

------- end -------


From effbot@telia.com  Tue Aug 29 17:50:24 2000
From: "Fredrik Lundh" <effbot@telia.com>
Date: Tue, 29 Aug 2000 18:50:24 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <14763.57999.57444.678054@beluga.mojam.com>
Message-ID: <003301c011d9$3c1bbc80$766940d5@hagrid>

skip wrote:
> Don't know if this should concern us in preparation for 2.0b1 release, but
> the following came across c.l.py this morning.  

http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470

    "The problem you describe is an artifact of the way mxDateTime 
    tries to reuse the time.time() API available through the 
    standard Python time module"

> Essentially, this is a major problem in either the Python internals, or
> in PyApache.

ah, the art of debugging...

</F>



From mal@lemburg.com  Tue Aug 29 17:46:29 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 18:46:29 +0200
Subject: [Python-Dev] pragmas as callbacks
References: <Pine.LNX.4.10.10008291149040.23391-100000@akbar.nevex.com>
Message-ID: <39ABE8E5.44073A09@lemburg.com>

Greg Wilson wrote:
> 
> > Marc-Andre Lemburg wrote:
> > I'd rather not add complicated semantics to pragmas -- they should be
> > able to set flags, but not much more.
> 
> Greg Wilson writes:
> 
> That's probably what every Fortran compiler vendor said at first --- "just
> a couple of on/off flags".  Then it was, "Set numeric values (like the
> debugging level)".  A full-blown HPF compiler's pragmas are now a complete
> programming language, so that you can (for example) specify how to
> partition one array based on the partitioning in another.
> 
> Same thing happened with the C preprocessor --- more and more directives
> crept in over time.  And the Microsoft C++ compiler.  And I'm sure this
> list's readers could come up with dozens of more examples.
>
> Pragmas are a way to give instructions to the interpreter; when you let
> people give something instructions, you're letting them program it, and I
> think it's best to design your mechanism from the start to support that.

I don't get your point: you can "program" the interpreter by
calling various sys module APIs to set interpreter flags already.

Pragmas are needed to tell the compiler what to do with a
source file. They extend the command line flags which are already
available to a more fine-grained mechanism. That's all -- nothing
more.

If a programmer wants to influence compilation globally,
then she would have to set the sys module flags prior to invoking
compile().

(This is already possible using mx.Tools additional sys builtins,
e.g. you can tell the compiler to work in optimizing mode prior
to invoking compile(). Some version of these will most likely go
into 2.1.)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From cgw@fnal.gov  Tue Aug 29 17:48:43 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 11:48:43 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008291637.LAA04186@cj20424-a.reston1.va.home.com>
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
 <39AB06D5.BD99855@nowonder.de>
 <14763.52415.747655.334938@beluga.mojam.com>
 <20000829165221.N500@xs4all.nl>
 <14763.54992.458188.483296@buffalo.fnal.gov>
 <200008291637.LAA04186@cj20424-a.reston1.va.home.com>
Message-ID: <14763.59755.137579.785257@buffalo.fnal.gov>

Guido van Rossum writes:

 > Alas, it could never work.  Look at this:
 > 
 >   for i in a:b:c
 > 
 > Does it mean
 > 
 >   for i in (a:b) : c
 > 
 > or
 > 
 >   for i in a: (b:c)

Of course, it means "for i in the range from a to b-1 with stride c", but as written it's
illegal because you'd need another `:' after the c.  <wink>

 > I'm postponing this discussion until after Python 2.0 final is
 > released -- the feature freeze is real!

Absolutely.  I won't bring this up again until the appropriate time.



From mal@lemburg.com  Tue Aug 29 17:54:49 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 18:54:49 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <14763.57999.57444.678054@beluga.mojam.com> <003301c011d9$3c1bbc80$766940d5@hagrid>
Message-ID: <39ABEAD9.B106E53E@lemburg.com>

Fredrik Lundh wrote:
> 
> skip wrote:
> > Don't know if this should concern us in preparation for 2.0b1 release, but
> > the following came across c.l.py this morning.
> 
> http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470
> 
>     "The problem you describe is an artifact of the way mxDateTime
>     tries to reuse the time.time() API available through the
>     standard Python time module"
> 

Here is a pre-release version of mx.DateTime which should fix
the problem (the new release will use the top-level mx package
-- it does contain a backward compatibility hack though):

http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip

Please let me know if it fixes your problem... I don't use PyApache.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jal@ns1.quickrecord.com  Tue Aug 29 18:17:17 2000
From: jal@ns1.quickrecord.com (Jonathan LaCour)
Date: Tue, 29 Aug 2000 13:17:17 -0400 (EDT)
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
In-Reply-To: <39ABEAD9.B106E53E@lemburg.com>
Message-ID: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com>

Well, it appears that this version raises a different problem. Do I need
to be running anything higher than python-1.5.2?  Possibly this has
something to do with how I installed this pre-release.  I simply moved 
the old DateTime directory out of the site-packages directory, and
then moved the mx, and DateTime directories from the zip that was
provided into the site-packages directory, and restarted. Here is the
traceback from the apache error log:

patientSearchResults.py failed for 192.168.168.130, reason: the script
raised an unhandled exception. Script's traceback follows:
Traceback (innermost last):
  File "/home/httpd/html/py-bin/patientSearchResults.py", line 3, in ?
    import ODBC.Solid
  File "/usr/lib/python1.5/site-packages/ODBC/__init__.py", line 21, in ?
    import DateTime # mxDateTime package must be installed first !
  File "/usr/lib/python1.5/site-packages/DateTime/__init__.py", line 17,
in ?
    from mx.DateTime import *
  File "/usr/lib/python1.5/site-packages/mx/DateTime/__init__.py", line
20, in ?    from DateTime import *
  File "/usr/lib/python1.5/site-packages/mx/DateTime/DateTime.py", line 8,
in ?
    from mxDateTime import *
  File
"/usr/lib/python1.5/site-packages/mx/DateTime/mxDateTime/__init__.py",
line 12, in ?
    setnowapi(time.time)
NameError: setnowapi


On Tue, 29 Aug 2000, M.-A. Lemburg wrote:

> Fredrik Lundh wrote:
> > 
> > skip wrote:
> > > Don't know if this should concern us in preparation for 2.0b1 release, but
> > > the following came across c.l.py this morning.
> > 
> > http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470
> > 
> >     "The problem you describe is an artifact of the way mxDateTime
> >     tries to reuse the time.time() API available through the
> >     standard Python time module"
> > 
> 
> Here is a pre-release version of mx.DateTime which should fix
> the problem (the new release will use the top-level mx package
> -- it does contain a backward compatibility hack though):
> 
> http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
> 
> Please let me know if it fixes your problem... I don't use PyApache.
> 
> Thanks,
> -- 
> Marc-Andre Lemburg
> ______________________________________________________________________
> Business:                                      http://www.lemburg.com/
> Python Pages:                           http://www.lemburg.com/python/
> 



From gvwilson@nevex.com  Tue Aug 29 18:21:52 2000
From: gvwilson@nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 13:21:52 -0400 (EDT)
Subject: [Python-Dev] Re: pragmas as callbacks
In-Reply-To: <39ABE8E5.44073A09@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008291316590.23391-100000@akbar.nevex.com>

> > > Marc-Andre Lemburg wrote:
> > > I'd rather not add complicated semantics to pragmas -- they should be
> > > able to set flags, but not much more.

> > Greg Wilson writes:
> > Pragmas are a way to give instructions to the interpreter; when you let
> > people give something instructions, you're letting them program it, and I
> > think it's best to design your mechanism from the start to support that.

> Marc-Andre Lemburg:
> I don't get your point: you can "program" the interpreter by
> calling various sys module APIs to set interpreter flags already.
> 
> Pragmas are needed to tell the compiler what to do with a
> source file. They extend the command line flags which are already
> available to a more fine-grained mechanism. That's all -- nothing
> more.

Greg Wilson writes:
I understand, but my experience with other languages indicates that once
you have a way to set the parser's flags from within the source file being
parsed, people are going to want to be able to do it conditionally, i.e.
to set one flag based on the value of another.  Then they're going to want
to see if particular flags have been set to something other than their
default values, and so on.  Pragmas are a way to embed programs for the
parser in the file being parsed.  If we're going to allow this at all, we
will save ourselves a lot of future grief by planning for this now.

Thanks,
Greg



From mal@lemburg.com  Tue Aug 29 18:24:08 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 19:24:08 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com>
Message-ID: <39ABF1B8.426B7A6@lemburg.com>

Jonathan LaCour wrote:
> 
> Well, it appears that this version raises a different problem. Do I need
> to be running anything higher than python-1.5.2?  Possibly this has
> something to do with how I installed this pre-release.  I simply moved
> the old DateTime directory out of the site-packages directory, and
> then moved the mx, and DateTime directories from the zip that was
> provided into the site-packages directory, and restarted. Here is the
> traceback from the apache error log:
> 
> patientSearchResults.py failed for 192.168.168.130, reason: the script
> raised an unhandled exception. Script's traceback follows:
> Traceback (innermost last):
>   File "/home/httpd/html/py-bin/patientSearchResults.py", line 3, in ?
>     import ODBC.Solid
>   File "/usr/lib/python1.5/site-packages/ODBC/__init__.py", line 21, in ?
>     import DateTime # mxDateTime package must be installed first !
>   File "/usr/lib/python1.5/site-packages/DateTime/__init__.py", line 17,
> in ?
>     from mx.DateTime import *
>   File "/usr/lib/python1.5/site-packages/mx/DateTime/__init__.py", line
> 20, in ?    from DateTime import *
>   File "/usr/lib/python1.5/site-packages/mx/DateTime/DateTime.py", line 8,
> in ?
>     from mxDateTime import *
>   File
> "/usr/lib/python1.5/site-packages/mx/DateTime/mxDateTime/__init__.py",
> line 12, in ?
>     setnowapi(time.time)
> NameError: setnowapi

This API is new... could it be that you didn't recompile the
mxDateTime C extension inside the package ?

> On Tue, 29 Aug 2000, M.-A. Lemburg wrote:
> 
> > Fredrik Lundh wrote:
> > >
> > > skip wrote:
> > > > Don't know if this should concern us in preparation for 2.0b1 release, but
> > > > the following came across c.l.py this morning.
> > >
> > > http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470
> > >
> > >     "The problem you describe is an artifact of the way mxDateTime
> > >     tries to reuse the time.time() API available through the
> > >     standard Python time module"
> > >
> >
> > Here is a pre-release version of mx.DateTime which should fix
> > the problem (the new release will use the top-level mx package
> > -- it does contain a backward compatibility hack though):
> >
> > http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
> >
> > Please let me know if it fixes your problem... I don't use PyApache.
> >
> > Thanks,
> > --
> > Marc-Andre Lemburg
> > ______________________________________________________________________
> > Business:                                      http://www.lemburg.com/
> > Python Pages:                           http://www.lemburg.com/python/
> >

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From mwh21@cam.ac.uk  Tue Aug 29 18:34:15 2000
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 29 Aug 2000 18:34:15 +0100
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: Skip Montanaro's message of "Tue, 29 Aug 2000 09:55:49 -0500 (CDT)"
References: <20000828230630.I500@xs4all.nl> <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <14763.52981.603640.415652@beluga.mojam.com>
Message-ID: <m38ztgyns8.fsf@atrus.jesus.cam.ac.uk>

Skip Montanaro <skip@mojam.com> writes:

> One of the original arguments for range literals as I recall was that
> indexing of loops could get more efficient.  The compiler would know that
> [0:100:2] represents a series of integers and could conceivably generate
> more efficient loop indexing code (and so could Python2C and other compilers
> that generated C code).  This argument doesn't seem to be showing up here at
> all.  Does it carry no weight in the face of the relative inscrutability of
> the syntax?

IMHO, no.  A compiler sufficiently smart to optimize range literals
ought to be sufficiently smart to optimize most calls to "range".  At
least, I think so.  I also think the inefficiency of list construction
in Python loops is a red herring; executing the list body involves
going round & round the eval loop, and I'd be amazed if that didn't
dominate (note that - on my system at least - loops involving range
are often (marginally) faster than ones using xrange, presumably due
to the special casing of list[integer] in eval_code2).

Sure, it would be nice if this aspect got optimized, but let's speed
up the rest of the interpreter enough that you can notice first!

Cheers,
Michael

-- 
  Very rough; like estimating the productivity of a welder by the
  amount of acetylene used.         -- Paul Svensson, comp.lang.python
    [on the subject of the measuring programmer productivity by LOC]



From mal@lemburg.com  Tue Aug 29 18:41:25 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 19:41:25 +0200
Subject: [Python-Dev] Re: pragmas as callbacks
References: <Pine.LNX.4.10.10008291316590.23391-100000@akbar.nevex.com>
Message-ID: <39ABF5C5.6605CF55@lemburg.com>

Greg Wilson wrote:
> 
> > > > Marc-Andre Lemburg wrote:
> > > > I'd rather not add complicated semantics to pragmas -- they should be
> > > > able to set flags, but not much more.
> 
> > > Greg Wilson writes:
> > > Pragmas are a way to give instructions to the interpreter; when you let
> > > people give something instructions, you're letting them program it, and I
> > > think it's best to design your mechanism from the start to support that.
> 
> > Marc-Andre Lemburg:
> > I don't get your point: you can "program" the interpreter by
> > calling various sys module APIs to set interpreter flags already.
> >
> > Pragmas are needed to tell the compiler what to do with a
> > source file. They extend the command line flags which are already
> > available to a more fine-grained mechanism. That's all -- nothing
> > more.
> 
> Greg Wilson writes:
> I understand, but my experience with other languages indicates that once
> you have a way to set the parser's flags from within the source file being
> parsed, people are going to want to be able to do it conditionally, i.e.
> to set one flag based on the value of another.  Then they're going to want
> to see if particular flags have been set to something other than their
> default values, and so on.  Pragmas are a way to embed programs for the
> parser in the file being parsed.  If we're going to allow this at all, we
> will save ourselves a lot of future grief by planning for this now.

I don't think mixing compilation with execution is a good idea.

If we ever want to add this feature, we can always use a
pragma for it ;-) ...

def mysettings(compiler, locals, globals, target):
    compiler.setoptimization(2)

# Call the above hook for every new compilation block
pragma compiler_hook = "mysettings"

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jeremy@beopen.com  Tue Aug 29 19:42:41 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 29 Aug 2000 14:42:41 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <14764.1057.909517.977904@bitdiddle.concentric.net>

Does anyone have suggestions for how to detect unbounded recursion in
the Python core on Unix platforms?

Guido assigned me bug 112943 yesterday and gave it priority 9.
http://sourceforge.net/bugs/?func=detailbug&bug_id=112943&group_id=5470

The bug in question causes a core dump on Unix because of a broken
__radd__.  There's another bug (110615) that does that same thing with
recursive invocations of __repr__.

And, of course, there's:
def foo(x): 
    return foo(x)
foo(None)

I believe that these bugs have been fixed on Windows.  Fredrik
confirmed this for one of them, but I don't remember which one.  Would
someone mind confirming and updating the records in the bug tracker?

I don't see an obvious solution.  Is there any way to implement
PyOS_CheckStack on Unix?  I imagine that each platform would have its
own variant and that there is no hope of getting them debugged before
2.0b1. 

We could add some counters in eval_code2 and raise an exception after
some arbitrary limit is reached.  Arbitrary limits seem bad -- and any
limit would have to be fairly restrictive because each variation on
the bug involves a different number of C function calls between
eval_code2 invocations.

We could special case each of the __special__ methods in C to raise an
exception upon recursive calls with the same arguments, but this is
complicated and expensive.  It does not catch the simplest version, 
the foo function above.

Does stackless raise exceptions cleanly on each of these bugs?  That
would be an argument worth mentioning in the PEP, eh?

Any other suggestions are welcome.

Jeremy


From effbot@telia.com  Tue Aug 29 20:15:30 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 21:15:30 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <00af01c011ed$86671dc0$766940d5@hagrid>

jeremy wrote:
>  Guido assigned me bug 112943 yesterday and gave it priority 9.
> http://sourceforge.net/bugs/?func=detailbug&bug_id=112943&group_id=5470
> 
> The bug in question causes a core dump on Unix because of a broken
> __radd__.  There's another bug (110615) that does that same thing with
> recursive invocations of __repr__.
> 
> And, of course, there's:
> def foo(x): 
>     return foo(x)
> foo(None)
> 
> I believe that these bugs have been fixed on Windows.  Fredrik
> confirmed this for one of them, but I don't remember which one.  Would
> someone mind confirming and updating the records in the bug tracker?

my checkstack patch fixes #110615 and #112943 on windows.
cannot login to sourceforge right now, so I cannot update the
descriptions.

> I don't see an obvious solution.  Is there any way to implement
> PyOS_CheckStack on Unix?

not that I know...  you better get a real operating system ;-)

</F>



From mal@lemburg.com  Tue Aug 29 20:26:52 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 21:26:52 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <39AC0E7C.922536AA@lemburg.com>

Jeremy Hylton wrote:
> 
> Does anyone have suggestions for how to detect unbounded recursion in
> the Python core on Unix platforms?
> 
> Guido assigned me bug 112943 yesterday and gave it priority 9.
> http://sourceforge.net/bugs/?func=detailbug&bug_id=112943&group_id=5470
> 
> The bug in question causes a core dump on Unix because of a broken
> __radd__.  There's another bug (110615) that does that same thing with
> recursive invocations of __repr__.
> 
> And, of course, there's:
> def foo(x):
>     return foo(x)
> foo(None)
> 
> I believe that these bugs have been fixed on Windows.  Fredrik
> confirmed this for one of them, but I don't remember which one.  Would
> someone mind confirming and updating the records in the bug tracker?
> 
> I don't see an obvious solution.  Is there any way to implement
> PyOS_CheckStack on Unix?  I imagine that each platform would have its
> own variant and that there is no hope of getting them debugged before
> 2.0b1.

I've looked around in the include files for Linux but haven't
found any APIs which could be used to check the stack size.
Not even getrusage() returns anything useful for the current
stack size.

For the foo() example I found that on my machine the core dump
happens at depth 9821 (counted from 0), so setting the recursion
limit to something around 9000 should fix it at least for
Linux2.
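[Editorial note: in modern Python the same mitigation can be sketched with
sys.setrecursionlimit(). The 9821 crash depth above is machine-specific;
the point is only to set the limit safely below it so a clean exception is
raised instead of a core dump. A small limit is used here to keep the
demonstration quick.]

```python
import sys

# Set the interpreter recursion limit well below the depth at which the
# C stack was observed to overflow, so recursion fails with an exception
# rather than a segfault.
sys.setrecursionlimit(2000)

def foo(x):
    return foo(x)

try:
    foo(None)
except RecursionError:   # spelled RuntimeError in the Python 2.0 era
    print("stopped cleanly before the C stack overflowed")
```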

> We could add some counters in eval_code2 and raise an exception after
> some arbitrary limit is reached.  Arbitrary limits seem bad -- and any
> limit would have to be fairly restrictive because each variation on
> the bug involves a different number of C function calls between
> eval_code2 invocations.
> 
> We could special case each of the __special__ methods in C to raise an
> exception upon recursive calls with the same arguments, but this is
> complicated and expensive.  It does not catch the simplest version,
> the foo function above.
> 
> Does stackless raise exceptions cleanly on each of these bugs?  That
> would be an argument worth mentioning in the PEP, eh?
> 
> Any other suggestions are welcome.
> 
> Jeremy
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From jeremy@beopen.com  Tue Aug 29 20:40:49 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Tue, 29 Aug 2000 15:40:49 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AC0E7C.922536AA@lemburg.com>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
 <39AC0E7C.922536AA@lemburg.com>
Message-ID: <14764.4545.972459.760991@bitdiddle.concentric.net>

>>>>> "MAL" == M -A Lemburg <mal@lemburg.com> writes:

  >> I don't see an obvious solution.  Is there any way to implement
  >> PyOS_CheckStack on Unix?  I imagine that each platform would have
  >> its own variant and that there is no hope of getting them
  >> debugged before 2.0b1.

  MAL> I've looked around in the include files for Linux but haven't
  MAL> found any APIs which could be used to check the stack size.
  MAL> Not even getrusage() returns anything useful for the current
  MAL> stack size.

Right.  

  MAL> For the foo() example I found that on my machine the core dump
  MAL> happens at depth 9821 (counted from 0), so setting the
  MAL> recursion limit to something around 9000 should fix it at least
  MAL> for Linux2.

Right.  I had forgotten about the MAX_RECURSION_LIMIT.  It would
probably be better to set the limit lower on Linux only, right?  If
so, what's the cleanest way to make the value depend on the platform?

Jeremy


From mal@lemburg.com  Tue Aug 29 20:42:08 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 21:42:08 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
 <39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net>
Message-ID: <39AC1210.18703B0B@lemburg.com>

Jeremy Hylton wrote:
> 
> >>>>> "MAL" == M -A Lemburg <mal@lemburg.com> writes:
> 
>   >> I don't see an obvious solution.  Is there any way to implement
>   >> PyOS_CheckStack on Unix?  I imagine that each platform would have
>   >> its own variant and that there is no hope of getting them
>   >> debugged before 2.0b1.
> 
>   MAL> I've looked around in the include files for Linux but haven't
>   MAL> found any APIs which could be used to check the stack size.
>   MAL> Not even getrusage() returns anything useful for the current
>   MAL> stack size.
> 
> Right.
> 
>   MAL> For the foo() example I found that on my machine the core dump
>   MAL> happens at depth 9821 (counted from 0), so setting the
>   MAL> recursion limit to something around 9000 should fix it at least
>   MAL> for Linux2.
> 
> Right.  I had forgotten about the MAX_RECURSION_LIMIT.  It would
> probably be better to set the limit lower on Linux only, right?  If
> so, what's the cleanest way to make the value depend on the platform?

Perhaps a naive test in the configure script might help. I used
the following script to determine the limit:

import resource
i = 0    
def foo(x):
    global i
    print i,resource.getrusage(resource.RUSAGE_SELF)   
    i = i + 1
    foo(x)
foo(None)

Perhaps a configure script could emulate the stack requirements
of eval_code2() by declaring a buffer of a certain size.
The script would then run in a similar way to the one
above, printing the current stack depth, and then dump core at
some point. The configure script would then have to remove the
core file and use the last depth number written as the basis
for setting the MAX_RECURSION_LIMIT.

E.g. for the above Python script I get:

9818 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
9819 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
9820 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
9821 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
Segmentation fault (core dumped)
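[Editorial note: a modern sketch of the same probe, deriving a candidate
recursion limit from the stack rlimit rather than from an actual crash.
The 2048-byte per-frame figure is an assumption for illustration, not a
measured eval_code2() frame size.]

```python
import resource

ASSUMED_BYTES_PER_FRAME = 2048   # guessed C stack cost per recursion level

def estimated_recursion_limit(margin=0.75):
    # Use the soft stack rlimit when it is finite; otherwise fall back to
    # a fixed default. Keep a safety margin below the hard ceiling.
    soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
    if soft == resource.RLIM_INFINITY:
        return 10000
    return max(1000, int(soft * margin) // ASSUMED_BYTES_PER_FRAME)

print(estimated_recursion_limit())
```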

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From ping@lfw.org  Tue Aug 29 21:09:46 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:09:46 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>
Message-ID: <Pine.LNX.4.10.10008291508500.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Greg Wilson wrote:
> The problem with using ellipsis is that there's no obvious way to include
> a stride --- how do you hit every second (or n'th) element, rather than
> every element?

As explained in the examples i posted,

    1, 3 .. 20

could produce

    (1, 3, 5, 7, 9, 11, 13, 15, 17, 19)
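[Editorial note: one way to pin down the proposed semantics, assuming an
ABC-style inclusive upper bound, which the post does not state explicitly:]

```python
def seq(first, second, last):
    # "first, second .. last": the first two elements fix the start and
    # the stride; the sequence stops at the last value not beyond `last`.
    step = second - first
    if step == 0:
        raise ValueError("zero stride")
    stop = last + (1 if step > 0 else -1)   # make the bound inclusive
    return tuple(range(first, stop, step))

print(seq(1, 3, 20))   # (1, 3, 5, 7, 9, 11, 13, 15, 17, 19)
```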


-- ?!ng



From thomas@xs4all.net  Tue Aug 29 20:49:12 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 21:49:12 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.1057.909517.977904@bitdiddle.concentric.net>; from jeremy@beopen.com on Tue, Aug 29, 2000 at 02:42:41PM -0400
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <20000829214912.O500@xs4all.nl>

On Tue, Aug 29, 2000 at 02:42:41PM -0400, Jeremy Hylton wrote:

> Is there any way to implement PyOS_CheckStack on Unix?  I imagine that
> each platform would have its own variant and that there is no hope of
> getting them debugged before 2.0b1.

I can think of three mechanisms. Using getrusage() and getrlimit() to find
out the current stack size and the stack limit is most likely to give
accurate numbers, but it is only available on most UNIX systems, not all of
them. (I
hear there are systems that don't implement getrusage/getrlimit ;-)

int PyOS_CheckStack(void)
{
    struct rlimit rlim;
    struct rusage rusage;

    if (getrusage(RUSAGE_SELF, &rusage) != 0)
        return 1;  /* getrusage failed -- ignore, or raise an error? */
    if (getrlimit(RLIMIT_STACK, &rlim) != 0)
        return 1;  /* ditto */
    return rlim.rlim_cur > rusage.ru_isrss + PYOS_STACK_MARGIN;
}

(Note that it's probably necessary to repeat the getrlimit as well as the
getrusage, because even the 'hard' limit can change -- a Python program can
change the limits using the 'resource' module.) There are currently no
autoconf checks for rlimit/rusage, but we can add those without problem.
(and enable the resource module automagically while we're at it ;)

If that fails, I don't think there is a way to get the stack limit (unless
it's in platform-dependent ways), but there might be a way to get the
approximate size of the stack by comparing the address of a local variable
with the stored address of a local variable set at the start of Python.
Something like

static long stack_start_addr;

[... in some init function ...]
    int dummy;
    stack_start_addr = (long) &dummy;
[ or better yet, use a real variable from that function, but one that won't
get optimized away (or you might lose that optimization) ]

#define PY_STACK_LIMIT 0x200000 /* 2Mbyte */

int PyOS_CheckStack(void)
{
    int dummy;
    return abs(stack_start_addr - (long)&dummy) < PY_STACK_LIMIT;
}

This is definitely sub-optimal, with its fixed stack limit, which might be
either too high or too low. Note that the abs() is necessary to accommodate
both stacks that grow downwards and those that grow upwards, though I'm
hard-pressed at the moment to name a UNIX system with an upward-growing
stack. And this solution is likely to get bitten in the unshapely behind by
optimizing, too-smart-for-their-own-good compilers, possibly requiring a
'volatile' qualifier to make them keep their hands off it.

But the final solution, using alloca() like the Windows check does, is even
less portable... alloca() is missing on some systems (more of them than
lack getrlimit, I think, though the two sets are likely to intersect), and
I've heard rumours that on some systems it's even an alias for malloc(),
leading to memory leaks and other weird behaviour.

I'm thinking that a combination of #1 and #2 is best, where #1 is used when
getrlimit/getrusage are available, but #2 if they are not. However, I'm not
sure if either works, so it's a bit soon for that kind of thought :-)

> Does stackless raise exceptions cleanly on each of these bugs?  That
> would be an argument worth mentioning in the PEP, eh?

No, I don't think it does. Stackless gets bitten much later by recursive
behaviour, though, and just retains the current 'recursion depth' counter,
possibly set a bit higher. (I'm not sure, but I'm sure a true stackophobe
will clarify ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From bwarsaw@beopen.com  Tue Aug 29 20:52:32 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Tue, 29 Aug 2000 15:52:32 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
 <39AC0E7C.922536AA@lemburg.com>
 <14764.4545.972459.760991@bitdiddle.concentric.net>
 <39AC1210.18703B0B@lemburg.com>
Message-ID: <14764.5248.979275.341242@anthem.concentric.net>

>>>>> "M" == M  <mal@lemburg.com> writes:

    |     print i,resource.getrusage(resource.RUSAGE_SELF)   

My experience echoes yours here, MAL -- I've never seen anything
from getrusage() that would be useful in this context. :/

A configure script test would be useful, but you'd have to build a
minimal Python interpreter first to run the script, wouldn't you?

-Barry


From bwarsaw@beopen.com  Tue Aug 29 20:53:32 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Tue, 29 Aug 2000 15:53:32 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>
 <Pine.LNX.4.10.10008291508500.302-100000@server1.lfw.org>
Message-ID: <14764.5308.529148.181749@anthem.concentric.net>

>>>>> "KY" == Ka-Ping Yee <ping@lfw.org> writes:

    KY> As explained in the examples i posted,

    KY>     1, 3 .. 20

What would

    1, 3, 7 .. 99

do? :)

-Barry


From ping@lfw.org  Tue Aug 29 21:20:03 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:20:03 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14764.5308.529148.181749@anthem.concentric.net>
Message-ID: <Pine.LNX.4.10.10008291517380.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Barry A. Warsaw wrote:
> 
> What would
> 
>     1, 3, 7 .. 99
> 
> do? :)

    ValueError: too many elements on left side of ".." operator

or

    ValueError: at most two elements permitted on left side of ".."

You get the idea.


-- ?!ng



From prescod@prescod.net  Tue Aug 29 21:00:55 2000
From: prescod@prescod.net (Paul)
Date: Tue, 29 Aug 2000 15:00:55 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14764.5308.529148.181749@anthem.concentric.net>
Message-ID: <Pine.LNX.4.21.0008291457410.6330-100000@amati.techno.com>

On Tue, 29 Aug 2000, Barry A. Warsaw wrote:

> 
> >>>>> "KY" == Ka-Ping Yee <ping@lfw.org> writes:
> 
>     KY> As explained in the examples i posted,
> 
>     KY>     1, 3 .. 20
> 
> What would
> 
>     1, 3, 7 .. 99

consider:

rangeRecognizers.register( primeHandler )
rangeRecognizers.register( fibHandler )
rangeRecognizers.register( compositeHandler )
rangeRecognizers.register( randomHandler )

(you want to fall back on the random handler last, so it needs to be
registered last)

 Paul Prescod




From thomas@xs4all.net  Tue Aug 29 21:02:27 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:02:27 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.5248.979275.341242@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 29, 2000 at 03:52:32PM -0400
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net> <39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net>
Message-ID: <20000829220226.P500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:52:32PM -0400, Barry A. Warsaw wrote:

> >>>>> "M" == M  <mal@lemburg.com> writes:
> 
>     |     print i,resource.getrusage(resource.RUSAGE_SELF)   
> 
> My experience echoes yours here MAL -- I've never seen anything 
> from getrusage() that would be useful in this context. :/

Ack, indeed. Never mind my longer post then; getrusage() is usageless. (At
least on Linux.)

> A configure script test would be useful, but you'd have to build a
> minimal Python interpreter first to run the script, wouldn't you?

Nah, as long as you can test how many recursions it would take to run out of
stack... But it's still not optimal: we're doing a check at compile time (or
rather, configure time) on a limit which can change during the course of a
single process, never mind a single installation ;P And I don't really like
doing a configure test that's just a program that tries to run out of
memory... it might turn out troublesome for systems with decent-sized
stacks.

(getrlimit *does* work, so if we have getrlimit, we can 'calculate' the
maximum number of recursions from that.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Tue Aug 29 21:05:13 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:05:13 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000829220226.P500@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 29, 2000 at 10:02:27PM +0200
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net> <39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net> <20000829220226.P500@xs4all.nl>
Message-ID: <20000829220513.Q500@xs4all.nl>

On Tue, Aug 29, 2000 at 10:02:27PM +0200, Thomas Wouters wrote:

> Ack. indeed. Nevermind my longer post then, getrusage() is usageless. (At
> least on Linux.)

And on BSDI, too.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Tue Aug 29 21:05:32 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 29 Aug 2000 16:05:32 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008291517380.302-100000@server1.lfw.org>
References: <14764.5308.529148.181749@anthem.concentric.net>
 <Pine.LNX.4.10.10008291517380.302-100000@server1.lfw.org>
Message-ID: <14764.6028.121193.410374@cj42289-a.reston1.va.home.com>

On Tue, 29 Aug 2000, Barry A. Warsaw wrote:
 > What would
 > 
 >     1, 3, 7 .. 99
 > 
 > do? :)

Ka-Ping Yee writes:
 >     ValueError: too many elements on left side of ".." operator
...
 >     ValueError: at most two elements permitted on left side of ".."

  Looks like a SyntaxError to me.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From mal@lemburg.com  Tue Aug 29 21:10:02 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 22:10:02 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
 <39AC0E7C.922536AA@lemburg.com>
 <14764.4545.972459.760991@bitdiddle.concentric.net>
 <39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net>
Message-ID: <39AC189A.95846E0@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal@lemburg.com> writes:
> 
>     |     print i,resource.getrusage(resource.RUSAGE_SELF)
> 
> My experience echoes yours here MAL -- I've never seen anything
> from getrusage() that would be useful in this context. :/
> 
> A configure script test would be useful, but you'd have to build a
> minimal Python interpreter first to run the script, wouldn't you?

I just experimented with this a bit: I can't seem to get
a plain C program to behave like the Python interpreter.

The C program can suck memory in large chunks and consume
great amounts of stack; it just doesn't dump core... (I don't
know what I'm doing wrong here).

Yet the Python 2.0 interpreter only uses about 5MB of
memory at the time it dumps core -- seems strange to me,
since the plain C program can easily consume more than 20Megs
and still continues to run.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From fdrake@beopen.com  Tue Aug 29 21:09:29 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 29 Aug 2000 16:09:29 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000829220226.P500@xs4all.nl>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
 <39AC0E7C.922536AA@lemburg.com>
 <14764.4545.972459.760991@bitdiddle.concentric.net>
 <39AC1210.18703B0B@lemburg.com>
 <14764.5248.979275.341242@anthem.concentric.net>
 <20000829220226.P500@xs4all.nl>
Message-ID: <14764.6265.460762.479910@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > (getrlimit *does* work, so if we have getrlimit, we can 'calculate' the
 > maximum number of recursions from that.)

  Still no go -- we can calculate the number of recursions for a
particular call frame size (or expected mix of frame sizes, which is
really the same), but we can't predict recursive behavior inside a C
extension, which is a significant part of the problem (witness the SRE
experience).  That's why PyOS_CheckStack() actually has to do more
than test a counter -- if the counter is low but the call frames are
larger than our estimate, it won't help.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From skip@mojam.com  Tue Aug 29 21:12:57 2000
From: skip@mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 15:12:57 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.1057.909517.977904@bitdiddle.concentric.net>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <14764.6473.859814.216436@beluga.mojam.com>

    Jeremy> Does anyone have suggestions for how to detect unbounded
    Jeremy> recursion in the Python core on Unix platforms?

On most (all?) processors in common usage, the stack grows down toward the
heap and the heap grows upward, so what you really want to do is detect that
collision.  brk and sbrk are used to manipulate the end of the heap.  A
local variable in the current scope should be able to tell you roughly where
the top of stack is.

Of course, you really can't call brk or sbrk safely.  You have to leave that
to malloc.  You might get some ideas of how to estimate the current end of
the heap by peering at the GNU malloc code.

This might be a good reason to experiment with Vladimir's obmalloc package.
It could easily be modified to remember the largest machine address it
returns via malloc or realloc calls.  That value could be compared with the
current top of stack.  If obmalloc brks() memory back to the system (I've
never looked at it - I'm just guessing) it could lower the saved value to
the last address in the block below the just recycled block.

(I agree this probably won't be done very well before 2.0 release.)

Skip


From tim_one@email.msn.com  Tue Aug 29 21:14:16 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 29 Aug 2000 16:14:16 -0400
Subject: [Python-Dev] SETL (was: Lukewarm about range literals)
In-Reply-To: <200008291253.HAA32332@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEMEHCAA.tim_one@email.msn.com>

[Tim]
> ...
> Have always wondered why Python didn't have that [ABC's boolean
> quatifiers] too; I ask that every year, but so far Guido has never
> answered it <wink>.

[Guido]
> I don't recall you asking me that even *once* before now.  Proof,
> please?

That's too time-consuming until DejaNews regains its memory.  I never asked
*directly*, it simply comes up at least once a year on c.l.py (and has since
the old days!), and then I always mention that it comes up every year but
that Guido never jumps into those threads <wink>.  The oldest reference I
can find in DejaNews today is just from January 1st of this year, at the end
of

    http://www.deja.com/getdoc.xp?AN=567219971

There it got mentioned offhandedly.  Much earlier threads were near-PEP'ish
in their development of how this could work in Python.  I'll attach the
earliest one I have in my personal email archive, from a bit over 4 years
ago.  All my personal email much before that got lost in KSR's bankruptcy
bit bucket.

> Anyway, the answer is that I saw diminishing returns from adding more
> keywords and syntax.

Yes, I've channeled that too -- that's why I never bugged you directly
<wink>.



-----Original Message-----
From: python-list-request@cwi.nl [mailto:python-list-request@cwi.nl]
Sent: Saturday, August 03, 1996 4:42 PM
To: Marc-Andre Lemburg; python-list@cwi.nl
Subject: RE: \exists and \forall in Python ?!


> [Marc-Andre Lemburg]
> ... [suggesting "\exists" & "\forall" quantifiers] ...

Python took several ideas from CWI's ABC language, and this is one that
didn't make the cut.  I'd be interested to hear Guido's thoughts on this!
They're certainly very nice to have, although I wouldn't say they're of
core importance.  But then a lot of "nice to have but hardly crucial"
features did survive the cut (like, e.g., "x < y < z" as shorthand for
"x < y and y < z"), and it's never clear where to draw the line.

In ABC, the additional keywords were "some", "each", "no" and "has", as in
(importing the ABC semantics into a virtual Python):

if some d in range(2,n) has n % d == 0:
    print n, "not prime; it's divisible by", d
else:
    print n, "is prime"

or

if no d in range(2,n) has n % d == 0:
    print n, "is prime"
else:
    print n, "not prime; it's divisible by", d

or

if each d in range(2,n) has n % d == 0:
    print n, "is <= 2; test vacuously true"
else:
    print n, "is not divisible by, e.g.,", d

So "some" is a friendly spelling of "there exists", "no" of "not there
exists", and "each" of "for all".  In addition to testing the condition,
"some" also bound the test vrbls to the first witness if there was one,
and "no" and "each" to the first counterexample if there was one.  I think
ABC got that all exactly right, so (a) it's the right model to follow if
Python were to add this, and (b) the (very useful!) business of binding
the test vrbls if & only if the test succeeds (for "some") or fails (for
"no" and "each") makes it much harder to fake (comprehensibly &
efficiently) via map & reduce tricks.

side-effects-are-your-friends-ly y'rs  - tim

Tim Peters    tim_one@msn.com, tim@dragonsys.com
not speaking for Dragon Systems Inc.




From skip@mojam.com  Tue Aug 29 21:17:10 2000
From: skip@mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 15:17:10 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000829220226.P500@xs4all.nl>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
 <39AC0E7C.922536AA@lemburg.com>
 <14764.4545.972459.760991@bitdiddle.concentric.net>
 <39AC1210.18703B0B@lemburg.com>
 <14764.5248.979275.341242@anthem.concentric.net>
 <20000829220226.P500@xs4all.nl>
Message-ID: <14764.6726.985174.85964@beluga.mojam.com>

    Thomas> Nah, as long as you can test how many recursions it would take
    Thomas> to run out of stack... But it's still not optimal: we're doing a
    Thomas> check at compiletime (or rather, configure-time) on a limit
    Thomas> which can change during the course of a single process,
    Thomas> nevermind a single installation ;P And I don't really like doing
    Thomas> a configure test that's just a program that tries to run out of
    Thomas> memory... it might turn out troublesome for systems with decent
    Thomas> sized stacks.

Not to mention which you'll get different responses depending on how heavily
the system is using VM, right?  If you are unlucky enough to build on a
memory-rich system then copy the python interpreter over to a memory-starved
system (or just run the interpreter while you have Emacs, StarOffice and
Netscape running), you may well run out of virtual memory a lot sooner than
your configure script thought.

Skip


From skip@mojam.com  Tue Aug 29 21:19:16 2000
From: skip@mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 15:19:16 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AC189A.95846E0@lemburg.com>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
 <39AC0E7C.922536AA@lemburg.com>
 <14764.4545.972459.760991@bitdiddle.concentric.net>
 <39AC1210.18703B0B@lemburg.com>
 <14764.5248.979275.341242@anthem.concentric.net>
 <39AC189A.95846E0@lemburg.com>
Message-ID: <14764.6852.672716.587046@beluga.mojam.com>

    MAL> The C program can suck memory in large chunks and consume great
    MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
    MAL> doing wrong here).

Are you overwriting all that memory you malloc with random junk?  If not,
the stack and the heap may have collided but not corrupted each other.

Skip


From ping@lfw.org  Tue Aug 29 21:43:23 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:43:23 -0500 (CDT)
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <m13Tf69-000wcDC@swing.co.at>
Message-ID: <Pine.LNX.4.10.10008291524310.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Christian Tanzer wrote:
> Triple quoted strings work -- that's what I'm constantly using. The
> downside is, that the docstrings either contain spurious white space
> or it messes up the layout of the code (if you start subsequent lines
> in the first column).

The "inspect" module (see http://www.lfw.org/python/) handles this nicely.

    Python 1.5.2 (#4, Jul 21 2000, 18:28:23) [C] on sunos5
    Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
    >>> import inspect
    >>> class Foo:
    ...     """First line.
    ...        Second line.
    ...            An indented line.
    ...        Some more text."""
    ... 
    >>> inspect.getdoc(Foo)
    'First line.\012Second line.\012    An indented line.\012Some more text.'
    >>> print _
    First line.
    Second line.
        An indented line.
    Some more text.
    >>>        

I suggested "inspect.py" for the standard library quite some time ago
(long before the feature freeze, and before ascii.py, which has since
made it in).  MAL responded pretty enthusiastically
(http://www.python.org/pipermail/python-dev/2000-July/013511.html).
Could i request a little more feedback from others?

It's also quite handy for other purposes.  It can get the source
code for a given function or class:

    >>> func = inspect.getdoc
    >>> inspect.getdoc(func)
    'Get the documentation string for an object.'
    >>> inspect.getfile(func)
    'inspect.py'
    >>> lines, lineno = inspect.getsource(func)
    >>> print string.join(lines)
    def getdoc(object):
         """Get the documentation string for an object."""
         if not hasattr(object, "__doc__"):
             raise TypeError, "arg has no __doc__ attribute"
         if object.__doc__:
             lines = string.split(string.expandtabs(object.__doc__), "\n")
             margin = None
             for line in lines[1:]:
                 content = len(string.lstrip(line))
                 if not content: continue
                 indent = len(line) - content
                 if margin is None: margin = indent
                 else: margin = min(margin, indent)
             if margin is not None:
                 for i in range(1, len(lines)): lines[i] = lines[i][margin:]
             return string.join(lines, "\n")

And it can get the argument spec for a function:

    >>> inspect.getargspec(func)
    (['object'], None, None, None)
    >>> apply(inspect.formatargspec, _)
    '(object)'

Here's a slightly more challenging example:

    >>> def func(a, (b, c), (d, (e, f), (g,)), h=3): pass
    ... 
    >>> inspect.getargspec(func)
    (['a', ['b', 'c'], ['d', ['e', 'f'], ['g']], 'h'], None, None, (3,))
    >>> apply(inspect.formatargspec, _)
    '(a, (b, c), (d, (e, f), (g,)), h=3)'
    >>> 



-- ?!ng



From cgw@fnal.gov  Tue Aug 29 21:22:03 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 15:22:03 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6265.460762.479910@cj42289-a.reston1.va.home.com>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
 <39AC0E7C.922536AA@lemburg.com>
 <14764.4545.972459.760991@bitdiddle.concentric.net>
 <39AC1210.18703B0B@lemburg.com>
 <14764.5248.979275.341242@anthem.concentric.net>
 <20000829220226.P500@xs4all.nl>
 <14764.6265.460762.479910@cj42289-a.reston1.va.home.com>
Message-ID: <14764.7019.100780.127130@buffalo.fnal.gov>

The situation on Linux is damn annoying, because, from a few minutes
of rummaging around in the kernel it's clear that this information
*is* available to the kernel, just not exposed to the user in a useful
way.  The file /proc/<pid>/statm [1] gives as field 5 "drs", which is
"number of pages of data/stack".  If only the data and stack weren't
lumped together in this number, we could actually do something with
it!

[1]: Present on Linux 2.2 only.  See /usr/src/linux/Documentation/proc.txt
for description of this (fairly obscure) file.



From mal@lemburg.com  Tue Aug 29 21:24:00 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 22:24:00 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
 <39AC0E7C.922536AA@lemburg.com>
 <14764.4545.972459.760991@bitdiddle.concentric.net>
 <39AC1210.18703B0B@lemburg.com>
 <14764.5248.979275.341242@anthem.concentric.net>
 <39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com>
Message-ID: <39AC1BE0.FFAA9100@lemburg.com>

Skip Montanaro wrote:
> 
>     MAL> The C program can suck memory in large chunks and consume great
>     MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
>     MAL> doing wrong here).
> 
> Are you overwriting all that memory you malloc with random junk?  If not,
> the stack and the heap may have collided but not corrupted each other.

Not random junk, but all 1s:

#include <stdio.h>
#include <string.h>

int recurse(int depth)
{
    char buffer[2048];
    memset(buffer, 1, sizeof(buffer));

    /* Call recursively */
    printf("%d\n", depth);
    return recurse(depth + 1);
}

int main(void)
{
    recurse(0);
    return 0;
}

Perhaps I need to go up a bit on the stack to trigger the
collision (i.e. go down two levels, then up one, etc.) ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From ping@lfw.org  Tue Aug 29 21:49:28 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:49:28 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14764.6028.121193.410374@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008291545410.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Fred L. Drake, Jr. wrote:
> Ka-Ping Yee writes:
>  >     ValueError: too many elements on left side of ".." operator
> ...
>  >     ValueError: at most two elements permitted on left side of ".."
> 
>   Looks like a SyntaxError to me.  ;)

I would have called "\xgh" a SyntaxError too, but Guido argued
convincingly that it's consistently ValueError for bad literals.
So i'm sticking with that.  See the thread of replies to

    http://www.python.org/pipermail/python-dev/2000-August/014629.html


-- ?!ng



From thomas@xs4all.net  Tue Aug 29 21:26:53 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:26:53 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6852.672716.587046@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 29, 2000 at 03:19:16PM -0500
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net> <39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net> <39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com>
Message-ID: <20000829222653.R500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:19:16PM -0500, Skip Montanaro wrote:

>     MAL> The C program can suck memory in large chunks and consume great
>     MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
>     MAL> doing wrong here).

Are you sure you are consuming *stack* ?

> Are you overwriting all that memory you malloc with random junk?  If not,
> the stack and the heap may have collided but not corrupted each other.

malloc() does not consume stackspace, it consumes heapspace. Don't bother
using malloc(). You have to allocate huge tracks o' land in 'automatic'
variables, or use alloca() (which isn't portable.) Depending on your arch,
you might need to actually write to every, ooh, 1024th int or so.

{
    int *spam[0x2000];
	(etc)
}

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Tue Aug 29 21:26:43 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Tue, 29 Aug 2000 16:26:43 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008291545410.302-100000@server1.lfw.org>
References: <14764.6028.121193.410374@cj42289-a.reston1.va.home.com>
 <Pine.LNX.4.10.10008291545410.302-100000@server1.lfw.org>
Message-ID: <14764.7299.991437.132621@cj42289-a.reston1.va.home.com>

Ka-Ping Yee writes:
 > I would have called "\xgh" a SyntaxError too, but Guido argued
 > convincingly that it's consistently ValueError for bad literals.

  I understand the idea about bad literals.  I don't think that's what
this is.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From effbot@telia.com  Tue Aug 29 21:40:16 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 22:40:16 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>	<39AC0E7C.922536AA@lemburg.com>	<14764.4545.972459.760991@bitdiddle.concentric.net>	<39AC1210.18703B0B@lemburg.com>	<14764.5248.979275.341242@anthem.concentric.net>	<39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com> <39AC1BE0.FFAA9100@lemburg.com>
Message-ID: <00eb01c011f9$59d47a80$766940d5@hagrid>

mal wrote:
> int recurse(int depth)
> {
>     char buffer[2048];
>     memset(buffer, 1, sizeof(buffer));
> 
>     /* Call recursively */
>     printf("%d\n",depth);
>     recurse(depth + 1);
> }
> 
> main()
> {
>     recurse(0);
> }
> 
> Perhaps I need to go up a bit on the stack to trigger the
> collision (i.e. go down two levels, then up one, etc.) ?!

or maybe the optimizer removed your buffer variable?

try printing the buffer address, to see how much memory
you're really consuming here.

     printf("%p %d\n", buffer, depth);

</F>



From thomas@xs4all.net  Tue Aug 29 21:31:08 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:31:08 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6473.859814.216436@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 29, 2000 at 03:12:57PM -0500
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <14764.6473.859814.216436@beluga.mojam.com>
Message-ID: <20000829223108.S500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:12:57PM -0500, Skip Montanaro wrote:

> On most (all?) processors in common usage, the stack grows down toward the
> heap and the heap grows upward, so what you really want to do is detect that
> collision.  brk and sbrk are used to manipulate the end of the heap.  A
> local variable in the current scope should be able to tell you roughly where
> the top of stack is.

I don't think that'll help, because the limit isn't the actual (physical)
memory limit, but mostly just administrative limits. 'limit' or 'limits',
depending on your shell.

> current top of stack.  If obmalloc brks() memory back to the system (I've
> never looked at it - I'm just guessing) it could lower the saved value to
> the last address in the block below the just recycled block.

Last I looked, obmalloc() worked on top of the normal system malloc (or its
replacement if you provide one) and doesn't brk/sbrk itself (thank god --
that would mean nastiness if extension modules or such used malloc, or if
python were embedded into a system using malloc!)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal@lemburg.com  Tue Aug 29 21:39:18 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 22:39:18 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>	<39AC0E7C.922536AA@lemburg.com>	<14764.4545.972459.760991@bitdiddle.concentric.net>	<39AC1210.18703B0B@lemburg.com>	<14764.5248.979275.341242@anthem.concentric.net>	<39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com> <39AC1BE0.FFAA9100@lemburg.com> <00eb01c011f9$59d47a80$766940d5@hagrid>
Message-ID: <39AC1F76.41CCED9@lemburg.com>

Fredrik Lundh wrote:
> 
> ...
> 
> or maybe the optimizer removed your buffer variable?
> 
> try printing the buffer address, to see how much memory
> you're really consuming here.
> 
>      printf("%p %d\n", buffer, depth);

I got some more insight using:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int checkstack(int depth)
{
    if (depth <= 0)
	return 0;
    return checkstack(depth - 1);
}

int recurse(int depth)
{
    char stack[2048];
    char *heap;
    
    memset(stack, depth % 256, sizeof(stack));
    heap = (char*) malloc(2048);

    /* Call recursively */
    printf("stack %p heap %p depth %d\n", stack, heap, depth);
    checkstack(depth);
    recurse(depth + 1);
    return 0;
}

int main(void)
{
    recurse(0);
    return 0;
}

This prints lines like these:
stack 0xbed4b118 heap 0x92a1cb8 depth 9356

... or in other words over 3GB of space between the stack and
the heap. No wonder I'm not seeing any core dumps.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From tim_one@email.msn.com  Tue Aug 29 21:44:18 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 29 Aug 2000 16:44:18 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52981.603640.415652@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEMIHCAA.tim_one@email.msn.com>

[Skip Montanaro]
> One of the original arguments for range literals as I recall was that
> indexing of loops could get more efficient.  The compiler would know
> that [0:100:2] represents a series of integers and could conceivably
> generate more efficient loop indexing code (and so could Python2C and
> other compilers that generated C code).  This argument doesn't seem to
> be showing up here at all.  Does it carry no weight in the face of the
> relative inscrutability of the syntax?

It carries no weight at all *for 2.0* because the patch didn't exploit the
efficiency possibilities.

Which I expect are highly overrated (maybe 3% in a "good case" real-life
loop) anyway.  Even if they aren't, the same argument would apply to any
other new syntax for this too, so in no case is it an argument in favor of
this specific new syntax over alternative new syntaxes.

There are also well-known ways to optimize the current "range" exactly the
way Python works today; e.g., compile two versions of the loop, one assuming
range is the builtin, the other assuming it may be anything, then a quick
runtime test to jump to the right one.  Guido hates that idea just because
it's despicable <wink>, but that's the kind of stuff optimizing compilers
*do*, and if we're going to get excited about efficiency then we need to
consider *all sorts of* despicable tricks like that.
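The guard-plus-two-loop-bodies trick can be sketched at source level in present-day Python (the real optimization would of course live in the compiler, at bytecode level, and the module here called `builtins` was spelled `__builtin__` back then -- this is only an illustration):

```python
import builtins

def sum_range(n):
    # Quick runtime test: is the name "range" still bound to the builtin?
    if range is builtins.range:
        # "Fast" version: safe to assume range(n) yields 0, 1, ..., n-1,
        # so specialized loop code could be emitted here.
        total = 0
        for i in range(n):
            total += i
        return total
    # Generic version: "range" was shadowed, call it like any function.
    return sum(x for x in range(n))
```

Shadowing `range` at module level would send calls down the generic path; the identity check is cheap compared with re-verifying the builtin on every iteration.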

In any case, I've spent 5 hours straight now digging thru Python email, have
more backed up than when I started, and have gotten nothing done today
toward moving 2.0b1 along.  I'd love to talk more about all this, but there
simply isn't the time for it now ...




From cgw@fnal.gov  Tue Aug 29 22:05:21 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 16:05:21 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.1057.909517.977904@bitdiddle.concentric.net>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <14764.9617.203071.639126@buffalo.fnal.gov>

Jeremy Hylton writes:
 > Does anyone have suggestions for how to detect unbounded recursion in
 > the Python core on Unix platforms?

Hey, check this out! - it's not portable in general, but it works for Linux,
which certainly covers a large number of the systems out there in the world.

#!/usr/bin/env python

def getstack():
    for l in open("/proc/self/status").readlines():
        if l.startswith('VmStk'):
            t = l.split()
            return 1024 * int(t[1])


def f():
    print getstack()
    f()

f()


I'm working up a version of this in C; you can do a "getrlimit" to
find the maximum stack size, then read /proc/self/status to get
current stack usage, and compare these values.

As far as people using systems that have a broken getrusage and also
no /proc niftiness, well... get yourself a real operating system ;-)






From effbot@telia.com  Tue Aug 29 22:03:43 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 23:03:43 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com> <39ABF1B8.426B7A6@lemburg.com>
Message-ID: <012d01c011fe$31d23900$766940d5@hagrid>

mal wrote:
> Here is a pre-release version of mx.DateTime which should fix
> the problem (the new release will use the top-level mx package
> -- it does contain a backward compatibility hack though):
>
> http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
>
> Please let me know if it fixes your problem... I don't use PyApache.

mal, can you update the bug database.  this bug is still listed
as an open bug in the python core...

http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470

</F>



From mal@lemburg.com  Tue Aug 29 22:12:45 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 23:12:45 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com> <39ABF1B8.426B7A6@lemburg.com> <012d01c011fe$31d23900$766940d5@hagrid>
Message-ID: <39AC274D.AD9856C7@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > Here is a pre-release version of mx.DateTime which should fix
> > the problem (the new release will use the top-level mx package
> > -- it does contain a backward compatibility hack though):
> >
> > http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
> >
> > Please let me know if it fixes your problem... I don't use PyApache.
> 
> mal, can you update the bug database.  this bug is still listed
> as an open bug in the python core...
> 
> http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470

Hmm, I thought I had already closed it... done.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From barry@scottb.demon.co.uk  Tue Aug 29 22:21:04 2000
From: barry@scottb.demon.co.uk (Barry Scott)
Date: Tue, 29 Aug 2000 22:21:04 +0100
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AC1F76.41CCED9@lemburg.com>
Message-ID: <000e01c011ff$09e3ca70$060210ac@private>

Use the problem as the solution.

The problem is that you get a SIGSEGV after you fall off the end of the stack.
(I'm assuming you always have guard pages between the stack end and other memory
zones. Otherwise you will not get the SEGV).

If you probe ahead of the stack pointer to trigger the SIGSEGV, you can use the
signal handler to trap the probe and recover gracefully. Use POSIX signal
handling everywhere for portability (and don't mix POSIX and non-POSIX signal
handling and expect signals to work, BTW).

#include <setjmp.h>

jmp_buf probe_env;

int CheckStack(void)	/* untested */
	{
	if( setjmp( probe_env ) == 0 )
		{
		char buf[32];
		/* need code to deal with direction of stack */
		if( grow_down )
			buf[-65536] = 1;
		else
			buf[65536] = 1;
		return 1; /* stack is fine for 64k */
		}
	else
		{
		return 0; /* will run out of stack soon */
		}
	}

void sigsegv_handler( int sig )
	{
	longjmp( probe_env, 1 );
	}

			Barry (not just a Windows devo <wink>)




From thomas@xs4all.net  Tue Aug 29 22:43:29 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 23:43:29 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.9617.203071.639126@buffalo.fnal.gov>; from cgw@fnal.gov on Tue, Aug 29, 2000 at 04:05:21PM -0500
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <14764.9617.203071.639126@buffalo.fnal.gov>
Message-ID: <20000829234329.T500@xs4all.nl>

On Tue, Aug 29, 2000 at 04:05:21PM -0500, Charles G Waldman wrote:

> Jeremy Hylton writes:
>  > Does anyone have suggestions for how to detect unbounded recursion in
>  > the Python core on Unix platforms?

> Hey, check this out! - it's not portable in general, but it works for Linux,
> which certainly covers a large number of the systems out there in the world.

'large' in terms of "number of instances", perhaps, but not very large in
terms of total number of operating system types/versions, I think. I know of
two operating systems that implement that info in /proc (FreeBSD and Linux)
and one where it's optional (but default off and probably untested: BSDI.) I
also think that this is a very costly thing to do every ten (or even every
hundred) recursions.... I would go for the auto-vrbl-address-check, in
combination with either a fixed stack limit, or getrlimit() - which does
seem to work. Or perhaps the alloca() check for systems that have it (which
can be checked) and seems to work properly (which can be checked, too, but
not as reliably.)

The vrbl-address check only does a few integer calculations, and we can
forgo the getrlimit() call if we do it somewhere during Python init, and
after every call of resource.setrlimit(). (Or just do it anyway: it's
probably not *that* expensive, and if we don't do it each time, we can still
run into trouble if another extension module sets limits, or if Python is
embedded in something that changes limits on the fly.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Wed Aug 30 00:10:25 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 01:10:25 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <Pine.LNX.4.10.10008291524310.302-100000@server1.lfw.org>; from ping@lfw.org on Tue, Aug 29, 2000 at 03:43:23PM -0500
References: <m13Tf69-000wcDC@swing.co.at> <Pine.LNX.4.10.10008291524310.302-100000@server1.lfw.org>
Message-ID: <20000830011025.V500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:43:23PM -0500, Ka-Ping Yee wrote:

> The "inspect" module (see http://www.lfw.org/python/) handles this nicely.

[snip example]

> I suggested "inspect.py" for the standard library quite some time ago
> (long before the feature freeze, and before ascii.py, which has since
> made it in).  MAL responded pretty enthusiastically
> (http://www.python.org/pipermail/python-dev/2000-July/013511.html).
> Could i request a little more feedback from others?

Looks fine to me, would fit nicely in with the other introspective things we
already have (dis, profile, etc) -- but wasn't it going to be added to the
'help' (or what was it) stdlib module ?

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From greg@cosc.canterbury.ac.nz  Wed Aug 30 03:11:41 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 30 Aug 2000 14:11:41 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52415.747655.334938@beluga.mojam.com>
Message-ID: <200008300211.OAA17125@s454.cosc.canterbury.ac.nz>

> I doubt there would be much
> problem adding ".." as a token either.

If we're going to use any sort of ellipsis syntax here, I
think it would be highly preferable to use the ellipsis
token we've already got. I can't see any justification for
having two different ellipsis-like tokens in the language,
when there would be no ambiguity in using one for both
purposes.

> What we really want I think is something that evokes the following in the
> mind of the reader
> 
>     for i from START to END incrementing by STEP:

Am I right in thinking that the main motivation here is
to clean up the "for i in range(len(a))" idiom? If so,
what's wrong with a built-in:

  def indices(a):
    return range(len(a))
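Exercised in today's Python for illustration (hypothetical usage; the list() call is needed only because range() is now lazy):

```python
def indices(a):
    return range(len(a))

a = ["spam", "ham", "eggs"]
assert list(indices(a)) == [0, 1, 2]

# the idiom it replaces: "for i in range(len(a)):"
for i in indices(a):
    a[i] = a[i].upper()
assert a == ["SPAM", "HAM", "EGGS"]
```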

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From skip@mojam.com  Wed Aug 30 04:43:57 2000
From: skip@mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 22:43:57 -0500 (CDT)
Subject: [Python-Dev] MacPython 2.0?
Message-ID: <14764.33533.218103.763531@beluga.mojam.com>

Has Jack or anyone else been building Mac versions of 2.0 and making them
available somewhere?  I seem to have fallen off the MacPython list and
haven't taken the time to investigate (perhaps I set subscription to NOMAIL
and forgot that crucial point).  I have no compilation tools on my Mac, so
while I'd like to try testing things a little bit there, I am entirely
dependent on others to provide me with something runnable.

Thx,

Skip


From tanzer@swing.co.at  Wed Aug 30 07:23:08 2000
From: tanzer@swing.co.at (Christian Tanzer)
Date: Wed, 30 Aug 2000 08:23:08 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: Your message of "Tue, 29 Aug 2000 11:41:15 +0200."
 <39AB853B.217402A2@lemburg.com>
Message-ID: <m13U1HA-000wcEC@swing.co.at>

"M.-A. Lemburg" <mal@lemburg.com> wrote:

> > Triple quoted strings work -- that's what I'm constantly using. The
> > downside is, that the docstrings either contain spurious white space
> > or it messes up the layout of the code (if you start subsequent lines
> > in the first column).
>

> Just a question of how smart you doc string extraction
> tools are. Have a look at hack.py:

Come on. There are probably hundreds of hacks around to massage
docstrings. I've written one myself. Ka-Ping Yee suggested
inspect.py...

My point was that in such cases it is much better if the language does
it than if everybody does his own kludge. If a change of the Python
parser concerning this point is out of the question, why not have a
standard module providing this functionality (Ka-Ping Yee offered one
<nudge>, <nudge>).

Regards,
Christian

-- 
Christian Tanzer                                         tanzer@swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92



From mal@lemburg.com  Wed Aug 30 09:35:00 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 30 Aug 2000 10:35:00 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13U1HA-000wcEC@swing.co.at>
Message-ID: <39ACC734.6F436894@lemburg.com>

Christian Tanzer wrote:
> 
> "M.-A. Lemburg" <mal@lemburg.com> wrote:
> 
> > > Triple quoted strings work -- that's what I'm constantly using. The
> > > downside is, that the docstrings either contain spurious white space
> > > or it messes up the layout of the code (if you start subsequent lines
> > > in the first column).
> >
> > Just a question of how smart you doc string extraction
> > tools are. Have a look at hack.py:
> 
> Come on. There are probably hundreds of hacks around to massage
> docstrings. I've written one myself. Ka-Ping Yee suggested
> inspect.py...

That's the point I wanted to make: there's no need to care much
about """-string formatting while writing them as long as you have
tools which do it for you at extraction time.
 
> My point was that in such cases it is much better if the language does
> it than if everybody does his own kludge. If a change of the Python
> parser concerning this point is out of the question, why not have a
> standard module providing this functionality (Ka-Ping Yee offered one
> <nudge>, <nudge>).

Would be a nice addition for Python's stdlib, yes. Maybe for 2.1,
since we are in feature freeze for 2.0...

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From pf@artcom-gmbh.de  Wed Aug 30 09:39:39 2000
From: pf@artcom-gmbh.de (Peter Funk)
Date: Wed, 30 Aug 2000 10:39:39 +0200 (MEST)
Subject: Memory overcommitment and guessing about stack size (was Re: [Python-Dev] stack check on Unix: any suggestions?)
In-Reply-To: <39AC1BE0.FFAA9100@lemburg.com> from "M.-A. Lemburg" at "Aug 29, 2000 10:24: 0 pm"
Message-ID: <m13U3PH-000Dm9C@artcom0.artcom-gmbh.de>


Hi,

Any attempt to *reliably* predict the amount of virtual memory (stack+heap)
available to a process is *DOOMED TO FAIL* by principle on any unixoid
system.

Some of you might have missed all those repeated threads about virtual memory
allocation and the overcommitment strategy in the various Linux groups.  

M.-A. Lemburg:
> Skip Montanaro wrote:
> > 
> >     MAL> The C program can suck memory in large chunks and consume great
> >     MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
> >     MAL> doing wrong here).
> > 
> > Are you overwriting all that memory you malloc with random junk?  If not,
> > the stack and the heap may have collided but not corrupted each other.
> 
> Not random junk, but all 1s:
[...]

For anyone interested in more details, I attach an email written by
Linus Torvalds in the thread 'Re: Linux is 'creating' memory ?!'
on 'comp.os.linux.development.apps' on Mar 20th 1995, since I was
unable to locate this article on Deja (you know).

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)

From martin@loewis.home.cs.tu-berlin.de  Wed Aug 30 10:12:58 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 30 Aug 2000 11:12:58 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>

> Does anyone have suggestions for how to detect unbounded recursion
> in the Python core on Unix platforms?

I just submitted patch 101352, at

http://sourceforge.net/patch/?func=detailpatch&patch_id=101352&group_id=5470

This patch works on the realistic assumption that reliable stack-usage
information is not available through getrusage on most systems, so it
uses an estimate instead. The upper stack boundary is determined on
thread creation; the lower stack boundary inside the check. This must
allow for initial stack frames (main, _entry, etc.), and for pages
allocated on the stack by the system. At least on Linux, argv and env
pages count towards the stack limit.

If some systems are known to return good results from getrusage, that
should be used instead.

I have tested this patch on a Linux box to detect recursion in both
the example of bug 112943, as well as the foo() recursion; the latter
would crash with stock CVS python only when I reduced the stack limit
from 8MB to 1MB.

Since the patch uses a heuristic to determine stack exhaustion, it is
probably possible to find cases where it does not work. I.e. it might
diagnose exhaustion, where it could run somewhat longer (rather,
deeper), or it fails to diagnose exhaustion when it is really out of
stack. It is also likely that there are better heuristics. Overall, I
believe this patch is an improvement.

While this patch claims to support all of Unix, it only works where
getrlimit(RLIMIT_STACK) works. Unix(tm) does guarantee this API; it
should work on *BSD and many other Unices as well.

Comments?

Martin


From mal@lemburg.com  Wed Aug 30 10:56:31 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 30 Aug 2000 11:56:31 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
Message-ID: <39ACDA4F.3EF72655@lemburg.com>

"Martin v. Loewis" wrote:
> 
> > Does anyone have suggestions for how to detect unbounded recursion
> > in the Python core on Unix platforms?
> 
> I just submitted patch 101352, at
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101352&group_id=5470
> 
> This patch works on the realistic assumption that reliable stack-usage
> information is not available through getrusage on most systems, so it
> uses an estimate instead. The upper stack boundary is determined on
> thread creation; the lower stack boundary inside the check. This must
> allow for initial stack frames (main, _entry, etc.), and for pages
> allocated on the stack by the system. At least on Linux, argv and env
> pages count towards the stack limit.
> 
> If some systems are known to return good results from getrusage, that
> should be used instead.
> 
> I have tested this patch on a Linux box to detect recursion in both
> the example of bug 112943, as well as the foo() recursion; the latter
> would crash with stock CVS python only when I reduced the stack limit
> from 8MB to 1MB.
> 
> Since the patch uses a heuristic to determine stack exhaustion, it is
> probably possible to find cases where it does not work. I.e. it might
> diagnose exhaustion, where it could run somewhat longer (rather,
> deeper), or it fails to diagnose exhaustion when it is really out of
> stack. It is also likely that there are better heuristics. Overall, I
> believe this patch is an improvement.
> 
> While this patch claims to support all of Unix, it only works where
> getrlimit(RLIMIT_STACK) works. Unix(tm) does guarantee this API; it
> should work on *BSD and many other Unices as well.
> 
> Comments?

See my comments in the patch manager... the patch looks fine
except for two things: getrlimit() should be tested for
usability in the configure script and the call frequency
of PyOS_CheckStack() should be lowered to only use it for
potentially recursive programs.

Apart from that, this looks like the best alternative so far :-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From nowonder@nowonder.de  Wed Aug 30 12:58:51 2000
From: nowonder@nowonder.de (Peter Schneider-Kamp)
Date: Wed, 30 Aug 2000 11:58:51 +0000
Subject: [Python-Dev] Lukewarm about range literals
References: <200008300211.OAA17125@s454.cosc.canterbury.ac.nz>
Message-ID: <39ACF6FB.66BAB739@nowonder.de>

Greg Ewing wrote:
> 
> Am I right in thinking that the main motivation here is
> to clean up the "for i in range(len(a))" idiom? If so,
> what's wrong with a built-in:
> 
>   def indices(a):
>     return range(len(a))

As far as I know adding a builtin indices() has been
rejected as an idea.
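
For reference, the rejected helper really is a one-liner; a sketch of
it and the idiom it was meant to replace:

```python
def indices(a):
    # The proposed builtin: the index sequence of any sized container.
    return range(len(a))

a = ["spam", "eggs", "ham"]
for i in indices(a):          # instead of: for i in range(len(a)):
    a[i] = a[i].upper()
```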

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter@schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de


From effbot@telia.com  Wed Aug 30 11:27:12 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Wed, 30 Aug 2000 12:27:12 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com>
Message-ID: <000d01c0126c$dfe700c0$766940d5@hagrid>

mal wrote:
> See my comments in the patch manager... the patch looks fine
> except for two things: getrlimit() should be tested for
> usability in the configure script and the call frequency
> of PyOS_CheckStack() should be lowered to only use it for
> potentially recursive programs.

the latter would break windows and mac versions of Python,
where Python can run on very small stacks (not to mention
embedded systems...)

for those platforms, CheckStack is designed to work with an
8k safety margin (PYOS_STACK_MARGIN)

:::

one way to address this is to introduce a scale factor, so that
you can add checks based on the default 8k limit, but automagically
apply them less often on platforms where the safety
margin is much larger...

/* checkstack, but with a "scale" factor */
#if windows or mac
/* default safety margin */
#define PYOS_CHECKSTACK(v, n)\
    (((v) % (n) == 0) && PyOS_CheckStack())
#elif linux
/* at least 10 times the default safety margin */
#define PYOS_CHECKSTACK(v, n)\
    (((v) % ((n)*10) == 0) && PyOS_CheckStack())
#endif

 if (PYOS_CHECKSTACK(tstate->recursion_depth, 10))
    ...
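
The throttling idea behind that macro, restated as a plain Python
function for illustration (the platform names and scale factors here
are made up, not taken from the patch):

```python
# Probe the stack less often on platforms with a large safety margin.
STACK_CHECK_SCALE = {"win32": 1, "mac": 1, "linux": 10}

def should_check_stack(depth, interval, platform="linux"):
    """True when a stack probe should run at this recursion depth."""
    scale = STACK_CHECK_SCALE.get(platform, 1)
    return depth % (interval * scale) == 0
```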

</F>



From mal@lemburg.com  Wed Aug 30 11:42:39 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 30 Aug 2000 12:42:39 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid>
Message-ID: <39ACE51F.3AEC75AB@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > See my comments in the patch manager... the patch looks fine
> > except for two things: getrlimit() should be tested for
> > usability in the configure script and the call frequency
> > of PyOS_CheckStack() should be lowered to only use it for
> > potentially recursive programs.
> 
> the latter would break windows and mac versions of Python,
> where Python can run on very small stacks (not to mention
> embedded systems...)
> 
> for those platforms, CheckStack is designed to work with an
> 8k safety margin (PYOS_STACK_MARGIN)

Ok, I don't mind calling it every ten levels deep, but I'd
rather not have it start at level 0. The reason is
that many programs probably don't make much use of
recursion anyway and have a maximum call depth of around
10-50 levels (Python programs usually use shallow class hierarchies).
These programs should not be bothered by calling PyOS_CheckStack()
all the time. Recursive programs will easily reach the 100 mark -- 
those should call PyOS_CheckStack often enough to notice the 
stack problems.

So the check would look something like this:

if (tstate->recursion_depth >= 50 &&
    tstate->recursion_depth % 10 == 0 &&
    PyOS_CheckStack()) {
        PyErr_SetString(PyExc_MemoryError, "Stack overflow");
        return NULL;
}

> :::
> 
> one way to address this is to introduce a scale factor, so that
> you can add checks based on the default 8k limit, but automagically
> apply them less often on platforms where the safety
> margin is much larger...
> 
> /* checkstack, but with a "scale" factor */
> #if windows or mac
> /* default safety margin */
> #define PYOS_CHECKSTACK(v, n)\
>     (((v) % (n) == 0) && PyOS_CheckStack())
> #elif linux
> /* at least 10 times the default safety margin */
> #define PYOS_CHECKSTACK(v, n)\
>     (((v) % ((n)*10) == 0) && PyOS_CheckStack())
> #endif
> 
>  if (PYOS_CHECKSTACK(tstate->recursion_depth, 10))
>     ...

I'm not exactly sure how large the safety margin is with
Martin's patch, but this seems a good idea.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From Moshe Zadka <moshez@math.huji.ac.il>  Wed Aug 30 11:49:59 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Wed, 30 Aug 2000 13:49:59 +0300 (IDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6265.460762.479910@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008301348150.2545-100000@sundial>

On Tue, 29 Aug 2000, Fred L. Drake, Jr. wrote:

> 
> Thomas Wouters writes:
>  > (getrlimit *does* work, so if we have getrlimit, we can 'calculate' the
>  > maximum number of recursions from that.)
> 
>   Still no go -- we can calculate the number of recursions for a
> particular call frame size (or expected mix of frame sizes, which is
> really the same), but we can't predict recursive behavior inside a C
> extension, which is a significant part of the problem (witness the SRE
> experience).  That's why PyOS_StackCheck() actually has to do more
> than test a counter -- if the counter is low but the call frames are
> larger than our estimate, it won't help.

Can my trick (which works only if Python has control of the main) of
comparing addresses of local variables against addresses of local 
variables from main() and against the stack limit be used? 99% of the
people are using the plain Python interpreter with extensions, so it'll
solve 99% of the problem?
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From jack@oratrix.nl  Wed Aug 30 12:30:01 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 30 Aug 2000 13:30:01 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Message by Jeremy Hylton <jeremy@beopen.com> ,
 Tue, 29 Aug 2000 14:42:41 -0400 (EDT) , <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <20000830113002.44CE7303181@snelboot.oratrix.nl>

My SGI has getrlimit(RLIMIT_STACK) which should do the trick. But maybe this 
is an sgi-ism? Otherwise RLIMIT_VMEM and subtracting brk() may do the trick.

While thinking about this, though, I suddenly realised that my (new, faster) 
Mac implementation of PyOS_CheckStack will fail miserably in any other than 
the main thread, something I'll have to fix shortly.

Unix code will also have to differentiate between running on the main stack 
and a sub-thread stack, probably. And I haven't looked at the way 
PyOS_CheckStack is implemented on Windows, but it may well also share this 
problem.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From jack@oratrix.nl  Wed Aug 30 12:38:55 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 30 Aug 2000 13:38:55 +0200
Subject: [Python-Dev] MacPython 2.0?
In-Reply-To: Message by Skip Montanaro <skip@mojam.com> ,
 Tue, 29 Aug 2000 22:43:57 -0500 (CDT) , <14764.33533.218103.763531@beluga.mojam.com>
Message-ID: <20000830113855.B1F2F303181@snelboot.oratrix.nl>

> Has Jack or anyone else been building Mac versions of 2.0 and making them
> available somewhere?  I seem to have fallen off the MacPython list and
> haven't taken the time to investigate (perhaps I set subscription to NOMAIL
> and forgot that crucial point).  I have no compilation tools on my Mac, so
> while I'd like to try testing things a little bit there, I am entirely
> dependent on others to provide me with something runnable.

I'm waiting for Guido to release a 2.0 and then I'll quickly follow suit. I 
have almost everything in place for the first alpha/beta.

But, if you're willing to be my guinea pig I'd be happy to build you a 
distribution of the current state of things tonight or tomorrow, let me know.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From jack@oratrix.nl  Wed Aug 30 12:53:32 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 30 Aug 2000 13:53:32 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Message by "M.-A. Lemburg" <mal@lemburg.com> ,
 Wed, 30 Aug 2000 12:42:39 +0200 , <39ACE51F.3AEC75AB@lemburg.com>
Message-ID: <20000830115332.5CA4A303181@snelboot.oratrix.nl>

A completely different way to go about getting the stacksize on Unix is by 
actually committing the space once in a while. Something like (typed in as I'm 
making it up):

STACK_INCREMENT=128000

prober() {
    char space[STACK_INCREMENT];

    space[0] = 1;
    /* or maybe for(i=0;i<STACK_INCREMENT; i+=PAGESIZE) or so */
    space[STACK_INCREMENT-1] = 1;
}

jmp_buf buf;
catcher() {
    longjmp(buf, 1);
    return 1;
}

PyOS_CheckStack() {
    static char *known_safe;
    char *here;

    if (we-are-in-a-thread())
	go do different things;
    if ( &here > known_safe )
	return 1;
    keep-old-SIGSEGV-handler;
    if ( setjmp(buf) )
	return 0;
    signal(SIGSEGV, catcher);
    prober();
    restore-old-SIGSEGV-handler;
    known_safe = &here - (STACK_INCREMENT - PYOS_STACK_MARGIN);
    return 1;
}
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From thomas@xs4all.net  Wed Aug 30 13:25:42 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 14:25:42 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000830113002.44CE7303181@snelboot.oratrix.nl>; from jack@oratrix.nl on Wed, Aug 30, 2000 at 01:30:01PM +0200
References: <jeremy@beopen.com> <20000830113002.44CE7303181@snelboot.oratrix.nl>
Message-ID: <20000830142542.A12695@xs4all.nl>

On Wed, Aug 30, 2000 at 01:30:01PM +0200, Jack Jansen wrote:

> My SGI has getrlimit(RLIMIT_STACK) which should do the trick. But maybe this 
> is an sgi-ism? Otherwise RLIMIT_VMEM and subtracting brk() may do the trick.

No, getrlimit(RLIMIT_STACK, &rlim) is the way to go. 'getrlimit' isn't
available everywhere, but the RLIMIT_STACK constant is universal, as far as
I know. And we can use autoconf to figure out if it's available.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fredrik@pythonware.com  Wed Aug 30 14:30:23 2000
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Wed, 30 Aug 2000 15:30:23 +0200
Subject: [Python-Dev] Lukewarm about range literals
References: <200008300211.OAA17125@s454.cosc.canterbury.ac.nz>
Message-ID: <04d101c01286$7444d6c0$0900a8c0@SPIFF>

greg wrote:
> If we're going to use any sort of ellipsis syntax here, I
> think it would be highly preferable to use the ellipsis
> token we've already got. I can't see any justification for
> having two different ellipsis-like tokens in the language,
> when there would be no ambiguity in using one for both
> purposes.

footnote: "..." isn't really a token:

>>> class Spam:
...     def __getitem__(self, index):
...         print index
...
>>> spam = Spam()
>>> spam[...]
Ellipsis
>>> spam[. . .]
Ellipsis
>>> spam[.
... .
... .
... ]
Ellipsis

(etc)

</F>



From akuchlin@mems-exchange.org  Wed Aug 30 14:26:20 2000
From: akuchlin@mems-exchange.org (A.M. Kuchling)
Date: Wed, 30 Aug 2000 09:26:20 -0400
Subject: [Python-Dev] Cookie.py security
Message-ID: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>

[CC'ed to python-dev and Tim O'Malley]

The Cookie module recently added to 2.0 provides 3 classes of Cookie:
SimpleCookie, which treats cookie values as simple strings, 
SerialCookie, which treats cookie values as pickles and unpickles them,
and SmartCookie which figures out if the value is a pickle or not.

Unpickling untrusted data is unsafe.  (Correct?)  Therefore,
SerialCookie and SmartCookie really shouldn't be used, and Moshe's
docs for the module say so.

Question: should SerialCookie and SmartCookie be removed?  If they're
not there, people won't accidentally use them because they didn't read
the docs and missed the warning.

Con: breaks backward compatibility with the existing cookie module and
forks the code.  

(Are marshals safer than pickles?  What if SerialCookie used marshal
instead?)
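
For illustration of why unpickling untrusted data is unsafe while
marshal is merely fragile: whatever callable a pickle names gets
invoked at load time. A minimal demonstration, using a harmless
function where an attacker would use something like os.system:

```python
import marshal
import pickle

calls = []

def record(tag):
    # Stands in for something dangerous like os.system.
    calls.append(tag)

class Payload:
    # The callable named by __reduce__ runs when the pickle is *loaded*.
    def __reduce__(self):
        return (record, ("pwned",))

pickle.loads(pickle.dumps(Payload()))   # runs record("pwned") on load
assert calls == ["pwned"]

# marshal, by contrast, only rebuilds primitive objects and never calls
# back into user code (though it can still crash on corrupt input).
blob = marshal.dumps({"user": "amk", "visits": 3})
assert marshal.loads(blob) == {"user": "amk", "visits": 3}
```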

--amk



From fdrake@beopen.com  Wed Aug 30 15:09:16 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 30 Aug 2000 10:09:16 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
Message-ID: <14765.5516.877559.786344@cj42289-a.reston1.va.home.com>

A.M. Kuchling writes:
 > (Are marshals safer than pickles?  What if SerialCookie used marshal
 > instead?)

  A bit safer, I think, but this maintains the backward compatibility
issue.
  If it is useful to change the API, this is the best time to do it,
but we'd probably want to rename the module as well.  Shared
maintenance is also an issue -- Tim's opinion is very valuable here!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From trentm@ActiveState.com  Wed Aug 30 17:18:29 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Wed, 30 Aug 2000 09:18:29 -0700
Subject: [Python-Dev] NetBSD compilation bug - I need help (was: Re: Python bug)
In-Reply-To: <14764.32658.941039.258537@bitdiddle.concentric.net>; from jeremy@beopen.com on Tue, Aug 29, 2000 at 11:29:22PM -0400
References: <14764.32658.941039.258537@bitdiddle.concentric.net>
Message-ID: <20000830091829.C14776@ActiveState.com>

On Tue, Aug 29, 2000 at 11:29:22PM -0400, Jeremy Hylton wrote:
> You have one open Python bug that is assigned to you and given a
> priority seven or higher.  I would like to resolve this bug before
> the 2.0b1 release.
> 
> The bug is:
> 112289 | NetBSD1.4.2 build issue 
>

Sorry to have let this one get a little stale. I can give it a try. A couple
of questions:

1. Who reported this bug? He talked about providing more information and I
would like to speak with him. I cannot find his email address.
2. Does anybody have a NetBSD1.4.2 (or close) machine that I can get shell
access to? Do you know if they have such a machine at SourceForge that users
can get shell access to? Or failing that can someone with such a machine give
me the full ./configure and make output and maybe run this command:
   find /usr/include -name "*" -type f | xargs -l grep -nH _TELL64
and give me the output.


If I come up blank on both of these then I can't really expect to fix this
bug.


Thanks,
Trent


-- 
Trent Mick
TrentM@ActiveState.com


From pf@artcom-gmbh.de  Wed Aug 30 17:37:16 2000
From: pf@artcom-gmbh.de (Peter Funk)
Date: Wed, 30 Aug 2000 18:37:16 +0200 (MEST)
Subject: os.remove() behaviour on empty directories (was Re: [Python-Dev] If you thought there were too many PEPs...)
In-Reply-To: <200008271828.NAA14847@cj20424-a.reston1.va.home.com> from Guido van Rossum at "Aug 27, 2000  1:28:46 pm"
Message-ID: <m13UArU-000Dm9C@artcom0.artcom-gmbh.de>

Hi,

effbot:
> > btw, Python's remove/unlink implementation is slightly
> > broken -- they both map to unlink, but that's not the
> > right way to do it:
> > 
> > from SUSv2:
> > 
> >     int remove(const char *path);
> > 
> >     If path does not name a directory, remove(path)
> >     is equivalent to unlink(path). 
> > 
> >     If path names a directory, remove(path) is equi-
> >     valent to rmdir(path). 
> > 
> > should I fix this?

BDFL:
> That's a new one -- didn't exist when I learned Unix.

Yes, this 'remove()' was added relatively late to Unix.  It didn't
exist, for example, in SCO XENIX 386 (the first "real" OS available
for relatively inexpensive IBM-PC boxes, long before the advent
of Linux).

Changing the behaviour of Python's 'os.remove()' on Unices might break 
some existing code (although such code is not portable to WinXX anyway):

pf@artcom0:ttyp3 ~ 7> mkdir emptydir
pf@artcom0:ttyp3 ~ 8> python
Python 1.5.2 (#1, Jul 23 1999, 06:38:16)  [GCC egcs-2.91.66 19990314/Linux (egcs- on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> import os
>>> try:
...     os.remove('emptydir')
... except OSError:
...     print 'emptydir is a directory'
... 
emptydir is a directory
>>> 

> I guess we can fix this in 2.1.

Please don't do this without a heavy duty warning in a section about
expected upgrade problems.  

This change might annoy people, who otherwise don't care about
portability and use Python on Unices only.  I imagine people using
something like this:

    def cleanup_junkfiles(targetdir):
        for n in os.listdir(targetdir):
            try:
                os.remove(os.path.join(targetdir, n))
            except OSError:
                pass
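
The behaviour such code relies on can be checked directly; a sketch
using tempfile in place of the 'emptydir' session above (on Unix,
os.remove maps to unlink and rejects directories, while os.rmdir is
the explicit spelling):

```python
import os
import tempfile

d = tempfile.mkdtemp()       # an empty directory, like 'emptydir'
try:
    os.remove(d)             # unlink() refuses directories on Unix
    removed = True
except OSError:
    removed = False

assert not removed           # current behaviour: directories are errors
os.rmdir(d)                  # the explicit call for directories
assert not os.path.exists(d)
```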

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)


From thomas@xs4all.net  Wed Aug 30 18:39:48 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 19:39:48 +0200
Subject: [Python-Dev] Threads & autoconf
Message-ID: <20000830193948.C12695@xs4all.nl>

I'm trying to clean up the autoconf (and README) mess wrt. threads a bit,
but I think I need some hints ;) I can't figure out why there is a separate
--with-dec-threads option... Is there a reason it can't be autodetected like
we do for other thread systems ? Does DEC Unix do something very different
but functional when leaving out the '-threads' option (which is the only
thing -dec- adds) or is it just "hysterical raisins" ? 

And then the systems that need different library/compiler flags/settings...
I suspect noone here has one of those machines ? It'd be nice if we could
autodetect this without trying every combination of flags/libs in autoconf
:P (But then, if we could autodetect, I assume it would've been done long
ago... right ? :)

Do we know if those systems still need those separate flags/libs ? Should we
leave a reference to them in the README, or add a separate README.threads
file with more extensive info about threads and how to disable them ? (I
think README is a bit oversized, but that's probably just me.) And are we
leaving threads on by default ? If not, the README will have to be
re-adjusted again :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gward@mems-exchange.org  Wed Aug 30 18:52:36 2000
From: gward@mems-exchange.org (Greg Ward)
Date: Wed, 30 Aug 2000 13:52:36 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>; from ping@lfw.org on Tue, Aug 29, 2000 at 12:09:39AM -0500
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>
Message-ID: <20000830135235.A8465@ludwig.cnri.reston.va.us>

On 29 August 2000, Ka-Ping Yee said:
> I think these examples are beautiful.  There is no reason why we couldn't
> fit something like this into Python.  Imagine this:
> 
>     - The ".." operator produces a tuple (or generator) of integers.
>       It should probably have precedence just above "in".
>     
>     - "a .. b", where a and b are integers, produces the sequence
>       of integers (a, a+1, a+2, ..., b).
> 
>     - If the left argument is a tuple of two integers, as in
>       "a, b .. c", then we get the sequence of integers from
>       a to c with step b-a, up to and including c if c-a happens
>       to be a multiple of b-a (exactly as in Haskell).

I guess I haven't been paying much attention, or I would have squawked
at the idea of using *anything* other than ".." for a literal range.

> If this operator existed, we could then write:
> 
>     for i in 2, 4 .. 20:
>         print i
> 
>     for i in 1 .. 10:
>         print i*i

Yup, beauty.  +1 on this syntax.  I'd vote to scuttle the [1..10] patch
and wait for an implementation of The Right Syntax, as illustrated by Ping.
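
The Haskell-style semantics Ping describes can be modeled with a plain
function (the name and signature here are hypothetical, purely for
illustration):

```python
def inclusive_range(a, c, b=None):
    """Model of 'a .. c' and 'a, b .. c': the step is b - a (default 1),
    and c is included exactly when c - a is a multiple of the step."""
    step = 1 if b is None else b - a
    if step == 0:
        raise ValueError("step must be nonzero")
    # range() excludes its endpoint; widen it by one so that c itself
    # appears whenever it lies on the sequence.
    return list(range(a, c + (1 if step > 0 else -1), step))
```

So inclusive_range(1, 10) gives 1 through 10, and inclusive_range(2,
20, 4) gives the even numbers 2 through 20, exactly as in "2, 4 .. 20".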


>     for i in 0 ..! len(a):
>         a[i] += 1

Ugh.  I agree with everyone else on this: why not "0 .. len(a)-1"?

        Greg


From thomas@xs4all.net  Wed Aug 30 19:04:03 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 20:04:03 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000830135235.A8465@ludwig.cnri.reston.va.us>; from gward@mems-exchange.org on Wed, Aug 30, 2000 at 01:52:36PM -0400
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org> <20000830135235.A8465@ludwig.cnri.reston.va.us>
Message-ID: <20000830200402.E12695@xs4all.nl>

On Wed, Aug 30, 2000 at 01:52:36PM -0400, Greg Ward wrote:
> I'd vote to scuttle the [1..10] patch
> and wait for an implementation of The Right Syntax, as illustrated by Ping.

There *is* no [1..10] patch. There is only the [1:10] patch. See the PEP ;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From martin@loewis.home.cs.tu-berlin.de  Wed Aug 30 19:32:30 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 30 Aug 2000 20:32:30 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39ACE51F.3AEC75AB@lemburg.com> (mal@lemburg.com)
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com>
Message-ID: <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>

> So the check would look something like this:
> 
> if (tstate->recursion_depth >= 50 &&
>     tstate->recursion_depth%10 == 0 &&
>     PyOS_CheckStack()) {
>                 PyErr_SetString(PyExc_MemoryError, "Stack overflow");
>                 return NULL;
>         }

That sounds like a good solution to me. A recursion depth of 50 should
be guaranteed on most systems supported by Python.

> I'm not exactly sure how large the safety margin is with
> Martin's patch, but this seems a good idea.

I chose 3% of the rlimit, which must accommodate the space above the
known start of stack plus a single page. That number was chosen
arbitrarily; on my Linux system, the stack limit is 8MB, so 3% gives
roughly 240k. Given the maximum size of the environment and argv
pages, I felt that this is safe enough. OTOH, if you've used more than
7MB of stack, it is likely that the last 240k won't help, either.

Regards,
Martin



From martin@loewis.home.cs.tu-berlin.de  Wed Aug 30 19:37:56 2000
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 30 Aug 2000 20:37:56 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <200008301837.UAA00743@loewis.home.cs.tu-berlin.de>

> My SGI has getrlimit(RLIMIT_STACK) which should do the trick

It tells you how much stack you've got; it does not tell you how much
of that is actually in use.

> Unix code will also have to differentiate between running on the
> main stack and a sub-thread stack, probably.

My patch computes (or, rather, estimates) a start-of-stack for each
thread, and then saves that in the thread context.

> And I haven't looked at the way PyOS_CheckStack is implemented on
> Windows

It should work for multiple threads just fine. It tries to allocate 8k
on the current stack, and then catches the error if any.

Regards,
Martin



From timo@timo-tasi.org  Wed Aug 30 19:51:52 2000
From: timo@timo-tasi.org (timo@timo-tasi.org)
Date: Wed, 30 Aug 2000 14:51:52 -0400
Subject: [Python-Dev] Re: Cookie.py security
In-Reply-To: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>; from A.M. Kuchling on Wed, Aug 30, 2000 at 09:26:20AM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
Message-ID: <20000830145152.A24581@illuminatus.timo-tasi.org>

hola.

On Wed, Aug 30, 2000 at 09:26:20AM -0400, A.M. Kuchling wrote:
> Question: should SerialCookie and SmartCookie be removed?  If they're
> not there, people won't accidentally use them because they didn't read
> the docs and missed the warning.
> 
> Con: breaks backward compatibility with the existing cookie module and
> forks the code.  

I had a thought about this - kind of an intermediate solution.

Right now, the shortcut 'Cookie.Cookie()' returns an instance of the
SmartCookie, which uses Pickle.  Most extant examples of using the
Cookie module use this shortcut.

We could change 'Cookie.Cookie()' to return an instance of SimpleCookie,
which does not use Pickle.  Unfortunately, this may break existing code
(like Mailman), but there is a lot of code out there that it won't break.

Also, people could still use the SmartCookie and SerialCookie classes,
but now they would be more likely to read about them in the
documentation, because they are "off the beaten path".
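
For reference, the SimpleCookie behaviour being proposed as the
default (string values only, no pickling) looks like this in use; the
module now lives at http.cookies in Python 3, which is an assumption
about the modern spelling, not the API under discussion in 2000:

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["session"] = "abc123"        # values are plain strings, never pickles
c["session"]["path"] = "/"
header = c.output()            # e.g. 'Set-Cookie: session=abc123; Path=/'

parsed = SimpleCookie()
parsed.load("session=abc123")  # parsing a header never unpickles
```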



From timo@timo-tasi.org  Wed Aug 30 20:09:13 2000
From: timo@timo-tasi.org (timo@timo-tasi.org)
Date: Wed, 30 Aug 2000 15:09:13 -0400
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.5516.877559.786344@cj42289-a.reston1.va.home.com>; from Fred L. Drake, Jr. on Wed, Aug 30, 2000 at 10:09:16AM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.5516.877559.786344@cj42289-a.reston1.va.home.com>
Message-ID: <20000830150913.B24581@illuminatus.timo-tasi.org>

hola.

On Wed, Aug 30, 2000 at 10:09:16AM -0400, Fred L. Drake, Jr. wrote:
> 
> A.M. Kuchling writes:
>  > (Are marshals safer than pickles?  What if SerialCookie used marshal
>  > instead?)
> 
>   A bit safer, I think, but this maintains the backward compatibility
> issue.

Is this true?
  Marshal is backwards compatible to Pickle?

If it is true, that'd be kinda cool.

>   If it is useful to change the API, this is the best time to do it,
> but we'd probably want to rename the module as well.  Shared
> maintenance is also an issue -- Tim's opinion is very valuable here!

I agree -- if this is the right change, then now is the right time.

If a significant change is warranted, then the name change is probably
the right way to signal this change.  I'd vote for 'httpcookie.py'.

I've been thinking about the shared maintenance issue, too.  The right
thing is for the Cookie.py (or renamed version thereof) to be the 
official version.  I would probably keep the latest version up on
my web site but mark it as 'deprecated' once Python 2.0 gets released.

thoughts..?

e


From thomas@xs4all.net  Wed Aug 30 20:22:22 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 21:22:22 +0200
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830150913.B24581@illuminatus.timo-tasi.org>; from timo@timo-tasi.org on Wed, Aug 30, 2000 at 03:09:13PM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.5516.877559.786344@cj42289-a.reston1.va.home.com> <20000830150913.B24581@illuminatus.timo-tasi.org>
Message-ID: <20000830212222.F12695@xs4all.nl>

On Wed, Aug 30, 2000 at 03:09:13PM -0400, timo@timo-tasi.org wrote:
> hola.
> On Wed, Aug 30, 2000 at 10:09:16AM -0400, Fred L. Drake, Jr. wrote:
> > A.M. Kuchling writes:
> >  > (Are marshals safer than pickles?  What if SerialCookie used marshal
> >  > instead?)

> >   A bit safer, I think, but this maintains the backward compatibility
> > issue.

> Is this true?
>   Marshal is backwards compatible to Pickle?

No, what Fred meant is that it maintains the backward compatibility *issue*,
not compatibility itself. It's still a problem for people who want to read
cookies made by the 'old' version, or otherwise want to read in 'old'
cookies.

I think it would be possible to provide a 'safe' unpickle, that only
unpickles primitives, for example, but that might *still* maintain the
backwards compatibility issue, even if it's less of an issue then. And it's
a bloody lot of work, too :-)
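
For what it's worth, such a 'safe' unpickler can be sketched with the
modern pickle.Unpickler.find_class hook (an assumption about today's
API, not something that existed in 2000): refusing every global
reference means only primitive containers survive loading.

```python
import io
import pickle

class PrimitiveUnpickler(pickle.Unpickler):
    """Unpickler that refuses to resolve any global, so no class or
    function can be instantiated or called during loading."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            "global %s.%s is forbidden" % (module, name))

def safe_loads(data):
    return PrimitiveUnpickler(io.BytesIO(data)).load()
```

safe_loads() round-trips dicts, lists, strings, and numbers, but raises
UnpicklingError for any pickle that names a class or function.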

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@beopen.com  Wed Aug 30 22:45:28 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Wed, 30 Aug 2000 17:45:28 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830150913.B24581@illuminatus.timo-tasi.org>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
 <14765.5516.877559.786344@cj42289-a.reston1.va.home.com>
 <20000830150913.B24581@illuminatus.timo-tasi.org>
Message-ID: <14765.32888.769808.560154@cj42289-a.reston1.va.home.com>

On Wed, Aug 30, 2000 at 10:09:16AM -0400, Fred L. Drake, Jr. wrote:
 >   A bit safer, I think, but this maintains the backward compatibility
 > issue.

timo@timo-tasi.org writes:
 > Is this true?
 >   Marshal is backwards compatible to Pickle?
 > 
 > If it is true, that'd be kinda cool.

  Would be, but my statement wasn't clear: it maintains the *issue*,
not compatibility.  ;(  The data formats are not interchangable in any
way.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From tim_one@email.msn.com  Wed Aug 30 23:54:25 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Wed, 30 Aug 2000 18:54:25 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52415.747655.334938@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEPOHCAA.tim_one@email.msn.com>

[Skip Montanaro]
> ...
> What we really want I think is something that evokes the following in the
> mind of the reader
>
>     for i from START to END incrementing by STEP:
>
> without gobbling up all those keywords.

Note that they needn't be keywords, though, any more than "as" became a
keyword in the new "import x as y".  I love the Haskell notation in Haskell
because it fits so nicely with "infinite" lists there too.  I'm not sure
about in Python -- 100s of languages have straightforward integer index
generation, and Python's range(len(seq)) is hard to see as much more than
gratuitous novelty when viewed against that background.

    for i = 1 to 10:           #  1 to 10 inclusive
    for i = 10 to 1 by -1:     #  10 down to 1 inclusive
    for i = 1 upto 10:         #  1 to 9 inclusive
    for i = 10 upto 1 by -1:   #  10 down to 2 inclusive

are all implementable right now without new keywords, and would pretty much
*have* to be "efficient" from the start because they make no pretense at
being just one instance of an infinitely extensible object iteration
protocol.  They are what they are, and that's it -- simplicity isn't
*always* a bad thing <wink>.
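(Editorial aside: the inclusive/exclusive spellings above map onto today's
range() builtin, which excludes its endpoint, roughly as follows; the
to/upto/by keyword forms themselves are hypothetical:)

```python
# How the proposed loop headers correspond to range() calls.
assert list(range(1, 11)) == [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]      # for i = 1 to 10
assert list(range(10, 0, -1)) == [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]  # for i = 10 to 1 by -1
assert list(range(1, 10)) == [1, 2, 3, 4, 5, 6, 7, 8, 9]          # for i = 1 upto 10
assert list(range(10, 1, -1)) == [10, 9, 8, 7, 6, 5, 4, 3, 2]     # for i = 10 upto 1 by -1
```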

>     for i in [START..END,STEP]:
>     for i in [START:END:STEP]:
>     for i in [START..END:STEP]:

The difference in easy readability should squawk for itself.

>     for i in 0 ..! len(a):
>         a[i] += 1

Looks like everybody hates that, and that's understandable, but I can't
imagine why

     for i in 0 .. len(a)-1:

isn't *equally* hated!  Requiring "-1" in the most common case is simply bad
design.  Check out the Python-derivative CORBAscript, where Python's "range"
was redefined to *include* the endpoint.  Virtually every program I've seen
in it bristles with ugly

    for i in range(len(a)-1)

lines.  Yuck.

but-back-to-2.0-ly y'rs  - tim




From jeremy@beopen.com  Thu Aug 31 00:34:14 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Wed, 30 Aug 2000 19:34:14 -0400 (EDT)
Subject: [Python-Dev] Release deadline looming (patches by Aug. 31)
Message-ID: <14765.39414.944199.794554@bitdiddle.concentric.net>

[Apologies for the short notice here; this was lost in a BeOpen mail
server for about 24 hours.]

We are still on schedule to release 2.0b1 on Sept. 5 (Tuesday).  There
are a few outstanding items that we need to resolve.  In order to
leave time for the administrivia necessary to produce a release, we will
need to have a code freeze soon.

Guido says that typically, all the patches should be in two days
before the release.  The two-day deadline may be earlier than
expected, because Monday is a holiday in the US and at BeOpen.  So two
days before the release is midnight Thursday.

That's right.  All patches need to be completed by Aug. 31 at
midnight.  If this deadline is missed, the change won't make it into
2.0b1.

If you've got bugs assigned to you with a priority higher than 5,
please try to take a look at them before the deadline.

Jeremy


From jeremy@beopen.com  Thu Aug 31 02:21:23 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Wed, 30 Aug 2000 21:21:23 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
Message-ID: <14765.45843.401319.187156@bitdiddle.concentric.net>

>>>>> "AMK" == A M Kuchling <amk1@erols.com> writes:

  AMK> (Are marshals safer than pickles?  What if SerialCookie used
  AMK> marshal instead?)

I would guess that pickle makes attacks easier: It has more features,
e.g. creating instances of arbitrary classes (provided that the attacker
knows what classes are available).

But neither marshal nor pickle is safe.  It is possible to cause a
core dump by passing marshal invalid data.  It may also be possible to
launch a stack overflow attack -- not sure.

Jeremy


From gstein@lyra.org  Thu Aug 31 02:53:10 2000
From: gstein@lyra.org (Greg Stein)
Date: Wed, 30 Aug 2000 18:53:10 -0700
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.45843.401319.187156@bitdiddle.concentric.net>; from jeremy@beopen.com on Wed, Aug 30, 2000 at 09:21:23PM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net>
Message-ID: <20000830185310.I3278@lyra.org>

On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
>...
> But neither marshal nor pickle is safe.  It is possible to cause a
> core dump by passing marshal invalid data.  It may also be possible to
> launch a stack overflow attack -- not sure.

I believe those core dumps were fixed. Seems like I remember somebody doing
some work on that.

??


Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From greg@cosc.canterbury.ac.nz  Thu Aug 31 02:47:10 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 13:47:10 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <04d101c01286$7444d6c0$0900a8c0@SPIFF>
Message-ID: <200008310147.NAA17316@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <fredrik@pythonware.com>:

> footnote: "..." isn't really token:

Whatever it is technically, it's an existing part of the
language, and it seems redundant and confusing to introduce
another very similar one.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From jeremy@beopen.com  Thu Aug 31 02:55:24 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Wed, 30 Aug 2000 21:55:24 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830185310.I3278@lyra.org>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
 <14765.45843.401319.187156@bitdiddle.concentric.net>
 <20000830185310.I3278@lyra.org>
Message-ID: <14765.47884.801312.292059@bitdiddle.concentric.net>

>>>>> "GS" == Greg Stein <gstein@lyra.org> writes:

  GS> On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
  >> ...  But neither marshal nor pickle is safe.  It is possible to
  >> cause a core dump by passing marshal invalid data.  It may also
  >> be possible to launch a stack overflow attack -- not sure.

  GS> I believe those core dumps were fixed. Seems like I remember
  GS> somebody doing some work on that.

  GS> ??

Aha!  I hadn't noticed that patch sneaking in.  I brought it up with
Guido a few months ago and he didn't want to make changes to marshal
because, IIRC, marshal exists only because .pyc files need it.

Jeremy


From greg@cosc.canterbury.ac.nz  Thu Aug 31 02:59:34 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 13:59:34 +1200 (NZST)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.32888.769808.560154@cj42289-a.reston1.va.home.com>
Message-ID: <200008310159.NAA17320@s454.cosc.canterbury.ac.nz>

"Fred L. Drake, Jr." <fdrake@beopen.com>:

> it maintains the *issue*, not compatibility.  ;( 

A confusing choice of word! Usually one only talks about
"maintaining" something that one *wants* maintained...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Thu Aug 31 03:33:36 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 14:33:36 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEPOHCAA.tim_one@email.msn.com>
Message-ID: <200008310233.OAA17325@s454.cosc.canterbury.ac.nz>

Tim Peters <tim_one@email.msn.com>:

> I can't imagine why
> 
>     for i in 0 .. len(a)-1:
> 
> isn't *equally* hated!  Requiring "-1" in the most common case is simply bad
> design.

I agree with that. I didn't mean to suggest that I thought it was
a good idea.

The real problem is in defining a..b to include b, which gives
you a construct that is intuitive but not very useful in the
context of the rest of the language.

On the other hand, if a..b *doesn't* include b, it's more
useful, but less intuitive.

(By "intuitive" here, I mean "does what you would expect based
on your experience with similar notations in other programming
languages or in mathematics".)

I rather like the a:b idea, because it ties in with the half-open 
property of slices. Unfortunately, it gives the impression that
you should be able to say

   a = [1,2,3,4,5,6]
   b = 2:5
   c = a[b]

and get c == [3,4,5].
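(Editorial aside: the slice() builtin, which later became usable directly as
a list index, spells exactly this, modulo syntax — a sketch:)

```python
# slice(2, 5) plays the role of the hypothetical 'b = 2:5' literal.
a = [1, 2, 3, 4, 5, 6]
b = slice(2, 5)
c = a[b]
assert c == [3, 4, 5]
```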

>    for i = 1 to 10:           #  1 to 10 inclusive

Endpoint problem again. You would be forever saying

   for i = 0 to len(a)-1:

I do like the idea of keywords, however. All we need to do
is find a way of spelling

   for i = 0 uptobutnotincluding len(a):

without running out of breath.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Thu Aug 31 03:37:00 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 14:37:00 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <39ACF6FB.66BAB739@nowonder.de>
Message-ID: <200008310237.OAA17328@s454.cosc.canterbury.ac.nz>

Peter Schneider-Kamp <nowonder@nowonder.de>:

> As far as I know adding a builtin indices() has been
> rejected as an idea.

But why? I know it's been suggested, but I don't remember seeing any
convincing arguments against it. Or much discussion at all.
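(For reference, the helper under discussion is tiny — a sketch of the
rejected proposal, not an actual builtin:)

```python
def indices(seq):
    # The proposed builtin: the valid indices of a sized sequence.
    return range(len(seq))

a = ['x', 'y', 'z']
assert list(indices(a)) == [0, 1, 2]
```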

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Thu Aug 31 03:57:07 2000
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 14:57:07 +1200 (NZST)
Subject: [Python-Dev] Pragmas: Just say "No!"
In-Reply-To: <Pine.LNX.4.10.10008291316590.23391-100000@akbar.nevex.com>
Message-ID: <200008310257.OAA17332@s454.cosc.canterbury.ac.nz>

Greg Wilson <gvwilson@nevex.com>:

> Pragmas are a way to embed programs for the
> parser in the file being parsed.

I hope the BDFL has the good sense to run screaming from
anything that has the word "pragma" in it. As this discussion
demonstrates, it's far too fuzzy and open-ended a concept --
nobody can agree on what sort of thing a pragma is supposed
to be.

INTERVIEWER: Tell us how you came to be drawn into the
world of pragmas.

COMPILER WRITER: Well, it started off with little things. Just
a few boolean flags, a way to turn asserts on and off, debug output,
that sort of thing. I thought, what harm can it do? It's not like
I'm doing anything you couldn't do with command line switches,
right? Then it got a little bit heavier, integer values for
optimisation levels, even the odd string or two. Before I
knew it I was doing the real hard stuff, constant expressions,
conditionals, the whole shooting box. Then one day when I put
in a hook for making arbitrary calls into the interpreter, that
was when I finally realised I had a problem...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From trentm@ActiveState.com  Thu Aug 31 05:34:44 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Wed, 30 Aug 2000 21:34:44 -0700
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830185310.I3278@lyra.org>; from gstein@lyra.org on Wed, Aug 30, 2000 at 06:53:10PM -0700
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net> <20000830185310.I3278@lyra.org>
Message-ID: <20000830213444.C20461@ActiveState.com>

On Wed, Aug 30, 2000 at 06:53:10PM -0700, Greg Stein wrote:
> On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
> >...
> > But neither marshal nor pickle is safe.  It is possible to cause a
> > core dump by passing marshal invalid data.  It may also be possible to
> > launch a stack overflow attack -- not sure.
> 
> I believe those core dumps were fixed. Seems like I remember somebody doing
> some work on that.
> 
> ??

Nope, I think that there may have been a few small patches but the
discussions to fix some "brokenness" in marshal did not bear fruit:

http://www.python.org/pipermail/python-dev/2000-June/011132.html


Oh, I take that back. Here is a patch that supposedly fixed some core dumping:

http://www.python.org/pipermail/python-checkins/2000-June/005997.html
http://www.python.org/pipermail/python-checkins/2000-June/006029.html


Trent


-- 
Trent Mick
TrentM@ActiveState.com


From bwarsaw@beopen.com  Thu Aug 31 05:50:20 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 00:50:20 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
References: <LNBBLJKPBEHFEDALKOLCCEPOHCAA.tim_one@email.msn.com>
 <200008310233.OAA17325@s454.cosc.canterbury.ac.nz>
Message-ID: <14765.58380.529345.814715@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg@cosc.canterbury.ac.nz> writes:

    GE> I do like the idea of keywords, however. All we need to do
    GE> is find a way of spelling

    GE>    for i = 0 uptobutnotincluding len(a):

    GE> without running out of breath.

for i until len(a):

-Barry


From nhodgson@bigpond.net.au  Thu Aug 31 07:21:06 2000
From: nhodgson@bigpond.net.au (Neil Hodgson)
Date: Thu, 31 Aug 2000 16:21:06 +1000
Subject: [Python-Dev] Pragmas: Just say "No!"
References: <200008310257.OAA17332@s454.cosc.canterbury.ac.nz>
Message-ID: <005301c01313$a66a3ae0$8119fea9@neil>

Greg Ewing:
> Greg Wilson <gvwilson@nevex.com>:
>
> > Pragmas are a way to embed programs for the
> > parser in the file being parsed.
>
> I hope the BDFL has the good sense to run screaming from
> anything that has the word "pragma" in it. As this discussion
> demonstrates, it's far too fuzzy and open-ended a concept --
> nobody can agree on what sort of thing a pragma is supposed
> to be.

   It is a good idea, however, to claim a piece of syntactic turf as early
as possible so that if/when it is needed, it is unlikely to cause problems
with previously written code. My preference would be to introduce a reserved
word 'directive' for future expansion here. 'pragma' has connotations of
'ignorable compiler hint' but most of the proposed compiler directives will
cause incorrect behaviour if ignored.

   Neil




From m.favas@per.dem.csiro.au  Thu Aug 31 07:11:31 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Thu, 31 Aug 2000 14:11:31 +0800
Subject: [Python-Dev] Threads & autoconf
Message-ID: <39ADF713.53E6B37D@per.dem.csiro.au>

[Thomas]
>I'm trying to clean up the autoconf (and README) mess wrt. threads a bit,
>but I think I need some hints ;) I can't figure out why there is a separate
>--with-dec-threads option... Is there a reason it can't be autodetected like
>we do for other thread systems ? Does DEC Unix do something very different
>but functional when leaving out the '-threads' option (which is the only
>thing -dec- adds) or is it just "hysterical raisins" ?

Yes, DEC Unix does do something very different without the "-threads"
option to the "cc" line that finally builds the python executable - the
following are unresolved:

cc   python.o \
          ../libpython2.0.a -L/usr/local/lib -ltk8.0 -ltcl8.0 -lX11 \
          -ldb -L/usr/local/lib -lz -lnet -lpthreads -lm -o python
ld:
Unresolved:
_PyGC_Insert
_PyGC_Remove
__pthread_mutex_init
__pthread_mutex_destroy
__pthread_mutex_lock
__pthread_mutex_unlock
__pthread_cond_init
__pthread_cond_destroy
__pthread_cond_signal
__pthread_cond_wait
__pthread_create
__pthread_detach
make[1]: *** [link] Error 1

So, it is still needed. It should be possible, though, to detect that
the system is OSF1 during configure and set this without having to do
"--with-dec-threads". I think DEC/Compaq/Tru64 Unix is the only
current Unix that reports itself as OSF1. If there are other legacy
systems that do, and don't need "-threads", they could do "configure
--without-dec-threads" <grin>.

Mark
 
-- 
Email - m.favas@per.dem.csiro.au        Postal - Mark C Favas
Phone - +61 8 9333 6268, 041 892 6074            CSIRO Exploration &
Mining
Fax   - +61 8 9333 6121                          Private Bag No 5
                                                 Wembley, Western
Australia 6913


From effbot@telia.com  Thu Aug 31 07:41:20 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 08:41:20 +0200
Subject: [Python-Dev] Cookie.py security
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net>
Message-ID: <004301c01316$7ef57e40$766940d5@hagrid>

jeremy wrote:
> I would guess that pickle makes attacks easier: It has more features,
> e.g. creating instances of arbitrary classes (provided that the attacker
> knows what classes are available).

well, if not else, he's got the whole standard library to
play with...

:::

(I haven't looked at the cookie code, so I don't really know
what I'm talking about here)

can't you force the user to pass in a list of valid classes to
the cookie constructor, and use a subclass of pickle.Unpickler
to get a little more control over what's imported:

    class myUnpickler(Unpickler):  # assumes: from pickle import Unpickler; import StringIO
        def __init__(self, data, classes):
            self.__classes = classes
            Unpickler.__init__(self, StringIO.StringIO(data))
        def find_class(self, module, name):
            for cls in self.__classes:
                if cls.__module__ == module and cls.__name__ == name:
                    return cls
            raise SystemError, "failed to import class"
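(Editorial aside: a self-contained modern variant of the same whitelist
idea; the class and parameter names here are illustrative, not from any
library:)

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Resolve only classes that appear in an explicit whitelist."""
    def __init__(self, data, classes):
        pickle.Unpickler.__init__(self, io.BytesIO(data))
        self._classes = {(c.__module__, c.__name__): c for c in classes}

    def find_class(self, module, name):
        try:
            return self._classes[(module, name)]
        except KeyError:
            raise pickle.UnpicklingError(
                "class %s.%s not allowed" % (module, name))

# Primitives never hit find_class, so even an empty whitelist lets
# plain data through while blocking arbitrary class instantiation.
data = pickle.dumps({"key": "value"})
assert RestrictedUnpickler(data, classes=[]).load() == {"key": "value"}
```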

> But neither marshal nor pickle is safe.  It is possible to cause a
> core dump by passing marshal invalid data.  It may also be possible to
> launch a stack overflow attack -- not sure.

</F>



From fdrake@beopen.com  Thu Aug 31 08:09:33 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 03:09:33 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <200008310702.AAA32318@slayer.i.sourceforge.net>
References: <200008310702.AAA32318@slayer.i.sourceforge.net>
Message-ID: <14766.1197.987441.118202@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Fix grouping: this is how I intended it, misguided as I was in boolean
 > operator associativity.

  And to think I spent time digging out my reference material to make
sure I didn't change anything!
  This is why compilers have warnings like that!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From thomas@xs4all.net  Thu Aug 31 08:22:13 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 09:22:13 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <14766.1197.987441.118202@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Aug 31, 2000 at 03:09:33AM -0400
References: <200008310702.AAA32318@slayer.i.sourceforge.net> <14766.1197.987441.118202@cj42289-a.reston1.va.home.com>
Message-ID: <20000831092213.G12695@xs4all.nl>

On Thu, Aug 31, 2000 at 03:09:33AM -0400, Fred L. Drake, Jr. wrote:

> Thomas Wouters writes:
>  > Fix grouping: this is how I intended it, misguided as I was in boolean
>  > operator associativity.

>   And to think I spent time digging out my reference material to make
> sure I didn't change anything!

Well, if you'd dug out the PEP, you'd have known what way the parentheses
were *intended* to go :-) 'HASINPLACE' is a macro that does a
Py_HasFeature() for the _inplace_ struct members, and those struct members
shouldn't be dereferenced if HASINPLACE is false :)

>   This is why compilers have warnings like that!

Definitely ! Now if only there was a permanent way to add -Wall.... hmm...
Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From m.favas@per.dem.csiro.au  Thu Aug 31 08:23:43 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Thu, 31 Aug 2000 15:23:43 +0800
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
Message-ID: <39AE07FF.478F413@per.dem.csiro.au>

(Tru64 Unix) - test_gettext fails with the message:
IOError: [Errno 0] Bad magic number: './xx/LC_MESSAGES/gettext.mo'

This is because the magic number is read in by the code in
Lib/gettext.py as FFFFFFFF950412DE (hex) (using unpack('<i',
buf[:4])[0]), and checked against LE_MAGIC (defined as 950412DE) and
BE_MAGIC (calculated as FFFFFFFFDE120495 using
struct.unpack('>i',struct.pack('<i', LE_MAGIC))[0]). These format strings
work for machines where a Python integer is the same size as a C int,
but not for machines where a Python integer is larger than a C int. The
problem arises because the LE_MAGIC number is negative if a 32-bit int,
but positive if Python integers are 64-bit. Replacing the "i" in the
code that generates BE_MAGIC and reads in "magic" by "I" makes the test
work for me, but there's other uses of "i" and "ii" when the rest of the
.mo file is processed that I'm unsure about with different inputs.

Mark
-- 
Email - m.favas@per.dem.csiro.au        Postal - Mark C Favas
Phone - +61 8 9333 6268, 041 892 6074            CSIRO Exploration &
Mining
Fax   - +61 8 9333 6121                          Private Bag No 5
                                                 Wembley, Western
Australia 6913


From tim_one@email.msn.com  Thu Aug 31 08:24:35 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 03:24:35 -0400
Subject: [Python-Dev] FW: test_largefile cause kernel panic in Mac OS X DP4
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBJHDAA.tim_one@email.msn.com>


-----Original Message-----
From: python-list-admin@python.org
[mailto:python-list-admin@python.org]On Behalf Of Sachin Desai
Sent: Thursday, August 31, 2000 2:49 AM
To: python-list@python.org
Subject: test_largefile cause kernel panic in Mac OS X DP4



Has anyone experienced this. I updated my version of python to the latest
source from the CVS repository and successfully built it. Upon executing a
"make test", my machine ended up in a kernel panic when the test being
executed was "test_largefile".

My configuration is:
    Powerbook G3
    128M RAM
    Mac OS X DP4

I guess my next step is to log a bug with Apple.




-- 
http://www.python.org/mailman/listinfo/python-list



From fdrake@beopen.com  Thu Aug 31 08:37:24 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 03:37:24 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <20000831092213.G12695@xs4all.nl>
References: <200008310702.AAA32318@slayer.i.sourceforge.net>
 <14766.1197.987441.118202@cj42289-a.reston1.va.home.com>
 <20000831092213.G12695@xs4all.nl>
Message-ID: <14766.2868.120933.306616@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Definitely ! Now if only there was a permanent way to add -Wall.... hmm...
 > Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)

  I'd be happy with this.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From Moshe Zadka <moshez@math.huji.ac.il>  Thu Aug 31 08:45:19 2000
From: Moshe Zadka <moshez@math.huji.ac.il> (Moshe Zadka)
Date: Thu, 31 Aug 2000 10:45:19 +0300 (IDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects
 abstract.c,2.49,2.50
In-Reply-To: <14766.2868.120933.306616@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008311045010.20952-100000@sundial>

On Thu, 31 Aug 2000, Fred L. Drake, Jr. wrote:

> 
> Thomas Wouters writes:
>  > Definitely ! Now if only there was a permanent way to add -Wall.... hmm...
>  > Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)
> 
>   I'd be happy with this.

For 2.1, I suggest going for -Werror too.
--
Moshe Zadka <moshez@math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez



From thomas@xs4all.net  Thu Aug 31 09:06:01 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 10:06:01 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <Pine.GSO.4.10.10008311045010.20952-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 31, 2000 at 10:45:19AM +0300
References: <14766.2868.120933.306616@cj42289-a.reston1.va.home.com> <Pine.GSO.4.10.10008311045010.20952-100000@sundial>
Message-ID: <20000831100601.H12695@xs4all.nl>

On Thu, Aug 31, 2000 at 10:45:19AM +0300, Moshe Zadka wrote:
> > Thomas Wouters writes:
> >  > Definitely ! Now if only there was a permanent way to add -Wall.... hmm...
> >  > Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)

> For 2.1, I suggest going for -Werror too.

No, don't think so. -Werror is severe: it would cause compile-failures on
systems not quite the same as ours. For instance, when using
Linux-2.4.0-test-kernels (bleeding edge ;) I consistently get a warning
about a redefine in <sys/resource.h>. That isn't Python's fault, and we
can't do anything about it, but with -Werror it would cause
compile-failures. The warning is annoying, but not fatal.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From bwarsaw@beopen.com  Thu Aug 31 11:47:34 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 06:47:34 -0400 (EDT)
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
References: <39AE07FF.478F413@per.dem.csiro.au>
Message-ID: <14766.14278.609327.610929@anthem.concentric.net>

>>>>> "MF" == Mark Favas <m.favas@per.dem.csiro.au> writes:

    MF> This is because the magic number is read in by the code in
    MF> Lib/gettext.py as FFFFFFFF950412DE (hex) (using unpack('<i',
    MF> buf[:4])[0]), and checked against LE_MAGIC (defined as
    MF> 950412DE) and BE_MAGIC (calculated as FFFFFFFFDE120495 using
    MF> struct.unpack('>i',struct.pack('<i', LE_MAGIC))[0])

I was trying to be too clever.  Just replace the BE_MAGIC value with
0xde120495, as in the included patch.

    MF> Replacing the "i" in the code that generates BE_MAGIC and
    MF> reads in "magic" by "I" makes the test work for me, but
    MF> there's other uses of "i" and "ii" when the rest of the .mo
    MF> file is processed that I'm unsure about with different inputs.

Should be fine, I think.  With < and > leading characters, those
format strings should select `standard' sizes:

    Standard size and alignment are as follows: no alignment is
    required for any type (so you have to use pad bytes); short is 2
    bytes; int and long are 4 bytes. float and double are 32-bit and
    64-bit IEEE floating point numbers, respectively.
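(Editorial aside: a quick illustration of the signedness issue, assuming
current struct module semantics:)

```python
import struct

LE_MAGIC = 0x950412de

# Read back as a *signed* 32-bit int ('<i'): the high bit makes the
# result negative, so it can never compare equal to the positive
# literal 0x950412de.
signed = struct.unpack('<i', struct.pack('<I', LE_MAGIC))[0]
assert signed == -1794895138

# Read back as an *unsigned* 32-bit int ('<I'): round-trips cleanly.
unsigned = struct.unpack('<I', struct.pack('<I', LE_MAGIC))[0]
assert unsigned == LE_MAGIC
```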

Please run the test again with this patch and let me know.
-Barry

Index: gettext.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/gettext.py,v
retrieving revision 1.4
diff -u -r1.4 gettext.py
--- gettext.py	2000/08/30 03:29:58	1.4
+++ gettext.py	2000/08/31 10:40:41
@@ -125,7 +125,7 @@
 class GNUTranslations(NullTranslations):
     # Magic number of .mo files
     LE_MAGIC = 0x950412de
-    BE_MAGIC = struct.unpack('>i', struct.pack('<i', LE_MAGIC))[0]
+    BE_MAGIC = 0xde120495
 
     def _parse(self, fp):
         """Override this method to support alternative .mo formats."""



From mal@lemburg.com  Thu Aug 31 13:33:28 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 14:33:28 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
Message-ID: <39AE5098.36746F4B@lemburg.com>

"Martin v. Loewis" wrote:
> 
> > So the check would look something like this:
> >
> > if (tstate->recursion_depth >= 50 &&
> >     tstate->recursion_depth%10 == 0 &&
> >     PyOS_CheckStack()) {
> >                 PyErr_SetString(PyExc_MemoryError, "Stack overflow");
> >                 return NULL;
> >         }
> 
> That sounds like a good solution to me. A recursion depth of 50 should
> be guaranteed on most systems supported by Python.

Jeremy: Could get at least this patch into 2.0b1 ?
 
> > I'm not exactly sure how large the safety margin is with
> > Martin's patch, but this seems a good idea.
> 
> I chose 3% of the rlimit, which must accommodate the space above the
> known start of stack plus a single page. That number was chosen
> arbitrarily; on my Linux system, the stack limit is 8MB, so 3% gives
> 200k. Given the maximum limitation of environment pages and argv
> pages, I felt that this is safe enough. OTOH, if you've used more than
> 7MB of stack, it is likely that the last 200k won't help, either.

Looks like I don't have any limits set on my dev-machine...
Linux has no problems offering me 3GB of (virtual) stack space
even though it only has 64MB real memory and 200MB swap
space available ;-)

I guess the proposed user settable recursion depth limit is the
best way to go. Testing for the right limit is rather easy by
doing some trial and error processing using Python.

At least for my Linux installation a limit of 9000 seems
reasonable. Perhaps everybody on the list could do a quick
check on their platform ?

Here's a sample script:

i = 0
def foo(x):
    global i
    print i
    i = i + 1
    foo(x)

foo(None)
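(Editorial aside: a non-crashing variant of the probe; note it measures the
interpreter's own recursion check rather than the raw C stack, so treat it
only as an approximation of the experiment above:)

```python
import sys

def probe_depth():
    # Count nested Python calls until the interpreter's recursion
    # check fires (RecursionError subclasses RuntimeError).
    depth = [0]
    def foo():
        depth[0] += 1
        foo()
    try:
        foo()
    except RuntimeError:
        pass
    return depth[0]

d = probe_depth()
assert 0 < d <= sys.getrecursionlimit()
```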

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From gstein@lyra.org  Thu Aug 31 13:48:04 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 05:48:04 -0700
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AE5098.36746F4B@lemburg.com>; from mal@lemburg.com on Thu, Aug 31, 2000 at 02:33:28PM +0200
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com>
Message-ID: <20000831054804.A3278@lyra.org>

On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:
>...
> At least for my Linux installation a limit of 9000 seems
> reasonable. Perhaps everybody on the list could do a quick
> check on their platform ?
> 
> Here's a sample script:
> 
> i = 0
> def foo(x):
>     global i
>     print i
>     i = i + 1
>     foo(x)
> 
> foo(None)

10k iterations on my linux box

-g

-- 
Greg Stein, http://www.lyra.org/


From thomas@xs4all.net  Thu Aug 31 13:46:45 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 14:46:45 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AE5098.36746F4B@lemburg.com>; from mal@lemburg.com on Thu, Aug 31, 2000 at 02:33:28PM +0200
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com>
Message-ID: <20000831144645.I12695@xs4all.nl>

On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:

> At least for my Linux installation a limit of 9000 seems
> reasonable. Perhaps everybody on the list could do a quick
> check on their platform ?

On BSDI, which has a 2Mbyte default stack limit (but soft limit: users can
set it higher even without help from root, and much higher with help) I can
go as high as 8k recursions of the simple python-function type, and 5k
recursions of one involving a C call (like a recursive __str__()).

I don't remember ever seeing a system with less than 2Mbyte stack, except
for seriously memory-deprived systems. I do know that the 2Mbyte stack limit
on BSDI is enough to cause 'pine' (sucky but still popular mail program) much
distress when handling large mailboxes, so we usually set the limit higher
anyway.

Mutt-forever-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mal@lemburg.com  Thu Aug 31 14:32:41 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 15:32:41 +0200
Subject: [Python-Dev] Pragmas: Just say "No!"
References: <200008310257.OAA17332@s454.cosc.canterbury.ac.nz> <005301c01313$a66a3ae0$8119fea9@neil>
Message-ID: <39AE5E79.C2C91730@lemburg.com>

Neil Hodgson wrote:
> 
> Greg Ewing:
> > Greg Wilson <gvwilson@nevex.com>:
> >
> > > Pragmas are a way to embed programs for the
> > > parser in the file being parsed.
> >
> > I hope the BDFL has the good sense to run screaming from
> > anything that has the word "pragma" in it. As this discussion
> > demonstrates, it's far too fuzzy and open-ended a concept --
> > nobody can agree on what sort of thing a pragma is supposed
> > to be.
> 
>    It is a good idea, however, to claim a piece of syntactic turf as early
> as possible so that if/when it is needed, it is unlikely to cause problems
> with previously written code. My preference would be to introduce a reserved
> word 'directive' for future expansion here. 'pragma' has connotations of
> 'ignorable compiler hint' but most of the proposed compiler directives will
> cause incorrect behaviour if ignored.

The objectives behind the "pragma" statement should be clear
by now. If it's just the word itself that's bugging you, then
we can have a separate discussion on that. Perhaps "assume"
or "declare" would be better candidates.

We need some kind of logic of this sort in Python. Otherwise
important features like source code encoding will not be
possible.

As I said before, I'm not advocating adding compiler
programs to Python, just a simple way of passing information
to the compiler.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From nascheme@enme.ucalgary.ca  Thu Aug 31 14:53:21 2000
From: nascheme@enme.ucalgary.ca (Neil Schemenauer)
Date: Thu, 31 Aug 2000 07:53:21 -0600
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.45843.401319.187156@bitdiddle.concentric.net>; from Jeremy Hylton on Wed, Aug 30, 2000 at 09:21:23PM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net>
Message-ID: <20000831075321.A3099@keymaster.enme.ucalgary.ca>

On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
> I would guess that pickle makes attacks easier: It has more features,
> e.g. creating instances of arbitrary classes (provided that the attacker
> knows what classes are available).

marshal can handle code objects.  That seems pretty scary to me.  I
would vote for not including these insecure classes in the standard
distribution.  Software that depends on them should include its own
version of Cookie.py or be fixed.

  Neil


From mal@lemburg.com  Thu Aug 31 14:58:55 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 15:58:55 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <20000831144645.I12695@xs4all.nl>
Message-ID: <39AE649F.A0E818C1@lemburg.com>

Thomas Wouters wrote:
> 
> On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:
> 
> > At least for my Linux installation a limit of 9000 seems
> > reasonable. Perhaps everybody on the list could do a quick
> > check on their platform ?
> 
> On BSDI, which has a 2Mbyte default stack limit (but soft limit: users can
> set it higher even without help from root, and much higher with help) I can
> go as high as 8k recursions of the simple python-function type, and 5k
> recursions of one involving a C call (like a recursive __str__()).

Ok, this gives us a 5000 limit as default... anyone with less? ;-)

(Note that with the limit being user settable making a lower limit
 the default shouldn't hurt anyone.)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Thu Aug 31 15:06:23 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 16:06:23 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AE649F.A0E818C1@lemburg.com>; from mal@lemburg.com on Thu, Aug 31, 2000 at 03:58:55PM +0200
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <20000831144645.I12695@xs4all.nl> <39AE649F.A0E818C1@lemburg.com>
Message-ID: <20000831160623.J12695@xs4all.nl>

On Thu, Aug 31, 2000 at 03:58:55PM +0200, M.-A. Lemburg wrote:
> Thomas Wouters wrote:

> > On BSDI, which has a 2Mbyte default stack limit (but soft limit: users can
> > set it higher even without help from root, and much higher with help) I can
> > go as high as 8k recursions of the simple python-function type, and 5k
> > recursions of one involving a C call (like a recursive __str__()).

> Ok, this give us a 5000 limit as default... anyone with less ;-)

I would suggest going for something a lot less than 5000, tho, to account
for 'large' frames. Say, 2000 or so, max.

> (Note that with the limit being user settable making a lower limit
>  the default shouldn't hurt anyone.)

Except that it requires yet another step ... ;P It shouldn't hurt anyone if
it isn't *too* low. However, I have no clue how high it would have to be
for, say, Zope, or any of the other 'large' Python apps.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From jack@oratrix.nl  Thu Aug 31 15:20:45 2000
From: jack@oratrix.nl (Jack Jansen)
Date: Thu, 31 Aug 2000 16:20:45 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <20000831142046.20C21303181@snelboot.oratrix.nl>

I'm confused now: how is this counting-stack-limit different from the maximum 
recursion depth we already have?

The whole point of PyOS_StackCheck is to do an _actual_ check of whether 
there's space left for the stack so we can hopefully have an orderly cleanup 
before we hit the hard limit.

If computing it is too difficult because getrlimit isn't available or doesn't 
do what we want we should probe it, as the windows code does or my example 
code posted yesterday does. Note that the testing only has to be done every 
*first* time the stack goes past a certain boundary: the probing can remember 
the deepest currently known valid stack location, and everything that is 
shallower is okay from that point on (making PyOS_StackCheck a subroutine call 
and a compare in the normal case).
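
The probing idea can be sketched at the Python level (a purely illustrative,
modern-Python analogue: it uses today's RecursionError and
sys.setrecursionlimit rather than a C-level stack check, and the function
name is hypothetical):

```python
import sys

def probe_max_depth(limit=10000):
    """Probe how deep recursion can go, remembering the deepest depth
    known to be safe -- a Python-level analogue of 'remember the deepest
    currently known valid stack location'."""
    sys.setrecursionlimit(limit)
    deepest = 0

    def dive(depth):
        nonlocal deepest
        if depth > deepest:
            deepest = depth   # deeper than anything known-safe so far
        dive(depth + 1)

    try:
        dive(1)
    except RecursionError:
        pass                  # hit the interpreter's limit; deepest is safe
    return deepest
```

Once the deepest safe depth is known, any shallower call needs only a
comparison, which is the cheap common case Jack describes.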
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From mal@lemburg.com  Thu Aug 31 15:44:09 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 16:44:09 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <20000831142046.20C21303181@snelboot.oratrix.nl>
Message-ID: <39AE6F39.2DAEB3E9@lemburg.com>

Jack Jansen wrote:
> 
> I'm confused now: how is this counting-stack-limit different from the maximum
> recursion depth we already have?
> 
> The whole point of PyOS_StackCheck is to do an _actual_ check of whether
> there's space left for the stack so we can hopefully have an orderly cleanup
> before we hit the hard limit.
> 
> If computing it is too difficult because getrlimit isn't available or doesn't
> do what we want we should probe it, as the windows code does or my example
> code posted yesterday does. Note that the testing only has to be done every
> *first* time the stack goes past a certain boundary: the probing can remember
> the deepest currently known valid stack location, and everything that is
> shallower is okay from that point on (making PyOS_StackCheck a subroutine call
> and a compare in the normal case).

getrlimit() will not always work: in case there is no limit
imposed on the stack, it returns huge numbers (e.g. 2GB)
which make any valid assumption impossible.

Note that you can't probe for this, since you cannot be sure whether
the OS overcommits memory or not. Linux does this heavily, and
I still haven't found out why my small C program happily consumes
20MB of memory without a segfault at recursion level 60000 while Python
already segfaults at recursion level 9xxx with a memory footprint
of around 5MB.

So, at least for Linux, the only safe way seems to make the
limit a user option and to set a reasonably low default.
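
For reference, the getrlimit() behaviour described above can be observed
from Python itself via the Unix-only resource module (a later-Python
sketch, not part of the proposal):

```python
import resource

# Query the stack-size limits, as getrlimit(2) reports them.  When no
# limit is imposed, the value is RLIM_INFINITY -- exactly the case where
# the number tells us nothing useful about a safe recursion depth.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

if soft == resource.RLIM_INFINITY:
    print("stack: unlimited -- no safe recursion depth derivable")
else:
    print("stack soft limit: %d bytes" % soft)
```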

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From cgw@fnal.gov  Thu Aug 31 15:50:01 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 09:50:01 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000831142046.20C21303181@snelboot.oratrix.nl>
References: <20000831142046.20C21303181@snelboot.oratrix.nl>
Message-ID: <14766.28825.35228.221474@buffalo.fnal.gov>

Jack Jansen writes:
 > I'm confused now: how is this counting-stack-limit different from
 > the maximum recursion depth we already have?

Because on Unix the maximum allowable stack space is not fixed (it can
be controlled by "ulimit" or "setrlimit"), so a hard-coded maximum
recursion depth is not appropriate.

 > The whole point of PyOS_StackCheck is to do an _actual_ check of
 > whether there's space left for the stack so we can hopefully have
 > an orderly cleanup before we hit the hard limit.

 > If computing it is too difficult because getrlimit isn't available
 > or doesn't do what we want we should probe it

getrlimit is available and works fine.  It's getrusage that is
problematic.

I seriously think that instead of trying to slip this in `under the
wire' we should defer it until after 2.0b1 and try to do it right for the
next 2.0.x release.  Getting this stuff right on Unix, portably, is tricky.
There may be a lot of different tricks required to make this work
right on different flavors of Unix.





From guido@beopen.com  Thu Aug 31 16:58:49 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 10:58:49 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 14:33:28 +0200."
 <39AE5098.36746F4B@lemburg.com>
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
 <39AE5098.36746F4B@lemburg.com>
Message-ID: <200008311558.KAA15649@cj20424-a.reston1.va.home.com>

> Here's a sample script:
> 
> i = 0
> def foo(x):
>     global i
>     print i
>     i = i + 1
>     foo(x)
> 
> foo(None)

Please try this again on various platforms with this version:

    i = 0
    class C:
      def __getattr__(self, name):
	  global i
	  print i
	  i += 1
	  return self.name # common beginners' mistake

    C() # This tries to get __init__, triggering the recursion

I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
have no idea what units).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Thu Aug 31 17:07:16 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 11:07:16 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 16:20:45 +0200."
 <20000831142046.20C21303181@snelboot.oratrix.nl>
References: <20000831142046.20C21303181@snelboot.oratrix.nl>
Message-ID: <200008311607.LAA15693@cj20424-a.reston1.va.home.com>

> I'm confused now: how is this counting-stack-limit different from
> the maximum recursion depth we already have?
> 
> The whole point of PyOS_StackCheck is to do an _actual_ check of
> whether there's space left for the stack so we can hopefully have an
> orderly cleanup before we hit the hard limit.
> 
> If computing it is too difficult because getrlimit isn't available
> or doesn't do what we want we should probe it, as the windows code
> does or my example code posted yesterday does. Note that the testing
> only has to be done every *first* time the stack goes past a certain
> boundary: the probing can remember the deepest currently known valid
> stack location, and everything that is shallower is okay from that
> point on (making PyOS_StackCheck a subroutine call and a compare in
> the normal case).

The point is that there's no portable way to do PyOS_CheckStack().
Not even for Unix.  So we use a double strategy:

(1) Use a user-settable recursion limit with a conservative default.
This can be done portably.  It is set low by default so that under
reasonable assumptions it will stop runaway recursion long before the
stack is actually exhausted.  Note that Emacs Lisp has this feature
and uses a default of 500.  I would set it to 1000 in Python.  The
occasional user who is fond of deep recursion can set it higher and
tweak his ulimit -s to provide the actual stack space if necessary.

(2) Where implementable, use actual stack probing with
PyOS_CheckStack().  This provides an additional safeguard for e.g. (1)
extensions allocating lots of C stack space during recursion; (2)
users who set the recursion limit too high; (3) long-running server
processes.
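
Strategy (1) is what sys.getrecursionlimit()/sys.setrecursionlimit()
provide (a modern sketch; the default did end up at 1000, and today's
interpreter raises RecursionError at the limit):

```python
import sys

def countdown(n):
    if n:
        countdown(n - 1)

# A conservative, user-settable limit stops runaway recursion with a
# catchable exception long before the C stack is actually exhausted.
sys.setrecursionlimit(200)
try:
    countdown(10**6)
except RecursionError:
    print("runaway recursion stopped at the user-set limit")
```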

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw@fnal.gov  Thu Aug 31 16:14:02 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 10:14:02 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
 <39ACDA4F.3EF72655@lemburg.com>
 <000d01c0126c$dfe700c0$766940d5@hagrid>
 <39ACE51F.3AEC75AB@lemburg.com>
 <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
 <39AE5098.36746F4B@lemburg.com>
 <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
Message-ID: <14766.30266.156124.961607@buffalo.fnal.gov>

Guido van Rossum writes:
 > 
 > I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
 > have no idea what units).

That would be Kb.  But -c controls core-file size, not stack.  
You wanted -s.  ulimit -a shows all the limits.


From guido@beopen.com  Thu Aug 31 17:23:21 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 11:23:21 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 10:14:02 EST."
 <14766.30266.156124.961607@buffalo.fnal.gov>
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
 <14766.30266.156124.961607@buffalo.fnal.gov>
Message-ID: <200008311623.LAA15877@cj20424-a.reston1.va.home.com>

> That would be Kb.  But -c controls core-file size, not stack.  
> You wanted -s.  ulimit -a shows all the limits.

Typo.  I did use ulimit -s.  ulimit -a confirms that it's 8192 kbytes.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From mal@lemburg.com  Thu Aug 31 16:24:58 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 17:24:58 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
 <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
Message-ID: <39AE78CA.809E660A@lemburg.com>

Guido van Rossum wrote:
> 
> > Here's a sample script:
> >
> > i = 0
> > def foo(x):
> >     global i
> >     print i
> >     i = i + 1
> >     foo(x)
> >
> > foo(None)
> 
> Please try this again on various platforms with this version:
> 
>     i = 0
>     class C:
>       def __getattr__(self, name):
>           global i
>           print i
>           i += 1
>           return self.name # common beginners' mistake
> 
>     C() # This tries to get __init__, triggering the recursion
> 
> I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
> have no idea what units).

8192 refers to kB, i.e. 8 MB.

I get 6053 on SuSE Linux 6.2 without resource stack limit set.

Strangely enough, if I put the above inside a script, the class
isn't instantiated. The recursion only starts when I manually
trigger C() in interactive mode or do something like
'print C()'. Is this a bug or a feature?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/


From Vladimir.Marangozov@inrialpes.fr  Thu Aug 31 16:32:29 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Thu, 31 Aug 2000 17:32:29 +0200 (CEST)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311558.KAA15649@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 31, 2000 10:58:49 AM
Message-ID: <200008311532.RAA04028@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> Please try this again on various platforms with this version:
> 
>     i = 0
>     class C:
>       def __getattr__(self, name):
> 	  global i
> 	  print i
> 	  i += 1
> 	  return self.name # common beginners' mistake
> 
>     C() # This tries to get __init__, triggering the recursion
> 

            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Are you sure?

Although strange, this is not the case and instantiating C succeeds
(try "python rec.py", where rec.py is the above code).

A closer look at the code shows that Instance_New goes on calling
getattr2 which calls class_lookup, which returns NULL, etc, etc,
but the presence of __getattr__ is not checked in this path.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From trentm@ActiveState.com  Thu Aug 31 16:28:21 2000
From: trentm@ActiveState.com (Trent Mick)
Date: Thu, 31 Aug 2000 08:28:21 -0700
Subject: [Python-Dev] FW: test_largefile cause kernel panic in Mac OS X DP4
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBJHDAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 31, 2000 at 03:24:35AM -0400
References: <LNBBLJKPBEHFEDALKOLCEEBJHDAA.tim_one@email.msn.com>
Message-ID: <20000831082821.B3569@ActiveState.com>

Tim (or anyone with python-list logs), can you forward this to Sachin (who
reported the bug).

On Thu, Aug 31, 2000 at 03:24:35AM -0400, Tim Peters wrote:
> 
> 
> -----Original Message-----
> From: python-list-admin@python.org
> [mailto:python-list-admin@python.org]On Behalf Of Sachin Desai
> Sent: Thursday, August 31, 2000 2:49 AM
> To: python-list@python.org
> Subject: test_largefile cause kernel panic in Mac OS X DP4
> 
> 
> 
> Has anyone experienced this. I updated my version of python to the latest
> source from the CVS repository and successfully built it. Upon executing a
> "make test", my machine ended up in a kernel panic when the test being
> executed was "test_largefile".
> 
> My configuration is:
>     Powerbook G3
>     128M RAM
>     Mac OS X DP4
> 
> I guess my next step is to log a bug with Apple.
> 

I added this test module. It would be nice to have a little bit more
information seeing as I have never played on a Mac (OS X acts like BSD under
the hood, right?)

1. Can you tell me, Sachin, *where* in test_largefile it is failing? The file
   is python/dist/src/Lib/test/test_largefile.py. Try running it directly:
   > python Lib/test/test_largefile.py
2. If it dies before it produces any output, can you tell me if it died on
   line 18:
      f.seek(2147483649L)
   which, I suppose, is possible. Maybe this is not a good way to determine if
   the system has largefile support.


Jeremy, Tim, Guido, 
As with the NetBSD compile bug, I won't have time to fix this by the freeze
today unless I get info from the people who encountered these bugs and
it is *really* easy to fix.


Trent
    

-- 
Trent Mick
TrentM@ActiveState.com


From guido@beopen.com  Thu Aug 31 17:30:48 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 11:30:48 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 17:24:58 +0200."
 <39AE78CA.809E660A@lemburg.com>
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
 <39AE78CA.809E660A@lemburg.com>
Message-ID: <200008311630.LAA16022@cj20424-a.reston1.va.home.com>

> > Please try this again on various platforms with this version:
> > 
> >     i = 0
> >     class C:
> >       def __getattr__(self, name):
> >           global i
> >           print i
> >           i += 1
> >           return self.name # common beginners' mistake
> > 
> >     C() # This tries to get __init__, triggering the recursion
> > 
> > I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
> > have no idea what units).
> 
> 8192 refers to kB, i.e. 8 MB.
> 
> I get 6053 on SuSE Linux 6.2 without resource stack limit set.
> 
> Strange enough if I put the above inside a script, the class
> isn't instantiated. The recursion only starts when I manually
> trigger C() in interactive mode or do something like
> 'print C()'. Is this a bug or a feature ?

Aha.  I was wrong -- it's happening in repr(), not during
construction.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From cgw@fnal.gov  Thu Aug 31 16:50:38 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 10:50:38 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311630.LAA16022@cj20424-a.reston1.va.home.com>
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
 <39ACDA4F.3EF72655@lemburg.com>
 <000d01c0126c$dfe700c0$766940d5@hagrid>
 <39ACE51F.3AEC75AB@lemburg.com>
 <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
 <39AE5098.36746F4B@lemburg.com>
 <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
 <39AE78CA.809E660A@lemburg.com>
 <200008311630.LAA16022@cj20424-a.reston1.va.home.com>
Message-ID: <14766.32462.663536.177308@buffalo.fnal.gov>

Guido van Rossum writes:
 > > > Please try this again on various platforms with this version:
 > > > 
 > > >     i = 0
 > > >     class C:
 > > >       def __getattr__(self, name):
 > > >           global i
 > > >           print i
 > > >           i += 1
 > > >           return self.name # common beginners' mistake
 > > > 
 > > >     C() # This tries to get __init__, triggering the recursion
 > > > 
 > > > I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
 > > > have no idea what units).

I get a core dump after 4824 iterations on a not-quite-Red-Hat box,
with an 8MB stack limit.

What about the idea that was suggested to use a sigsegv catcher?  Or
reading info from /proc?  (Yes, there is a lot of overhead here, but if
we do it infrequently enough we might just get away with it.  It could
be a configure-time option, disabled by default.)  I still think there
are even more tricks possible here, and we should pursue this after
2.0b1.  I volunteer to help work on it ;-)





From bwarsaw@beopen.com  Thu Aug 31 16:53:19 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 11:53:19 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
 <39ACDA4F.3EF72655@lemburg.com>
 <000d01c0126c$dfe700c0$766940d5@hagrid>
 <39ACE51F.3AEC75AB@lemburg.com>
 <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
 <39AE5098.36746F4B@lemburg.com>
 <20000831054804.A3278@lyra.org>
Message-ID: <14766.32623.705548.109625@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein@lyra.org> writes:

    GS> 10k iterations on my linux box

9143 on mine.

I'll note that Emacs has a similar concept, embodied in
max-lisp-eval-depth.  The documentation for this variable clearly
states that its purpose is to avoid infinite recursions that would
overflow the C stack and crash Emacs.  On my XEmacs 21.1.10,
max-lisp-eval-depth is 500.  Lisp tends to be more recursive than
Python, but it's also possible that there are fewer ways to `hide'
lots of C stack between Lisp function calls.

So random.choice(range(500, 9143)) seems about right to me <wink>.

-Barry


From jeremy@beopen.com  Thu Aug 31 16:56:20 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 11:56:20 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000831075321.A3099@keymaster.enme.ucalgary.ca>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
 <14765.45843.401319.187156@bitdiddle.concentric.net>
 <20000831075321.A3099@keymaster.enme.ucalgary.ca>
Message-ID: <14766.32804.933498.914265@bitdiddle.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme@enme.ucalgary.ca> writes:

  NS> On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
  >> I would guess that pickle makes attacks easier: It has more
  >> features, e.g. creating instances of arbitrary classes (provided
  >> that the attacker knows what classes are available).

  NS> marshal can handle code objects.  That seems pretty scary to me.
  NS> I would vote for not including these unsecure classes in the
  NS> standard distribution.  Software that expects them should
  NS> include their own version of Cookie.py or be fixed.

If a server is going to use cookies that contain marshal or pickle
data, they ought to be encrypted or protected by a secure hash.
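
A sketch of the secure-hash approach in modern Python (the function names
and key are hypothetical, and the hmac/hashlib APIs shown postdate this
thread):

```python
import base64
import hashlib
import hmac
import pickle

SECRET = b"keep-this-on-the-server"   # hypothetical server-side key

def dump_cookie(obj):
    """Pickle obj and append an HMAC so tampering is detectable."""
    payload = base64.b64encode(pickle.dumps(obj))
    mac = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + mac

def load_cookie(value):
    """Verify the HMAC before unpickling -- never unpickle unverified data."""
    payload, mac = value.rsplit(b".", 1)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("cookie failed authentication")
    return pickle.loads(base64.b64decode(payload))
```

Note the MAC only detects tampering; a client who learns the key can still
forge arbitrary pickles, which is why the key must stay server-side.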

Jeremy


From effbot@telia.com  Thu Aug 31 18:47:45 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 19:47:45 +0200
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
Message-ID: <008701c01373$95ced1e0$766940d5@hagrid>

iirc, I've been bitten by this a couple of times too
(before I switched to asyncore...)

any special reason why the input socket is unbuffered
by default?

</F>

----- Original Message ----- 
From: "Andy Bond" <bond@dstc.edu.au>
Newsgroups: comp.lang.python
Sent: Thursday, August 31, 2000 8:41 AM
Subject: SocketServer and makefile()


> I've been working with BaseHTTPServer which in turn uses SocketServer to
> write a little web server.  It is used to accept PUT requests of 30MB chunks
> of data.  I was having a problem where data was flowing at the rate of
> something like 64K per second over a 100MB network.  Weird.  Further tracing
> showed that the rfile variable from SocketServer (used to suck in data to
> the http server) was created using makefile on the original socket
> descriptor.  It was created with an option of zero for buffering (see
> SocketServer.py) which means unbuffered.
> 
> Now some separate testing with socket.py showed that I could whip a 30MB
> file across using plain sockets and send/recv but if I made the receiver use
> makefile on the socket and then read, it slowed down to my 1 sec per 64K.
> If I specify a buffer (something big but less than 64K ... IP packet size?)
> then I am back in speedy territory.  The unbuffered mode seems almost like
> it is sending the data 1 char at a time AND this is the default mode used in
> SocketServer and subsequently BaseHTTPServer ...
> 
> This is on solaris 7, python 1.5.2.  Anyone else found this to be a problem
> or am I doing something wrong?
> 
> andy
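
The effect is easy to reproduce; here is a small modern-Python sketch of
the buffered case (in Python 3, makefile() takes a buffering argument just
like open(); a socketpair stands in for the client/server connection):

```python
import socket

# A connected pair of sockets stands in for the HTTP connection.
reader, writer = socket.socketpair()

writer.sendall(b"x" * 16384)
writer.close()

# A buffered file object pulls data from the socket in large recv()
# chunks; buffering=0 (what SocketServer used for rfile) does a recv()
# per read call, which is where the 64K/sec throughput came from.
rfile = reader.makefile("rb", buffering=65536)
data = rfile.read()
print(len(data))    # 16384
```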



From jeremy@beopen.com  Thu Aug 31 19:34:23 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 14:34:23 -0400 (EDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
Message-ID: <14766.42287.968420.289804@bitdiddle.concentric.net>

Is the test for linuxaudiodev supposed to play the Spanish Inquisition
.au file?  I just realized that the test does absolutely nothing on my
machine.  (I guess I need to get my ears to raise an exception if they
don't hear anything.)

I can play the .au file and I use a variety of other audio tools
regularly.  Is Peter still maintaining it or can someone else offer
some assistance?

Jeremy


From guido@beopen.com  Thu Aug 31 20:57:17 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 14:57:17 -0500
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: Your message of "Thu, 31 Aug 2000 14:34:23 -0400."
 <14766.42287.968420.289804@bitdiddle.concentric.net>
References: <14766.42287.968420.289804@bitdiddle.concentric.net>
Message-ID: <200008311957.OAA22338@cj20424-a.reston1.va.home.com>

> Is the test for linuxaudiodev supposed to play the Spanish Inquisition
> .au file?  I just realized that the test does absolutely nothing on my
> machine.  (I guess I need to get my ears to raise an exception if they
> don't hear anything.)

Correct.

> I can play the .au file and I use a variety of other audio tools
> regularly.  Is Peter still maintaining it or can someone else offer
> some assistance?

Does your machine have a sound card & speakers?  Mine doesn't.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From cgw@fnal.gov  Thu Aug 31 20:04:15 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 14:04:15 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.42287.968420.289804@bitdiddle.concentric.net>
References: <14766.42287.968420.289804@bitdiddle.concentric.net>
Message-ID: <14766.44079.900005.766299@buffalo.fnal.gov>

The problem is that the test file is

audiotest.au: Sun/NeXT audio data: 8-bit ISDN u-law, mono, 8000 Hz

while the linuxaudiodev module seems to be (implicitly) expecting ".wav" format
(Microsoft RIFF).

If you open a .wav file and write it to the linuxaudiodev object, it works.

There is a function in linuxaudiodev to set the audio format - there
doesn't seem to be much documentation; the source has:

if (!PyArg_ParseTuple(args, "iiii:setparameters",
                          &rate, &ssize, &nchannels, &fmt))
        return NULL;
  
 and when I do

x = linuxaudiodev.open('w')
x.setparameters(8000, 1, 8, linuxaudiodev.AFMT_MU_LAW )

I get:
linuxaudiodev.error: (0, 'Error')

Also tried '1' for the sample size, thinking it might be in bytes.

The sample size really ought to be implicit in the format.  
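For what it's worth, the u-law expansion itself is simple; here is a
pure-Python sketch of the standard G.711 decode step (illustration only --
this is not what linuxaudiodev.c does, it hands the raw bytes to the driver):

```python
def ulaw2lin(byte):
    """Expand one 8-bit u-law sample to a signed 16-bit linear value
    (standard G.711 u-law decoding)."""
    byte = ~byte & 0xFF                 # u-law bytes are stored inverted
    sign = byte & 0x80
    exponent = (byte >> 4) & 0x07
    mantissa = byte & 0x0F
    sample = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -sample if sign else sample

print(ulaw2lin(0xFF))   # 0xFF is u-law silence -> 0
print(ulaw2lin(0x00))   # largest negative sample -> -32124
```

Note that the 8-bit u-law sample expands to a 16-bit value -- another way
the sample size is implicit in the format.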

The code in linuxaudiodev.c looks sort of dubious to me.  This model
is a little too simple for the variety of audio hardware and software
on Linux systems.  I have some homebrew audio stuff I've written which
I think works better, but it's nowhere near ready for distribution.
Maybe I'll clean it up and submit it for inclusion post-1.6

In the meanwhile, you could ship a .wav file for use on Linux (and
Windows?) machines.  (Windows doesn't usually like .au either)






From jeremy@beopen.com  Thu Aug 31 20:11:18 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 15:11:18 -0400 (EDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <200008311957.OAA22338@cj20424-a.reston1.va.home.com>
References: <14766.42287.968420.289804@bitdiddle.concentric.net>
 <200008311957.OAA22338@cj20424-a.reston1.va.home.com>
Message-ID: <14766.44502.812468.677142@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido@beopen.com> writes:

  >> I can play the .au file and I use a variety of other audio tools
  >> regularly.  Is Peter still maintaining it or can someone else
  >> offer some assistance?

  GvR> Does your machine have a sound card & speakers?  Mine doesn't.

Yes.  (I bought the Cambridge Soundworks speakers that were on my old
machine from CNRI.)

Jeremy


From gstein@lyra.org  Thu Aug 31 20:18:26 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 12:18:26 -0700
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: <008701c01373$95ced1e0$766940d5@hagrid>; from effbot@telia.com on Thu, Aug 31, 2000 at 07:47:45PM +0200
References: <008701c01373$95ced1e0$766940d5@hagrid>
Message-ID: <20000831121826.F11297@lyra.org>

I ran into this same problem on the client side.

The server does a makefile() so that it can do readline() to fetch the HTTP
request line and then the MIME headers. The *problem* is that if you do
something like:

    f = sock.makefile()
    line = f.readline()
    data = sock.recv(1000)

You're screwed if you have buffering enabled. "f" will read in a bunch of
data -- past the end of the line. That data now sits inside f's buffer and
is not available to the sock.recv() call.
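The effect is easy to reproduce with a socketpair (a sketch in modern-Python
terms; the timeout is just a stand-in for "recv() would hang forever"):

```python
import socket

a, b = socket.socketpair()
b.sendall(b"GET / HTTP/1.0\r\nleftover-data")

f = a.makefile("rb")         # buffered file object on top of the socket
line = f.readline()          # reads *past* the newline into f's buffer
print(line)                  # b'GET / HTTP/1.0\r\n'

a.settimeout(0.2)
timed_out = False
try:
    a.recv(100)              # "leftover-data" sits in f's buffer, so
except socket.timeout:       # nothing ever arrives at the socket level
    timed_out = True
print("recv timed out:", timed_out)
```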

If you forget about sock and just stick to f, then you'd be okay. But
SocketServer and/or BaseHTTPServer doesn't -- it uses both objects to do the
reading.

Solution? Don't use rfile for reading, but go for the socket itself. Or
revamp the two classes to forget about the socket once the files (wfile and
rfile) are created. The latter might not be possible, tho.

Dunno why the unbuffered reading would be slow. I'd think it would still
read large chunks at a time when you request it.

Cheers,
-g

On Thu, Aug 31, 2000 at 07:47:45PM +0200, Fredrik Lundh wrote:
> iirc, I've been bitten by this a couple of times too
> (before I switched to asyncore...)
> 
> any special reason why the input socket is unbuffered
> by default?
> 
> </F>
> 
> ----- Original Message ----- 
> From: "Andy Bond" <bond@dstc.edu.au>
> Newsgroups: comp.lang.python
> Sent: Thursday, August 31, 2000 8:41 AM
> Subject: SocketServer and makefile()
> 
> 
> > I've been working with BaseHTTPServer which in turn uses SocketServer to
> > write a little web server.  It is used to accept PUT requests of 30MB chunks
> > of data.  I was having a problem where data was flowing at the rate of
> > something like 64K per second over a 100MB network.  Weird.  Further tracing
> > showed that the rfile variable from SocketServer (used to suck in data to
> > the http server) was created using makefile on the original socket
> > descriptor.  It was created with an option of zero for buffering (see
> > SocketServer.py) which means unbuffered.
> > 
> > Now some separate testing with socket.py showed that I could whip a 30MB
> > file across using plain sockets and send/recv but if I made the receiver use
> > makefile on the socket and then read, it slowed down to my 1 sec per 64K.
> > If I specify a buffer (something big but less than 64K ... IP packet size?)
> > then I am back in speedy territory.  The unbuffered mode seems almost like
> > it is sending the data 1 char at a time AND this is the default mode used in
> > SocketServer and subsequently BaseHTTPServer ...
> > 
> > This is on solaris 7, python 1.5.2.  Anyone else found this to be a problem
> > or am I doing something wrong?
> > 
> > andy
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/


From cgw@fnal.gov  Thu Aug 31 20:11:30 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 14:11:30 -0500 (CDT)
Subject: Silly correction to: [Python-Dev] linuxaudiodev test does nothing
Message-ID: <14766.44514.531109.440309@buffalo.fnal.gov>

I wrote:

 >  x.setparameters(8000, 1, 8, linuxaudiodev.AFMT_MU_LAW )

where I meant:

 > x.setparameters(8000, 8, 1, linuxaudiodev.AFMT_MU_LAW )

In fact I tried just about every combination of arguments, closing and
reopening the device each time, but still no go.

I also wrote:

 > Maybe I'll clean it up and submit it for inclusion post-1.6

where of course I meant to say post-2.0b1




From Fredrik Lundh" <effbot@telia.com  Thu Aug 31 20:46:54 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 21:46:54 +0200
Subject: [Python-Dev] one last SRE headache
Message-ID: <023301c01384$39b2bdc0$766940d5@hagrid>

can anyone tell me how Perl treats this pattern?

    r'((((((((((a))))))))))\41'

in SRE, this is currently a couple of nested groups, surrounding
a single literal, followed by a back reference to the fourth group,
followed by a literal "1" (since there are fewer than 41 groups)

in PRE, it turns out that this is a syntax error; there's no group 41.

however, these tests appear in the test suite under the section "all
tests from perl", but they're commented out:

# Python does not have the same rules for \\41 so this is a syntax error
#    ('((((((((((a))))))))))\\41', 'aa', FAIL),
#    ('((((((((((a))))))))))\\41', 'a!', SUCCEED, 'found', 'a!'),

if I understand this correctly, Perl treats this as an *octal* escape
(chr(041) == "!").

now, should I emulate PRE, Perl, or leave it as it is...

</F>

PS. in case anyone wondered why I haven't seen this before, it's
because I just discovered that the test suite masks syntax errors
under some circumstances...



From guido@beopen.com  Thu Aug 31 21:48:16 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 15:48:16 -0500
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: Your message of "Thu, 31 Aug 2000 12:18:26 MST."
 <20000831121826.F11297@lyra.org>
References: <008701c01373$95ced1e0$766940d5@hagrid>
 <20000831121826.F11297@lyra.org>
Message-ID: <200008312048.PAA23324@cj20424-a.reston1.va.home.com>

> I ran into this same problem on the client side.
> 
> The server does a makefile() so that it can do readline() to fetch the HTTP
> request line and then the MIME headers. The *problem* is that if you do
> something like:
> 
>     f = sock.makefile()
>     line = f.readline()
>     data = sock.recv(1000)
> 
> You're screwed if you have buffering enabled. "f" will read in a bunch of
> data -- past the end of the line. That data now sits inside f's buffer and
> is not available to the sock.recv() call.
> 
> If you forget about sock and just stick to f, then you'd be okay. But
> SocketServer and/or BaseHTTPServer doesn't -- it uses both objects to do the
> reading.
> 
> Solution? Don't use rfile for reading, but go for the socket itself. Or
> revamp the two classes to forget about the socket once the files (wfile and
> rfile) are created. The latter might not be possible, tho.

I was about to say that you have it backwards, and that you should
only use rfile & wfile, when I realized that CGIHTTPServer.py needs
this!  The subprocess needs to be able to read the rest of the socket,
for POST requests.  So you're right.

Solution?  The buffer size should be an instance or class variable.
Then SocketServer can set it to buffered by default, and CGIHTTPServer
can set it to unbuffered.

> Dunno why the unbuffered reading would be slow. I'd think it would still
> read large chunks at a time when you request it.

System call overhead?  I had the same complaint about Windows, where
apparently winsock makes you pay more of a performance penalty than
Unix does in the same case.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From akuchlin@mems-exchange.org  Thu Aug 31 20:46:03 2000
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Thu, 31 Aug 2000 15:46:03 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <023301c01384$39b2bdc0$766940d5@hagrid>; from effbot@telia.com on Thu, Aug 31, 2000 at 09:46:54PM +0200
References: <023301c01384$39b2bdc0$766940d5@hagrid>
Message-ID: <20000831154603.A15688@kronos.cnri.reston.va.us>

On Thu, Aug 31, 2000 at 09:46:54PM +0200, Fredrik Lundh wrote:
>can anyone tell me how Perl treats this pattern?
>    r'((((((((((a))))))))))\41'

>if I understand this correctly, Perl treats as an *octal* escape
>(chr(041) == "!").

Correct.  From perlre:

       You may have as many parentheses as you wish.  If you have more
       than 9 substrings, the variables $10, $11, ... refer to the
       corresponding substring.  Within the pattern, \10, \11,
       etc. refer back to substrings if there have been at least that
       many left parentheses before the backreference.  Otherwise (for
       backward compatibility) \10 is the same as \010, a backspace,
       and \11 the same as \011, a tab.  And so on.  (\1 through \9
       are always backreferences.)  

In other words, if there were 41 groups, \41 would be a backref to
group 41; if there aren't, it's an octal escape.  This magical
behaviour was deemed not Pythonic, so pre uses a different rule: it's
always a character inside a character class ([\41] isn't a syntax
error), and outside a character class it's a character if there are
exactly 3 octal digits; otherwise it's a backref.  So \41 is a backref
to group 41, but \041 is the literal character ASCII 33.
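This rule is easy to check against a current re module (a sketch; modern
re follows essentially the same convention as pre here):

```python
import re

# \041 -- exactly three octal digits -- is the literal chr(0o41), i.e. "!"
print(re.match(r"\041", "!"))

# \1 through \9 are always backreferences
print(re.fullmatch(r"(ab)\1", "abab"))

# a two-digit escape is read as a group number; if that group doesn't
# exist, the pattern is rejected outright
rejected = False
try:
    re.compile(r"(a)\41")
except re.error:
    rejected = True
print("(a)\\41 rejected:", rejected)
```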

--amk



From gstein@lyra.org  Thu Aug 31 21:04:18 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 13:04:18 -0700
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: <200008312048.PAA23324@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 31, 2000 at 03:48:16PM -0500
References: <008701c01373$95ced1e0$766940d5@hagrid> <20000831121826.F11297@lyra.org> <200008312048.PAA23324@cj20424-a.reston1.va.home.com>
Message-ID: <20000831130417.K11297@lyra.org>

On Thu, Aug 31, 2000 at 03:48:16PM -0500, Guido van Rossum wrote:
> I wrote:
>...
> > Solution? Don't use rfile for reading, but go for the socket itself. Or
> > revamp the two classes to forget about the socket once the files (wfile and
> > rfile) are created. The latter might not be possible, tho.
> 
> I was about to say that you have it backwards, and that you should
> only use rfile & wfile, when I realized that CGIHTTPServer.py needs
> this!  The subprocess needs to be able to read the rest of the socket,
> for POST requests.  So you're right.

Ooh! I hadn't considered that case. Yes: you can't transfer the contents of
a FILE's buffer to the CGI, but you can pass a file descriptor (the socket).

> Solution?  The buffer size should be an instance or class variable.
> Then SocketServer can set it to buffered by default, and CGIHTTPServer
> can set it to unbuffered.

Seems reasonable.

> > Dunno why the unbuffered reading would be slow. I'd think it would still
> > read large chunks at a time when you request it.
> 
> System call overhead?  I had the same complaint about Windows, where
> apparently winsock makes you pay more of a performance penalty than
> Unix does in the same case.

Shouldn't be. There should still be an rfile.read(1000) in that example app
(with the big transfers). That read() should be quite fast -- the buffering
should have almost no effect.

So... what is the underlying problem?

[ IOW, there are two issues: the sock vs file thing; and why rfile is so
  darn slow; I have no insights on the latter. ]

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/


From Fredrik Lundh" <effbot@telia.com  Thu Aug 31 21:08:23 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 22:08:23 +0200
Subject: [Python-Dev] one last SRE headache
References: <023301c01384$39b2bdc0$766940d5@hagrid> <20000831154603.A15688@kronos.cnri.reston.va.us>
Message-ID: <027f01c01387$3ae9fde0$766940d5@hagrid>

amk wrote:
> outside a character class it's a character if there are exactly
> 3 octal digits; otherwise it's a backref.  So \41 is a backref
> to group 41, but \041 is the literal character ASCII 33.

so what's the right way to parse this?

read up to three digits, check if they're a valid octal
number, and treat them as a decimal group number if
not?

</F>



From guido@beopen.com  Thu Aug 31 22:10:19 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 16:10:19 -0500
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: Your message of "Thu, 31 Aug 2000 13:04:18 MST."
 <20000831130417.K11297@lyra.org>
References: <008701c01373$95ced1e0$766940d5@hagrid> <20000831121826.F11297@lyra.org> <200008312048.PAA23324@cj20424-a.reston1.va.home.com>
 <20000831130417.K11297@lyra.org>
Message-ID: <200008312110.QAA23506@cj20424-a.reston1.va.home.com>

> > > Dunno why the unbuffered reading would be slow. I'd think it would still
> > > read large chunks at a time when you request it.
> > 
> > System call overhead?  I had the same complaint about Windows, where
> > apparently winsock makes you pay more of a performance penalty than
> > Unix does in the same case.
> 
> Shouldn't be. There should still be an rfile.read(1000) in that example app
> (with the big transfers). That read() should be quite fast -- the buffering
> should have almost no effect.
> 
> So... what is the underlying problem?
> 
> [ IOW, there are two issues: the sock vs file thing; and why rfile is so
>   darn slow; I have no insights on the latter. ]

Should, shouldn't...

It's a quality of implementation issue in stdio.  If stdio, when
seeing a large read on an unbuffered file, doesn't do something smart
but instead calls getc() for each character, that would explain this.
It's dumb, but not illegal.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From guido@beopen.com  Thu Aug 31 22:12:29 2000
From: guido@beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 16:12:29 -0500
Subject: [Python-Dev] one last SRE headache
In-Reply-To: Your message of "Thu, 31 Aug 2000 22:08:23 +0200."
 <027f01c01387$3ae9fde0$766940d5@hagrid>
References: <023301c01384$39b2bdc0$766940d5@hagrid> <20000831154603.A15688@kronos.cnri.reston.va.us>
 <027f01c01387$3ae9fde0$766940d5@hagrid>
Message-ID: <200008312112.QAA23526@cj20424-a.reston1.va.home.com>

> amk wrote:
> > outside a character class it's a character if there are exactly
> > 3 octal digits; otherwise it's a backref.  So \41 is a backref
> > to group 41, but \041 is the literal character ASCII 33.
> 
> so what's the right way to parse this?
> 
> read up to three digits, check if they're a valid octal
> number, and treat them as a decimal group number if
> not?

Suggestion:

If there are fewer than 3 digits, it's a group.

If there are exactly 3 digits and you have 100 or more groups, it's a
group -- too bad, you lose octal number support.  Use \x. :-)

If there are exactly 3 digits and you have at most 99 groups, it's an
octal escape.

(Can you even have more than 99 groups in SRE?)
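These rules are small enough to prototype (a hedged sketch; classify() and
its (kind, value, leftover) convention are invented here, and group
references are capped at two digits, which also makes a pattern like \119
come out as group 11 followed by a literal '9'):

```python
OCTDIGITS = set("01234567")

def classify(digits, ngroups):
    """Decide what the digit string after a backslash means.  Returns
    (kind, value, leftover).  Hypothetical helper, for illustration."""
    three_octal = len(digits) >= 3 and all(d in OCTDIGITS for d in digits[:3])
    if three_octal and ngroups <= 99:
        # exactly three octal digits, at most 99 groups: an octal escape
        return ("octal", int(digits[:3], 8), digits[3:])
    # otherwise a group reference; with at most 99 groups a reference
    # needs at most two digits, so any trailing digits stay literal
    n = 2 if len(digits) >= 2 else 1
    return ("group", int(digits[:n]), digits[n:])

print(classify("041", 10))   # ('octal', 33, '')
print(classify("41", 50))    # ('group', 41, '')
print(classify("119", 12))   # ('group', 11, '9') -- group 11, literal '9'
```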

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


From m.favas@per.dem.csiro.au  Thu Aug 31 21:17:14 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 04:17:14 +0800
Subject: [Fwd: [Python-Dev] test_gettext.py fails on 64-bit architectures]
Message-ID: <39AEBD4A.55ABED9E@per.dem.csiro.au>



-- 
Email  - m.favas@per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913

Message-ID: <39AEBD01.601F7A83@per.dem.csiro.au>
Date: Fri, 01 Sep 2000 04:16:01 +0800
From: Mark Favas <m.favas@per.dem.csiro.au>
Organization: CSIRO Exploration & Mining
X-Mailer: Mozilla 4.75 [en] (X11; U; OSF1 V4.0 alpha)
X-Accept-Language: en
MIME-Version: 1.0
To: "Barry A. Warsaw" <bwarsaw@beopen.com>
Subject: Re: [Python-Dev] test_gettext.py fails on 64-bit architectures
References: <39AE07FF.478F413@per.dem.csiro.au> <14766.14278.609327.610929@anthem.concentric.net>

Hi Barry,

Close, but no cigar - it fixes the miscalculation of BE_MAGIC, but "magic"
is still read from the .mo file as 0xffffffff950412de (the 64-bit rep of
the 32-bit negative integer 0x950412de).
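The signed-vs-unsigned effect is easy to reproduce with standard-size
struct formats (a sketch with a current Python; the '<' and '>' prefixes
force 4-byte ints on any platform):

```python
import struct

LE_MAGIC = 0x950412de
buf = struct.pack("<I", LE_MAGIC)      # the magic as it sits in a .mo file

signed, = struct.unpack("<i", buf)     # "i": sign bit set, so negative
unsigned, = struct.unpack("<I", buf)   # "I": always reads back 0x950412de

print(hex(unsigned))
print(signed)                          # LE_MAGIC - 2**32

# byte-swapping the same four bytes gives the big-endian constant
big, = struct.unpack(">I", buf)
print(hex(big))                        # 0xde120495
```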

Mark

"Barry A. Warsaw" wrote:
> 
> >>>>> "MF" == Mark Favas <m.favas@per.dem.csiro.au> writes:
> 
>     MF> This is because the magic number is read in by the code in
>     MF> Lib/gettext.py as FFFFFFFF950412DE (hex) (using unpack('<i',
>     MF> buf[:4])[0]), and checked against LE_MAGIC (defined as
>     MF> 950412DE) and BE_MAGIC (calculated as FFFFFFFFDE120495 using
>     MF> struct.unpack('>i',struct.pack('<i', LE_MAGIC))[0])
> 
> I was trying to be too clever.  Just replace the BE_MAGIC value with
> 0xde120495, as in the included patch.
> 
>     MF> Replacing the "i" in the code that generates BE_MAGIC and
>     MF> reads in "magic" by "I" makes the test work for me, but
>     MF> there's other uses of "i" and "ii" when the rest of the .mo
>     MF> file is processed that I'm unsure about with different inputs.
> 
> Should be fine, I think.  With < and > leading characters, those
> format strings should select `standard' sizes:
> 
>     Standard size and alignment are as follows: no alignment is
>     required for any type (so you have to use pad bytes); short is 2
>     bytes; int and long are 4 bytes. float and double are 32-bit and
>     64-bit IEEE floating point numbers, respectively.
> 
> Please run the test again with this patch and let me know.
> -Barry
> 
> Index: gettext.py
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Lib/gettext.py,v
> retrieving revision 1.4
> diff -u -r1.4 gettext.py
> --- gettext.py  2000/08/30 03:29:58     1.4
> +++ gettext.py  2000/08/31 10:40:41
> @@ -125,7 +125,7 @@
>  class GNUTranslations(NullTranslations):
>      # Magic number of .mo files
>      LE_MAGIC = 0x950412de
> -    BE_MAGIC = struct.unpack('>i', struct.pack('<i', LE_MAGIC))[0]
> +    BE_MAGIC = 0xde120495
> 
>      def _parse(self, fp):
>          """Override this method to support alternative .mo formats."""




From Fredrik Lundh" <effbot@telia.com  Thu Aug 31 21:33:11 2000
From: Fredrik Lundh" <effbot@telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 22:33:11 +0200
Subject: [Python-Dev] one last SRE headache
References: <023301c01384$39b2bdc0$766940d5@hagrid> <20000831154603.A15688@kronos.cnri.reston.va.us>              <027f01c01387$3ae9fde0$766940d5@hagrid>  <200008312112.QAA23526@cj20424-a.reston1.va.home.com>
Message-ID: <028d01c0138a$b2de46a0$766940d5@hagrid>

guido wrote:
> Suggestion:
> 
> If there are fewer than 3 digits, it's a group.
> 
> If there are exactly 3 digits and you have 100 or more groups, it's a
> group -- too bad, you lose octal number support.  Use \x. :-)
> 
> If there are exactly 3 digits and you have at most 99 groups, it's an
> octal escape.

I had to add one rule:

    If it starts with a zero, it's always an octal number.
    Up to two more octal digits are accepted after the
    leading zero.

but this still fails on this pattern:

    r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'

where the last part is supposed to be a reference to
group 11, followed by a literal '9'.

more ideas?

> (Can you even have more than 99 groups in SRE?)

yes -- the current limit is 100 groups.  but that's an
artificial limit, and it should be removed.

</F>



From m.favas@per.dem.csiro.au  Thu Aug 31 21:32:52 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 04:32:52 +0800
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <39AEC0F4.746656E2@per.dem.csiro.au>

On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:
>...
> At least for my Linux installation a limit of 9000 seems
> reasonable. Perhaps everybody on the list could do a quick
> check on their platform ?
> 
> Here's a sample script:
> 
> i = 0
> def foo(x):
>     global i
>     print i
>     i = i + 1
>     foo(x)
> 
> foo(None)

On my DEC/Compaq/OSF1/Tru64 Unix box with the default stacksize of 2048k
I get 6225 iterations before seg faulting...
-- 
Mark


From ping@lfw.org  Thu Aug 31 22:04:26 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Thu, 31 Aug 2000 16:04:26 -0500 (CDT)
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <028d01c0138a$b2de46a0$766940d5@hagrid>
Message-ID: <Pine.LNX.4.10.10008311559180.10613-100000@server1.lfw.org>

On Thu, 31 Aug 2000, Fredrik Lundh wrote:
> I had to add one rule:
> 
>     If it starts with a zero, it's always an octal number.
>     Up to two more octal digits are accepted after the
>     leading zero.

Fewer rules are better.  Let's not arbitrarily rule out
the possibility of more than 100 groups.

The octal escapes are a different kind of animal than the
backreferences: for a backreference, there is *actually*
a backslash followed by a number in the regular expression;
but we already have a reasonable way to put funny characters
into regular expressions.

That is, i propose *removing* the translation of octal
escapes from the regular expression engine.  That's the
job of the string literal:

    r'\011'    is a backreference to group 11

    '\\011'    is a backreference to group 11

    '\011'     is a tab character

This makes automatic construction of regular expressions
a tractable problem.  We don't want to introduce so many
exceptional cases that an attempt to automatically build
regular expressions will turn into a nightmare of special
cases.
    

-- ?!ng



From jeremy@beopen.com  Thu Aug 31 21:47:39 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 16:47:39 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AEC0F4.746656E2@per.dem.csiro.au>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
Message-ID: <14766.50283.758598.632542@bitdiddle.concentric.net>

I've just checked in Misc/find_recursionlimit.py that uses recursion
through various __ methods (e.g. __repr__) to generate infinite
recursion.  These tend to use more C stack frames than a simple
recursive function.

I've set the Python recursion_limit down to 2500, which is safe for
all tests in find_recursionlimit on my Linux box.  The limit can be
bumped back up, so I'm happy to have it set low by default.

Does anyone have a platform where this limit is not low enough?
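The Python-level half of this can be probed without any segfault, since
exceeding the interpreter's own limit raises an exception instead of
overrunning the C stack (a modern sketch; today the exception is
RecursionError, in 2.0 it was RuntimeError):

```python
import sys

def probe(depth=1):
    """Recurse until the interpreter's recursion limit cuts us off."""
    try:
        return probe(depth + 1)
    except RecursionError:
        return depth

sys.setrecursionlimit(2500)
print(probe())   # slightly under 2500: frames already on the stack count
```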

Jeremy


From ping@lfw.org  Thu Aug 31 22:07:32 2000
From: ping@lfw.org (Ka-Ping Yee)
Date: Thu, 31 Aug 2000 16:07:32 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008310237.OAA17328@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008311604500.10613-100000@server1.lfw.org>

On Thu, 31 Aug 2000, Greg Ewing wrote:
> Peter Schneider-Kamp <nowonder@nowonder.de>:
> 
> > As far as I know adding a builtin indices() has been
> > rejected as an idea.
> 
> But why? I know it's been suggested, but I don't remember seeing any
> convincing arguments against it. Or much discussion at all.

I submitted a patch to add indices() and irange() previously.  See:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101129&group_id=5470

Guido rejected it:

    gvanrossum: 2000-Aug-17 12:16
        I haven't seen the debate! But I'm asked to pronounce
        anyway, and I just don't like this. Learn to write code
        that doesn't need the list index!

    tim_one: 2000-Aug-15 15:08
        Assigned to Guido for Pronouncement.  The debate's been
        debated, close it out one way or the other.

    ping: 2000-Aug-09 03:00
        There ya go.  I have followed the style of the builtin_range()
        function, and docstrings are included.


-- ?!ng



From bwarsaw@beopen.com  Thu Aug 31 21:55:32 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 16:55:32 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules Makefile.pre.in,1.63,1.64
References: <200008311656.JAA20666@slayer.i.sourceforge.net>
Message-ID: <14766.50756.893007.253356@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake <fdrake@users.sourceforge.net> writes:

    Fred> If Setup is older than Setup.in, issue a bold warning that
    Fred> the Setup may need to be checked to make sure all the latest
    Fred> information is present.

    Fred> This closes SourceForge patch #101275.

Not quite.  When I run make in the top-level directory, I see this
message:

-------------------------------------------
./Setup.in is newer than Setup;
check to make sure you have all the updates
you need in your Setup file.
-------------------------------------------

I have to hunt around in my compile output to notice that, oh, make
cd'd into Modules so it must be talking about /that/ Setup file.
"Then why did it say ./Setup.in"? :)

The warning should say Modules/Setup.in is newer than Modules/Setup.

-Barry


From cgw@fnal.gov  Thu Aug 31 21:59:12 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 15:59:12 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.44502.812468.677142@bitdiddle.concentric.net>
References: <14766.42287.968420.289804@bitdiddle.concentric.net>
 <200008311957.OAA22338@cj20424-a.reston1.va.home.com>
 <14766.44502.812468.677142@bitdiddle.concentric.net>
Message-ID: <14766.50976.102853.695767@buffalo.fnal.gov>

Jeremy Hylton writes:
 >   >> I can play the .au file and I use a variety of other audio tools
 >   >> regularly.  Is Peter still maintaining it or can someone else
 >   >> offer some assistance?

The Linux audio programming docs do clearly state:

>    There are three parameters which affect quality (and memory/bandwidth requirements) of sampled audio
>    data. These parameters are the following:		    
>
>           Sample format (sometimes called as number of bits) 
>           Number of channels (mono/stereo) 
>           Sampling rate (speed) 
>
>           NOTE!  
>              It is important to set these parameters always in the above order. Setting speed before
>              number of channels doesn't work with all devices.  

linuxaudiodev.c does this:
    ioctl(self->x_fd, SOUND_PCM_WRITE_RATE, &rate)
    ioctl(self->x_fd, SNDCTL_DSP_SAMPLESIZE, &ssize)
    ioctl(self->x_fd, SNDCTL_DSP_STEREO, &stereo)
    ioctl(self->x_fd, SNDCTL_DSP_SETFMT, &audio_types[n].a_fmt)

which is exactly the reverse order of what is recommended!

Alas, even after fixing this, I *still* can't get linuxaudiodev to
play the damned .au file.  It works fine for the .wav formats.

I'll continue hacking on this as time permits.


From m.favas@per.dem.csiro.au  Thu Aug 31 22:04:48 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 05:04:48 +0800
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <39AEC0F4.746656E2@per.dem.csiro.au> <14766.50283.758598.632542@bitdiddle.concentric.net>
Message-ID: <39AEC870.3E1CDAFD@per.dem.csiro.au>

Compaq/DEC/OSF1/Tru64 Unix, default stacksize 2048k:
I get "Limit of 2100 is fine" before stack overflow and segfault.
(On Guido's test script, I got 3532 before crashing, and 6225 on MAL's
test).

Mark

Jeremy Hylton wrote:
> 
> I've just checked in Misc/find_recursionlimit.py that uses recursion
> through various __ methods (.e.g __repr__) to generate infinite
> recursion.  These tend to use more C stack frames that a simple
> recursive function.
> 
> I've set the Python recursion_limit down to 2500, which is safe for
> all tests in find_recursionlimit on my Linux box.  The limit can be
> bumped back up, so I'm happy to have it set low by default.
> 
> Does anyone have a platform where this limit is not low enough?


From bwarsaw@beopen.com  Thu Aug 31 22:14:59 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 17:14:59 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc find_recursionlimit.py,NONE,1.1
References: <200008311924.MAA03080@slayer.i.sourceforge.net>
Message-ID: <14766.51923.685753.319113@anthem.concentric.net>

I wonder if find_recursionlimit.py shouldn't go in Tools and perhaps
be run as a separate rule in the Makefile (with a corresponding
cleanup of the inevitable core file, and a printing of the last
reasonable value returned).  Or you can write a simple Python wrapper
around find_recursionlimit.py that did the parenthetical tasks.

-Barry


From jeremy@beopen.com  Thu Aug 31 22:22:20 2000
From: jeremy@beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 17:22:20 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc find_recursionlimit.py,NONE,1.1
In-Reply-To: <14766.51923.685753.319113@anthem.concentric.net>
References: <200008311924.MAA03080@slayer.i.sourceforge.net>
 <14766.51923.685753.319113@anthem.concentric.net>
Message-ID: <14766.52364.742061.188332@bitdiddle.concentric.net>

>>>>> "BAW" == Barry A Warsaw <bwarsaw@beopen.com> writes:

  BAW> I wonder if find_recursionlimit.py shouldn't go in Tools and
  BAW> perhaps be run as a separate rule in the Makefile (with a
  BAW> corresponding cleanup of the inevitable core file, and a
  BAW> printing of the last reasonable value returned).  Or you can
  BAW> write a simple Python wrapper around find_recursionlimit.py
  BAW> that did the parenthetical tasks.

Perhaps.  I did not imagine we would use the results to change the
recursion limit at compile time or run time automatically.  It seemed
a bit hackish, so I put it in Misc.  Maybe Tools would be better, but
that would require an SF admin request (right?).

Jeremy


From skip@mojam.com (Skip Montanaro)  Thu Aug 31 22:32:58 2000
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Thu, 31 Aug 2000 16:32:58 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.50283.758598.632542@bitdiddle.concentric.net>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
 <14766.50283.758598.632542@bitdiddle.concentric.net>
Message-ID: <14766.53002.467504.523298@beluga.mojam.com>

    Jeremy> Does anyone have a platform where this limit is no low enough?

Yes, apparently I do.  My laptop is configured so:

     Pentium III
     128MB RAM
     211MB swap
     Mandrake Linux 7.1

It spits out 2400 as the last successful test, even fresh after a reboot
with no swap space in use and lots of free memory and nothing else running
besides boot-time daemons.

Skip


From bwarsaw@beopen.com  Thu Aug 31 22:43:54 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 17:43:54 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc find_recursionlimit.py,NONE,1.1
References: <200008311924.MAA03080@slayer.i.sourceforge.net>
 <14766.51923.685753.319113@anthem.concentric.net>
 <14766.52364.742061.188332@bitdiddle.concentric.net>
Message-ID: <14766.53658.752985.58503@anthem.concentric.net>

>>>>> "JH" == Jeremy Hylton <jeremy@beopen.com> writes:

    JH> Perhaps.  I did not imagine we would use the results to
    JH> change the recursion limit at compile time or run time
    JH> automatically.  It seemed a bit hackish, so I put it in Misc.
    JH> Maybe Tools would be better, but that would require an SF
    JH> admin request (right?).

Yes, to move the ,v file, but there hasn't been enough revision
history to worry about it.  Just check it in someplace in Tools and
cvsrm it from Misc.


From cgw@fnal.gov  Thu Aug 31 22:45:15 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 16:45:15 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.53002.467504.523298@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
 <14766.50283.758598.632542@bitdiddle.concentric.net>
 <14766.53002.467504.523298@beluga.mojam.com>
Message-ID: <200008312145.QAA10295@buffalo.fnal.gov>

Skip Montanaro writes:
 >      211MB swap
 >      Mandrake Linux 7.1
 > 
 > It spits out 2400 as the last successful test, even fresh after a reboot
 > with no swap space in use and lots of free memory and nothing else running
 > besides boot-time daemons.

I get the exact same value.  Of course the amount of other stuff
running makes no difference; you get the core dump because you've hit
the RLIMIT for stack usage, not because you've exhausted memory.
The amount of RAM in the machine, or the swap space in use, has nothing
to do with it.  Do "ulimit -s unlimited" and see what happens...

There can be no universally applicable default value here because
different people will have different rlimits depending on how their
sysadmins chose to set this up.



From cgw@fnal.gov  Thu Aug 31 22:52:29 2000
From: cgw@fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 16:52:29 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.54008.173276.72324@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
 <14766.50283.758598.632542@bitdiddle.concentric.net>
 <14766.53002.467504.523298@beluga.mojam.com>
 <14766.53381.634928.615048@buffalo.fnal.gov>
 <14766.54008.173276.72324@beluga.mojam.com>
Message-ID: <14766.54173.228568.55862@buffalo.fnal.gov>

Skip Montanaro writes:

 > Makes no difference:

All right, I'm confused; I'll shut up now ;-)


From skip@mojam.com  Thu Aug 31 22:52:33 2000
From: skip@mojam.com (Skip Montanaro)
Date: Thu, 31 Aug 2000 16:52:33 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.53381.634928.615048@buffalo.fnal.gov>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
 <14766.50283.758598.632542@bitdiddle.concentric.net>
 <14766.53002.467504.523298@beluga.mojam.com>
 <14766.53381.634928.615048@buffalo.fnal.gov>
Message-ID: <14766.54177.584090.198596@beluga.mojam.com>


    Charles> I get the exact same value.  Of course the amount of other
    Charles> stuff running makes no difference; you get the core dump
    Charles> because you've hit the RLIMIT for stack usage, not because
    Charles> you've exhausted memory.  Amount of RAM in the machine, or swap
    Charles> space in use has nothing to do with it.  Do "ulimit -s
    Charles> unlimited" and see what happens...

Makes no difference:

    % ./python
    Python 2.0b1 (#81, Aug 31 2000, 15:53:42)  [GCC 2.95.3 19991030 (prerelease)] on linux2
    Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
    Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
    >>>
    % ulimit -a
    core file size (blocks)     0
    data seg size (kbytes)      unlimited
    file size (blocks)          unlimited
    max locked memory (kbytes)  unlimited
    max memory size (kbytes)    unlimited
    open files                  1024
    pipe size (512 bytes)       8
    stack size (kbytes)         unlimited
    cpu time (seconds)          unlimited
    max user processes          2048
    virtual memory (kbytes)     unlimited
    % ./python Misc/find_recursionlimit.py
    ...
    Limit of 2300 is fine
    recurse
    add
    repr
    init
    getattr
    getitem
    Limit of 2400 is fine
    recurse
    add
    repr
    Segmentation fault

Skip


From tim_one@email.msn.com  Thu Aug 31 22:55:56 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 17:55:56 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <023301c01384$39b2bdc0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEEIHDAA.tim_one@email.msn.com>

The PRE documentation expresses the true intent:

    \number
    Matches the contents of the group of the same number. Groups
    are numbered starting from 1. For example, (.+) \1 matches 'the the'
    or '55 55', but not 'the end' (note the space after the group). This
    special sequence can only be used to match one of the first 99 groups.
    If the first digit of number is 0, or number is 3 octal digits long,
    it will not be interpreted as a group match, but as the character with
    octal value number. Inside the "[" and "]" of a character class, all
    numeric escapes are treated as characters

This was discussed at length when we decided to go the Perl-compatible
route, and Perl's rules for backreferences were agreed to be just too ugly
to emulate.  The meaning of \oo in Perl depends on how many groups precede
it!  In this case, there are fewer than 41 groups, so Perl says "octal
escape"; but if 41 or more groups had preceded, it would mean
"backreference" instead(!).  Simply unbearably ugly and error-prone.

> -----Original Message-----
> From: python-dev-admin@python.org [mailto:python-dev-admin@python.org]On
> Behalf Of Fredrik Lundh
> Sent: Thursday, August 31, 2000 3:47 PM
> To: python-dev@python.org
> Subject: [Python-Dev] one last SRE headache
>
>
> can anyone tell me how Perl treats this pattern?
>
>     r'((((((((((a))))))))))\41'
>
> in SRE, this is currently a couple of nested groups, surrounding
> a single literal, followed by a back reference to the fourth group,
> followed by a literal "1" (since there are fewer than 41 groups)
>
> in PRE, it turns out that this is a syntax error; there's no group 41.
>
> however, this test appears in the test suite under the section "all
> test from perl", but they're commented out:
>
> # Python does not have the same rules for \\41 so this is a syntax error
> #    ('((((((((((a))))))))))\\41', 'aa', FAIL),
> #    ('((((((((((a))))))))))\\41', 'a!', SUCCEED, 'found', 'a!'),
>
> if I understand this correctly, Perl treats as an *octal* escape
> (chr(041) == "!").
>
> now, should I emulate PRE, Perl, or leave it as it is...
>
> </F>
>
> PS. in case anyone wondered why I haven't seen this before, it's
> because I just discovered that the test suite masks syntax errors
> under some circumstances...
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://www.python.org/mailman/listinfo/python-dev
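
Python's documented rule is easy to check against the modern re module, which kept these semantics: Perl's octal reading of the disputed pattern has to be spelled with an explicit three-digit escape, while the two-digit \41 is read as a reference to a (here nonexistent) group:

```python
import re

# \041 is three octal digits -> the character '!' (chr(0o41))
assert re.match(r'((((((((((a))))))))))\041', 'a!')

# \41 is read as a backreference to group 41; only 10 groups exist
try:
    re.compile(r'((((((((((a))))))))))\41')
except re.error as exc:
    print('rejected:', exc)
```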




From m.favas@per.dem.csiro.au  Thu Aug 31 22:56:25 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 05:56:25 +0800
Subject: [Python-Dev] Syntax error in Makefile for "make install"
Message-ID: <39AED489.F953E9EE@per.dem.csiro.au>

Makefile in the libainstall target of "make install" uses the following
construct:
                @if [ "$(MACHDEP)" == "beos" ] ; then \
This "==" is illegal in all the /bin/sh's I have lying around, and leads
to make failing with:
/bin/sh: test: unknown operator ==
make: *** [libainstall] Error 1
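
For reference, POSIX test(1) only defines the single "=" operator for string comparison; "==" is a bash extension. A portable version of the quoted construct looks like this (the MACHDEP assignment here is just for illustration):

```shell
MACHDEP=beos
if [ "$MACHDEP" = "beos" ]; then
    echo "building for beos"
fi
```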

-- 
Mark


From fdrake@beopen.com  Thu Aug 31 23:01:41 2000
From: fdrake@beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 18:01:41 -0400 (EDT)
Subject: [Python-Dev] Syntax error in Makefile for "make install"
In-Reply-To: <39AED489.F953E9EE@per.dem.csiro.au>
References: <39AED489.F953E9EE@per.dem.csiro.au>
Message-ID: <14766.54725.466043.196080@cj42289-a.reston1.va.home.com>

Mark Favas writes:
 > Makefile in the libainstall target of "make install" uses the following
 > construct:
 >                 @if [ "$(MACHDEP)" == "beos" ] ; then \
 > This "==" is illegal in all the /bin/sh's I have lying around, and leads
 > to make failing with:
 > /bin/sh: test: unknown operator ==
 > make: *** [libainstall] Error 1

  Fixed; thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member



From tim_one@email.msn.com  Thu Aug 31 22:01:10 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 17:01:10 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <200008312112.QAA23526@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEJHDAA.tim_one@email.msn.com>

> Suggestion:
>
> If there are fewer than 3 digits, it's a group.

Unless it begins with a 0 (that's what's documented today -- read the docs
<wink>).

> If there are exactly 3 digits and you have 100 or more groups, it's a
> group -- too bad, you lose octal number support.  Use \x. :-)

The docs say you can't use backreferences for groups higher than 99.

> If there are exactly 3 digits and you have at most 99 groups, it's an
> octal escape.

If we make the meaning depend on the number of preceding groups, we may as
well emulate *all* of Perl's ugliness here.




From m.favas@per.dem.csiro.au  Thu Aug 31 23:29:47 2000
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 06:29:47 +0800
Subject: [Python-Dev] Namespace collision between lib/xml and site-packages/xml
Message-ID: <39AEDC5B.333F737E@per.dem.csiro.au>

On July 26 I reported that the new xml package in the standard library
collides with and overrides the xml package from the xml-sig that may be
installed in site-packages. This is still the case. The new package does
not have the same functionality as the one in site-packages, and hence
my application (and others relying on similar functionality) gets an
import error. I understood that it was planned that the new library xml
package would check for the site-package version, and transparently hand
over to it if it existed. It's not really an option to remove/rename the
xml package in the std lib, or to break existing xml-based code...

Of course, this might be fixed by 2.0b1, or is it a feature that will be
frozen out <wry smile>?

Fred's response was:
"  I expect we'll be making the package in site-packages an extension
provider for the xml package in the standard library.  I'm planning to
discuss this issue at today's PythonLabs meeting." 
-- 
Mark


From thomas@xs4all.net  Thu Aug 31 22:38:59 2000
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 23:38:59 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311558.KAA15649@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 31, 2000 at 10:58:49AM -0500
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
Message-ID: <20000831233859.K12695@xs4all.nl>

On Thu, Aug 31, 2000 at 10:58:49AM -0500, Guido van Rossum wrote:

>     C() # This tries to get __init__, triggering the recursion

> I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
> have no idea what units).

That's odd... On BSDI, with a 2Mbyte stacklimit (ulimit -s says 2048) I get
almost as many recursions: 5136. That's very much not what I would expect...
With a stack limit of 8192, I can go as high as 19997 recursions! I wonder
why that is...

Wait a minute... The Linux SEGV isn't stacksize related at all! Observe:

centurion:~ > limit stacksize 8192
centurion:~ > python teststack.py | tail -3
5134
5135
5136
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 65536
centurion:~ > python teststack.py | tail -3
5134
5135
5136
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 2048
centurion:~ > python teststack.py | tail -3
5134
5135
5136
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 128
centurion:~ > python teststack.py | tail -3
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 1024
centurion:~ > python teststack.py | tail -3
2677
2678
26Segmentation fault (core dumped) 

centurion:~ > limit stacksize 1500
centurion:~ > python teststack.py | tail -3
3496
3497
349Segmentation fault (core dumped) 

I don't have time to pursue this, however. I'm trying to get my paid work
finished tomorrow, so that I can finish my *real* work over the weekend:
augassign docs & some autoconf changes :-) 

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From effbot@telia.com  Thu Aug 31 23:47:03 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 1 Sep 2000 00:47:03 +0200
Subject: [Python-Dev] threadmodule.c comment error? (from comp.lang.python)
Message-ID: <00d001c0139d$7be87900$766940d5@hagrid>

as noted by curtis jensen over at comp.lang.python:

the parse tuple string doesn't quite match the error message
given if the 2nd argument isn't a tuple.  on the other hand, the
args argument is initialized to NULL...

thread_PyThread_start_new_thread(PyObject *self, PyObject *fargs)
{
 PyObject *func, *args = NULL, *keyw = NULL;
 struct bootstate *boot;

 if (!PyArg_ParseTuple(fargs, "OO|O:start_new_thread", &func, &args, &keyw))
  return NULL;
 if (!PyCallable_Check(func)) {
  PyErr_SetString(PyExc_TypeError,
    "first arg must be callable");
  return NULL;
 }
 if (!PyTuple_Check(args)) {
  PyErr_SetString(PyExc_TypeError,
    "optional 2nd arg must be a tuple");
  return NULL;
 }
 if (keyw != NULL && !PyDict_Check(keyw)) {
  PyErr_SetString(PyExc_TypeError,
    "optional 3rd arg must be a dictionary");
  return NULL;
 }

what's the right way to fix this? (change the error message
and remove the initialization, or change the parsetuple string
and the tuple check)

</F>



From effbot@telia.com  Thu Aug 31 23:30:23 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 1 Sep 2000 00:30:23 +0200
Subject: [Python-Dev] one last SRE headache
References: <LNBBLJKPBEHFEDALKOLCEEELHDAA.tim_one@email.msn.com>
Message-ID: <009301c0139b$0ea31000$766940d5@hagrid>

tim:

> [/F]
> > I had to add one rule:
> >
> >     If it starts with a zero, it's always an octal number.
> >     Up to two more octal digits are accepted after the
> >     leading zero.
> >
> > but this still fails on this pattern:
> >
> >     r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'
> >
> > where the last part is supposed to be a reference to
> > group 11, followed by a literal '9'.
> 
> But 9 isn't an octal digit, so it fits w/ your new rule just fine.

last time I checked, "1" wasn't a valid zero.

but never mind; I think I've figured it out (see other mail)

</F>



From effbot@telia.com  Thu Aug 31 23:28:40 2000
From: effbot@telia.com (Fredrik Lundh)
Date: Fri, 1 Sep 2000 00:28:40 +0200
Subject: [Python-Dev] one last SRE headache
References: <LNBBLJKPBEHFEDALKOLCEEEIHDAA.tim_one@email.msn.com>
Message-ID: <008701c0139a$d1619ae0$766940d5@hagrid>

tim peters:
> The PRE documentation expresses the true intent:
> 
>     \number
>     Matches the contents of the group of the same number. Groups
>     are numbered starting from 1. For example, (.+) \1 matches 'the the'
>     or '55 55', but not 'the end' (note the space after the group). This
>     special sequence can only be used to match one of the first 99 groups.
>     If the first digit of number is 0, or number is 3 octal digits long,
>     it will not be interpreted as a group match, but as the character with
>     octal value number.

yeah, I've read that.  clear as coffee.

but looking at it again, I suppose that the right way to
implement this is (doing the tests in the given order):

    if it starts with zero, it's an octal escape
    (1 or 2 octal digits may follow)

    if it starts with an octal digit, AND is followed
    by two other octal digits, it's an octal escape

    if it starts with any digit, it's a reference
    (1 extra decimal digit may follow)

oh well.  too bad my scanner only provides a one-character
lookahead...
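
For what it's worth, these three rules are essentially what Python's re module ended up documenting, and each one can be checked with the modern module:

```python
import re

# rule 1: starts with zero -> octal escape (\011 == chr(0o11) == tab)
assert re.match(r'\011', '\t')

# rule 2: three octal digits -> octal escape (\101 == chr(0o101) == 'A')
assert re.match(r'\101', 'A')

# rule 3: otherwise a group reference of at most two digits:
# \119 is backreference \11 followed by a literal '9'
pat = r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'
assert re.match(pat, 'abcdefghijklk9')   # group 11 matched 'k'
```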

</F>



From tim_one@email.msn.com  Thu Aug 31 22:07:37 2000
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 17:07:37 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <028d01c0138a$b2de46a0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEELHDAA.tim_one@email.msn.com>

[/F]
> I had to add one rule:
>
>     If it starts with a zero, it's always an octal number.
>     Up to two more octal digits are accepted after the
>     leading zero.
>
> but this still fails on this pattern:
>
>     r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'
>
> where the last part is supposed to be a reference to
> group 11, followed by a literal '9'.

But 9 isn't an octal digit, so it fits w/ your new rule just fine.  \117
here instead would be an octal escape.




From bwarsaw@beopen.com  Thu Aug 31 23:12:23 2000
From: bwarsaw@beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 18:12:23 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules Makefile.pre.in,1.64,1.65
References: <200008312153.OAA03214@slayer.i.sourceforge.net>
Message-ID: <14766.55367.854732.727671@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake <fdrake@users.sourceforge.net> writes:

    Fred> "Modules/Setup.in is newer than Moodules/Setup;"; \ !  echo
------------------------------------------^^^
who let the cows in here?


From Vladimir.Marangozov@inrialpes.fr  Thu Aug 31 23:32:50 2000
From: Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov)
Date: Fri, 1 Sep 2000 00:32:50 +0200 (CEST)
Subject: [Python-Dev] lookdict
Message-ID: <200008312232.AAA14305@python.inrialpes.fr>

I'd like to request some clarification on the recently checked-in
dict patch.  How is it supposed to work, and why is this solution okay?

What's the exact purpose of the 2nd string specialization patch?

Besides that, I must say that the interpreter is now noticeably slower,
and MAL and I did warn you about this code, which was fine-tuned over
the years.  It is very sensitive and was optimized to death.  The patch
that made it in was labeled "not ready", and I would have appreciated
another round of review.  Not that I disagree, but now I feel obliged
to submit another patch to make some obvious performance improvements
(at least), which simply duplicates work...  Fred would have done them
very well, but I haven't had the time to say much about the
implementation, because the brief discussion on the Patch Manager was
about functionality.

Now I'd like to bring this on python-dev and see what exactly happened
to lookdict and what the BeOpen team agreed on regarding this function.
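
For readers unfamiliar with the function under discussion: lookdict is the open-addressing probe loop at the heart of CPython's dict. A toy Python rendition of the idea (illustrative only; the real code is C, uses an unsigned hash, and has a separate string-keys fast path, which is what the specialization patch touches):

```python
def lookup(table, key):
    """Find the slot for key in a power-of-two sized open-addressing table."""
    mask = len(table) - 1
    h = hash(key) & 0xFFFFFFFF   # emulate an unsigned hash, as the C code has
    i = h & mask
    perturb = h
    # probe until we hit an empty slot or the matching key
    while table[i] is not None and table[i][0] != key:
        perturb >>= 5
        i = (i * 5 + perturb + 1) & mask
    return i

table = [None] * 8
for k, v in [('answer', 42), ('pi', 3)]:
    table[lookup(table, k)] = (k, v)
assert table[lookup(table, 'answer')] == ('answer', 42)
```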

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov@inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252


From skip at mojam.com  Tue Aug  1 00:07:02 2000
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 31 Jul 2000 17:07:02 -0500 (CDT)
Subject: [Python-Dev] SET_LINENO and python options
In-Reply-To: <20000730080718.A22903@newcnri.cnri.reston.va.us>
References: <LNBBLJKPBEHFEDALKOLCAEOEGMAA.tim_one@email.msn.com>
	<200007300239.EAA21825@python.inrialpes.fr>
	<20000730080718.A22903@newcnri.cnri.reston.va.us>
Message-ID: <14725.63622.190585.197392@beluga.mojam.com>

    amk> It always seemed odd to me that the current line number is always
    amk> kept up to date, even though 99.999% of the time, no one will care.
    amk> Why not just keep a small table that holds the offset in the
    amk> bytecode at which each line starts, and look it up when it's
    amk> needed?

(I'm probably going to wind up seeming like a fool, responding late to this
thread without having read it end-to-end, but...)

Isn't that what the code object's co_lnotab is for?  I thought the idea was
to dispense with SET_LINENO altogether and just compute line numbers using
co_lnotab on those rare occasions (debugging, tracebacks, etc) when you
needed them.

Skip



From greg at cosc.canterbury.ac.nz  Tue Aug  1 01:45:02 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 01 Aug 2000 11:45:02 +1200 (NZST)
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended
 slicing for lists)
In-Reply-To: <Pine.LNX.4.10.10007290934240.5008-100000@localhost>
Message-ID: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz>

I think there are some big conceptual problems with allowing
negative steps in a slice.

With ordinary slices, everything is very clear if you think
of the indices as labelling the points between the list
elements.

With a step, this doesn't work any more, and you have to
think in terms of including the lower index but excluding the
upper index.

But what do "upper" and "lower" mean when the step is negative?
There are several things that a[i:j:-1] could plausibly mean:

   [a[i], a[i-1], ..., a[j+1]]

   [a[i-1], a[i-2], ..., a[j]]

   [a[j], a[j-1], ..., a[i+1]]

   [a[j-1], a[j-2], ..., a[i]]

And when you consider negative starting and stopping values,
it just gets worse. These have no special meaning to range(),
but in list indexing they do. So what do they mean in a slice
with a step? Whatever is chosen, it can't be consistent with
both.

In the face of such confusion, the only Pythonic thing would
seem to be to disallow these things.
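
(For the record, the semantics Python ultimately adopted for built-in sequences correspond to the first of the readings above: the start index is included, the stop index excluded, walking downward.)

```python
a = ['p', 'q', 'r', 's', 't', 'u']
# a[i:j:-1] yields [a[i], a[i-1], ..., a[j+1]]: start included, stop excluded
assert a[4:1:-1] == ['t', 's', 'r']   # a[4], a[3], a[2]
```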

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Tue Aug  1 02:01:45 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 01 Aug 2000 12:01:45 +1200 (NZST)
Subject: [Python-Dev] PEP 203 Augmented Assignment
In-Reply-To: <200007281147.GAA04007@cj20424-a.reston1.va.home.com>
Message-ID: <200008010001.MAA10295@s454.cosc.canterbury.ac.nz>

> The way I understand this, mixing indices and slices is used all
> the time to reduce the dimensionality of an array.

I wasn't really suggesting that they should be disallowed.
I was trying to point out that their existence makes it
hard to draw a clear distinction between indexing and slicing.

If it were the case that

   a[i,j,...,k]

was always equivalent to

   a[i][j]...[k]

then there would be no problem -- you could consider each
subscript individually as either an index or a slice. But
that's not the way it is.
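
Greg's point can be seen with a toy example (a hypothetical 2-D container; numpy behaves the same way): once a slice appears in the subscript tuple, a[:, 0] selects a column while the chained a[:][0] selects a row, so the tuple form cannot be reduced to chained indexing.

```python
class Grid:
    """Minimal 2-D sequence supporting tuple subscripts (illustrative only)."""
    def __init__(self, rows):
        self.rows = rows
    def __getitem__(self, key):
        if isinstance(key, tuple):            # the a[i, j] spelling
            i, j = key
            picked = self.rows[i]
            if isinstance(i, slice):          # slice first: apply j per row
                return [row[j] for row in picked]
            return picked[j]
        return self.rows[key]                 # plain a[i] or a[i:j]

g = Grid([[1, 2], [3, 4]])
assert g[:, 0] == [1, 3]    # column
assert g[:][0] == [1, 2]    # row -- not the same thing
```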

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Tue Aug  1 02:07:08 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 01 Aug 2000 12:07:08 +1200 (NZST)
Subject: [Python-Dev] Should repr() of string should observe locale?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEPDGMAA.tim_one@email.msn.com>
Message-ID: <200008010007.MAA10298@s454.cosc.canterbury.ac.nz>

Tim Peters:

> The problem isn't that repr sticks in backslash escapes, the problem is that
> repr gets called when repr is inappropriate.

Seems like we need another function that does something in
between str() and repr(). It would be just like repr() except
that it wouldn't put escape sequences in strings unless
absolutely necessary, and it would apply this recursively
to sub-objects.

Not sure what to call it -- goofy() perhaps :-)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From bwarsaw at beopen.com  Tue Aug  1 02:25:43 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 31 Jul 2000 20:25:43 -0400 (EDT)
Subject: [Python-Dev] Should repr() of string should observe locale?
References: <LNBBLJKPBEHFEDALKOLCKEPDGMAA.tim_one@email.msn.com>
	<200008010007.MAA10298@s454.cosc.canterbury.ac.nz>
Message-ID: <14726.6407.729299.113509@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    GE> Seems like we need another function that does something in
    GE> between str() and repr().

I'd bet most people don't even understand why there have to be two
functions that do almost the same thing.

-Barry



From guido at beopen.com  Tue Aug  1 05:32:18 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 31 Jul 2000 22:32:18 -0500
Subject: [Python-Dev] test_re fails with re==pre
In-Reply-To: Your message of "Mon, 31 Jul 2000 23:59:34 +0200."
             <20000731215940.28A11E266F@oratrix.oratrix.nl> 
References: <20000731215940.28A11E266F@oratrix.oratrix.nl> 
Message-ID: <200008010332.WAA25069@cj20424-a.reston1.va.home.com>

> Test_re now works fine if re is sre, but it still fails if re is pre.
> 
> Is this an artifact of the test harness or is there still some sort of
> incompatibility lurking in there?

It's because the tests are actually broken for sre: it prints a bunch
of "=== Failed incorrectly ..." messages.  We added these as "expected
output" to the test/output/test_re file.  The framework just notices
there's a difference and blames pre.

Effbot has promised a new SRE "real soon now" ...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Aug  1 06:01:34 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 31 Jul 2000 23:01:34 -0500
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended slicing for lists)
In-Reply-To: Your message of "Tue, 01 Aug 2000 11:45:02 +1200."
             <200007312345.LAA10291@s454.cosc.canterbury.ac.nz> 
References: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz> 
Message-ID: <200008010401.XAA25180@cj20424-a.reston1.va.home.com>

> I think there are some big conceptual problems with allowing
> negative steps in a slice.
> 
> With ordinary slices, everything is very clear if you think
> of the indices as labelling the points between the list
> elements.
> 
> With a step, this doesn't work any more, and you have to
> think in terms of including the lower index but excluding the
> upper index.
> 
> But what do "upper" and "lower" mean when the step is negative?
> There are several things that a[i:j:-1] could plausibly mean:
> 
>    [a[i], a[i-1], ..., a[j+1]]
> 
>    [a[i-1], a[i-2], ..., a[j]]
> 
>    [a[j], a[j-1], ..., a[i+1]]
> 
>    [a[j-1], a[j-2], ..., a[i]]
> 
> And when you consider negative starting and stopping values,
> it just gets worse. These have no special meaning to range(),
> but in list indexing they do. So what do they mean in a slice
> with a step? Whatever is chosen, it can't be consistent with
> both.
> 
> In the face of such confusion, the only Pythonic thing would
> seem to be to disallow these things.

You have a point!  I just realized today that my example L[9:-1:-1]
does *not* access L[0:10] backwards, because of the way the first -1
is interpreted as one before the end of the list L. :(
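
This behavior survives unchanged in modern Python: the stop value -1 normalizes to "one before the end", i.e. index 9, leaving an empty range, while omitting the stop does what was intended:

```python
L = list(range(10))
assert L[9:-1:-1] == []          # stop -1 -> index 9, same as the start
assert L[9::-1] == [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]   # omitted stop: backwards
```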

But I'm not sure we can forbid this behavior (in general) because the
NumPy folks are already using this.  Since these semantics are up to
the object, and no built-in objects support extended slices (yet), I'm
not sure that this behavior has been documented anywhere except in
NumPy.

However, for built-in lists I think it's okay to forbid a negative
step until we've resolved this...

This is something to consider for patch 100998 which currently
implements (experimental) extended slices for lists...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From ping at lfw.org  Tue Aug  1 02:02:40 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Mon, 31 Jul 2000 17:02:40 -0700 (PDT)
Subject: [Python-Dev] Reordering opcodes (PEP 203 Augmented Assignment)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIEIGDCAA.MarkH@ActiveState.com>
Message-ID: <Pine.LNX.4.10.10007311701050.5008-100000@localhost>

On Mon, 31 Jul 2000, Mark Hammond wrote:
> IDLE and Pythonwin are able to debug arbitrary programs once they have
> started - and they are both written in Python.

But only if you start them *in* IDLE or Pythonwin, right?

> * You do not want to debug the IDE itself, just a tiny bit of code running
> under the IDE.  Making the IDE take the full hit simply because it wants to
> run a debugger occasionally isn't fair.

Well, running with trace hooks in place is no different from
the way things run now.

> The end result is that all IDEs will run with debugging enabled.

Right -- that's what currently happens.  I don't see anything wrong
with that.

> * Python often is embedded, for example, in a Web Server, or used for CGI.
> It should be possible to debug these programs directly.

But we don't even have a way to do this now.  Attaching to an
external running process is highly system-dependent trickery.

If printing out tracebacks and other information isn't enough
and you absolutely have to step the program under a debugger,
the customary way of doing this now is to run a non-forking
server under the debugger.  In that case, you just start a
non-forking server under IDLE which sets -g, and you're fine.


Anyway, i suppose this is all rather moot now that Vladimir has a
clever scheme for tracing even without SET_LINENO.  Go Vladimir!
Your last proposal sounded great.


-- ?!ng




From effbot at telia.com  Tue Aug  1 08:20:01 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 1 Aug 2000 08:20:01 +0200
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended slicing for lists)
References: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz>
Message-ID: <001a01bffb80$87514860$f2a6b5d4@hagrid>

greg wrote:

> I think there are some big conceptual problems with allowing
> negative steps in a slice.

wasn't "slices" supposed to work the same way as "ranges"?

from PEP-204:

    "Extended slices do show, however, that there is already a
    perfectly valid and applicable syntax to denote ranges in a way
    that solve all of the earlier stated disadvantages of the use of
    the range() function"

> In the face of such confusion, the only Pythonic thing would
> seem to be to disallow these things.

...and kill PEP-204 at the same time.

</F>




From tim_one at email.msn.com  Tue Aug  1 08:16:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 02:16:41 -0400
Subject: [Python-Dev] Should repr() of string should observe locale?
In-Reply-To: <14726.6407.729299.113509@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEEPGNAA.tim_one@email.msn.com>

[Barry A. Warsaw]
> I'd bet most people don't even understand why there have to be two
> functions that do almost the same thing.

Indeed they do not.  The docs are too vague about the intended differences
between str and repr; in 1.5.2 and earlier, string was just about the only
builtin type that actually had distinct str() and repr() implementations, so
it was easy to believe that strings were somehow a special case with unique
behavior; 1.6 extends that (just) to floats, where repr(float) now displays
enough digits so that the output can be faithfully converted back to the
float you started with.  This is starting to bother people in the way that
distinct __str__ and __repr__ functions have long frustrated me in my own
classes:  the default (repr) at the prompt leads to bloated output that's
almost always not what I want to see.  Picture repr() applied to a matrix
object!  If it meets the goal of producing a string sufficient to reproduce
the object when eval'ed, it may spray megabytes of string at the prompt.
Many classes implement __repr__ to do what __str__ was intended to do as a
result, just to get bearable at-the-prompt behavior.  So "learn by example"
often teaches the wrong lesson, too.  I'm not surprised that users are
confused!

Python is *unusual* in trying to cater to more than one form of to-string
conversion across the board.  It's a mondo cool idea that hasn't got the
praise it deserves, but perhaps that's just because the current
implementation doesn't really work outside the combo of the builtin types +
plain-ASCII strings.  Unescaping locale printables in repr() is the wrong
solution to a small corner of the right problem.
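The str/repr split Tim is describing is easy to see for the two builtin
types he mentions.  A quick check, in modern Python 3 syntax (today's
repr(float) prints the shortest string that round-trips, rather than the
fixed 17 digits of the 1.6-era change):

```python
# str() shows the raw characters; repr() aims to reproduce the object.
s = 'a\nb'
print(str(s))                    # two lines: a, then b
print(repr(s))                   # one line, quoted, newline escaped
assert eval(repr(s)) == s        # repr round-trips through eval
assert float(repr(0.1)) == 0.1   # repr(float) preserves the exact value
```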





From effbot at telia.com  Tue Aug  1 08:27:15 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 1 Aug 2000 08:27:15 +0200
Subject: [Python-Dev] Reordering opcodes (PEP 203 Augmented Assignment)
References: <Pine.LNX.4.10.10007311701050.5008-100000@localhost>
Message-ID: <006401bffb81$89a7ed20$f2a6b5d4@hagrid>

ping wrote:

> > * Python often is embedded, for example, in a Web Server, or used for CGI.
> > It should be possible to debug these programs directly.
> 
> But we don't even have a way to do this now.  Attaching to an
> external running process is highly system-dependent trickery.

not under Python: just add an import statement to the script, tell
the server to reload it, and off you go...

works on all platforms.

</F>




From paul at prescod.net  Tue Aug  1 08:34:53 2000
From: paul at prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 02:34:53 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBAEEGDCAA.mhammond@skippinet.com.au>
Message-ID: <39866F8D.FCFA85CB@prescod.net>

Mark Hammond wrote:
> 
> >   Interesting; I'd understood from Paul that you'd given approval to
> > this module.
> 
> Actually, it was more along the lines of me promising to spend some
> time "over the next few days", and not getting to it.  However, I believe
> it was less than a week before it was just checked in.

It was checked in the day before the alpha was supposed to go out. I
thought that was what you wanted! On what schedule would you have
preferred us to do it?

> I fear this may be a general symptom of the new flurry of activity; no-one
> with a real job can keep up with this list, meaning valuable feedback on
> many proposals is getting lost.  For example, DigiCool have some obviously
> smart people, but they are clearly too busy to offer feedback on anything
> lately.  That is a real shame, and a good resource we are missing out on.

From my point of view, it was the appearance of _winreg that prompted
the "flurry of activity" that led to winreg. I would never have bothered
with winreg if I were not responding to the upcoming "event" of the
defacto standardization of _winreg. It was clearly designed (and I use
the word loosely) by various people at Microsoft over several years --
with sundry backwards and forwards compatibility hacks embedded.

I'm all for slow and steady, deliberate design. I'm sorry _winreg was
rushed but I could only work with the time I had and the interest level
of the people around. Nobody else wanted to discuss it. Nobody wanted to
review the module. Hardly anyone here even knew what was in the OLD
module.

> I am quite interested to hear from people like Gordon and Bill
> about their thoughts.

I am too. I would *also* be interested in hearing from people who have
not spent the last five years with the Microsoft API because _winreg was
a very thin wrapper over it and so will be obvious to those who already
know it.

I have the feeling that an abstraction over the APIs would never be as
"comfortable" as the Microsoft API you've been using for all of these
years.
-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"





From paul at prescod.net  Tue Aug  1 09:16:30 2000
From: paul at prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 03:16:30 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>
Message-ID: <3986794E.ADBB938C@prescod.net>

(reorganizing the important stuff to the top)

Mark Hammond wrote:
> Still-can't-see-the-added-value-ly,

I had no personal interest in an API for the windows registry but I
could not, in good conscience, let the original one become the 
standard Python registry API. 

Here are some examples:

(num_subkeys, num_values, last_modified ) = winreg.QueryInfoKey( key )
for i in range( num_values ):
    (name, value) = winreg.EnumValue( key, i )
    if name == valuename: print "found"

Why am I enumerating but not using the Python enumeration protocol? Why
do I have to get a bogus 3-tuple before I begin enumerating? Where else
are the words "Query" and "Enum" used in Python APIs?

and

winreg.SetValueEx( key, "ProgramFilesDir", None, winreg.REG_SZ,
r"c:\programs" )

Note that the first argument is the key object (so why isn't this a
method?) and the third argument is documented as bogus. In fact, in
the OpenKey documentation you are requested to "always pass 0 please".

All of that was appropriate when winreg was documented "by reference" to
the Microsoft documentation but if it is going to be a real, documented
module in the Python library then the bogus MS junk should go.

The truth is I would prefer NOT to work on winreg and leave both 
versions out of the library. But since no one else was willing to 
design and implement a decent API, I took that burden upon myself 
rather than see more weird stuff in the Python API.

So the value add is:

 * uses Python iteration protocol
 * uses Python mapping protocol
 * uses Python method invocation syntax
 * uses only features that will be documented
 * does not expose integers as object handles (even for HKLM etc.)
 * uses inspectable, queryable objects even as docstrings
 * has a much more organized module dictionary (do a dir( _winreg))
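The iteration- and mapping-protocol points in that list can be sketched
with a toy stand-in.  RegKey below is a hypothetical mock, not the API
of either real module; it only illustrates enumerating values without a
pre-fetched count and fetching by name without a Query/Enum vocabulary:

```python
# Toy sketch of the "Pythonic" style being argued for: values are
# reachable via the iteration and mapping protocols, with no bogus
# info-tuple needed before enumeration begins.
class RegKey:
    def __init__(self, values):
        self._values = dict(values)

    def __iter__(self):              # iteration protocol
        return iter(self._values.items())

    def __getitem__(self, name):     # mapping protocol
        return self._values[name]

key = RegKey({"ProgramFilesDir": r"c:\programs"})
found = None
for name, value in key:              # no QueryInfoKey, no index variable
    if name == "ProgramFilesDir":
        found = value
assert found == r"c:\programs"
assert key["ProgramFilesDir"] == r"c:\programs"
```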

If you disagree with those principles then we are in trouble. If you
have quibbles about specifics then let's talk.

> Ive just updated the test suite so that test_winreg2.py actually works.
> 
> It appears that the new winreg.py module is still in a state of flux, but
> all work has ceased.  The test for this module has lots of placeholders
> that are not filled in. Worse, the test code was checked in an obviously
> broken state (presumably "to be done", but guess who the bunny who had to
> do it was :-(

The tests ran fine on my machine. Fred had to make minor changes before
he checked it in for me because of module name changes. It's possible
that he mistyped a search and replace or something...or that I had a 
system dependency. Since I changed jobs I no longer have access to 
Visual C++ and have not had luck getting GCC to compile _winreg. This
makes further testing difficult until someone cuts a Windows binary 
build of Python (which is perpetually imminent).

The test cases are not all filled in. The old winreg test tested each
method on average one time. The new winreg tries harder to test each in
a variety of situations. Rather than try to keep all cases in my head I
created empty function bodies. Now we have clear documentation of what
is done and tested and what is to be tested still. Once an alpha is cut,
(or I fix my compiler situation) I can finish that process.

> Browsing the source made it clear that the module docstrings are still
> incomplete (eg "For information on the key API, open a key and look at its
> docstring.").  

The docstrings are not complete, but they are getting better and the old
winreg documentation was certainly not complete either! I admit I got
into a little bit of recursive projects wherein I didn't want to write
the winreg, minidom, SAX, etc. documentation twice so I started working
on stuff that would extract the docstrings and generate LaTeX. That's
almost done and I'll finish up the documentation. That's what the beta
period is for, right?

> Eg, the specific example I had a problem with was:
> 
> key[value]
> 
> Returns a result that includes the key index!  This would be similar to a
> dictionary index _always_ returning the tuple, and the first element of the
> tuple is _always_ the key you just indexed.

There is a very clearly labelled (and docstring-umented) getValueData
method:

key.getValueData("FOO") 

That gets only the value. Surely that's no worse than the original:

winreg.QueryValue( key, "FOO" )

If this is your only design complaint then I don't see cause for alarm
yet.

Here's why I did it that way:

You can fetch data values by their names or by their indexes. If
you've just indexed by the name then of course you know it. If you've
just fetched by the numeric index then you don't. I thought it was more
consistent to have the same value no matter how you indexed. Also, when
you get a value, you should also get a type, because the types can be
important. In that case it still has to be a tuple, so it's just a
question of a two-tuple or a three-tuple. Again, I thought that the
three-tuple was more consistent. Also, this is the same return value
returned by the existing EnumValue function.
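The consistency argument can be made concrete with a toy version.  The
helper names below are hypothetical, not the real module's API; the
point is only that by-index and by-name lookup return the same
(name, data, type) shape:

```python
# Same three-tuple whichever way you fetch: by position you learn the
# name from the result; by name you still get the type, which matters.
REG_SZ = 1  # illustrative constant, standing in for the registry type
entries = [("ProgramFilesDir", r"c:\programs", REG_SZ)]

def get_by_index(i):
    return entries[i]

def get_by_name(name):
    for entry in entries:
        if entry[0] == name:
            return entry             # identical shape, for consistency

assert get_by_index(0) == get_by_name("ProgramFilesDir")
```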

> Has anyone else actually looked at or played with this, and still believe
> it is an improvement over _winreg?  I personally find it unintuitive, and
> will personally continue to use _winreg.  If we can't find anyone to
> complete it, document it, and stand up and say they really like it, I
> suggest we pull it.

I agree that it needs more review. I could not get anyone interested in
a discussion of how the API should look, other than pointing at old
threads.

You are, of course, welcome to use whatever you want but I think it
would be productive to give the new API a workout in real code and then
report specific design complaints. If others concur, we can change it.

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"





From mwh21 at cam.ac.uk  Tue Aug  1 08:59:11 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 01 Aug 2000 07:59:11 +0100
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended slicing for lists)
In-Reply-To: "Fredrik Lundh"'s message of "Tue, 1 Aug 2000 08:20:01 +0200"
References: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz> <001a01bffb80$87514860$f2a6b5d4@hagrid>
Message-ID: <m34s55a2m8.fsf@atrus.jesus.cam.ac.uk>

"Fredrik Lundh" <effbot at telia.com> writes:

> greg wrote:
> 
> > I think there are some big conceptual problems with allowing
> > negative steps in a slice.
> 
> wasn't "slices" supposed to work the same way as "ranges"?

The problem is that for slices (& indexes in general) that negative
indices have a special interpretation:

range(10,-1,-1)
range(10)[:-1]

Personally I don't think it's that bad (you just have to remember to
write :: instead of :-1: when you want to step all the way back to the
beginning).  More serious is what you do with out of range indices -
and NumPy is a bit odd with this one, it seems:

>>> l = Numeric.arrayrange(10)
>>> l[30::-2]
array([0, 8, 6, 4, 2, 0])

What's that initial "0" doing there?  Can someone who actually
understands NumPy explain this?
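The clash can be checked against the semantics builtin lists eventually
grew (extended slicing for lists landed well after this thread, in
Python 2.3): negative *indices* keep their "from the end" meaning,
unlike in range(), and out-of-range starts are clipped rather than
raising:

```python
# Negative-step slicing on a plain list, per the eventually-adopted rules.
a = list(range(10))
assert a[8:2:-2] == [8, 6, 4]             # start included, stop excluded
assert a[:2:-1] == [9, 8, 7, 6, 5, 4, 3]  # omitted start = last element
assert a[8:-1:-1] == []                   # -1 means index 9 here, not "before 0"
assert a[8::-1] == [8, 7, 6, 5, 4, 3, 2, 1, 0]  # only omission reaches index 0
assert a[30::-2] == [9, 7, 5, 3, 1]       # out-of-range start is clipped
```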

Cheers,
M.

(PS: PySlice_GetIndices is in practice a bit useless because when it
fails it offers no explanation of why!  Does any one use this
function, or should I submit a patch to make it a bit more helpful (&
support longs)?)

-- 
    -Dr. Olin Shivers,
     Ph.D., Cranberry-Melon School of Cucumber Science
                                           -- seen in comp.lang.scheme




From tim_one at email.msn.com  Tue Aug  1 09:57:06 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 03:57:06 -0400
Subject: [Python-Dev] Should repr() of string should observe locale?
In-Reply-To: <200008010007.MAA10298@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEFEGNAA.tim_one@email.msn.com>

[Greg Ewing]
> Seems like we need another function that does something in
> between str() and repr(). It would be just like repr() except
> that it wouldn't put escape sequences in strings unless
> absolutely necessary, and it would apply this recursively
> to sub-objects.
>
> Not sure what to call it -- goofy() perhaps :-)

In the previous incarnation of this debate, a related (more str-like than
repr-like) intermediate was named ssctsoos().  Meaning, of course <wink>,
"str() special casing the snot out of strings".  It was purely a hack, and I
was too busy working at Dragon at the time to give it the thought it needed.
Now I'm too busy working at PythonLabs <0.5 wink>.

not-a-priority-ly y'rs  - tim





From MarkH at ActiveState.com  Tue Aug  1 09:59:22 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 1 Aug 2000 17:59:22 +1000
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <39866F8D.FCFA85CB@prescod.net>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>

I am going to try very hard to avoid antagonistic statements - it doesn't
help anyone or anything when we are so obviously at each other's throats.

Let me start by being conciliatory:  I do appreciate the fact that you made
the effort on the winreg module, and accept it was done for all the right
reasons.  The fact that I don't happen to like it doesn't imply any personal
criticism - I believe we simply have a philosophical disagreement.  But
then again, they are the worst kinds of disagreements to have!

> > Actually, it was more along the lines of me promising to
> spend some
> > time "over the next few days", and not getting to it.
> However, I believe
> > it was less than a week before it was just checked in.
>
> It was checked in the day before the alpha was supposed to go out. I
> thought that was what you wanted! On what schedule would you have
> preferred us to do it?

I'm not sure, but one that allowed everyone with relevant input to give it.
Guido also stated he was not happy with the process.  I would have
preferred to have it miss the alpha than to go out with a design we are not
happy with.

> From my point of view, it was the appearance of _winreg that prompted
> the "flurry of activity" that led to winreg. I would never have bothered
> with winreg if I were not responding to the upcoming "event" of the
> defacto standardization of _winreg. It was clearly designed (and I use
> the word loosely) by various people at Microsoft over several years --
> with sundry backwards and forwards compatibility hacks embedded.

Agreed.  However, the main problem was that people were assuming win32api
was around to get at the registry.  The win32api module's registry
functions have been around for _ages_.  None of its users have ever
proposed a more Pythonic API.  Thus I find it a little strange that someone
without much experience in the API should find it such an abomination,
while experienced users of the API were clearly happy (ok - maybe "happy"
isn't the right word - but not unhappy enough to complain :-)

If nothing else, it allows the proliferation of documentation on the Win32
API to apply to Python.  This is clearly not true with the new module.

This is also a good case for using the .NET API.  However, it still would
not provide Python indexing, iteration etc.  However, as I state below, I'm
not convinced this is a problem.

> I'm all for slow and steady, deliberate design. I'm sorry _winreg was
> rushed but I could only work with the time I had and the interest level
> of the people around. Nobody else wanted to discuss it. Nobody wanted to
> review the module. Hardly anyone here even knew what was in the OLD
> module.

I don't believe that is fair.  As I said, plenty of people have used win32api,
and were sick of insisting their users install my extensions.  distutils
was the straw that broke the serpent's back, IIRC.

It is simply the sheer volume of people who _did_ use the win32api registry
functions that forced the new winreg module.

The fact that no one else wanted to discuss it, or review it, or generally
seemed to care should have been indication that the new winreg was not
really required, rather than taken as proof that a half-baked module that
has not had any review should be checked in.

> I am too. I would *also* be interested in hearing from people who have
> not spent the last five years with the Microsoft API because _winreg was
> a very thin wrapper over it and so will be obvious to those who already
> know it.

Agreed - but it isn't going to happen.  There are not enough people on this
list who are inexperienced with Windows but who also intend to get that
experience during the beta cycle.  I hope you would agree that adding an
experimental module to Python simply as a social experiment is not the
right thing to do.  Once winreg is released, it will be too late to remove,
even if the consensus is that it should never have been released in the
first place.

> I have the feeling that an abstraction over the APIs would never be as
> "comfortable" as the Microsoft API you've been using for all of these
> years.

Again agreed - although we should replace the "you've" with "you and every
other Windows programmer" - which tends to make the case for _winreg
stronger, IMO.

Moving to the second mail:

> All of that was appropriate when winreg was documented "by reference" to
> the Microsoft documentation but if it is going to be a real, documented
> module in the Python library then the bogus MS junk should go.

I agree in principle, but IMO it is obvious this will not happen.  It hasn't
happened yet, and you yourself have moved on to more interesting PEPs.  How
do you propose this better documentation will happen?

> The truth is I would prefer NOT to work on winreg and leave both
> versions out of the library.

Me too - it has just cost me work so far, and offers me _zero_ benefit (if
anyone in the world can assume that the win32api module is around, it
surely must be me ;-).  However, this is a debate you need to take up with
the distutils people, and everyone else who has asked for registry access
in the core.  Guido also appears to have heard these calls, hence we had
his complete support for some sort of registry module for the core.

> So the value add is:
...
> If you disagree with those principles then we are in trouble. If you
> have quibbles about specifics then let's talk.

I'm afraid we are in a little trouble ;-)  These appear dubious to me.  If
I weigh in the number of calls over the years for a more Pythonic API over
the win32api functions, I become more convinced.

The registry is a tree structure similar to a file system.  I haven't
heard any calls to move the os.listdir() function or the glob
module to a more "oo" style.  I don't see a "directory" object that supports
Python-style indexing or iteration.  I don't see any of your other benefits
being applied to Python's view of the file system - so why is the registry
so different?
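The file-system analogy is easy to demonstrate: Python of that era
exposed the directory tree through flat functions, with no wrapper
object supporting indexing or iteration (the object-oriented pathlib
layer arrived many years later):

```python
# os.listdir() returns a plain list of names - no "directory" object.
import os
import tempfile

d = tempfile.mkdtemp()
open(os.path.join(d, "a.txt"), "w").close()
names = os.listdir(d)        # just a list; iteration is over strings
assert "a.txt" in names
```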

To try and get more productive:  Bill, Gordon et al appear to have the
sense to stay out of this debate.  Unless other people do chime in, Paul
and I will remain at an impasse, and no one will be happy.  I would much
prefer to move this forward than to vent at each other regarding mails
neither of us can remember in detail ;-)

So what to do?  Anyone?  If even _one_ experienced Windows developer on
this list can say they believe "winreg" is appropriate and intuitive, I am
happy to shut up (and then simply push for better winreg documentation ;-)

Mark.




From moshez at math.huji.ac.il  Tue Aug  1 10:36:29 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Tue, 1 Aug 2000 11:36:29 +0300 (IDT)
Subject: [Python-Dev] Access to the Bug Database
Message-ID: <Pine.GSO.4.10.10008011134540.9510-100000@sundial>

Hi!

I think I need access to the bug database -- but in the meantime,
anyone who wants to mark 110612 as closed is welcome to.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Tue Aug  1 10:40:53 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 04:40:53 -0400
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFFGNAA.tim_one@email.msn.com>

FWIW, I ignored all the winreg modules, and all the debate about them.  Why?
Just because Mark's had been in use for years already, so was already
battle-tested.  There's no chance that any other platform will ever make use
of this module, and given that its appeal is thus solely to Windows users,
it was fine by me if it didn't abstract *anything* away from MS's Win32 API.
MS's APIs are hard enough to understand without somebody else putting their
own layers of confusion <0.9 wink> on top of them.

May as well complain that the SGI-specific cd.open() function warns that if
you pass anything at all to its optional "mode" argument, it had better be
the string "r" (maybe that makes some kind of perverse sense to SGI weenies?
fine by me if so).

So, sorry, but I haven't even looked at Paul's code.  I probably should,
but-- jeez! --there are so many other things that *need* to get done.  I did
look at Mark's (many months ago) as part of helping him reformat it to
Guido's tastes, and all I remember thinking about it then is "yup, looks a
whole lot like the Windows registry API -- when I need it I'll be able to
browse the MS docs lightly and use it straight off -- good!".

So unless Mark went and did something like clean it up <wink>, I still think
it's good.





From tim_one at email.msn.com  Tue Aug  1 11:27:59 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 05:27:59 -0400
Subject: [Python-Dev] Access to the Bug Database
In-Reply-To: <Pine.GSO.4.10.10008011134540.9510-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFGGNAA.tim_one@email.msn.com>

[Moshe Zadka]
> I think I need access to the bug database

Indeed, you had no access to the SF bug database at all.  Neither did a
bunch of others.  I have a theory about that:  I mentioned several weeks ago
that IE5 simply could not display the Member Permissions admin page
correctly, after it reached a certain size.  I downloaded a stinking
Netscape then, and that worked fine until it reached *another*, larger size,
at which point the display of some number of the bottom-most entries (like
where moshez lives) gets totally screwed up *sometimes*.  My *suspicion* is
that if an admin changes a permission while either IE5 or NS is in this
screwed-up state, it wreaks havoc with the permissions of the members whose
display lines are screwed up.  It's a weak suspicion <wink>, but a real one:
I've only seen NS screw up some contiguous number of the bottom-most lines,
I expect all the admins are using NS, and it was all and only a contiguous
block of developers at the bottom of the page who had their Bug Manager
permissions set to None (along with other damaged values) when I brought up
the page.

So, admins, look out for that!

Anyway, I just went thru and gave every developer admin privileges on the SF
Bug Manager.  Recall that it will probably take about 6 hours to take
effect, though.

> -- but in the meantime, anyone who wants to mark 110612 as
> closed is welcome to.

No, they're not:  nobody who doesn't know *why* the bug is being closed
should even think about closing it.  It's still open.

you're-welcome<wink>-ly y'rs  - tim





From Vladimir.Marangozov at inrialpes.fr  Tue Aug  1 11:53:36 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 1 Aug 2000 11:53:36 +0200 (CEST)
Subject: [Python-Dev] SET_LINENO and python options
In-Reply-To: <14725.63622.190585.197392@beluga.mojam.com> from "Skip Montanaro" at Jul 31, 2000 05:07:02 PM
Message-ID: <200008010953.LAA02082@python.inrialpes.fr>

Skip Montanaro wrote:
> 
> Isn't that what the code object's co_lnotab is for?  I thought the idea was
> to dispense with SET_LINENO altogether and just compute line numbers using
> co_lnotab on those rare occasions (debugging, tracebacks, etc) when you
> needed them.

Don't worry about it anymore. It's all in Postponed patch #101022 at SF.
It makes the current "-O" the default (and uses co_lnotab), and reverts
back to the current default with "-d".
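The idea behind dropping SET_LINENO is that line numbers can be
recovered on demand from the code object's line-number table (co_lnotab
in CPython of this era; later versions changed the attribute but the
dis module still reads the mapping portably):

```python
# Recover the bytecode-offset -> line-number mapping without any
# SET_LINENO opcodes, via the code object's line table.
import dis

def f():
    x = 1
    y = 2
    return x + y

starts = list(dis.findlinestarts(f.__code__))
lines = [line for _, line in starts if line is not None]
assert lines                                  # the table is non-empty
assert min(lines) >= f.__code__.co_firstlineno
```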

I give myself a break on this. You guys need to test it now and report
some feedback and impressions. If only to help Guido making up his mind
and give him a chance to pronounce on it <wink>. 

[?!ng]
> Anyway, i suppose this is all rather moot now that Vladimir has a
> clever scheme for tracing even without SET_LINENO.  Go Vladimir!
> Your last proposal sounded great.

Which one? They are all the latest <wink>.
See also the log msg of the latest tiny patch update at SF.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From nhodgson at bigpond.net.au  Tue Aug  1 12:47:12 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Tue, 1 Aug 2000 20:47:12 +1000
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>
Message-ID: <010501bffba5$db4ebf90$8119fea9@neil>

> So what to do?  Anyone?  If even _one_ experienced Windows developer on
> this list can say they believe "winreg" is appropriate and intuitive, I am
> happy to shut up (and then simply push for better winreg documentation ;-)

   Sorry but my contribution isn't going to help much with breaking the
impasse.

   Registry code tends to be little lumps of complexity you don't touch once
it is working. The Win32 Reg* API is quite ugly - RegCreateKeyEx
takes/returns 10 parameters but you only normally want 3 and the return
status and everyone asks for KEY_ALL_ACCESS until the installation testers
tell you it fails for non-Administrators. So it would be good if the API was
simpler and defaulted everything you don't need to set.

   But I only hack the registry about once a year with Python. So if it's
closer to the Win32 API then that helps me to use existing knowledge and
documentation.

   When writing an urllib patch recently, winreg seemed OK. Is it complete
enough? Are the things you can't do with it important for its role? IMO, if
winreg can handle the vast majority of cases (say, 98%) then it's a useful
tool and people who need RegSetKeySecurity and similar can go to win32api.
Do the distutils developers know how much registry access they need?

   Enough fence sitting for now,

   Neil






From MarkH at ActiveState.com  Tue Aug  1 13:08:58 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 1 Aug 2000 21:08:58 +1000
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <010501bffba5$db4ebf90$8119fea9@neil>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCELIDCAA.MarkH@ActiveState.com>

Just to clarify (or confuse) the issue:

>    When writing an urllib patch recently, winreg seemed OK. Is
> it complete
> enough? Are the things you can't do with it important for its
> role? IMO, if
> winreg can handle the vast majority of cases (say, 98%) then its a useful
> tool and people who need RegSetKeySecurity and similar can go to
> win32api.

Note that Neil was actually using _winreg - the exposure of the raw Win32
API.  Part of my applying the patch was to rename the usage of "winreg" to
"_winreg".

Between the time of you writing the original patch and it being applied,
the old "winreg" module was renamed to "_winreg", and Paul's new
"winreg.py" was added.  The bone of contention is the new "winreg.py"
module, which urllib does _not_ use.

Mark.




From jim at interet.com  Tue Aug  1 15:28:40 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Tue, 01 Aug 2000 09:28:40 -0400
Subject: [Python-Dev] InfoWorld July 17 looks at Zope and Python
References: <397DB146.C68F9CD0@interet.com> <398654A8.37EB17BA@prescod.net>
Message-ID: <3986D088.E82E2162@interet.com>

Paul Prescod wrote:
> 
> Would you mind giving me the gist of the review? 20-word summary, if you
> don't mind.

Please note that I don't necessarily agree with the
reviews.  Also, there is no such thing as bad publicity.

Page 50: "Zope is a powerful application server.  Version
2.2 beta scales well, but enterprise capability, Python
language raise costs beyond the competition's."

Author claims he must purchase ZEO for $25-50K which is
too expensive.  Zope is dedicated to OOP, but shops not
doing OOP will have problems understanding it.  Python
expertise is necessary, but shops already know VB, C++ and
JavaScript.

Page 58:  "After many tutorials, I'm still waiting to
become a Zope addict."

Zope is based on Python, but that is no problem because
you do most programming in DTML which is like HTML.  It is
hard to get started in Zope because of lack of documentation,
it is hard to write code in browser text box, OOP-to-the-max
philosophy is unlike a familiar relational data base.
Zope has an unnecessarily high nerd factor.  It fails to
automate simple tasks.


My point in all this is that we design features to
appeal to computer scientists instead of "normal users".

JimA



From billtut at microsoft.com  Tue Aug  1 15:57:37 2000
From: billtut at microsoft.com (Bill Tutt)
Date: Tue, 1 Aug 2000 06:57:37 -0700 
Subject: [Python-Dev] New winreg module really an improvement?
Message-ID: <58C671173DB6174A93E9ED88DCB0883D0A610A@red-msg-07.redmond.corp.microsoft.com>

Mark wrote: 
> To try and get more productive:  Bill, Gordon et al appear to have the
> sense to stay out of this debate.  Unless other people do chime in, Paul
> and I will remain at an impasse, and no one will be happy.  I would much
> prefer to move this forward than to vent at each other regarding mails
> neither of us can remember in detail ;-)

I'm actually in the process of checking it out, and am hoping to compose
some comments on it later today.
I do know this about abstracting the registry APIs. If it doesn't allow you
to do everything you can do with the normal APIs, then you've failed in your
abstraction. (Which is probably why I've never yet seen a successful
abstraction of the API. :) )
The registry is indeed a bizarre critter. Keys have values, and values have
values. Ugh.... It's enough to drive a sane man bonkers, and here I was
being annoyed by the person who originally designed the NT console APIs,
silly me....

Bill





From gmcm at hypernet.com  Tue Aug  1 18:16:54 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 1 Aug 2000 12:16:54 -0400
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>
References: <39866F8D.FCFA85CB@prescod.net>
Message-ID: <1246975873-72274187@hypernet.com>

[Mark]
> To try and get more productive:  Bill, Gordon et al appear to
> have the sense to stay out of this debate.  

Wish I had that much sense...

I'm only +0 on the philosophy of a friendly Pythonic wrapper: 
the registry is only rarely the "right" solution. You need it 
when you have small amounts of persistent data that needs to 
be available to multiple apps and / or Windows. I actively 
discourage use of the registry for any other purposes. So 
making it easy to use is of very low priority for me. 

In addition, I doubt that a sane wrapper can be put over it. At 
first blush, it looks like a nested dict. But the keys are 
ordered. And a leaf is more like a list of tuples [(value, data), ]. 
But if you pull up regedit and look at how it's used, the (user-
speak) "value" may be a (MS-speak) "key", "value" or "data". 
Just note the number of entries where a keyname has one 
(value, data) pair that consists of ("(Default)", "(value not 
set)"). Or the number where keyname must be opened, but 
the (value, data) pair is ("(Default)", something). (It doesn't 
help that "key" may mean "keyname" or "keyhandle", and 
"name" may mean "keyname" or "valuename" and "value" 
may mean "valuename" or "datavalue".)

IOW, this isn't like passing lists (instead of FD_SETs) to  
select. No known abstract container matches the registry. My 
suspicion is that any attempt to map it just means the user 
will have to understand both the underlying API and the 
mapping.
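To make that shape concrete: a key carries both an ordered list of (value-name, data) pairs (including the "(Default)" entry, whose value-name is the empty string) and a set of named subkeys. A toy model, with invented class and field names purely for illustration:

```python
# Toy model of one registry key (names invented for illustration).
# A key is neither a plain dict nor a plain list: it holds an
# *ordered* list of (value-name, data) pairs -- the "(Default)"
# entry has the empty string as its value-name -- plus a mapping
# of named subkeys.
class RegKey:
    def __init__(self):
        self.values = [("", None)]   # the "(Default)" entry, initially unset
        self.subkeys = {}            # keyname -> RegKey

root = RegKey()
sub = root.subkeys.setdefault("Software", RegKey())
sub.values.append(("InstallDir", r"C:\Program Files"))

# Neither a nested dict nor a flat list of tuples alone captures this:
assert root.values[0] == ("", None)    # key exists, default "(value not set)"
assert sub.values[1] == ("InstallDir", r"C:\Program Files")
```

Any flat mapping onto a dict or list loses one half of that structure, which is the point above: the user ends up learning both the underlying API and the mapping.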

As a practical matter, it looks to me like winreg (under any but 
the most well-informed usage) may well leak handles. If so, 
that would be a disaster. But I don't have time to check it out.

In sum:
 - I doubt the registry can be made to look elegant
 - I use it so little I don't really care

- Gordon



From paul at prescod.net  Tue Aug  1 18:52:45 2000
From: paul at prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 12:52:45 -0400
Subject: [Python-Dev] Winreg recap
Message-ID: <3987005C.9C45D7B6@prescod.net>

I specifically asked everyone here if an abstraction was a good idea. I
got three + votes and no - votes. One of the + votes requested that we
still ship the underlying module. Fine. I was actually pointed (on
python-dev) to specs for an abstraction layer that AFAIK had been
designed *on Python-dev*.

Back then, I said:

> > I've just had a chance to look at the winreg module. I think that it is
> > too low-level.

Mark Hammond said:
> I agree. There was a proposal (from Thomas Heller, IIRC) to do just this.
> I successfully argued there should be _2_ modules for Python - the raw
> low-level API, which guarantees you can do (almost) anything.  A
> higher-level API could cover the 80% of cases.
> ...
> I have no real problem with your proposed design, as long as it is written
> in Python, _using_ the low-level API.  It could be called "registry" or I
> would even be happy for "winreg.pyd" -> "_winreg.pyd" and your new module
> to be called "winreg.py"

Gordon pointed me to the spec. I took it and expanded on it to cover a
wider range of cases.

So now I go off and code it up and in addition to complaining about one
detail, I'm also told that there is no real point to having a high level
API. Windows users are accustomed to hacks and pain so crank it up!

> FWIW, I ignored all the winreg modules, and all the debate about them.  Why?
> Just because Mark's had been in use for years already, so was already
> battle-tested.  There's no chance that any other platform will ever make use
> of this module, and given that its appeal is thus solely to Windows users,
> it was fine by me if it didn't abstract *anything* away from MS's Win32 API.

It is precisely because it is for Windows users -- often coming from VB,
JavaScript or now C# -- that it needs to be abstracted.

I have the philosophy that I come to Python (both the language and the
library) because I want things to be easy and powerful at the same time.
Wherever feasible, our libraries *should* be cleaner and better than
the hacks that they cover up. Shouldn't they? I mean even *Microsoft*
abstracted over the registry API for VB, JavaScript, C# (and perhaps
Java). Are we really going to do less for our users?

To me, Python (language and library) is a refuge from the hackiness of
the rest of the world.

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From paul at prescod.net  Tue Aug  1 18:53:31 2000
From: paul at prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 12:53:31 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBAEEGDCAA.mhammond@skippinet.com.au> <200007281206.HAA04102@cj20424-a.reston1.va.home.com>
Message-ID: <3987008B.35D5C2A2@prescod.net>

Guido van Rossum wrote:
> 
> I vaguely remember that I wasn't happy with the way this was handled
> either, but was too busy at the time to look into it.  (I can't say
> whether I'm happy with the module or not, since I've never tried to
> use it.  But I do feel unhappy about the process.)

I was also unhappy with the process but from a different perspective.

A new module appeared in the Python library: _winreg. It was based on
tried and true code, but its API had many placeholder
arguments (where Microsoft had placeholder arguments), used function call
syntax for things that were clearly methods (as Microsoft did for C),
had an enumeration mechanism that seems, to me, to be very unPythonic, 
had many undocumented features and constants, and the documented 
methods and properties often follow weird Microsoft conventions 
(e.g. SetValueEx).
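The enumeration mechanism in question is index-based: callers must probe EnumKey with increasing indices until an exception escapes. A minimal, portable sketch of that style (FakeKey is an invented stand-in for a registry handle; the real _winreg.EnumKey raises WindowsError past the end, which this sketch replaces with IndexError since WindowsError is Windows-only):

```python
# Sketch of the index-until-exception enumeration style that _winreg
# forces on callers.  FakeKey stands in for a real registry handle.
class FakeKey:
    def __init__(self, subkey_names):
        self._subkey_names = subkey_names

    def enum_key(self, index):
        # Mirrors _winreg.EnumKey(key, index), but raises IndexError
        # instead of WindowsError when the index runs off the end.
        return self._subkey_names[index]

def list_subkeys(key):
    """Collect subkey names by probing increasing indices until failure."""
    names, i = [], 0
    while True:
        try:
            names.append(key.enum_key(i))
        except IndexError:
            return names
        i += 1

assert list_subkeys(FakeKey(["Software", "System"])) == ["Software", "System"]
```

Compare that with the ordinary Python idiom of simply iterating over a container, and the complaint is easy to see.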

The LaTeX documentation for the old winreg says of one method: "This is
Lame Lame Lame, DO NOT USE THIS!!!"

Now I am still working on new winreg. I got involved in a recursive 
project to avoid writing the docs twice in two different formats. We 
are still in the beta period so there is no need to panic about 
documentation yet.

I would love nothing more than to hear that Windows registry handling is
hereby delayed until Python 7 or until someone more interested wants to
work on it for the love of programming. But if that is not going to
happen then I will strongly advise against falling back to _winreg which
is severely non-Pythonic.

> I vaguely remember that Paul Prescod's main gripe with the _winreg API
> was that it's not object-oriented enough -- but that seems his main
> gripe about most code these days. :-)

In this case it wasn't a mild preference, it was a strong allergic
reaction!

> Paul, how much experience with using the Windows registry did you have
> when you designed the new API?

I use it off and on. There are still corners of _winreg that I don't
understand. That's part of why I thought it needed to be covered up with
something that could be fully documented. To get even the level of
understanding I have, of the *original* _winreg, I had to scour the Web.
The perl docs were the most helpful. :)

Anyhow, Mark isn't complaining about me misunderstanding it, he's
complaining about my mapping into the Python object model. That's fair.
That's what python-dev is for.

As far as Greg using _winreg, my impression was that that code predates
new winreg. I think that anyone who reads even just the docstrings for
the new one and the documentation for the other is going to feel that 
the new one is at the very least more organized and thought out. Whether
it is properly designed is up to users to decide.

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From guido at beopen.com  Tue Aug  1 20:20:23 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 13:20:23 -0500
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: Your message of "Tue, 01 Aug 2000 03:16:30 -0400."
             <3986794E.ADBB938C@prescod.net> 
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>  
            <3986794E.ADBB938C@prescod.net> 
Message-ID: <200008011820.NAA30284@cj20424-a.reston1.va.home.com>

Paul wrote:
> I had no personal interest in an API for the windows registry but I
> could not, in good conscience, let the original one become the 
> standard Python registry API.

and later:
> I use it off and on. There are still corners of _winreg that I don't
> understand. That's part of why I thought it needed to be covered up with
> something that could be fully documented. To get even the level of
> understanding I have, of the *original* _winreg, I had to scour the Web.
> The perl docs were the most helpful. :)

I believe this is the crux of the problem.  Your only mistake was that
you criticized and then tried to redesign a (poorly designed) API that
you weren't intimately familiar with.

My boss tries to do this occasionally; he has a tendency to complain
that my code doesn't contain enough classes.  I tell him to go away --
he only just started learning Python from a book that I've never seen,
so he wouldn't understand...

Paul, I think that the best thing to do now is to withdraw winreg.py,
and to keep (and document!) the _winreg extension with the
understanding that it's a wrapper around poorly designed API but at
least it's very close to the C API.  The leading underscore should be
a hint that this is not a module for every day use.

Hopefully someday someone will eventually create a set of higher level
bindings modeled after the Java, VB or C# version of the API.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From fdrake at beopen.com  Tue Aug  1 19:43:16 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 1 Aug 2000 13:43:16 -0400 (EDT)
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <200008011820.NAA30284@cj20424-a.reston1.va.home.com>
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>
	<3986794E.ADBB938C@prescod.net>
	<200008011820.NAA30284@cj20424-a.reston1.va.home.com>
Message-ID: <14727.3124.622333.980689@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > and to keep (and document!) the _winreg extension with the
 > understanding that it's a wrapper around poorly designed API but at
 > least it's very close to the C API.  The leading underscore should be
 > a hint that this is not a module for every day use.

  It is documented (as _winreg), but I've not reviewed the section in
great detail (yet!).  It will need to be revised to not refer to the
winreg module as the preferred API.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From moshez at math.huji.ac.il  Tue Aug  1 20:30:48 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Tue, 1 Aug 2000 21:30:48 +0300 (IDT)
Subject: [Python-Dev] Bug Database
Message-ID: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>

I've just had a quick view over the database, and saw what we can prune at
no cost:

110647 -- Segmentation fault in "%.1200d" % 1. Fixed for me...
110649 -- Core dumps on compiling big expressions ('['+'1,'*100000+'1]'). 
          Fixed for me -- now throws a SyntaxError
110653 -- Complain about how 
          class foo:
	
		def __init__(self):
			self.bar1 = bar

		def bar(self):
			pass
         Creates cycles. A notabug if I ever saw one.
110654 -- 1+0j tested false. The bug was fixed.
110679 -- math.log(0) dumps core. Gives OverflowError for me...(I'm using
          a different OS, but the same CPU family (intel))
110710 -- range(10**n) gave segfault. Works for me -- either works, or throws
          MemoryError
110711 -- apply(foo, bar, {}) throws MemoryError. Works for me. (But might
          be an SGI problem)
110712 -- seems to be a duplicate of 110711
110715 -- urllib.urlretrieve() segfaults under kernel 2.2.12. Works for
          me with 2.2.15. 
110740, 110741, 110743, 110745, 110746, 110747, 110749, 110750 -- dups of 110715
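For item 110653, assuming the report meant `self.bar1 = self.bar` (as posted, the bare name `bar` inside `__init__` would raise a NameError), the cycle it complains about is the ordinary one created by storing a bound method on the instance:

```python
# Item 110653 restated (assuming `self.bar1 = self.bar` was meant;
# the snippet as posted would raise NameError on the bare name `bar`).
class Foo:
    def __init__(self):
        # Storing a bound method on the instance creates a reference
        # cycle: instance -> bound method -> instance.
        self.bar1 = self.bar

    def bar(self):
        pass

f = Foo()
assert f.bar1.__self__ is f  # the bound method refers back to f
```

Such cycles are exactly what the cycle collector exists to reclaim (modern Python spells the back-reference `__self__`; Python 2 called it `im_self`), hence "a notabug if I ever saw one".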

I've got to go to sleep now....

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From jeremy at beopen.com  Tue Aug  1 20:47:47 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 1 Aug 2000 14:47:47 -0400 (EDT)
Subject: [Python-Dev] Bug Database
In-Reply-To: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>
References: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>
Message-ID: <14727.6995.164586.983795@bitdiddle.concentric.net>

Thanks for doing some triage, Moshe!

I am in the process of moving the bug database from jitterbug to
SourceForge.  There are still a few kinks in the process, which I am
trying to finish today.  There are two problems you will see with the
current database:

    * Many bugs do not contain all the followup messages that
    Jitterbug has.  I am working to add them.

    * There are many duplicates of some bugs.  The duplicates are the
    result of the debugging process for my Jitterbug -> SF conversion
    script.  I will remove these before I am done.  Any bug numbered
    higher than 110740 is probably a duplicate at this point.

The conversion process has placed most of the Jitterbug entries in the
SF bug tracker.  The PR number is in the SF summary and most of the
relevant Jitterbug headers (submitter, date, os, version) are part of
the body of the SF bug.  Any followups to the Jitterbug report are
stored as followup comments in SF.

The SF bug tracker has several fields that we can use to manage bug
reports.

* Category: Describes what part of Python the bug applies to.  Current
values are parser/compiler, core, modules, library, build, windows,
documentation.  We can add more categories, e.g. library/xml, if that
is helpful.

* Priority: We can assign a value from 1 to 9, where 9 is the highest
priority.  We will have to develop some guidelines for what those
priorities mean.  Right now everything is priority 5 (medium).  I would
hazard a guess that bugs causing core dumps should have much higher
priority.

* Group: These reproduce some of the Jitterbug groups, like trash,
platform-specific, and irreproducible.  These are rough categories
that we can use, but I'm not sure how valuable they are.

* Resolution: What we plan to do about the bug.

* Assigned To: We can now assign bugs to specific people for
resolution.

* Status: Open or Closed.  When a bug has been fixed in the CVS
repository and a test case added to cover the bug, change its status
to Closed.

New bug reports should use the sourceforge interface.

Jeremy



From guido at beopen.com  Tue Aug  1 22:14:39 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 15:14:39 -0500
Subject: [Python-Dev] Bug Database
In-Reply-To: Your message of "Tue, 01 Aug 2000 14:47:47 -0400."
             <14727.6995.164586.983795@bitdiddle.concentric.net> 
References: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>  
            <14727.6995.164586.983795@bitdiddle.concentric.net> 
Message-ID: <200008012014.PAA31076@cj20424-a.reston1.va.home.com>

> * Category: Describes what part of Python the bug applies to.  Current
> values are parser/compiler, core, modules, library, build, windows,
> documentation.  We can add more categories, e.g. library/xml, if that
> is helpful.

Before it's too late, would it make sense to try and get the
categories to be the same in the Bug and Patch managers?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From m.favas at per.dem.csiro.au  Tue Aug  1 22:30:42 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Wed, 02 Aug 2000 04:30:42 +0800
Subject: [Python-Dev] regression test failure in test_tokenize?
Message-ID: <39873372.1C6F8CE1@per.dem.csiro.au>

Current CVS (Wed Aug  2 04:22:16 WST 2000) fails on Tru64 Unix:

./python Lib/test/regrtest.py test_tokenize.py 
test_tokenize
test test_tokenize failed -- Writing: "57,4-57,5:\011NUMBER\011'3'",
expected: "57,4-57,8:\011NUMBER\011'3."
1 test failed: test_tokenize

Test produces (snipped):
57,4-57,5:      NUMBER  '3'

Test should produce (if supplied output correct):
57,4-57,8:      NUMBER  '3.14'

Is this just me, or an un-checked checkin? (I noticed some new sre bits
in my current CVS version.)

Mark

-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913



From akuchlin at mems-exchange.org  Tue Aug  1 22:47:57 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 1 Aug 2000 16:47:57 -0400
Subject: [Python-Dev] regression test failure in test_tokenize?
In-Reply-To: <39873372.1C6F8CE1@per.dem.csiro.au>; from m.favas@per.dem.csiro.au on Wed, Aug 02, 2000 at 04:30:42AM +0800
References: <39873372.1C6F8CE1@per.dem.csiro.au>
Message-ID: <20000801164757.B27333@kronos.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 04:30:42AM +0800, Mark Favas wrote:
>Current CVS (Wed Aug  2 04:22:16 WST 2000) fails on Tru64 Unix:
>Is this just me, or an un-checked checkin? (I noticed some new sre bits
>in my current CVS version.)

test_tokenize works fine using the current CVS on Linux; perhaps this
is a 64-bit problem in sre manifesting itself?

--amk



From effbot at telia.com  Tue Aug  1 23:16:15 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 1 Aug 2000 23:16:15 +0200
Subject: [Python-Dev] regression test failure in test_tokenize?
References: <39873372.1C6F8CE1@per.dem.csiro.au> <20000801164757.B27333@kronos.cnri.reston.va.us>
Message-ID: <02ac01bffbfd$bc6b27a0$f2a6b5d4@hagrid>

andrew wrote:
> On Wed, Aug 02, 2000 at 04:30:42AM +0800, Mark Favas wrote:
> >Current CVS (Wed Aug  2 04:22:16 WST 2000) fails on Tru64 Unix:
> >Is this just me, or an un-checked checkin? (I noticed some new sre bits
> >in my current CVS version.)
> 
> test_tokenize works fine using the current CVS on Linux; perhaps this
> is a 64-bit problem in sre manifesting itself?

I've confirmed (and fixed) the bug reported by Mark.  It was a nasty
little off-by-one error in the "branch predictor" code...

But I think I know why you didn't see anything: Guido just checked
in the following change to re.py:

*** 21,26 ****
  #
  
! engine = "sre"
! # engine = "pre"
  
  if engine == "sre":
--- 21,26 ----
  #
  
! # engine = "sre"
! engine = "pre"
  
  if engine == "sre":

</F>




From guido at beopen.com  Wed Aug  2 00:21:51 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 17:21:51 -0500
Subject: [Python-Dev] regression test failure in test_tokenize?
In-Reply-To: Your message of "Tue, 01 Aug 2000 23:16:15 +0200."
             <02ac01bffbfd$bc6b27a0$f2a6b5d4@hagrid> 
References: <39873372.1C6F8CE1@per.dem.csiro.au> <20000801164757.B27333@kronos.cnri.reston.va.us>  
            <02ac01bffbfd$bc6b27a0$f2a6b5d4@hagrid> 
Message-ID: <200008012221.RAA05722@cj20424-a.reston1.va.home.com>

> But I think I know why you didn't see anything: Guido just checked
> in the following change to re.py:
> 
> *** 21,26 ****
>   #
>   
> ! engine = "sre"
> ! # engine = "pre"
>   
>   if engine == "sre":
> --- 21,26 ----
>   #
>   
> ! # engine = "sre"
> ! engine = "pre"
>   
>   if engine == "sre":

Ouch.  did I really?  I didn't intend to!  I'll back out right away...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From barry at scottb.demon.co.uk  Wed Aug  2 01:01:29 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Wed, 2 Aug 2000 00:01:29 +0100
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <000701bff108$950ec9f0$060210ac@private>
Message-ID: <000801bffc0c$6d985490$060210ac@private>

If someone in the core of Python thinks a patch implementing
what I've outlined is useful please let me know and I will
generate the patch.

	Barry




From MarkH at ActiveState.com  Wed Aug  2 01:13:31 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Wed, 2 Aug 2000 09:13:31 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <000801bffc0c$6d985490$060210ac@private>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIENBDCAA.MarkH@ActiveState.com>

> If someone in the core of Python thinks a patch implementing
> what I've outlined is useful please let me know and I will
> generate the patch.

Umm - I'm afraid that I don't keep my python-dev emails for that long, and
right now I'm too lazy/busy to dig around the archives.

Exactly what did you outline?  I know it went around a few times, and I
can't remember who said what.  For my money, I liked Fredrik's solution
best (check Py_IsInitialized() in Py_InitModule4()), but as mentioned that
only solves it for the next version of Python; it doesn't solve the fact
that 1.5 modules will crash under 1.6/2.0.

It would definitely be excellent to get _something_ in the CNRI 1.6
release, so the BeOpen 2.0 release can see the results.

But-I-doubt-anyone-will-release-extension-modules-for-1.6-anyway ly,

Mark.





From jeremy at beopen.com  Wed Aug  2 01:56:27 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 1 Aug 2000 19:56:27 -0400 (EDT)
Subject: [Python-Dev] Bug Database
In-Reply-To: <200008012014.PAA31076@cj20424-a.reston1.va.home.com>
References: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>
	<14727.6995.164586.983795@bitdiddle.concentric.net>
	<200008012014.PAA31076@cj20424-a.reston1.va.home.com>
Message-ID: <14727.25515.570860.775496@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

  >> * Category: Describes what part of Python the bug applies to.
  >> Current values are parser/compiler, core, modules, library,
  >> build, windows, documentation.  We can add more categories,
  >> e.g. library/xml, if that is helpful.

  GvR> Before it's too late, would it make sense to try and get the
  GvR> categories to be the same in the Bug and Patch managers?

Yes, as best we can do.  We've got all the same names, though the
capitalization varies sometimes.

Jeremy



From gstein at lyra.org  Wed Aug  2 03:26:51 2000
From: gstein at lyra.org (Greg Stein)
Date: Tue, 1 Aug 2000 18:26:51 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PC _winreg.c,1.7,1.8
In-Reply-To: <200007280344.UAA12335@slayer.i.sourceforge.net>; from mhammond@users.sourceforge.net on Thu, Jul 27, 2000 at 08:44:43PM -0700
References: <200007280344.UAA12335@slayer.i.sourceforge.net>
Message-ID: <20000801182651.S19525@lyra.org>

This could be simplified quite a bit by using PyObject_AsReadBuffer() from
abstract.h ...

Cheers,
-g

On Thu, Jul 27, 2000 at 08:44:43PM -0700, Mark Hammond wrote:
> Update of /cvsroot/python/python/dist/src/PC
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv12325
> 
> Modified Files:
> 	_winreg.c 
> Log Message:
> Allow any object supporting the buffer protocol to be written as a binary object.
> 
> Index: _winreg.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/PC/_winreg.c,v
> retrieving revision 1.7
> retrieving revision 1.8
> diff -C2 -r1.7 -r1.8
> *** _winreg.c	2000/07/16 12:04:32	1.7
> --- _winreg.c	2000/07/28 03:44:41	1.8
> ***************
> *** 831,837 ****
>   				*retDataSize = 0;
>   			else {
> ! 				if (!PyString_Check(value))
> ! 					return 0;
> ! 				*retDataSize = PyString_Size(value);
>   				*retDataBuf = (BYTE *)PyMem_NEW(char,
>   								*retDataSize);
> --- 831,844 ----
>   				*retDataSize = 0;
>   			else {
> ! 				void *src_buf;
> ! 				PyBufferProcs *pb = value->ob_type->tp_as_buffer;
> ! 				if (pb==NULL) {
> ! 					PyErr_Format(PyExc_TypeError, 
> ! 						"Objects of type '%s' can not "
> ! 						"be used as binary registry values", 
> ! 						value->ob_type->tp_name);
> ! 					return FALSE;
> ! 				}
> ! 				*retDataSize = (*pb->bf_getreadbuffer)(value, 0, &src_buf);
>   				*retDataBuf = (BYTE *)PyMem_NEW(char,
>   								*retDataSize);
> ***************
> *** 840,847 ****
>   					return FALSE;
>   				}
> ! 				memcpy(*retDataBuf,
> ! 				       PyString_AS_STRING(
> ! 				       		(PyStringObject *)value),
> ! 				       *retDataSize);
>   			}
>   			break;
> --- 847,851 ----
>   					return FALSE;
>   				}
> ! 				memcpy(*retDataBuf, src_buf, *retDataSize);
>   			}
>   			break;
> 
> 
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://www.python.org/mailman/listinfo/python-checkins

-- 
Greg Stein, http://www.lyra.org/



From guido at beopen.com  Wed Aug  2 06:09:38 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 23:09:38 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
Message-ID: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>

We still don't have a new license for Python 1.6; Bob Kahn and Richard
Stallman need to talk before a decision can be made about how to deal
with the one remaining GPL incompatibility.  While we're all waiting,
we're preparing the CNRI 1.6 release at SourceForge (part of the deal
is that the PythonLabs group finishes the 1.6 release for CNRI).  The
last thing I committed today was the text (dictated by Bob Kahn) for
the new LICENSE file that will be part of the 1.6 beta 1 release.
(Modulo any changes that will be made to the license text to ensure
GPL compatibility.)

Since anyone with an anonymous CVS setup can now read the license
anyway, I might as well post a copy here so that you can all get used
to it...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

======== LICENSE =======================================================

A. HISTORY OF THE SOFTWARE

Python originated in 1991 at Stichting Mathematisch Centrum (CWI) in
the Netherlands as an outgrowth of a language called ABC.  Its
principal author was Guido van Rossum, although it included smaller
contributions from others at CWI and elsewhere.  The last version of
Python issued by CWI was Python 1.2.  In 1995, Mr. van Rossum
continued his work on Python at the Corporation for National Research
Initiatives (CNRI) in Reston, Virginia where several versions of the
software were generated.  Python 1.6 is the last of the versions
developed at CNRI.



B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING Python 1.6, beta 1


1. CNRI LICENSE AGREEMENT 

        PYTHON 1.6, beta 1

        CNRI OPEN SOURCE LICENSE AGREEMENT


IMPORTANT: PLEASE READ THE FOLLOWING AGREEMENT CAREFULLY.

BY CLICKING ON "ACCEPT" WHERE INDICATED BELOW, OR BY COPYING,
INSTALLING OR OTHERWISE USING PYTHON 1.6, beta 1 SOFTWARE, YOU ARE
DEEMED TO HAVE AGREED TO THE TERMS AND CONDITIONS OF THIS LICENSE
AGREEMENT.

1. This LICENSE AGREEMENT is between the Corporation for National
Research Initiatives, having an office at 1895 Preston White Drive,
Reston, VA 20191 ("CNRI"), and the Individual or Organization
("Licensee") accessing and otherwise using Python 1.6, beta 1 software
in source or binary form and its associated documentation, as released
at the www.python.org Internet site on August 5, 2000 ("Python
1.6b1").

2. Subject to the terms and conditions of this License Agreement, CNRI
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python 1.6b1
alone or in any derivative version, provided, however, that CNRI's
License Agreement is retained in Python 1.6b1, alone or in any
derivative version prepared by Licensee.

Alternately, in lieu of CNRI's License Agreement, Licensee may
substitute the following text (omitting the quotes): "Python 1.6, beta
1, is made available subject to the terms and conditions in CNRI's
License Agreement.  This Agreement may be located on the Internet
using the following unique, persistent identifier (known as a handle):
1895.22/1011.  This Agreement may also be obtained from a proxy server
on the Internet using the URL:http://hdl.handle.net/1895.22/1011".

3. In the event Licensee prepares a derivative work that is based on
or incorporates Python 1.6b1 or any part thereof, and wants to make the
derivative work available to the public as provided herein, then
Licensee hereby agrees to indicate in any such work the nature of the
modifications made to Python 1.6b1.

4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. This License Agreement shall be governed by and interpreted in all
respects by the law of the State of Virginia, excluding conflict of
law provisions.  Nothing in this License Agreement shall be deemed to
create any relationship of agency, partnership, or joint venture
between CNRI and Licensee.  This License Agreement does not grant
permission to use CNRI trademarks or trade name in a trademark sense
to endorse or promote products or services of Licensee, or any third
party.

8. By clicking on the "ACCEPT" button where indicated, or by copying
installing or otherwise using Python 1.6b1, Licensee agrees to be
bound by the terms and conditions of this License Agreement.

        ACCEPT



2. CWI PERMISSIONS STATEMENT AND DISCLAIMER

Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
The Netherlands.  All rights reserved.

Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Stichting Mathematisch
Centrum or CWI not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.

STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

========================================================================



From guido at beopen.com  Wed Aug  2 06:42:30 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 23:42:30 -0500
Subject: [Python-Dev] BeOpen statement about Python license
Message-ID: <200008020442.XAA01587@cj20424-a.reston1.va.home.com>

Bob Weiner, BeOpen's CTO, has this to say about the Python license:

  Here's the official word from BeOpen.com regarding any potential
  license change on Python 1.6 (the last CNRI Python release) and
  subsequent versions:

    The Python license is fully open source compliant, as certified by
    the Open Source Initiative.  That means that if you look at
    www.opensource.org/osd.html, then this license complies with those
    9 precepts, allowing broad freedom of use, distribution and
    modification.

    The Python license will continue to allow fully proprietary
    software development.

    The license issues are down to one point, which we are working to
    resolve together with CNRI, involving potential GPL compatibility.
    It is a small point regarding a requirement that the license be
    interpreted under the terms of Virginia law.  One lawyer has said
    that this doesn't affect GPL compatibility, but Richard Stallman of
    the FSF feels differently; he views it as a potential additional
    restriction of rights beyond those listed in the GPL.  So work
    continues on resolving this point before the license is published
    or attached to any code.  We are presently waiting for follow-up
    from Stallman on this point.

  In summary, BeOpen.com is actively working to keep Python the
  extremely open platform it has traditionally been and to resolve
  legal issues such as this in ways that benefit Python users
  worldwide.  CNRI is working along the same lines as well.

  Please assure yourselves and your management that Python continues
  to allow for both open and closed software development.

  Regards,

  Bob Weiner

I (Guido) hope that this, together with the draft license text that I
just posted, clarifies matters for now!  I'll post more news as it
happens.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Wed Aug  2 08:12:54 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 08:12:54 +0200
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: <200008012122.OAA22327@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Tue, Aug 01, 2000 at 02:22:20PM -0700
References: <200008012122.OAA22327@slayer.i.sourceforge.net>
Message-ID: <20000802081254.V266@xs4all.nl>

On Tue, Aug 01, 2000 at 02:22:20PM -0700, Guido van Rossum wrote:
> Update of /cvsroot/python/python/dist/src/Lib
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv22316

> Modified Files:
> 	re.py 
> Log Message:
> My fix to the URL accidentally also switched back to the "pre" module.
> Undo that!

This kind of thing is one of the reasons I wish 'cvs commit' would give you
the entire patch you're about to commit in the log-message-edit screen, as
CVS: comments, rather than just the modified files. It would also help with
remembering what the patch was supposed to do ;) Is this possible with CVS,
other than an 'EDITOR' that does this for you ?
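Lacking CVS support, an EDITOR wrapper is about the only way to get this today. A minimal sketch of such a wrapper (the file paths and the REAL_EDITOR variable are illustrative assumptions, not anything cvs itself defines):

```python
#!/usr/bin/env python
# Hypothetical EDITOR wrapper for 'cvs commit' (illustrative sketch):
# prepend the pending diff, marked as CVS: comments, to the log-message
# template that cvs hands to the editor, then launch the real editor.
import subprocess
import sys

def prepare_message(msgfile, diff_text):
    # Mark every diff line as a CVS: comment so cvs strips it from the log.
    commented = ''.join('CVS: %s\n' % line for line in diff_text.splitlines())
    try:
        with open(msgfile) as f:
            original = f.read()
    except OSError:
        original = ''
    with open(msgfile, 'w') as f:
        f.write(commented + original)

if __name__ == '__main__':
    msgfile = sys.argv[1] if len(sys.argv) > 1 else '/tmp/commit-msg.demo'
    try:
        diff = subprocess.run(['cvs', '-q', 'diff', '-u'],
                              capture_output=True, text=True).stdout
    except OSError:
        diff = ''  # no cvs in PATH; proceed with an empty diff
    prepare_message(msgfile, diff)
    # In real use you would now hand off to your actual editor, e.g.:
    # os.execvp(os.environ.get('REAL_EDITOR', 'vi'), ...)
    print('prepared', msgfile)
```

Since cvs strips lines beginning with "CVS: " from the final log message, the pasted diff never ends up in the history.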

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From paul at prescod.net  Wed Aug  2 09:30:30 2000
From: paul at prescod.net (Paul Prescod)
Date: Wed, 02 Aug 2000 03:30:30 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>  
	            <3986794E.ADBB938C@prescod.net> <200008011820.NAA30284@cj20424-a.reston1.va.home.com>
Message-ID: <3987CE16.DB3E72B8@prescod.net>

Guido van Rossum wrote:
> 
> ...
> 
> I believe this is the crux of the problem.  Your only mistake was that
> you criticized and then tried to redesign a (poorly designed) API that
> you weren't intimately familiar with.

I don't think that this has been demonstrated. We have one complaint
about one method from Mark and silence from everyone else (and about
everything else). The Windows registry is weird in its terminology, but
it isn't brain surgery.

Yes, I had to do some research on what various things do, but I expect
that almost anyone would have had to do that. Some of the constants in
the module are meant to be used with functions that are not even exposed
in the module. This indicates to me that nobody has clearly thought out
all of the details (and also that _winreg is not a complete binding to
the API). I probably understand the original API as well as anyone and
more than most, by now.

Anyhow, the list at the bottom should demonstrate that I understand the
API at least as well as the Microsoftie that invented the .NET API for
Java, VB and everything else.

> Hopefully someday someone will eventually create a set of higher level
> bindings modeled after the Java, VB or C# version of the API.

Mark sent me those specs and I believe that the module I sent out *is*
very similar to that higher level API.

Specifically (>>> is Python version)

Equals (inherited from Object) 
>>> __cmp__

key.Name
>>> key.name

key.SubKeyCount
>>> len( key.getSubkeys() )

key.ValueCount
>>> len( key.getValues() )

Close
>>> key.close()

CreateSubKey
>>> key.createSubkey()

DeleteSubKey
>>> key.deleteSubkey()

DeleteSubKeyTree
>>> (didn't get around to implementing/testing something like this)

DeleteValue
>>> key.deleteValue()

GetSubKeyNames
>>> key.getSubkeyNames()

GetValue
>>> key.getValueData()

GetValueNames
>>> key.getValueNames()

OpenRemoteBaseKey
>>> key=RemoteKey( ... )

OpenSubKey
>>> key.openSubkey()

SetValue
>>> key.setValue()

ToString
>>> str( key )

My API also has some features for enumerating that this does not have.
Mark has a problem with one of those. I don't see how that makes the
entire API "unintuitive", considering it is more or less a renaming of
the .NET API.
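To make the renaming concrete, here is a toy stand-in (purely illustrative; not the real module, and the key/value data is made up) exercising a few of the Python-side names from the mapping above:

```python
# Toy stand-in for the proposed high-level key object (illustrative only;
# a real version would back these methods with _winreg calls).
class StubKey:
    def __init__(self, name, values):
        self.name = name            # key.Name      -> key.name
        self._values = values

    def getValueNames(self):        # GetValueNames -> key.getValueNames()
        return list(self._values)

    def getValueData(self, name):   # GetValue      -> key.getValueData()
        return self._values[name]

    def close(self):                # Close         -> key.close()
        pass

key = StubKey('Software\\Python', {'InstallPath': 'C:\\Python16'})
assert key.getValueNames() == ['InstallPath']
assert key.getValueData('InstallPath') == 'C:\\Python16'
key.close()
```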

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From effbot at telia.com  Wed Aug  2 09:07:27 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 2 Aug 2000 09:07:27 +0200
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>              <3986794E.ADBB938C@prescod.net>  <200008011820.NAA30284@cj20424-a.reston1.va.home.com>
Message-ID: <004d01bffc50$522fa2a0$f2a6b5d4@hagrid>

guido wrote:
> Paul, I think that the best thing to do now is to withdraw winreg.py,
> and to keep (and document!) the _winreg extension with the
> understanding that it's a wrapper around poorly designed API but at
> least it's very close to the C API.  The leading underscore should be
> a hint that this is not a module for every day use.

how about letting _winreg export all functions with their
win32 names, and adding a winreg.py which looks
something like this:

    from _winreg import *

    class Key:
        ....

    HKEY_CLASSES_ROOT = Key(...)
    ...

where the Key class addresses the 80% level: open
keys and read NONE/SZ/EXPAND_SZ/DWORD values
(through a slightly extended dictionary API).
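A minimal sketch of what that dictionary-style Key could look like (the method names and the in-memory backing store are my assumptions; a real version would delegate to _winreg.OpenKey/QueryValueEx underneath):

```python
class Key:
    # Sketch of the proposed dictionary-style wrapper.  An in-memory dict
    # stands in for the registry so the idea is testable anywhere; on
    # Windows the methods would call into _winreg instead.
    def __init__(self, name, values=None, subkeys=None):
        self.name = name
        self._values = dict(values or {})
        self._subkeys = dict(subkeys or {})

    def __getitem__(self, value_name):
        # real version: _winreg.QueryValueEx(handle, value_name)[0]
        return self._values[value_name]

    def keys(self):
        return list(self._values)

    def open(self, subkey_name):
        # real version: wrap _winreg.OpenKey(handle, subkey_name)
        return self._subkeys[subkey_name]

HKEY_CLASSES_ROOT = Key('HKEY_CLASSES_ROOT',
                        subkeys={'.txt': Key('.txt', values={'': 'txtfile'})})
assert HKEY_CLASSES_ROOT.open('.txt')[''] == 'txtfile'
```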

in 2.0, add support to create keys and write values of
the same types, and you end up supporting the needs
of 99% of all applications.

> Hopefully someday someone will eventually create a set of higher level
> bindings modeled after the Java, VB or C# version of the API.

how about Tcl?  I'm pretty sure their API (which is very
simple, iirc) addresses the 99% level...

</F>




From moshez at math.huji.ac.il  Wed Aug  2 09:00:40 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 10:00:40 +0300 (IDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python
 (fwd))
Message-ID: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>

Do we have a procedure for putting more batteries in the core? I'm
not talking about stuff like PEP-206, I'm talking about small, useful
modules like Cookies.py.


--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez

---------- Forwarded message ----------
Date: Tue, 01 Aug 2000 12:03:12 PDT
From: Brian Wisti <bwisti at hotmail.com>
To: tutor at python.org
Subject: Tangent to Re: [Tutor] CGI and Python




>In contrast, i've been motivated with questions like yours which pop up
>every now and then to create a separate chapter entirely devoted to CGI
>programming and in it, to provide an example that starts out simple and builds
>to something a little more complex.  there will be lots of screen captures
>too so that you can see what's going on.  finally, there will be a more
>"advanced" section towards the end which does the complicated stuff that
>everyone wants to do, like cookies, multivalued fields, and file uploads
>with multipart data.  sorry that the book isn't out yet... trying to get
>the weeds out of it right NOW!	;-)
>

I'm looking forward to seeing the book!

Got a question, that is almost relevant to the thread.  Does anybody know 
why cookie support isn't built in to the cgi module?  I had to dig around to 
find Cookie.py, which (excellent module that it is) should be in the cgi 
package somewhere.
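For anyone who hasn't seen it, the module's API is small; a sketch of typical CGI-side use (written with the modern spelling of the import, since the module was later renamed http.cookies):

```python
from http.cookies import SimpleCookie  # originally distributed as Cookie.py

# Emitting a cookie from a CGI script:
out = SimpleCookie()
out['session'] = 'abc123'
out['session']['path'] = '/'
header = out.output()            # a ready-to-send Set-Cookie header line
assert 'session=abc123' in header

# Parsing the HTTP_COOKIE environment variable on a later request:
incoming = SimpleCookie('session=abc123; theme=dark')
assert incoming['theme'].value == 'dark'
```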

Just a random thought from the middle of my workday...

Later,
Brian Wisti
________________________________________________________________________
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com


_______________________________________________
Tutor maillist  -  Tutor at python.org
http://www.python.org/mailman/listinfo/tutor




From mal at lemburg.com  Wed Aug  2 11:12:01 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 02 Aug 2000 11:12:01 +0200
Subject: [Python-Dev] Still no new license -- but draft text available
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>
Message-ID: <3987E5E1.A2B20241@lemburg.com>

Guido van Rossum wrote:
> 
> We still don't have a new license for Python 1.6; Bob Kahn and Richard
> Stallman need to talk before a decision can be made about how to deal
> with the one remaining GPL incompatibility.  While we're all waiting,
> we're preparing the CNRI 1.6 release at SourceForge (part of the deal
> is that the PythonLabs group finishes the 1.6 release for CNRI).  The
> last thing I committed today was the text (dictated by Bob Kahn) for
> the new LICENSE file that will be part of the 1.6 beta 1 release.
> (Modulo any changes that will be made to the license text to ensure
> GPL compatibility.)
> 
> Since anyone with an anonymous CVS setup can now read the license
> anyway, I might as well post a copy here so that you can all get used
> to it...

Is the license on 2.0 going to look the same ? I mean we now
already have two separate licenses, and if BeOpen adds another
two or three paragraphs we will end up with a license two pages
long.

Oh, how I loved the old CWI license...

Some comments on the new version:
 
> A. HISTORY OF THE SOFTWARE
> 
> Python originated in 1991 at Stichting Mathematisch Centrum (CWI) in
> the Netherlands as an outgrowth of a language called ABC.  Its
> principal author was Guido van Rossum, although it included smaller
> contributions from others at CWI and elsewhere.  The last version of
> Python issued by CWI was Python 1.2.  In 1995, Mr. van Rossum
> continued his work on Python at the Corporation for National Research
> Initiatives (CNRI) in Reston, Virginia where several versions of the
> software were generated.  Python 1.6 is the last of the versions
> developed at CNRI.
> 
> B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING Python 1.6, beta 1
> 
> 1. CNRI LICENSE AGREEMENT
> 
>         PYTHON 1.6, beta 1
> 
>         CNRI OPEN SOURCE LICENSE AGREEMENT
> 
> IMPORTANT: PLEASE READ THE FOLLOWING AGREEMENT CAREFULLY.
> 
> BY CLICKING ON "ACCEPT" WHERE INDICATED BELOW, OR BY COPYING,
> INSTALLING OR OTHERWISE USING PYTHON 1.6, beta 1 SOFTWARE, YOU ARE
> DEEMED TO HAVE AGREED TO THE TERMS AND CONDITIONS OF THIS LICENSE
> AGREEMENT.
> 
> 1. This LICENSE AGREEMENT is between the Corporation for National
> Research Initiatives, having an office at 1895 Preston White Drive,
> Reston, VA 20191 ("CNRI"), and the Individual or Organization
> ("Licensee") accessing and otherwise using Python 1.6, beta 1 software
> in source or binary form and its associated documentation, as released
> at the www.python.org Internet site on August 5, 2000 ("Python
> 1.6b1").
> 
> 2. Subject to the terms and conditions of this License Agreement, CNRI
> hereby grants Licensee a nonexclusive, royalty-free, world-wide
> license to reproduce, analyze, test, perform and/or display publicly,
> prepare derivative works, distribute, and otherwise use Python 1.6b1
> alone or in any derivative version, provided, however, that CNRI's
> License Agreement is retained in Python 1.6b1, alone or in any
> derivative version prepared by Licensee.

I don't think the latter (retaining the CNRI license alone) is
possible: you always have to include the CWI license.
 
> Alternately, in lieu of CNRI's License Agreement, Licensee may
> substitute the following text (omitting the quotes): "Python 1.6, beta
> 1, is made available subject to the terms and conditions in CNRI's
> License Agreement.  This Agreement may be located on the Internet
> using the following unique, persistent identifier (known as a handle):
> 1895.22/1011.  This Agreement may also be obtained from a proxy server
> on the Internet using the URL:http://hdl.handle.net/1895.22/1011".

Do we really need this in the license text ? It's nice to have
the text available on the Internet, but why add long descriptions
about where to get it from to the license text itself ?
 
> 3. In the event Licensee prepares a derivative work that is based on
> or incorporates Python 1.6b1or any part thereof, and wants to make the
> derivative work available to the public as provided herein, then
> Licensee hereby agrees to indicate in any such work the nature of the
> modifications made to Python 1.6b1.

In what way would those indications have to be made ? A patch
or just text describing the new features ?
 
What does "make available to the public" mean ? If I embed
Python in an application and make this application available
on the Internet for download would this fit the meaning ?

What about derived work that only uses the Python language
reference as basis for its task, e.g. new interpreters
or compilers which can read and execute Python programs ?

> 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> INFRINGE ANY THIRD PARTY RIGHTS.
> 
> 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.

I would make this "...SOME STATES AND COUNTRIES...". E.g. in
Germany the above text would only be valid after an initial
6 month period after installation, AFAIK (this period is
called "Gewährleistung"). Licenses from other vendors usually
add some extra license text to limit the liability in this period
to the carrier on which the software was received by the licensee,
e.g. the diskettes or CDs.
 
> 6. This License Agreement will automatically terminate upon a material
> breach of its terms and conditions.

Immediately ? Other licenses usually include a 30-60 day period
which allows the licensee to take actions. With the above text,
the license will put the Python copy in question into an illegal
state *prior* to having even been identified as conflicting with the
license.
 
> 7. This License Agreement shall be governed by and interpreted in all
> respects by the law of the State of Virginia, excluding conflict of
> law provisions.  Nothing in this License Agreement shall be deemed to
> create any relationship of agency, partnership, or joint venture
> between CNRI and Licensee.  This License Agreement does not grant
> permission to use CNRI trademarks or trade name in a trademark sense
> to endorse or promote products or services of Licensee, or any third
> party.

Would the name "Python" be considered a trademark in the above
sense ?
 
> 8. By clicking on the "ACCEPT" button where indicated, or by copying
> installing or otherwise using Python 1.6b1, Licensee agrees to be
> bound by the terms and conditions of this License Agreement.
> 
>         ACCEPT
> 
> 2. CWI PERMISSIONS STATEMENT AND DISCLAIMER
> 
> Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
> The Netherlands.  All rights reserved.
> 
> Permission to use, copy, modify, and distribute this software and its
> documentation for any purpose and without fee is hereby granted,
> provided that the above copyright notice appear in all copies and that
> both that copyright notice and this permission notice appear in
> supporting documentation, and that the name of Stichting Mathematisch
> Centrum or CWI not be used in advertising or publicity pertaining to
> distribution of the software without specific, written prior
> permission.
> 
> STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
> THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
> FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
> WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
> ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
> OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

...oh how I loved this one ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jack at oratrix.nl  Wed Aug  2 11:43:05 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 02 Aug 2000 11:43:05 +0200
Subject: [Python-Dev] Winreg recap 
In-Reply-To: Message by Paul Prescod <paul@prescod.net> ,
	     Tue, 01 Aug 2000 12:52:45 -0400 , <3987005C.9C45D7B6@prescod.net> 
Message-ID: <20000802094305.C3006303181@snelboot.oratrix.nl>

> I specifically asked everyone here if an abstraction was a good idea. I
> got three + votes and no - votes. One of the + votes requested that we
> still ship the underlying module. Fine. I was actually pointed (on
> python-dev) to specs for an abstraction layer that AFAIK had been
> designed *on Python-dev*.

This point I very much agree with: if we can abstract 90% of the use cases of
the registry (while still giving access to the other 10%) in a clean interface,
we can implement the same interface for Mac preference files, Unix dot-files,
X resources, etc.

A general mechanism whereby a Python program can get at a persistent setting 
that may have factory defaults, installation overrides and user overrides, and 
that is implemented in the logical way on each platform would be very powerful.

The initial call to open the preference database(s) and supply identity
information (which app you are, etc.) is probably going to be
machine-dependent, but from that point on there should be a single API.
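The layering Jack describes maps naturally onto a chain of dictionaries searched nearest-first; a minimal cross-platform sketch (the preference names and values here are made up for illustration):

```python
from collections import ChainMap

# Hypothetical preference layers, nearest-wins: user overrides beat
# installation overrides, which beat factory defaults.
factory = {'editor': 'vi', 'tabsize': 8, 'beep': True}
install = {'tabsize': 4}
user = {'editor': 'emacs'}

prefs = ChainMap(user, install, factory)
assert prefs['editor'] == 'emacs'   # from the user layer
assert prefs['tabsize'] == 4        # from the installation layer
assert prefs['beep'] is True        # from the factory defaults
```

Each platform backend would only need to load the three dicts from its native store (registry keys, dot-files, X resources); the lookup logic stays shared.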
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From moshez at math.huji.ac.il  Wed Aug  2 12:16:40 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 13:16:40 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
Message-ID: <Pine.GSO.4.10.10008021157040.20425-100000@sundial>

Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me


--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez





From thomas at xs4all.net  Wed Aug  2 12:41:12 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 12:41:12 +0200
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021157040.20425-100000@sundial>; from moshez@math.huji.ac.il on Wed, Aug 02, 2000 at 01:16:40PM +0300
References: <Pine.GSO.4.10.10008021157040.20425-100000@sundial>
Message-ID: <20000802124112.W266@xs4all.nl>

On Wed, Aug 02, 2000 at 01:16:40PM +0300, Moshe Zadka wrote:

> Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me

You can close bugs now, right, Moshe ? If not, you should be able to :P Just
do what I do: close them, assign them to yourself, set the status to 'Works
For Me', explain in the log message what you did to test it, and forward a
copy of the mail you get from SF to the original submitter.

A lot of the bugs are relatively old, so a fair number of them are likely to
be fixed already. If they aren't fixed for the submitter (or someone else),
the bug can be re-opened and possibly updated at the same time.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Wed Aug  2 13:05:06 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 14:05:06 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <20000802124112.W266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008021402041.20425-100000@sundial>

On Wed, 2 Aug 2000, Thomas Wouters wrote:

> On Wed, Aug 02, 2000 at 01:16:40PM +0300, Moshe Zadka wrote:
> 
> > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me
> 
> You can close bugs now, right, Moshe?

I can, but to tell the truth, after what Tim posted here about closing
bugs, I'd appreciate a few more eyeballs before I close them.

> A lot of the bugs are relatively old, so a fair number of them are likely to
> be fixed already. If they aren't fixed for the submitter (or someone else),
> the bug can be re-opened and possibly updated at the same time.

Hmmmmm.....OK.
But I guess I'll still wait for a go-ahead from the PythonLabs team. 
BTW: Does anyone know if SF has an e-mail notification of bugs, similar
to that of patches? If so, enabling it to send mail to a mailing list
similar to patches at python.org would be cool -- it would enable much more
peer review.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Wed Aug  2 13:21:47 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 13:21:47 +0200
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021402041.20425-100000@sundial>; from moshez@math.huji.ac.il on Wed, Aug 02, 2000 at 02:05:06PM +0300
References: <20000802124112.W266@xs4all.nl> <Pine.GSO.4.10.10008021402041.20425-100000@sundial>
Message-ID: <20000802132147.L13365@xs4all.nl>

On Wed, Aug 02, 2000 at 02:05:06PM +0300, Moshe Zadka wrote:
> On Wed, 2 Aug 2000, Thomas Wouters wrote:
> > On Wed, Aug 02, 2000 at 01:16:40PM +0300, Moshe Zadka wrote:

> > > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me

> > You can close bugs now, right, Moshe?

> I can, but to tell the truth, after what Tim posted here about closing
> bugs, I'd appreciate a few more eyeballs before I close them.

That's why I forward the message to the original submitter. The list of bugs
is now so insanely large that it's pretty unlikely a large number of
eyeballs will caress them. Marking them closed (or at least marking them
*something*, like moving them to the right category) and forwarding the
summary to the submitter is likely to have them re-check the bug.

Tim was talking about 'closing it without reason', without knowing why it
should be closed. 'Works for me' is a valid reason to close the bug, if you
have the same (kind of) platform, can't reproduce the bug and have a strong
suspicion it's already been fixed. (Which is pretty likely, if the bug
report is old.)

> BTW: Does anyone know if SF has an e-mail notification of bugs, similar
> to that of patches? If so, enabling it to send mail to a mailing list
> similar to patches at python.org would be cool -- it would enable much more
> peer review.

I think not, but I'm not sure. It's probably up to the project admins to set
that, but I think if they did, they'd have set it before. Then again, I'm
not sure if it's a good idea to set it yet... I bet the current list is
going to be quickly cut down in size, and I'm not sure I want to see all
the notifications! :) But once it's running, it would be swell.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From Vladimir.Marangozov at inrialpes.fr  Wed Aug  2 14:13:41 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Wed, 2 Aug 2000 14:13:41 +0200 (CEST)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021157040.20425-100000@sundial> from "Moshe Zadka" at Aug 02, 2000 01:16:40 PM
Message-ID: <200008021213.OAA06073@python.inrialpes.fr>

Moshe Zadka wrote:
> 
> Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me

You get a compiled SRE object, right? But SRE is the new 're' and the old
're' is 'pre'. Try the example with the old engine: import pre;
pre.compile('[\\200-\\400]') and I suspect you'll get the segfault (I did).
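As a side note for anyone replaying this in a later Python: SRE handles the out-of-range octal escape without crashing (the exact behavior has varied across versions, so this sketch only records the outcome rather than asserting one):

```python
import re

# The pattern from the bug report.  In SRE (today's re module) an octal
# escape above \377 is rejected with re.error rather than crashing the
# interpreter; just record which way the current version goes.
try:
    re.compile('[\\200-\\400]')
    outcome = 'compiled'
except re.error:
    outcome = 'rejected'
print('out-of-range escape:', outcome)

# A range that stays within \000-\377 compiles and matches normally.
pattern = re.compile('[\\200-\\377]')
assert pattern.match('\xc0') is not None
```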

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From moshez at math.huji.ac.il  Wed Aug  2 14:17:31 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 15:17:31 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <200008021213.OAA06073@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008021512180.8980-100000@sundial>

On Wed, 2 Aug 2000, Vladimir Marangozov wrote:

> Moshe Zadka wrote:
> > 
> > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doens't for me
> 
> You get a compiled SRE object, right?

Nope -- I tested it with pre. 

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From jeremy at beopen.com  Wed Aug  2 14:31:55 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 2 Aug 2000 08:31:55 -0400 (EDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021402041.20425-100000@sundial>
References: <20000802124112.W266@xs4all.nl>
	<Pine.GSO.4.10.10008021402041.20425-100000@sundial>
Message-ID: <14728.5307.820982.137908@bitdiddle.concentric.net>

>>>>> "MZ" == Moshe Zadka <moshez at math.huji.ac.il> writes:

  MZ> Hmmmmm.....OK.  But I guess I'll still wait for a goahead from
  MZ> the PythonLabs team.  BTW: Does anyone know if SF has an e-mail
  MZ> notification of bugs, similar to that of patches? If so,
  MZ> enabling it to send mail to a mailing list similar to
  MZ> patches at python.org would be cool -- it would enable much more
  MZ> peer review.

Go ahead and mark as closed bugs that are currently fixed.  If you can
figure out when they were fixed (e.g. what checkin), that would be
best.  If not, just be sure that it really is fixed -- and write a
test case that would have caught the bug.

SF will send out an email, but sending it to patches at python.org would
be a bad idea, I think.  Isn't that list attached to Jitterbug?

Jeremy



From moshez at math.huji.ac.il  Wed Aug  2 14:30:16 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 15:30:16 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <14728.5307.820982.137908@bitdiddle.concentric.net>
Message-ID: <Pine.GSO.4.10.10008021528410.8980-100000@sundial>

On Wed, 2 Aug 2000, Jeremy Hylton wrote:

> SF will send out an email, but sending it to patches at python.org would
> be a bad idea, I think.

I've no problem with having a separate mailing list I can subscribe to.
Perhaps it should be a mailing list along the lines of Python-Checkins....

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From guido at beopen.com  Wed Aug  2 16:02:00 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 09:02:00 -0500
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: Your message of "Wed, 02 Aug 2000 08:12:54 +0200."
             <20000802081254.V266@xs4all.nl> 
References: <200008012122.OAA22327@slayer.i.sourceforge.net>  
            <20000802081254.V266@xs4all.nl> 
Message-ID: <200008021402.JAA02711@cj20424-a.reston1.va.home.com>

> > My fix to the URL accidentally also switched back to the "pre" module.
> > Undo that!
> 
> This kind of thing is one of the reasons I wish 'cvs commit' would give you
> the entire patch you're about to commit in the log-message-edit screen, as
> CVS: comments, rather than just the modified files. It would also help with
> remembering what the patch was supposed to do ;) Is this possible with CVS,
> other than an 'EDITOR' that does this for you ?

Actually, I have made it a habit to *always* do a cvs diff before I
commit, for exactly this reason.  That's why this doesn't happen more
often.  In this case I specifically remember reviewing the diff and
thinking that it was alright, but not scrolling towards the second
half of the diff. :(

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Wed Aug  2 16:06:00 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 09:06:00 -0500
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: Your message of "Wed, 02 Aug 2000 10:00:40 +0300."
             <Pine.GSO.4.10.10008020958590.20425-100000@sundial> 
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> 
Message-ID: <200008021406.JAA02743@cj20424-a.reston1.va.home.com>

> Do we have a procedure for putting more batteries in the core? I'm
> not talking about stuff like PEP-206, I'm talking about small, useful
> modules like Cookies.py.

Cookie support in the core would be a good thing.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From fdrake at beopen.com  Wed Aug  2 15:20:52 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 09:20:52 -0400 (EDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <14728.5307.820982.137908@bitdiddle.concentric.net>
References: <20000802124112.W266@xs4all.nl>
	<Pine.GSO.4.10.10008021402041.20425-100000@sundial>
	<14728.5307.820982.137908@bitdiddle.concentric.net>
Message-ID: <14728.8244.745008.301891@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > SF will send out an email, but sending it to patches at python.org would
 > be a bad idea, I think.  Isn't that list attached to Jitterbug?

  No, but Barry is working on getting a new list set up for
SourceForge to send bug messages to.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gvwilson at nevex.com  Wed Aug  2 15:22:01 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Wed, 2 Aug 2000 09:22:01 -0400 (EDT)
Subject: [Python-Dev] CVS headaches / Subversion reminder
Message-ID: <Pine.LNX.4.10.10008020913180.7103-100000@akbar.nevex.com>

Those of you who are having troubles with (or have complaints about) CVS
on SourceForge might want to check out Subversion, a "better CVS" being
developed as part of Tigris:

    subversion.tigris.org

Jason Robbins (project manager, jrobbins at collab.net) told me in Monterey
that they are still interested in feature requests, alternatives, etc.
There may still be room to add features like showing the full patch during
checkin (as per Thomas Wouters' earlier mail).

Greg

p.s. I'd be interested in hearing from anyone who's ever re-written a
medium-sized (40,000 lines) C app in Python --- how did you decide how
much of the structure to keep, and how much to re-think, etc.  Please mail
me directly to conserve bandwidth; I'll post a summary if there's enough
interest.





From fdrake at beopen.com  Wed Aug  2 15:26:28 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 09:26:28 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
Message-ID: <14728.8580.460583.760620@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > > Do we have a procedure for putting more batteries in the core? I'm
 > > not talking about stuff like PEP-206, I'm talking about small, useful
 > > modules like Cookies.py.
 > 
 > Cookie support in the core would be a good thing.

  There's also some cookie support in Grail (limited); that uses a
Netscape-style client-side database.
  Note that the Netscape format is insufficient for the most recent
cookie specifications (don't know the RFC #), but I understood from
AMK that browser writers are expecting to actually implement that
(unlike RFC 2109).  If we stick to an in-process database, that
wouldn't matter, but I'm not sure if that solves the problem for
everyone.
  Regardless of the format, there's a little bit of work to do here.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Wed Aug  2 16:32:02 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 09:32:02 -0500
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: Your message of "Wed, 02 Aug 2000 09:26:28 -0400."
             <14728.8580.460583.760620@cj42289-a.reston1.va.home.com> 
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com>  
            <14728.8580.460583.760620@cj42289-a.reston1.va.home.com> 
Message-ID: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>

> Guido van Rossum writes:
>  > > Do we have a procedure for putting more batteries in the core? I'm
>  > > not talking about stuff like PEP-206, I'm talking about small, useful
>  > > modules like Cookies.py.
>  > 
>  > Cookie support in the core would be a good thing.
> 
>   There's also some cookie support in Grail (limited); that uses a
> Netscape-style client-side database.
>   Note that the Netscape format is insufficient for the most recent
> cookie specifications (don't know the RFC #), but I understood from
> AMK that browser writers are expecting to actually implement that
> (unlike RFC 2109).  If we stick to an in-process database, that
> wouldn't matter, but I'm not sure if that solves the problem for
> everyone.
>   Regardless of the format, there's a little bit of work to do here.

I think Cookie.py is for server-side management of cookies, not for
client-side.  Do we need client-side cookies too????
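
For reference, the kind of server-side cookie management Cookie.py does can be sketched like this, using the modern stdlib's http.cookies (a descendant of Cookie.py; the exact names are those of today's module, not necessarily the 2000-era API):

```python
from http.cookies import SimpleCookie

def make_set_cookie_header(name, value, path="/"):
    # Build the Set-Cookie header value a server would emit.
    c = SimpleCookie()
    c[name] = value
    c[name]["path"] = path
    return c[name].OutputString()

def parse_cookie_header(header):
    # Parse the Cookie header a client sends back on later requests.
    c = SimpleCookie()
    c.load(header)
    return {k: morsel.value for k, morsel in c.items()}

print(make_set_cookie_header("session", "abc123"))  # session=abc123; Path=/
print(parse_cookie_header("a=1; b=2"))              # {'a': '1', 'b': '2'}
```

Both directions live on the server: emitting Set-Cookie and decoding the Cookie header; neither stores anything client-side.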

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From moshez at math.huji.ac.il  Wed Aug  2 15:34:29 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 16:34:29 +0300 (IDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor]
 CGI and Python (fwd))
In-Reply-To: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008021632340.13078-100000@sundial>

On Wed, 2 Aug 2000, Guido van Rossum wrote:

> I think Cookie.py is for server-side management of cookies, not for
> client-side.  Do we need client-side cookies too????

Not until we write a high-level interface to urllib which is similar
to the Perlish UserAgent module -- which is something that should
be done if Python wants to be a viable client-side language.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez
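
Such a UserAgent-style layer did eventually grow inside the stdlib; a minimal sketch of a cookie-carrying client in modern terms (http.cookiejar and urllib.request are today's module names -- nothing like this existed when this mail was written):

```python
import urllib.request
from http.cookiejar import CookieJar

def make_cookie_opener():
    # Build an opener that captures Set-Cookie headers into a jar and
    # replays matching cookies on later requests, LWP::UserAgent-style.
    jar = CookieJar()
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar))
    return opener, jar

opener, jar = make_cookie_opener()
# opener.open("http://example.com/") would now round-trip cookies
# through `jar` automatically; len(jar) tracks the stored cookies.
```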




From fdrake at beopen.com  Wed Aug  2 15:37:50 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 09:37:50 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>
References: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>
	<Pine.GSO.4.10.10008021632340.13078-100000@sundial>
	<Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
	<14728.8580.460583.760620@cj42289-a.reston1.va.home.com>
Message-ID: <14728.9262.635980.220234@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > I think Cookie.py is for server-side management of cookies, not for
 > client-side.  Do we need client-side cookies too????

  I think this would be highly desirable; we've seen enough requests
for it on c.l.py.

Moshe Zadka writes:
 > Not until we write a high-level interface to urllib which is similar
 > to the Perlish UserAgent module -- which is something that should
 > be done if Python wants to be a viable client-side language.

  Exactly!  It has become very difficult to get anything done on the
Web without enabling cookies, and simple "screen scraping" tools need
to have this support as well.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Wed Aug  2 16:05:41 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 10:05:41 -0400 (EDT)
Subject: [Python-Dev] test_parser.py
Message-ID: <14728.10933.534904.378463@cj42289-a.reston1.va.home.com>

  At some point I received a message/bug report referring to
test_parser.py, which doesn't exist in the CVS repository (and never
has as far as I know).  If someone has a regression test for the
parser module hidden away, I'd love to add it to the CVS repository!
It's time to update the parser module, and a good time to cover it in
the regression test!
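
The old parser module is gone from modern Python, but the shape of such a regression test is straightforward; a rough sketch using today's ast module as a stand-in (purely illustrative -- this is not the missing test_parser.py):

```python
import ast

def parses_consistently(source):
    # Parse, unparse, and re-parse; identical dumps suggest the parser
    # handles this construct stably across a round trip.
    tree = ast.parse(source)
    return ast.dump(ast.parse(ast.unparse(tree))) == ast.dump(tree)

# A regression test would accumulate one snippet per grammar construct
# (and per fixed bug), so old breakage gets caught on every run.
for snippet in ("x = 1",
                "def f(a, b=1, *args, **kw): return a + b",
                "class C:\n    pass"):
    assert parses_consistently(snippet)
```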


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Wed Aug  2 17:11:20 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 10:11:20 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
In-Reply-To: Your message of "Wed, 02 Aug 2000 11:12:01 +0200."
             <3987E5E1.A2B20241@lemburg.com> 
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>  
            <3987E5E1.A2B20241@lemburg.com> 
Message-ID: <200008021511.KAA03049@cj20424-a.reston1.va.home.com>

> Is the license on 2.0 going to look the same ? I mean we now
> already have two separate licenses, and if BeOpen adds another
> two or three paragraphs we'll end up with a license two pages
> long.

Good question.  We can't really keep the license the same because the
old license is very specific to CNRI.  I would personally be in favor
of using the BSD license for 2.0.

> Oh, how I loved the old CWI license...

Ditto!

> Some comments on the new version:

> > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > license to reproduce, analyze, test, perform and/or display publicly,
> > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > alone or in any derivative version, provided, however, that CNRI's
> > License Agreement is retained in Python 1.6b1, alone or in any
> > derivative version prepared by Licensee.
> 
> I don't think the latter (retaining the CNRI license alone) is
> possible: you always have to include the CWI license.

Wow.  I hadn't even noticed this!  It seems you can prepare a
derivative version of the license.  Well, maybe.

> > Alternately, in lieu of CNRI's License Agreement, Licensee may
> > substitute the following text (omitting the quotes): "Python 1.6, beta
> > 1, is made available subject to the terms and conditions in CNRI's
> > License Agreement.  This Agreement may be located on the Internet
> > using the following unique, persistent identifier (known as a handle):
> > 1895.22/1011.  This Agreement may also be obtained from a proxy server
> > on the Internet using the URL:http://hdl.handle.net/1895.22/1011".
> 
> Do we really need this in the license text ? It's nice to have
> the text available on the Internet, but why add long descriptions
> about where to get it from to the license text itself ?

I'm not happy with this either, but CNRI can put anything they like in
their license, and they seem very fond of this particular bit of
advertising for their handle system.  I've never managed to
convince them that it was unnecessary.

> > 3. In the event Licensee prepares a derivative work that is based on
> > or incorporates Python 1.6b1 or any part thereof, and wants to make the
> > derivative work available to the public as provided herein, then
> > Licensee hereby agrees to indicate in any such work the nature of the
> > modifications made to Python 1.6b1.
> 
> In what way would those indications have to be made ? A patch
> or just text describing the new features ?

Just text.  Bob Kahn told me that the list of "what's new" that I
always add to a release would be fine.

> What does "make available to the public" mean ? If I embed
> Python in an application and make this application available
> on the Internet for download would this fit the meaning ?

Yes, that's why he doesn't use the word "publish" -- such an action
would not be considered publication in the sense of the copyright law
(at least not in the US, and probably not according to the Bern
convention) but it is clearly making it available to the public.

> What about derived work that only uses the Python language
> reference as basis for its task, e.g. new interpreters
> or compilers which can read and execute Python programs ?

The language definition is not covered by the license at all.  Only
this particular code base.

> > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > INFRINGE ANY THIRD PARTY RIGHTS.
> > 
> > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> 
> I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> Germany the above text would only be valid after an initial
> 6 month period after installation, AFAIK (this period is
> called "Gewährleistung"). Licenses from other vendors usually
> add some extra license text to limit the liability in this period
> to the carrier on which the software was received by the licensee,
> e.g. the diskettes or CDs.

I'll mention this to Kahn.

> > 6. This License Agreement will automatically terminate upon a material
> > breach of its terms and conditions.
> 
> Immediately ? Other licenses usually include a 30-60 day period
> which allows the licensee to take actions. With the above text,
> the license will put the Python copy in question into an illegal
> state *prior* to having even been identified as conflicting with the
> license.

Believe it or not, this is necessary to ensure GPL compatibility!  An
earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
incompatible.  There's an easy workaround though: you fix your
compliance and download a new copy, which gives you all the same
rights again.

> > 7. This License Agreement shall be governed by and interpreted in all
> > respects by the law of the State of Virginia, excluding conflict of
> > law provisions.  Nothing in this License Agreement shall be deemed to
> > create any relationship of agency, partnership, or joint venture
> > between CNRI and Licensee.  This License Agreement does not grant
> > permission to use CNRI trademarks or trade name in a trademark sense
> > to endorse or promote products or services of Licensee, or any third
> > party.
> 
> Would the name "Python" be considered a trademark in the above
> sense ?

No, Python is not a CNRI trademark.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From trentm at ActiveState.com  Wed Aug  2 17:04:17 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 2 Aug 2000 08:04:17 -0700
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: <20000802081254.V266@xs4all.nl>; from thomas@xs4all.net on Wed, Aug 02, 2000 at 08:12:54AM +0200
References: <200008012122.OAA22327@slayer.i.sourceforge.net> <20000802081254.V266@xs4all.nl>
Message-ID: <20000802080417.A16446@ActiveState.com>

On Wed, Aug 02, 2000 at 08:12:54AM +0200, Thomas Wouters wrote:
> On Tue, Aug 01, 2000 at 02:22:20PM -0700, Guido van Rossum wrote:
> > Update of /cvsroot/python/python/dist/src/Lib
> > In directory slayer.i.sourceforge.net:/tmp/cvs-serv22316
> 
> > Modified Files:
> > 	re.py 
> > Log Message:
> > My fix to the URL accidentally also switched back to the "pre" module.
> > Undo that!
> 
> This kind of thing is one of the reasons I wish 'cvs commit' would give you
> the entire patch you're about to commit in the log-message-edit screen, as
> CVS: comments, rather than just the modified files. It would also help with
> remembering what the patch was supposed to do ;) Is this possible with CVS,
> other than an 'EDITOR' that does this for you ?
> 
As Guido said, it is probably preferred that one does a cvs diff prior to
checking in. But to answer your question *unauthoritatively*, I know that CVS
allows you to change the checkin template, and I *think* that it offers a
script hook to generate it (not sure). If so, then one could use
that script hook to put in the (commented) patch.
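
Absent such a server-side hook, an EDITOR wrapper gets the same effect locally; a rough sketch (the REAL_EDITOR variable and the exact 'cvs diff -u' invocation are assumptions about one's local setup; lines starting with "CVS:" are the ones cvs strips from the final log message):

```python
#!/usr/bin/env python
"""EDITOR wrapper for 'cvs commit': append the pending diff to the
log-message template as CVS: comments, then launch the real editor."""
import os
import subprocess
import sys

def annotate_template(template_path, diff_text):
    # CVS discards lines beginning with "CVS:", so the diff is visible
    # while editing but never lands in the committed log message.
    commented = "".join("CVS: " + line
                        for line in diff_text.splitlines(True))
    with open(template_path, "a") as f:
        f.write("CVS: ---- pending diff ----\n" + commented)

if __name__ == "__main__" and len(sys.argv) > 1:
    template = sys.argv[1]
    diff = subprocess.run(["cvs", "diff", "-u"],
                          capture_output=True, text=True).stdout
    annotate_template(template, diff)
    editor = os.environ.get("REAL_EDITOR", "vi")
    os.execvp(editor, [editor, template])
```

One would point CVSEDITOR (or EDITOR) at this script so cvs invokes it with the template path as its argument.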

Trent



-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Wed Aug  2 17:14:16 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 2 Aug 2000 08:14:16 -0700
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <14728.5307.820982.137908@bitdiddle.concentric.net>; from jeremy@beopen.com on Wed, Aug 02, 2000 at 08:31:55AM -0400
References: <20000802124112.W266@xs4all.nl> <Pine.GSO.4.10.10008021402041.20425-100000@sundial> <14728.5307.820982.137908@bitdiddle.concentric.net>
Message-ID: <20000802081416.B16446@ActiveState.com>

On Wed, Aug 02, 2000 at 08:31:55AM -0400, Jeremy Hylton wrote:
> >>>>> "MZ" == Moshe Zadka <moshez at math.huji.ac.il> writes:
> 
>   MZ> Hmmmmm.....OK.  But I guess I'll still wait for a goahead from
>   MZ> the PythonLabs team.  BTW: Does anyone know if SF has an e-mail
>   MZ> notification of bugs, similar to that of patches? If so,
>   MZ> enabling it to send mail to a mailing list similar to
>   MZ> patches at python.org would be cool -- it would enable much more
>   MZ> peer review.
> 
> Go ahead and mark as closed bugs that are currently fixed.  If you can
> figure out when they were fixed (e.g. what checkin), that would be
> best.  If not, just be sure that it really is fixed -- and write a
> test case that would have caught the bug.

I think that unless

(1) you submitted the bug or can be sure that "works for me"
    is with the exact same configuration as the person who did; or
(2) you can identify where in the code the bug was and what checkin (or where
    in the code) fixed it

then you cannot close the bug.

That is the ideal case; with incomplete bug reports and extremely stale
ones, these strict requirements are probably not always practical.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From jack at oratrix.nl  Wed Aug  2 17:16:06 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 02 Aug 2000 17:16:06 +0200
Subject: [Python-Dev] Still no new license -- but draft text available 
In-Reply-To: Message by Guido van Rossum <guido@beopen.com> ,
	     Wed, 02 Aug 2000 10:11:20 -0500 , <200008021511.KAA03049@cj20424-a.reston1.va.home.com> 
Message-ID: <20000802151606.753EF303181@snelboot.oratrix.nl>

I'm not sure I'm entirely happy with point 3. Depending on how you define 
"derivative work" and "make available" it could cause serious problems.

I assume that this clause is meant so that it is clear that MacPython and 
PythonWin and other such versions may be based on CNRI Python but are not the 
same. However, if you're building a commercial application that uses Python as 
its implementation language this "indication of modifications" becomes rather 
a long list. Just imagine that a C library came with such a license ("Anyone 
incorporating this C library or part thereof in their application should 
indicate the differences between their application and this C library":-).

Point 2 has the same problem to a lesser extent, the sentence starting with 
"Python ... is made available subject to the terms and conditions..." is fine 
for a product that is still clearly recognizable as Python, but would look 
silly if Python is just used as the implementation language.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From thomas at xs4all.net  Wed Aug  2 17:39:40 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 17:39:40 +0200
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: <200008021402.JAA02711@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 02, 2000 at 09:02:00AM -0500
References: <200008012122.OAA22327@slayer.i.sourceforge.net> <20000802081254.V266@xs4all.nl> <200008021402.JAA02711@cj20424-a.reston1.va.home.com>
Message-ID: <20000802173940.X266@xs4all.nl>

On Wed, Aug 02, 2000 at 09:02:00AM -0500, Guido van Rossum wrote:
> > > My fix to the URL accidentally also switched back to the "pre" module.
> > > Undo that!

> > This kind of thing is one of the reasons I wish 'cvs commit' would give you
> > the entire patch you're about to commit in the log-message-edit screen, as
> > CVS: comments, rather than just the modified files. It would also help with
> > remembering what the patch was supposed to do ;) Is this possible with CVS,
> > other than an 'EDITOR' that does this for you ?

> Actually, I have made it a habit to *always* do a cvs diff before I
> commit, for exactly this reason.

Well, so do I, but none the less I'd like it if the patch was included in
the comment :-) I occasionally forget what I was doing (17 xterms, two of
which are running 20-session screens (6 of which are dedicated to Python,
and 3 to Mailman :), two irc channels with people asking for work-related
help or assistance, one telephone with a 'group' number of same, and enough
room around me for 5 or 6 people to stand around and ask questions... :)
Also, I sometimes wonder about the patch while I'm writing the comment. (Did
I do that right ? Didn't I forget about this ? etc.) Having it included as a
comment would be perfect, for me.

I guess I'll look at the hook thing Trent mailed about, and Subversion, if I
find the time for it :P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Wed Aug  2 19:22:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 02 Aug 2000 19:22:06 +0200
Subject: [Python-Dev] Still no new license -- but draft text available
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>  
	            <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com>
Message-ID: <398858BE.15928F47@lemburg.com>

Guido van Rossum wrote:
> 
> > Is the license on 2.0 going to look the same ? I mean we now
> > already have two separate licenses, and if BeOpen adds another
> > two or three paragraphs we'll end up with a license two pages
> > long.
> 
> Good question.  We can't really keep the license the same because the
> old license is very specific to CNRI.  I would personally be in favor
> of using the BSD license for 2.0.

If that's possible, I don't think we have to argue about the
1.6 license text at all ;-) ... but then: I seriously doubt that
CNRI is going to let you put 2.0 under a different license text :-( ...

> > Some comments on the new version:
> 
> > > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > > license to reproduce, analyze, test, perform and/or display publicly,
> > > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > > alone or in any derivative version, provided, however, that CNRI's
> > > License Agreement is retained in Python 1.6b1, alone or in any
> > > derivative version prepared by Licensee.
> >
> > I don't think the latter (retaining the CNRI license alone) is 
> > possible: you always have to include the CWI license.
> 
> Wow.  I hadn't even noticed this!  It seems you can prepare a
> derivative version of the license.  Well, maybe.

I think they mean "derivative version of Python 1.6b1", but in
court, the above wording could cause serious trouble for CNRI
... it seems 2.0 can reuse the CWI license after all ;-)
 
> > > Alternately, in lieu of CNRI's License Agreement, Licensee may
> > > substitute the following text (omitting the quotes): "Python 1.6, beta
> > > 1, is made available subject to the terms and conditions in CNRI's
> > > License Agreement.  This Agreement may be located on the Internet
> > > using the following unique, persistent identifier (known as a handle):
> > > 1895.22/1011.  This Agreement may also be obtained from a proxy server
> > > on the Internet using the URL:http://hdl.handle.net/1895.22/1011".
> >
> > Do we really need this in the license text ? It's nice to have
> > the text available on the Internet, but why add long descriptions
> > about where to get it from to the license text itself ?
> 
> I'm not happy with this either, but CNRI can put anything they like in
> their license, and they seem very fond of this particular bit of
> advertising for their handle system.  I've never managed to
> convince them that it was unnecessary.

Oh well... the above paragraph sure looks scary to a casual
license reader.

Also, I'm not sure about the usefulness of this paragraph, since
the mapping of a URL to its content cannot be considered
legally binding. They would at least have to add some cryptographic
signature of the license text to make verification of the
origin possible.
 
> > > 3. In the event Licensee prepares a derivative work that is based on
> > > or incorporates Python 1.6b1 or any part thereof, and wants to make the
> > > derivative work available to the public as provided herein, then
> > > Licensee hereby agrees to indicate in any such work the nature of the
> > > modifications made to Python 1.6b1.
> >
> > In what way would those indications have to be made ? A patch
> > or just text describing the new features ?
> 
> Just text.  Bob Kahn told me that the list of "what's new" that I
> always add to a release would be fine.

Ok, should be made explicit in the license though...
 
> > What does "make available to the public" mean ? If I embed
> > Python in an application and make this application available
> > on the Internet for download would this fit the meaning ?
> 
> Yes, that's why he doesn't use the word "publish" -- such an action
> would not be considered publication in the sense of the copyright law
> (at least not in the US, and probably not according to the Bern
> convention) but it is clearly making it available to the public.

Ouch. That would mean I'd have to describe all additions,
i.e. the embedding application, in full detail in order not to
breach the terms of the CNRI license.
 
> > What about derived work that only uses the Python language
> > reference as basis for its task, e.g. new interpreters
> > or compilers which can read and execute Python programs ?
> 
> The language definition is not covered by the license at all.  Only
> this particular code base.

Ok.
 
> > > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > > INFRINGE ANY THIRD PARTY RIGHTS.
> > >
> > > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> >
> > I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> > Germany the above text would only be valid after an initial
> > 6 month period after installation, AFAIK (this period is
> > called "Gewährleistung"). Licenses from other vendors usually
> > add some extra license text to limit the liability in this period
> > to the carrier on which the software was received by the licensee,
> > e.g. the diskettes or CDs.
> 
> I'll mention this to Kahn.
> 
> > > 6. This License Agreement will automatically terminate upon a material
> > > breach of its terms and conditions.
> >
> > Immediately ? Other licenses usually include a 30-60 day period
> > which allows the licensee to take actions. With the above text,
> > the license will put the Python copy in question into an illegal
> > state *prior* to having even been identified as conflicting with the
> > license.
> 
> Believe it or not, this is necessary to ensure GPL compatibility!  An
> earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
> incompatible.  There's an easy workaround though: you fix your
> compliance and download a new copy, which gives you all the same
> rights again.

Hmm, but what about the 100,000 copies of the embedding application
that have already been downloaded -- I would have to force those users
to redownload the application (or even just a demo of it) in
order to reestablish the lawfulness of the copy action.

Not that I want to violate the license in any way, but there
seem to be quite a few pitfalls in the present text, some of
which are not clear at all (e.g. the paragraph 3).

> > > 7. This License Agreement shall be governed by and interpreted in all
> > > respects by the law of the State of Virginia, excluding conflict of
> > > law provisions.  Nothing in this License Agreement shall be deemed to
> > > create any relationship of agency, partnership, or joint venture
> > > between CNRI and Licensee.  This License Agreement does not grant
> > > permission to use CNRI trademarks or trade name in a trademark sense
> > > to endorse or promote products or services of Licensee, or any third
> > > party.
> >
> > Would the name "Python" be considered a trademark in the above
> > sense ?
> 
> No, Python is not a CNRI trademark.

I think you, or BeOpen on your behalf, should consider
registering the mark before someone else does. There are
quite a few "PYTHON" marks registered, yet all refer to
non-computer businesses.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From akuchlin at mems-exchange.org  Wed Aug  2 21:57:09 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 2 Aug 2000 15:57:09 -0400
Subject: [Python-Dev] Python HOWTO project created
Message-ID: <20000802155709.D28691@kronos.cnri.reston.va.us>

[CC'ed to python-dev and doc-sig -- followups set to doc-sig]

I've created a py-howto project on SourceForge to hold the Python
HOWTO documents.  

http://sourceforge.net/projects/py-howto/

Currently me, Fred, Moshe, and ESR are listed as developers and have
write access to CVS; if you want write access, drop me a note.  Web
pages and a py-howto-checkins mailing list will be coming soon, after
a bit more administrative fiddling around on my part.

Should I also create a py-howto-discuss list for discussing revisions,
or is the doc-sig OK?  Fred, what's your ruling about this?

--amk



From guido at beopen.com  Wed Aug  2 23:54:47 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 16:54:47 -0500
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: Your message of "Wed, 02 Aug 2000 03:30:30 -0400."
             <3987CE16.DB3E72B8@prescod.net> 
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au> <3986794E.ADBB938C@prescod.net> <200008011820.NAA30284@cj20424-a.reston1.va.home.com>  
            <3987CE16.DB3E72B8@prescod.net> 
Message-ID: <200008022154.QAA04109@cj20424-a.reston1.va.home.com>

OK.  Fine.  You say your module is great.  The Windows weenies here
don't want to touch it with a ten-foot pole.  I'm not going to be able
to dig all the way to the truth here -- I don't understand the
Registry API at all.

I propose that you and Mark Hammond go off-line and deal with Mark's
criticism one-on-one, and come back with a compromise that you are
both happy with.  I don't care what the compromise is, but both of you
must accept it.

If you *can't* agree, or if I haven't heard from you by the time I'm
ready to release 2.0b1 (say, end of August), winreg.py bites the dust.

I realize that this gives Mark Hammond veto power over the module, but
he's a pretty reasonable guy, *and* he knows the Registry API better
than anyone.  It should be possible for one of you to convince the
other.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From fdrake at beopen.com  Wed Aug  2 23:05:20 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 17:05:20 -0400 (EDT)
Subject: [Python-Dev] Re: [Doc-SIG] Python HOWTO project created
In-Reply-To: <20000802155709.D28691@kronos.cnri.reston.va.us>
References: <20000802155709.D28691@kronos.cnri.reston.va.us>
Message-ID: <14728.36112.584563.516268@cj42289-a.reston1.va.home.com>

Andrew Kuchling writes:
 > Should I also create a py-howto-discuss list for discussing revisions,
 > or is the doc-sig OK?  Fred, what's your ruling about this?

  It's your project, your choice.  ;)  I've no problem with using the
Doc-SIG for this if you like, but a separate list may be a good thing
since it would have fewer distractions!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Thu Aug  3 00:18:26 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 17:18:26 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
In-Reply-To: Your message of "Wed, 02 Aug 2000 19:22:06 +0200."
             <398858BE.15928F47@lemburg.com> 
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com> <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com>  
            <398858BE.15928F47@lemburg.com> 
Message-ID: <200008022218.RAA04178@cj20424-a.reston1.va.home.com>

[MAL]
> > > Is the license on 2.0 going to look the same ? I mean we now
> > > already have two separate licenses, and if BeOpen adds another
> > > two or three paragraphs we'll end up with a license two pages
> > > long.

[GvR]
> > Good question.  We can't really keep the license the same because the
> > old license is very specific to CNRI.  I would personally be in favor
> > of using the BSD license for 2.0.

[MAL]
> If that's possible, I don't think we have to argue about the
> 1.6 license text at all ;-) ... but then: I seriously doubt that
> CNRI is going to let you put 2.0 under a different license text :-( ...

What will happen is that the licenses in effect all get concatenated
in the LICENSE file.  It's a drag.

> > > Some comments on the new version:
> > 
> > > > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > > > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > > > license to reproduce, analyze, test, perform and/or display publicly,
> > > > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > > > alone or in any derivative version, provided, however, that CNRI's
> > > > License Agreement is retained in Python 1.6b1, alone or in any
> > > > derivative version prepared by Licensee.
> > >
> > > I don't think the latter (retaining the CNRI license alone) is 
> > > possible: you always have to include the CWI license.
> > 
> > Wow.  I hadn't even noticed this!  It seems you can prepare a
> > derivative version of the license.  Well, maybe.
> 
> I think they mean "derivative version of Python 1.6b1", but in
> court, the above wording could cause serious trouble for CNRI

You're right of course, I misunderstood you *and* the license.  Kahn
explains it this way:

[Kahn]
| Ok. I take the point being made. The way english works with ellipsis or 
| anaphoric references is to link back to the last anchor point. In the above 
| case, the last referent is Python 1.6b1.
| 
| Thus, the last phrase refers to a derivative version of Python1.6b1 
| prepared by Licensee. There is no permission given to make a derivative 
| version of the License.

> ... it seems 2.0 can reuse the CWI license after all ;-)

I'm not sure why you think that: 2.0 is a derivative version and is
thus bound by the CNRI license as well as by the license that BeOpen
adds.

> > > > Alternately, in lieu of CNRI's License Agreement, Licensee may
> > > > substitute the following text (omitting the quotes): "Python 1.6, beta
> > > > 1, is made available subject to the terms and conditions in CNRI's
> > > > License Agreement.  This Agreement may be located on the Internet
> > > > using the following unique, persistent identifier (known as a handle):
> > > > 1895.22/1011.  This Agreement may also be obtained from a proxy server
> > > > on the Internet using the URL:http://hdl.handle.net/1895.22/1011".
> > >
> > > Do we really need this in the license text ? It's nice to have
> > > the text available on the Internet, but why add long descriptions
> > > about where to get it from to the license text itself ?
> > 
> > I'm not happy with this either, but CNRI can put anything they like in
> > their license, and they seem very fond of this particular bit of
> > advertising for their handle system.  I've never managed to
> > convince them that it was unnecessary.
> 
> Oh well... the above paragraph sure looks scary to a casual
> license reader.

But it's really harmless.

> Also I'm not sure about the usefulness of this paragraph since
> the mapping of a URL to a content cannot be considered a
> legal binding. They would at least have to add some crypto
> signature of the license text to make verification of the
> origin possible.

Sure.  Just don't worry about it.  Kahn again:

| They always have the option of using the full text in that case.

So clearly he isn't interested in taking it out.  I'd let it go.

> > > > 3. In the event Licensee prepares a derivative work that is based on
> > > > or incorporates Python 1.6b1 or any part thereof, and wants to make the
> > > > derivative work available to the public as provided herein, then
> > > > Licensee hereby agrees to indicate in any such work the nature of the
> > > > modifications made to Python 1.6b1.
> > >
> > > In what way would those indications have to be made ? A patch
> > > or just text describing the new features ?
> > 
> > Just text.  Bob Kahn told me that the list of "what's new" that I
> > always add to a release would be fine.
> 
> Ok, should be made explicit in the license though...

It's hard to specify this precisely -- in fact, the more precisely you
specify it, the more scary it looks and the more likely they are to be
able to find fault with the details of how you do it.  In this case, I
believe (and so do lawyers) that vague is good!  If you write "ported
to the Macintosh" and that's what you did, they can hardly argue with
you, can they?

> > > What does "make available to the public" mean ? If I embed
> > > Python in an application and make this application available
> > > on the Internet for download would this fit the meaning ?
> > 
> > Yes, that's why he doesn't use the word "publish" -- such an action
> > would not be considered publication in the sense of the copyright law
> > (at least not in the US, and probably not according to the Bern
> > convention) but it is clearly making it available to the public.
> 
> Ouch. That would mean I'd have to describe all additions,
> i.e. the embedding application, in most details in order not to
> breach the terms of the CNRI license.

No, additional modules aren't modifications to CNRI's work.  A change
to the syntax to support curly braces is.

> > > > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > > > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > > > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > > > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > > > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > > > INFRINGE ANY THIRD PARTY RIGHTS.
> > > >
> > > > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > > > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > > > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > > > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > > > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > > > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> > >
> > > I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> > > Germany the above text would only be valid after an initial
> > > 6 month period after installation, AFAIK (this period is
> > > called "Gewährleistung"). Licenses from other vendors usually
> > > add some extra license text to limit the liability in this period
> > > to the carrier on which the software was received by the licensee,
> > > e.g. the diskettes or CDs.
> > 
> > I'll mention this to Kahn.

His response:

| Guido, Im not willing to do a study of international law here. If you
| can have the person identify one country other than the US that does
| not allow the above limitation or exclusion of liability and provide a
| copy of the section of their law, ill be happy to change this to read
| ".... SOME STATES OR COUNTRIES MAY NOT ALLOW ...." Otherwise, id just
| leave it alone (i.e. as is) for now.

Please mail this info directly to Kahn at CNRI.Reston.Va.US if you
believe you have the right information.  (You may CC me.)  Personally,
I wouldn't worry.  If the German law says that part of a license is
illegal, it doesn't make it any more or less illegal whether the
license warns you about this fact.

I believe that in the US, as a form of consumer protection, some
states not only disallow general disclaimers, but also require that
licenses containing such disclaimers notify the reader that the
disclaimer is not valid in their state, so that's where the language
comes from.  I don't know about German law.

> > > > 6. This License Agreement will automatically terminate upon a material
> > > > breach of its terms and conditions.
> > >
> > > Immediately ? Other licenses usually include a 30-60 day period
> > > which allows the licensee to take actions. With the above text,
> > > the license will put the Python copy in question into an illegal
> > > state *prior* to having even been identified as conflicting with the
> > > license.
> > 
> > Believe it or not, this is necessary to ensure GPL compatibility!  An
> > earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
> > incompatible.  There's an easy workaround though: you fix your
> > compliance and download a new copy, which gives you all the same
> > rights again.
> 
> Hmm, but what about the 100,000 copies of the embedding application
> that have already been downloaded -- I would have to force them
> to redownload the application (or even just a demo of it) in
> order to reestablish the lawfulness of the copy action.

It's better not to violate the license.  But do you really think that
they would go after you immediately if you show good intentions to
rectify?

> Not that I want to violate the license in any way, but there
> seem to be quite a few pitfalls in the present text, some of
> which are not clear at all (e.g. the paragraph 3).

I've warned Kahn about this effect of making the license bigger, but
he simply disagrees (and we agree to disagree).  I don't know what
else I could do about it, apart from putting a FAQ about the license
on python.org -- which I intend to do.

> > > > 7. This License Agreement shall be governed by and interpreted in all
> > > > respects by the law of the State of Virginia, excluding conflict of
> > > > law provisions.  Nothing in this License Agreement shall be deemed to
> > > > create any relationship of agency, partnership, or joint venture
> > > > between CNRI and Licensee.  This License Agreement does not grant
> > > > permission to use CNRI trademarks or trade name in a trademark sense
> > > > to endorse or promote products or services of Licensee, or any third
> > > > party.
> > >
> > > Would the name "Python" be considered a trademark in the above
> > > sense ?
> > 
> > No, Python is not a CNRI trademark.
> 
> I think you or BeOpen on behalf of you should consider
> registering the mark before someone else does it. There are
> quite a few "PYTHON" marks registered, yet all refer to non-
> computer business.

Yes, I do intend to do this.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Wed Aug  2 23:37:52 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 2 Aug 2000 23:37:52 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
Message-ID: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>

Guido asked me to update my old SRE benchmarks, and
post them to python-dev.

Summary:

-- SRE is usually faster than the old RE module (PRE).

-- SRE is faster than REGEX on anything but very trivial
   patterns and short target strings.  And in some cases,
   it's even faster than a corresponding string.find...

-- on real-life benchmarks like XML parsing and Python
   tokenizing, SRE is 2-3 times faster than PRE.

-- using Unicode strings instead of 8-bit strings doesn't hurt
   performance (for some tests, the Unicode version is 30-40%
   faster on my machine.  Go figure...)

-- PRE is still faster for some patterns, especially when using
   long target strings.  I know why, and I plan to fix that before
   2.0 final.

enjoy /F

--------------------------------------------------------------------
These tests were made on a P3/233 MHz running Windows 95,
using a local build of the 0.9.8 release (this will go into 1.6b1,
I suppose).

--------------------------------------------------------------------
parsing xml:

running xmllib.py on hamlet.xml (280k):

sre8             7.14 seconds
sre16            7.82 seconds
pre             17.17 seconds

(for the sre16 test, the xml file was converted to unicode before
it was fed to the unmodified parser).

for comparison, here are the results for a couple of fast pure-Python
parsers:

rex/pre          2.44 seconds
rex/sre          0.59 seconds
srex/sre         0.16 seconds

(rex is a shallow XML parser, based on code by Robert Cameron.  srex
is an even simpler shallow parser, using sre's template mode).

--------------------------------------------------------------------
parsing python:

running tokenize.py on Tkinter.py (156k):

sre8             3.23 seconds
pre              7.57 seconds

--------------------------------------------------------------------
searching for literal text:

searching for "spam" in a string padded with "spaz" (1000 bytes on
each side of the target):

string.find     0.112 ms
sre8.search     0.059
pre.search      0.122

unicode.find    0.130
sre16.search    0.065

(yes, regular expressions can run faster than optimized C code -- as
long as we don't take compilation time into account ;-)

same test, without any false matches:

string.find     0.035 ms
sre8.search     0.050
pre.search      0.116

unicode.find    0.031
sre16.search    0.055
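(for anyone wanting to reproduce a benchmark of this shape today, here's
a rough sketch using the standard timeit module -- which didn't exist in
2000; the original numbers came from Fredrik's own timing harness, so
the absolute figures won't match:)

```python
import re
import timeit

# target string: "spam" padded with near-miss text ("spaz") on both
# sides, roughly like the 1000-bytes-per-side test above
text = 'spaz' * 250 + 'spam' + 'spaz' * 250
pat = re.compile('spam')

n = 10000
find_ms = timeit.timeit(lambda: text.find('spam'), number=n) * 1000 / n
search_ms = timeit.timeit(lambda: pat.search(text), number=n) * 1000 / n
print('string.find  %.4f ms' % find_ms)
print('re.search    %.4f ms' % search_ms)
```

(note that compiling the pattern outside the timed loop matters: both
the old and new engines amortize compilation through a pattern cache.)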

--------------------------------------------------------------------
compiling regular expressions

compiling the 480 tests in the standard test suite:

sre             1.22 seconds
pre             0.05 seconds

or in other words, pre (using a compiler written in C) can
compile just under 10,000 patterns per second.  sre can only
compile about 400 patterns per second.  do we care? ;-)

(footnote: sre's pattern cache stores 100 patterns.  pre's
cache holds 20 patterns, iirc).
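(the caching described in the footnote works roughly like this
simplified sketch -- not the actual sre/pre source; the clear-everything
eviction mimics what the early modules did when the cache filled up:)

```python
import re

_cache = {}
_MAXCACHE = 100  # sre stored 100 patterns; pre held 20

def cached_compile(pattern, flags=0):
    """Return a compiled pattern, reusing a cached object when possible."""
    key = (pattern, flags)
    try:
        return _cache[key]
    except KeyError:
        pass
    compiled = re.compile(pattern, flags)
    if len(_cache) >= _MAXCACHE:
        _cache.clear()  # crude eviction: drop everything when full
    _cache[key] = compiled
    return compiled
```

(on a cache hit this skips compilation entirely, which is why the
400-patterns-per-second compile speed rarely matters in practice.)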

--------------------------------------------------------------------
benchmark suite

to round off this report, here's a couple of "micro benchmarks".
all times are in milliseconds.

n=        0     5    50   250  1000  5000
----- ----- ----- ----- ----- ----- -----

pattern 'Python|Perl', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.013 0.013 0.016 0.027 0.079
sre16 0.014 0.014 0.015 0.018 0.025 0.076
pre   0.107 0.109 0.114 0.116 0.135 0.259
regex 0.011 0.011 0.012 0.016 0.033 0.122

pattern 'Python|Perl', string 'P'*n+'Perl'+'P'*n
sre8  0.013 0.016 0.030 0.100 0.358 1.716
sre16 0.014 0.015 0.030 0.094 0.347 1.649
pre   0.115 0.112 0.158 0.351 1.085 5.002
regex 0.010 0.016 0.060 0.271 1.022 5.162

(false matches cause problems for pre and regex)

pattern '(Python|Perl)', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.016 0.030 0.099 0.362 1.684
sre16 0.015 0.016 0.030 0.094 0.340 1.623
pre   0.110 0.111 0.112 0.119 0.143 0.267
regex 0.012 0.012 0.013 0.017 0.034 0.124

(in 0.9.8, sre's optimizer doesn't grok named groups, and
it doesn't realize that this pattern has to start with a "P")

pattern '(?:Python|Perl)', string '-'*n+'Perl'+'-'*n
sre8  0.013 0.013 0.014 0.016 0.027 0.079
sre16 0.015 0.014 0.016 0.018 0.026 0.075
pre   0.108 0.135 0.113 0.137 0.140 0.275
regex skip

(anonymous groups work better)

pattern 'Python', string '-'*n+'Python'+'-'*n
sre8  0.013 0.013 0.014 0.019 0.039 0.148
sre16 0.013 0.013 0.014 0.020 0.043 0.187
pre   0.129 0.105 0.109 0.117 0.191 0.277
regex 0.011 0.025 0.018 0.016 0.037 0.127

pattern 'Python', string 'P'*n+'Python'+'P'*n
sre8  0.040 0.012 0.021 0.026 0.080 0.248
sre16 0.012 0.013 0.015 0.025 0.061 0.283
pre   0.110 0.148 0.153 0.338 0.925 4.355
regex 0.013 0.013 0.041 0.155 0.535 2.628

(as we saw in the string.find test, sre is very fast when
there are lots of false matches)

pattern '.*Python', string '-'*n+'Python'+'-'*n
sre8  0.016 0.017 0.026 0.067 0.217 1.039
sre16 0.016 0.017 0.026 0.067 0.218 1.076
pre   0.111 0.112 0.124 0.180 0.386 1.494
regex 0.015 0.022 0.073 0.408 1.669 8.489

pattern '.*Python.*', string '-'*n+'Python'+'-'*n
sre8  0.016 0.017 0.030 0.089 0.315 1.499
sre16 0.016 0.018 0.032 0.090 0.314 1.537
pre   0.112 0.113 0.129 0.186 0.413 1.605
regex 0.016 0.023 0.076 0.387 1.674 8.519

pattern '.*(Python)', string '-'*n+'Python'+'-'*n
sre8  0.020 0.021 0.044 0.147 0.542 2.630
sre16 0.019 0.021 0.044 0.154 0.541 2.681
pre   0.115 0.117 0.141 0.245 0.636 2.690
regex 0.019 0.026 0.097 0.467 2.007 10.264

pattern '.*(?:Python)', string '-'*n+'Python'+'-'*n
sre8  0.016 0.017 0.027 0.065 0.220 1.037
sre16 0.016 0.017 0.026 0.070 0.221 1.066
pre   0.112 0.119 0.136 0.223 0.566 2.377
regex skip

pattern 'Python|Perl|Tcl', string '-'*n+'Perl'+'-'*n
sre8  0.013 0.015 0.034 0.114 0.407 1.985
sre16 0.014 0.016 0.034 0.109 0.392 1.915
pre   0.107 0.108 0.117 0.124 0.167 0.393
regex 0.012 0.012 0.013 0.017 0.033 0.123

(here's another sre compiler problem: it fails to realize
that this pattern starts with characters from a given set
[PT].  pre and regex both use bitmaps...)

pattern 'Python|Perl|Tcl', string 'P'*n+'Perl'+'P'*n
sre8  0.013 0.018 0.055 0.228 0.847 4.165
sre16 0.015 0.027 0.055 0.218 0.821 4.061
pre   0.111 0.116 0.172 0.415 1.354 6.302
regex 0.011 0.019 0.085 0.374 1.467 7.261

(but when there are lots of false matches, sre is faster
anyway.  interesting...)

pattern '(Python|Perl|Tcl)', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.018 0.042 0.152 0.575 2.798
sre16 0.015 0.019 0.042 0.148 0.556 2.715
pre   0.112 0.111 0.116 0.129 0.172 0.408
regex 0.012 0.013 0.014 0.018 0.035 0.124

pattern '(?:Python|Perl|Tcl)', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.016 0.034 0.113 0.405 1.987
sre16 0.016 0.016 0.033 0.112 0.393 1.918
pre   0.109 0.109 0.112 0.128 0.177 0.397
regex skip

pattern '(Python)\\1', string '-'*n+'PythonPython'+'-'*n
sre8  0.014 0.018 0.030 0.096 0.342 1.673
sre16 0.015 0.016 0.031 0.094 0.330 1.625
pre   0.112 0.111 0.112 0.119 0.141 0.268
regex 0.011 0.012 0.013 0.017 0.033 0.123

pattern '(Python)\\1', string 'P'*n+'PythonPython'+'P'*n
sre8  0.013 0.016 0.035 0.111 0.411 1.976
sre16 0.015 0.016 0.034 0.112 0.416 1.992
pre   0.110 0.116 0.160 0.355 1.051 4.797
regex 0.011 0.017 0.047 0.200 0.737 3.680

pattern '([0a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.084 0.091 0.143 0.371 1.160 6.165
sre16 0.086 0.090 0.142 0.470 1.258 7.827
pre   0.155 0.140 0.185 0.200 0.280 0.523
regex 0.018 0.018 0.020 0.024 0.137 0.240

(again, sre's lack of a "fastmap" is rather costly)

pattern '(?:[0a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.028 0.033 0.077 0.303 1.433 7.140
sre16 0.021 0.027 0.073 0.277 1.031 5.053
pre   0.131 0.131 0.174 0.183 0.227 0.461
regex skip

pattern '([a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.032 0.038 0.083 0.288 1.109 5.404
sre16 0.033 0.038 0.083 0.292 1.035 5.802
pre   0.195 0.135 0.176 0.187 0.233 0.468
regex 0.018 0.018 0.019 0.023 0.041 0.131

pattern '(?:[a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.022 0.025 0.067 0.302 1.011 8.245
sre16 0.021 0.026 0.066 0.302 1.103 5.372
pre   0.262 0.397 0.178 0.193 0.250 0.817
regex skip

pattern '.*P.*y.*t.*h.*o.*n.*', string '-'*n+'Python'+'-'*n
sre8  0.021 0.084 0.118 0.251 0.965 5.414
sre16 0.021 0.025 0.063 0.366 1.192 4.639
pre   0.123 0.147 0.225 0.568 1.899 9.336
regex 0.028 0.060 0.258 1.269 5.497 28.334

--------------------------------------------------------------------




From bwarsaw at beopen.com  Wed Aug  2 23:40:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 2 Aug 2000 17:40:59 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
Message-ID: <14728.38251.289986.857417@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    >> Do we have a procedure for putting more batteries in the core?
    >> I'm not talking about stuff like PEP-206, I'm talking about
    >> small, useful modules like Cookies.py.

    GvR> Cookie support in the core would be a good thing.

I use Tim O'Malley's LGPL'd version (not as contagious as GPL'd) in
Mailman with one important patch.  I've uploaded it to SF as patch
#101055.  If you like it, I'm happy to check it in.

-Barry



From bwarsaw at beopen.com  Wed Aug  2 23:42:26 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 2 Aug 2000 17:42:26 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
	<14728.8580.460583.760620@cj42289-a.reston1.va.home.com>
	<200008021432.JAA02937@cj20424-a.reston1.va.home.com>
Message-ID: <14728.38338.92481.102493@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    GvR> I think Cookie.py is for server-side management of cookies,
    GvR> not for client-side.  Do we need client-side cookies too????

Ah.  AFAIK, Tim's Cookie.py is server side only.  Still very useful --
and already written!

-Barry



From guido at beopen.com  Thu Aug  3 00:44:03 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 17:44:03 -0500
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: Your message of "Wed, 02 Aug 2000 23:37:52 +0200."
             <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> 
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> 
Message-ID: <200008022244.RAA04388@cj20424-a.reston1.va.home.com>

> Guido asked me to update my old SRE benchmarks, and
> post them to python-dev.

Thanks, Fredrik!  This (plus the fact that SRE now passes all PRE
tests) makes me very happy with using SRE as the regular expression
engine of choice for 1.6 and 2.0.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug  3 00:46:35 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 17:46:35 -0500
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: Your message of "Wed, 02 Aug 2000 17:40:59 -0400."
             <14728.38251.289986.857417@anthem.concentric.net> 
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com>  
            <14728.38251.289986.857417@anthem.concentric.net> 
Message-ID: <200008022246.RAA04405@cj20424-a.reston1.va.home.com>

>     GvR> Cookie support in the core would be a good thing.

[Barry]
> I use Tim O'Malley's LGPL'd version (not as contagious as GPL'd) in
> Mailman with one important patch.  I've uploaded it to SF as patch
> #101055.  If you like it, I'm happy to check it in.

I don't have the time to judge this code myself, but hope that others
in this group do.

Are you sure it's a good thing to add LGPL'ed code to the Python
standard library though?  AFAIK it is still more restrictive than the
old CWI license and probably also more restrictive than the new CNRI
license; so it could come under scrutiny and prevent closed,
proprietary software development using Python...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From barry at scottb.demon.co.uk  Wed Aug  2 23:50:43 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Wed, 2 Aug 2000 22:50:43 +0100
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIENBDCAA.MarkH@ActiveState.com>
Message-ID: <020901bffccb$b4bf4da0$060210ac@private>



> -----Original Message-----
> From: Mark Hammond [mailto:MarkH at activestate.com]
> Sent: 02 August 2000 00:14
> To: Barry Scott; python-dev at python.org
> Subject: RE: [Python-Dev] Preventing 1.5 extensions crashing under
> 1.6/2.0 Python
> 
> 
> > If someone in the core of Python thinks a patch implementing
> > what I've outlined is useful please let me know and I will
> > generate the patch.
> 
> Umm - I'm afraid that I dont keep my python-dev emils for that long, and
> right now I'm too lazy/busy to dig around the archives.
> 
> Exactly what did you outline?  I know it went around a few times, and I
> can't remember who said what.  For my money, I liked Fredrik's solution
> best (check Py_IsInitialized() in Py_InitModule4()), but as mentioned that
> only solves for the next version of Python; it doesnt solve the fact 1.5
> modules will crash under 1.6/2.0

	This is not a good way to solve the problem as it only works in a
	limited number of cases. 

	Attached is my proposal which works for all new and old python
	and all old and new extensions.

> 
> It would definately be excellent to get _something_ in the CNRI 1.6
> release, so the BeOpen 2.0 release can see the results.

> But-I-doubt-anyone-will-release-extension-modules-for-1.6-anyway ly,

	Yes indeed once the story of 1.6 and 2.0 is out I expect folks
	will skip 1.6. For example, if your win32 stuff is not ported
	then Python 1.6 is not usable on Windows/NT.
	
> 
> Mark.

		Barry
-------------- next part --------------
An embedded message was scrubbed...
From: "Barry Scott" <barry at scottb.demon.co.uk>
Subject: RE: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
Date: Tue, 18 Jul 2000 23:36:15 +0100
Size: 2085
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000802/743a2eaf/attachment.eml>

From akuchlin at mems-exchange.org  Wed Aug  2 23:55:53 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 2 Aug 2000 17:55:53 -0400
Subject: [Python-Dev] Cookies.py in the core 
In-Reply-To: <200008022246.RAA04405@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 02, 2000 at 05:46:35PM -0500
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com>
Message-ID: <20000802175553.A30340@kronos.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 05:46:35PM -0500, Guido van Rossum wrote:
>Are you sure it's a good thing to add LGPL'ed code to the Python
>standard library though?  AFAIK ... it could come under scrutiny and
>prevent closed, proprietary software development using Python...

Licence discussions are a conversational black hole...  Why not just
ask Tim O'Malley to change the licence in return for getting it added
to the core?

--amk



From akuchlin at mems-exchange.org  Thu Aug  3 00:00:59 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 2 Aug 2000 18:00:59 -0400
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>; from effbot@telia.com on Wed, Aug 02, 2000 at 11:37:52PM +0200
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>
Message-ID: <20000802180059.B30340@kronos.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 11:37:52PM +0200, Fredrik Lundh wrote:
>-- SRE is usually faster than the old RE module (PRE).

Once the compiler is translated to C, it might be worth considering
making SRE available as a standalone library for use outside of
Python.  Most other regex libraries either don't do Perl's extensions,
or they don't do Unicode.  Bonus points if you can get the Perl6 team
interested in it.

Hmm... here's an old problem that's returned (recursion on repeated
group matches, I expect):

>>> p=re.compile('(x)*')
>>> p
<SRE_Pattern object at 0x8127048>
>>> p.match(500000*'x')
Segmentation fault (core dumped)

--amk



From guido at beopen.com  Thu Aug  3 01:10:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 18:10:33 -0500
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: Your message of "Wed, 02 Aug 2000 17:55:53 -0400."
             <20000802175553.A30340@kronos.cnri.reston.va.us> 
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com>  
            <20000802175553.A30340@kronos.cnri.reston.va.us> 
Message-ID: <200008022310.SAA04518@cj20424-a.reston1.va.home.com>

> Licence discussions are a conversational black hole...  Why not just
> ask Tim O'Malley to change the licence in return for getting it added
> to the core?

Excellent idea.  Go for it!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug  3 01:11:39 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 18:11:39 -0500
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: Your message of "Wed, 02 Aug 2000 18:00:59 -0400."
             <20000802180059.B30340@kronos.cnri.reston.va.us> 
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>  
            <20000802180059.B30340@kronos.cnri.reston.va.us> 
Message-ID: <200008022311.SAA04529@cj20424-a.reston1.va.home.com>

> Hmm... here's an old problem that's returned (recursion on repeated
> group matches, I expect):
> 
> >>> p=re.compile('(x)*')
> >>> p
> <SRE_Pattern object at 0x8127048>
> >>> p.match(500000*'x')
> Segmentation fault (core dumped)

Ouch.

Andrew, would you mind adding a test case for that to the re test
suite?  It's important that this doesn't come back!
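(a minimal regression test along those lines might look like the
following sketch -- the test that actually landed in the re test suite
may differ:)

```python
import re

def test_repeated_group_on_long_string():
    # A repeated capturing group over a very long subject used to
    # recurse once per repetition and blow the C stack in early SRE;
    # the match should complete without crashing.
    p = re.compile('(x)*')
    m = p.match(500000 * 'x')
    assert m is not None
    assert m.group(1) == 'x'  # the group captures the last repetition

test_repeated_group_on_long_string()
print('ok')
```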

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug  3 01:18:04 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 18:18:04 -0500
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: Your message of "Wed, 02 Aug 2000 22:50:43 +0100."
             <020901bffccb$b4bf4da0$060210ac@private> 
References: <020901bffccb$b4bf4da0$060210ac@private> 
Message-ID: <200008022318.SAA04558@cj20424-a.reston1.va.home.com>

> > But-I-doubt-anyone-will-release-extension-modules-for-1.6-anyway ly,
> 
> 	Yes indeed once the story of 1.6 and 2.0 is out I expect folks
> 	will skip 1.6. For example, if your win32 stuff is not ported
> 	then Python 1.6 is not usable on Windows/NT.

I expect to be releasing a 1.6 Windows installer -- but I can't
control Mark Hammond.  Yet, it shouldn't be hard for him to create a
1.6 version of win32all, should it?

> Change the init function name to a new name PythonExtensionInit_ say.
> Pass in the API version for the extension writer to check. If the
> version is bad for this extension returns without calling any python
> functions. Add a return code that is true if compatible, false if not.
> If compatible the extension can use python functions and report any
> problems it wishes.
> 
> int PythonExtensionInit_XXX( int invoking_python_api_version )
> 	{
> 	if( invoking_python_api_version != PYTHON_API_VERSION )
> 		{
> 		/* python will report that the module is incompatible */
> 		return 0;
> 		}
> 
> 	/* setup module for XXX ... */
> 
> 	/* say this extension is compatible with the invoking python */
> 	return 1;
> 	}
> 
> All 1.5 extensions fail to load on later python 2.0 and later.
> All 2.0 extensions fail to load on python 1.5.
> 
> All new extensions work only with python of the same API version.
> 
> Document that failure to setup a module could mean the extension is
> incompatible with this version of python.
> 
> Small code change in python core. But need to tell extension writers
> what the new interface is and update all extensions within the python
> CVS tree.

I sort-of like this idea -- at least at the +0 level.

I would choose a shorter name: PyExtInit_XXX().

Could you (or someone else) prepare a patch that changes this?  It
would be great if the patch were relative to the 1.6 branch of the
source tree; unfortunately this is different because of the
ANSIfication.

Unfortunately we only have two days to get this done for 1.6 -- I plan
to release 1.6b1 this Friday!  If you don't get to it, prepare a patch
for 2.0 would be the next best thing.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Thu Aug  3 01:13:30 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 01:13:30 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <020901bffccb$b4bf4da0$060210ac@private>  <200008022318.SAA04558@cj20424-a.reston1.va.home.com>
Message-ID: <01d701bffcd7$46a74a00$f2a6b5d4@hagrid>

> Yes indeed once the story of 1.6 and 2.0 is out I expect folks
> will skip 1.6.   For example, if your win32 stuff is not ported then
> Python 1.6 is not usable on Windows/NT.

"not usable"?

guess you haven't done much cross-platform development lately...

> Change the init function name to a new name PythonExtensionInit_ say.
> Pass in the API version for the extension writer to check. If the
> version is bad for this extension returns without calling any python

huh?  are you seriously proposing to break every single C extension
ever written -- on each and every platform -- just to trap an error
message caused by extensions linked against 1.5.2 on your favourite
platform?

> Small code change in python core. But need to tell extension writers
> what the new interface is and update all extensions within the python
> CVS tree.

you mean "update the source code for all extensions ever written."

-1




From DavidA at ActiveState.com  Thu Aug  3 02:33:02 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Wed, 2 Aug 2000 17:33:02 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Fork on Win32 - was (test_fork1 failing...)
In-Reply-To: <013901bff821$55dd02e0$8119fea9@neil>
Message-ID: <Pine.WNT.4.21.0008021732140.980-100000@loom>

>    IIRC ActiveState contributed to Perl a version of fork that works on
> Win32. Has anyone looked at this? Could it be grabbed for Python? This would
> help heal one of the more difficult platform rifts. Emulating fork for Win32
> looks quite difficult to me but if its already done...

I've talked to Sarathy about it, and it's messy, as Perl manages PIDs
above and beyond what Windows does, among other things.  If anyone is
interested in doing that work, I can make the introduction.

--david




From DavidA at ActiveState.com  Thu Aug  3 02:35:01 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Wed, 2 Aug 2000 17:35:01 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Fork on Win32 - was (test_fork1 failing...)
In-Reply-To: <013901bff821$55dd02e0$8119fea9@neil>
Message-ID: <Pine.WNT.4.21.0008021734040.980-100000@loom>

>    IIRC ActiveState contributed to Perl a version of fork that works on
> Win32. Has anyone looked at this? Could it be grabbed for Python? This would
> help heal one of the more difficult platform rifts. Emulating fork for Win32
> looks quite difficult to me but if its already done...

Sigh. Me tired.

The message I posted a few minutes ago was actually referring to the
system() work, not the fork() work.  I agree that the fork() emulation
isn't Pythonic.

--david




From skip at mojam.com  Thu Aug  3 04:32:29 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 2 Aug 2000 21:32:29 -0500 (CDT)
Subject: [Python-Dev] METH_VARARGS
Message-ID: <14728.55741.477399.196240@beluga.mojam.com>

I noticed Andrew Kuchling's METH_VARARGS submission:

    Use METH_VARARGS instead of numeric constant 1 in method def. tables

While METH_VARARGS is obviously a lot better than a hardcoded 1, shouldn't
METH_VARARGS be something like Py_METH_VARARGS or PY_METH_VARARGS to avoid
potential conflicts with other packages?

Skip



From akuchlin at cnri.reston.va.us  Thu Aug  3 04:41:02 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Wed, 2 Aug 2000 22:41:02 -0400
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <200008022310.SAA04518@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 02, 2000 at 06:10:33PM -0500
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com>
Message-ID: <20000802224102.A25837@newcnri.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 06:10:33PM -0500, Guido van Rossum wrote:
>> Why not just
>> ask Tim O'Malley to change the licence in return for getting it added
>> to the core?
>Excellent idea.  Go for it!

Mail to timo at bbn.com bounces; does anyone have a more recent e-mail
address?  What do we do if he can't be located?  Add the module anyway,
abandon the idea, or write a new version?

--amk



From fdrake at beopen.com  Thu Aug  3 04:51:23 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 22:51:23 -0400 (EDT)
Subject: [Python-Dev] METH_VARARGS
In-Reply-To: <14728.55741.477399.196240@beluga.mojam.com>
References: <14728.55741.477399.196240@beluga.mojam.com>
Message-ID: <14728.56875.996310.790872@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:
 > While METH_VARARGS is obviously a lot better than a hardcoded 1, shouldn't
 > METH_VARARGS be something like Py_METH_VARARGS or PY_METH_VARARGS to avoid
 > potential conflicts with other packages?

  I think so, but there are too many third party extension modules
that would have to be changed to not also offer the old symbols as
well.  I see two options: leave things as they are, or provide both
versions of the symbols through at least Python 2.1.  For the latter,
all examples in the code and documentation would need to be changed
and the non-PY_ versions strongly labelled as deprecated and going
away in Python version 2.2 (or whatever version it would be).
  It would *not* hurt to provide both symbols and change all the
examples, at any rate.  Aside from deleting all the checkin email,
that is!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gstein at lyra.org  Thu Aug  3 05:06:20 2000
From: gstein at lyra.org (Greg Stein)
Date: Wed, 2 Aug 2000 20:06:20 -0700
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <20000802224102.A25837@newcnri.cnri.reston.va.us>; from akuchlin@cnri.reston.va.us on Wed, Aug 02, 2000 at 10:41:02PM -0400
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com> <20000802224102.A25837@newcnri.cnri.reston.va.us>
Message-ID: <20000802200620.G19525@lyra.org>

On Wed, Aug 02, 2000 at 10:41:02PM -0400, Andrew Kuchling wrote:
> On Wed, Aug 02, 2000 at 06:10:33PM -0500, Guido van Rossum wrote:
> >> Why not just
> >> ask Tim O'Malley to change the licence in return for getting it added
> >> to the core?
> >Excellent idea.  Go for it!
> 
> Mail to timo at bbn.com bounces; does anyone have a more recent e-mail
> address?  What do we do if he can't be located?  Add the module anyway,
> abandon the idea, or write a new version?

If we can't contact him, then I'd be quite happy to assist in designing and
writing a new one under a BSD-ish or Public Domain license. I was
considering doing exactly that just last week :-)

[ I want to start using cookies in ViewCVS; while the LGPL is "fine" for me,
  it would be nice if the whole ViewCVS package was BSD-ish ]


Of course, I'd much rather get a hold of Tim.

Cheers,
-g


-- 
Greg Stein, http://www.lyra.org/



From bwarsaw at beopen.com  Thu Aug  3 06:11:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:11:59 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
	<14728.38251.289986.857417@anthem.concentric.net>
	<200008022246.RAA04405@cj20424-a.reston1.va.home.com>
Message-ID: <14728.61711.859894.972939@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    GvR> Are you sure it's a good thing to add LGPL'ed code to the
    GvR> Python standard library though?  AFAIK it is still more
    GvR> restrictive than the old CWI license and probably also more
    GvR> restrictive than the new CNRI license; so it could come under
    GvR> scrutiny and prevent closed, proprietary software development
    GvR> using Python...

I don't know, however I have a version of the file with essentially no
license on it:

# Id: Cookie.py,v 2.4 1998/02/13 16:42:30 timo Exp
#  by  Timothy O'Malley <timo at bbn.com> Date: 1998/02/13 16:42:30
#
#  Cookie.py is an update for the old nscookie.py module.
#    Under the old module, it was not possible to set attributes,
#    such as "secure" or "Max-Age" on key,value granularity.  This
#    shortcoming has been addressed in Cookie.py but has come at
#    the cost of a slightly changed interface.  Cookie.py also
#    requires Python-1.5, for the re and cPickle modules.
#
#  The original idea to treat Cookies as a dictionary came from
#  Dave Mitchel (davem at magnet.com) in 1995, when he released the
#  first version of nscookie.py.

Is that better or worse? <wink>.  Back in '98, I actually asked him to
send me an LGPL'd copy because that worked better for Mailman.  We
could start with Tim's pre-LGPL'd version and backport the minor mods
I've made.

BTW, I've recently tried to contact Tim, but the address in the file
bounces.

-Barry



From guido at beopen.com  Thu Aug  3 06:39:47 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 23:39:47 -0500
Subject: [Python-Dev] METH_VARARGS
In-Reply-To: Your message of "Wed, 02 Aug 2000 21:32:29 EST."
             <14728.55741.477399.196240@beluga.mojam.com> 
References: <14728.55741.477399.196240@beluga.mojam.com> 
Message-ID: <200008030439.XAA05445@cj20424-a.reston1.va.home.com>

> While METH_VARARGS is obviously a lot better than a hardcoded 1, shouldn't
> METH_VARARGS be something like Py_METH_VARARGS or PY_METH_VARARGS to avoid
> potential conflicts with other packages?

Unless someone knows of a *real* conflict, I'd leave this one alone.
Yes, it should be Py_*, but no, it's not worth the effort of changing
all that.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From gstein at lyra.org  Thu Aug  3 06:26:48 2000
From: gstein at lyra.org (Greg Stein)
Date: Wed, 2 Aug 2000 21:26:48 -0700
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: <14728.61711.859894.972939@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 03, 2000 at 12:11:59AM -0400
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <14728.61711.859894.972939@anthem.concentric.net>
Message-ID: <20000802212648.J19525@lyra.org>

On Thu, Aug 03, 2000 at 12:11:59AM -0400, Barry A. Warsaw wrote:
>...
> I don't know, however I have a version of the file with essentially no
> license on it:

That implies "no license" which means "no rights to redistribute, use, or
whatever." Very incompatible :-)

>...
> Is that better or worse? <wink>.  Back in '98, I actually asked him to
> send me an LGPL'd copy because that worked better for Mailman.  We
> could start with Tim's pre-LGPL'd version and backport the minor mods
> I've made.

Wouldn't help. We need a relicensed version, to use the LGPL'd version, or
to rebuild it from scratch.

> BTW, I've recently tried to contact Tim, but the address in the file
> bounces.

I just sent mail to Andrew Smith who has been working with Tim for several
years on various projects (RSVP, RAP, etc). Hopefully, he has a current
email address for Tim. I'll report back when I hear something from Andrew.
Of course, if somebody else can track him down faster...

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From bwarsaw at beopen.com  Thu Aug  3 06:41:14 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:41:14 -0400 (EDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
Message-ID: <14728.63466.263123.434708@anthem.concentric.net>

>>>>> "GM" == Gareth McCaughan <Gareth.McCaughan at pobox.com> writes:

    GM> Consider the following piece of code, which takes a file
    GM> and prepares a concordance saying on which lines each word
    GM> in the file appears. (For real use it would need to be
    GM> made more sophisticated.)

    |     line_number = 0
    |     for line in open(filename).readlines():
    |       line_number = line_number+1
    |       for word in map(string.lower, string.split(line)):
    |         existing_lines = word2lines.get(word, [])   |
    |         existing_lines.append(line_number)          | ugh!
    |         word2lines[word] = existing_lines           |

I've run into this same situation many times myself.  I agree it's
annoying.  Annoying enough to warrant a change?  Maybe -- I'm not
sure.

    GM> I suggest a minor change: another optional argument to
    GM> "get" so that

    GM>     dict.get(item,default,flag)

Good idea, not so good solution.  Let's make it more explicit by
adding a new method instead of a flag.  I'll use `put' here since this
seems (in a sense) opposite of get() and my sleep addled brain can't
think of anything more clever.  Let's not argue about the name of this
method though -- if Guido likes the extension, he'll pick a good name
and I go on record as agreeing with his name choice, just to avoid a
protracted war.

A trivial patch to UserDict (see below) will let you play with this.

>>> d = UserDict()
>>> word = 'hello'
>>> d.get(word, [])
[]
>>> d.put(word, []).append('world')
>>> d.get(word)
['world']
>>> d.put(word, []).append('gareth')
>>> d.get(word)
['world', 'gareth']

Shouldn't be too hard to add equivalent C code to the dictionary
object.

-Barry

-------------------- snip snip --------------------
Index: UserDict.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/UserDict.py,v
retrieving revision 1.7
diff -u -r1.7 UserDict.py
--- UserDict.py	2000/02/02 15:10:14	1.7
+++ UserDict.py	2000/08/03 04:35:11
@@ -34,3 +34,7 @@
                 self.data[k] = v
     def get(self, key, failobj=None):
         return self.data.get(key, failobj)
+    def put(self, key, failobj=None):
+        if not self.data.has_key(key):
+            self.data[key] = failobj
+        return self.data[key]
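[To make the proposed semantics concrete, here is a sketch of put() against a plain dict. put() is Barry's placeholder name; the behaviour is what later Pythons spell dict.setdefault():]

```python
class PutDict(dict):
    """Dict with Barry's proposed put() method (a sketch)."""

    def put(self, key, failobj=None):
        # Return the existing value for key; otherwise store failobj
        # under key and return it.
        if key not in self:
            self[key] = failobj
        return self[key]

d = PutDict()
d.put('hello', []).append('world')
d.put('hello', []).append('gareth')
print(d['hello'])  # ['world', 'gareth']
```

[With this, Gareth's concordance loop collapses to a single line: word2lines.put(word, []).append(line_number).]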



From bwarsaw at beopen.com  Thu Aug  3 06:45:33 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:45:33 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
	<14728.38251.289986.857417@anthem.concentric.net>
	<200008022246.RAA04405@cj20424-a.reston1.va.home.com>
	<20000802175553.A30340@kronos.cnri.reston.va.us>
	<200008022310.SAA04518@cj20424-a.reston1.va.home.com>
	<20000802224102.A25837@newcnri.cnri.reston.va.us>
	<20000802200620.G19525@lyra.org>
Message-ID: <14728.63725.390053.65213@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> If we can't contact him, then I'd be quite happy to assist in
    GS> designing and writing a new one under a BSD-ish or Public
    GS> Domain license. I was considering doing exactly that just last
    GS> week :-)

I don't think that's necessary; see my other post.  We should still
try to contact him if possible though.

My request for an LGPL'd copy was necessary because Mailman is GPL'd
(and Stallman suggested this as an acceptable solution).  It would be
just as fine for Mailman if an un-LGPL'd Cookie.py were part of the
standard Python distribution.

-Barry



From bwarsaw at beopen.com  Thu Aug  3 06:50:51 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:50:51 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
	<14728.38251.289986.857417@anthem.concentric.net>
	<200008022246.RAA04405@cj20424-a.reston1.va.home.com>
	<14728.61711.859894.972939@anthem.concentric.net>
	<20000802212648.J19525@lyra.org>
Message-ID: <14728.64043.155392.32408@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> Wouldn't help. We need a relicensed version, to use the LGPL'd
    GS> version, or to rebuild it from scratch.

    >> BTW, I've recently tried to contact Tim, but the address in the
    >> file bounces.

    GS> I just sent mail to Andrew Smith who has been working with Tim
    GS> for several years on various projects (RSVP, RAP,
    GS> etc). Hopefully, he has a current email address for Tim. I'll
    GS> report back when I hear something from Andrew.  Of course, if
    GS> somebody else can track him down faster...

Cool.  Tim was exceedingly helpful in giving me a version of the file
I could use.  I have no doubt that if we can contact him, he'll
relicense it in a way that makes sense for the standard distro.  That
would be the best outcome.

-Barry



From tim_one at email.msn.com  Thu Aug  3 06:52:07 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 00:52:07 -0400
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <14728.63725.390053.65213@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELIGNAA.tim_one@email.msn.com>

Guys, these are cookies, not brain surgery!  If people like this API,
couldn't someone have done a clean-room reimplementation of it in less time
than we've spent jockeying over the freaking license?

tolerance-for-license-discussions-at-an-all-time-low-ly y'rs  - tim





From effbot at telia.com  Thu Aug  3 10:03:53 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 10:03:53 +0200
Subject: [Python-Dev] Cookies.py in the core
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com> <20000802224102.A25837@newcnri.cnri.reston.va.us>
Message-ID: <002601bffd21$5f23e800$f2a6b5d4@hagrid>

andrew wrote:

> Mail to timo at bbn.com bounces; does anyone have a more recent e-mail
> address?  What do we do if he can't be located?  Add the module anyway,
> abandon the idea, or write a new version?

readers of the daily URL might have noticed that he posted
a socket timeout wrapper a few days ago:

    Timothy O'Malley <timo at alum.mit.edu>

it's the same name and the same signature, so I assume it's
the same guy ;-)

</F>




From wesc at alpha.ece.ucsb.edu  Thu Aug  3 09:56:59 2000
From: wesc at alpha.ece.ucsb.edu (Wesley J. Chun)
Date: Thu, 3 Aug 2000 00:56:59 -0700 (PDT)
Subject: [Python-Dev] Re: Bookstand at LA Python conference
Message-ID: <200008030756.AAA23434@alpha.ece.ucsb.edu>

    > From: Guido van Rossum <guido at python.org>
    > Date: Sat, 29 Jul 2000 12:39:01 -0500
    > 
    > The next Python conference will be in Long Beach (Los Angeles).  We're
    > looking for a bookstore to set up a bookstand like we had at the last
    > conference.  Does anybody have a suggestion?


the most well-known big independent technical bookstore that also
does mail order and has been around for about 20 years is OpAmp:

OpAmp Technical Books
1033 N. Sycamore Ave
Los Angeles, CA  90038
800-468-4322
http://www.opamp.com

there really isn't a "2nd place" since OpAmp owns the market,
but if there was a #3, it would be Technical Book Company:

Technical Book Company
2056 Westwood Blvd
Los Angeles, CA  90025
800-233-5150


the above 2 stores are listed in the misc.books.technical FAQ:

http://www.faqs.org/faqs/books/technical/

there's a smaller bookstore that's also known to have a good
technical book selection:

Scholar's Bookstore 
El Segundo, CA  90245
310-322-3161

(and of course, the standbys are always the university bookstores
for UCLA, CalTech, UC Irvine, Cal State Long Beach, etc.)

as to be expected, someone has collated a list of bookstores
in the LA area:

http://www.geocities.com/Athens/4824/na-la.htm

hope this helps!!

-wesley

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

"Core Python Programming", Prentice Hall PTR, TBP Summer/Fall 2000
    http://www.phptr.com/ptrbooks/ptr_0130260363.html

Python Books:   http://www.softpro.com/languages-python.html

wesley.j.chun :: wesc at alpha.ece.ucsb.edu
cyberweb.consulting :: silicon.valley, ca
http://www.roadkill.com/~wesc/cyberweb/



From tim_one at email.msn.com  Thu Aug  3 10:05:31 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 04:05:31 -0400
Subject: [Python-Dev] Go \x yourself
Message-ID: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>

Offline, Guido and /F and I had a mighty battle about the meaning of \x
escapes in Python.  In the end we agreed to change the meaning of \x in a
backward-*in*compatible way.  Here's the scoop:

In 1.5.2 and before, the Reference Manual implies that an \x escape takes
two or more hex digits following, and has the value of the last byte.  In
reality it also accepted just one hex digit, or even none:

>>> "\x123465"  # same as "\x65"
'e'
>>> "\x65"
'e'
>>> "\x1"
'\001'
>>> "\x\x"
'\\x\\x'
>>>

I found no instances of the 0- or 1-digit forms in the CVS tree or in any of
the Python packages on my laptop.  Do you have any in your code?

And, apart from some deliberate abuse in the test suite, I found no
instances of more-than-two-hex-digits \x escapes either.  Similarly, do you
have any?  As Guido said and all agreed, it's probably a bug if you do.

The new rule is the same as Perl uses for \x escapes in -w mode, except that
Python will raise ValueError at compile-time for an invalid \x escape:  an
\x escape is of the form

    \xhh

where h is a hex digit.  That's it.  Guido reports that the O'Reilly books
(probably due to their Perl editing heritage!) already say Python works this
way.  It's the same rule for 8-bit and Unicode strings (in Perl too, at
least wrt the syntax).  In a Unicode string \xij has the same meaning as
\u00ij, i.e. it's the obvious Latin-1 character.  Playing back the above
pretending the new rule is in place:

>>> "\x123465" # \x12 -> \022, "3465" left alone
'\0223465'
>>> "\x65"
'e'
>>> "\x1"
ValueError
>>> "\x\x"
ValueError
>>>

We all support this:  the open-ended gobbling \x used to do lost information
without warning, and had no benefit whatsoever.  While there was some
attraction to generalizing \x in Unicode strings, \u1234 is already
perfectly adequate for specifying Unicode characters in hex form, and the
new rule for \x at least makes consistent Unicode sense now (and in a way
JPython should be able to adopt easily too).  The new rule gets rid of the
unPythonic TMTOWTDI introduced by generalizing Unicode \x to "the last 4
bytes".  That generalization also didn't make sense in light of the desire
to add \U12345678 escapes too (i.e., so then how many trailing hex digits
should a generalized \x suck up?  2?  4?  8?).  The only actual use for \x
in 8-bit strings (i.e., a way to specify a byte in hex) is still supported
with the same meaning as in 1.5.2, and \x in a Unicode string means
something as close to that as is possible.
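[A quick check of the new rule, runnable on any Python that implements it; since the invalid forms are rejected at compile time, they have to be fed through eval() here:]

```python
# The new rule: \x consumes exactly two hex digits.
assert "\x123465" == "\x12" + "3465"   # only "12" is part of the escape
assert "\x65" == "e"

# Zero or one digit is now a compile-time error (ValueError in 2.0;
# later Pythons report it as a SyntaxError).
for bad in (r'"\x1"', r'"\x\x"'):
    try:
        eval(bad)
    except (ValueError, SyntaxError):
        pass
    else:
        raise AssertionError('%s should not compile' % bad)
```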

Sure feels right to me.  Gripe quick if it doesn't to you.

as-simple-as-possible-is-a-nice-place-to-rest-ly y'rs  - tim





From gstein at lyra.org  Thu Aug  3 10:16:37 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 01:16:37 -0700
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELIGNAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 03, 2000 at 12:52:07AM -0400
References: <14728.63725.390053.65213@anthem.concentric.net> <LNBBLJKPBEHFEDALKOLCCELIGNAA.tim_one@email.msn.com>
Message-ID: <20000803011637.K19525@lyra.org>

On Thu, Aug 03, 2000 at 12:52:07AM -0400, Tim Peters wrote:
> Guys, these are cookies, not brain surgery!  If people like this API,
> couldn't someone have done a clean-room reimplementation of it in less time
> than we've spent jockeying over the freaking license?

No.


-- 
Greg Stein, http://www.lyra.org/



From gstein at lyra.org  Thu Aug  3 10:18:38 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 01:18:38 -0700
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 03, 2000 at 04:05:31AM -0400
References: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <20000803011838.L19525@lyra.org>

On Thu, Aug 03, 2000 at 04:05:31AM -0400, Tim Peters wrote:
>...
> Sure feels right to me.  Gripe quick if it doesn't to you.

+1

-- 
Greg Stein, http://www.lyra.org/



From effbot at telia.com  Thu Aug  3 10:27:39 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 10:27:39 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us>
Message-ID: <006601bffd24$e25a9360$f2a6b5d4@hagrid>

andrew wrote:
>
> >-- SRE is usually faster than the old RE module (PRE).
> 
> Once the compiler is translated to C, it might be worth considering
> making SRE available as a standalone library for use outside of
> Python.

if it will ever be translated, that is...

> Hmm... here's an old problem that's returned (recursion on repeated
> group matches, I expect):
> 
> >>> p=re.compile('(x)*')
> >>> p
> <SRE_Pattern object at 0x8127048>
> >>> p.match(500000*'x')
> Segmentation fault (core dumped)

fwiw, that pattern isn't portable:

$ jpython test.py
File "test.py", line 3, in ?
java.lang.StackOverflowError

and neither is:

def nest(level):
    if level:
        nest(level-1)
nest(500000)

...but sure, I will fix that in 0.9.9 (SRE, not Python -- Christian
has already taken care of the other one ;-).  but 0.9.9 won't be
out before the 1.6b1 release...

(and to avoid scaring the hell out of the beta testers, it's probably
better to leave the test out of the regression suite until the bug is
fixed...)

</F>




From Vladimir.Marangozov at inrialpes.fr  Thu Aug  3 10:44:58 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 3 Aug 2000 10:44:58 +0200 (CEST)
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <20000803011637.K19525@lyra.org> from "Greg Stein" at Aug 03, 2000 01:16:37 AM
Message-ID: <200008030844.KAA12666@python.inrialpes.fr>

Greg Stein wrote:
> 
> On Thu, Aug 03, 2000 at 12:52:07AM -0400, Tim Peters wrote:
> > Guys, these are cookies, not brain surgery!  If people like this API,
> > couldn't someone have done a clean-room reimplementation of it in less time
> > than we've spent jockeying over the freaking license?
> 
> No.


Sorry for asking this, but what "cookies in the core" means to you in
the first place?  A library module.py, C code or both?


PS: I can hardly accept the idea that cookies are necessary for normal
Web usage. I'm not against them, though. IMO, it is important to keep
control on whether they're enabled or disabled.
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From moshez at math.huji.ac.il  Thu Aug  3 10:43:04 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 3 Aug 2000 11:43:04 +0300 (IDT)
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <200008030844.KAA12666@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008031140300.7196-100000@sundial>

On Thu, 3 Aug 2000, Vladimir Marangozov wrote:

> Sorry for asking this, but what "cookies in the core" means to you in
> the first place?  A library module.py, C code or both?

I think Python is good enough for that. (Python is a great language!)
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                        Moshe preaching to the choir

> PS: I can hardly accept the idea that cookies are necessary for normal
> Web usage. I'm not against them, though. IMO, it is important to keep
> control on whether they're enabled or disabled.

Yes, but that all happens client-side -- we were talking server-side
cookies. Cookies are a state-management mechanism for loosely-coupled
protocols, and are almost essential in today's web. Not supporting them
means that Python is not as good a server-side language as it could be.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From Vladimir.Marangozov at inrialpes.fr  Thu Aug  3 11:11:36 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 3 Aug 2000 11:11:36 +0200 (CEST)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021512180.8980-100000@sundial> from "Moshe Zadka" at Aug 02, 2000 03:17:31 PM
Message-ID: <200008030911.LAA12747@python.inrialpes.fr>

Moshe Zadka wrote:
> 
> On Wed, 2 Aug 2000, Vladimir Marangozov wrote:
> 
> > Moshe Zadka wrote:
> > > 
> > > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me
> > 
> > You get a compiled SRE object, right?
> 
> Nope -- I tested it with pre. 

As of yesterday's CVS (I saw AMK checking in an escape patch since then):

~/python/dev>python
Python 2.0b1 (#1, Aug  3 2000, 09:01:35)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> import pre
>>> pre.compile('[\\200-\\400]')
Segmentation fault (core dumped)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From moshez at math.huji.ac.il  Thu Aug  3 11:06:23 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 3 Aug 2000 12:06:23 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <200008030911.LAA12747@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008031206000.7196-100000@sundial>

On Thu, 3 Aug 2000, Vladimir Marangozov wrote:

> Moshe Zadka wrote:
> > 
> > On Wed, 2 Aug 2000, Vladimir Marangozov wrote:
> > 
> > > Moshe Zadka wrote:
> > > > 
> > > > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me
> > > 
> > > You get a compiled SRE object, right?
> > 
> > Nope -- I tested it with pre. 
> 
> As of yesterday's CVS (I saw AMK checking in an escape patch since then):

Hmmmmm....I ought to be more careful then.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Thu Aug  3 11:14:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 11:14:24 +0200
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 03, 2000 at 04:05:31AM -0400
References: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <20000803111424.Z266@xs4all.nl>

On Thu, Aug 03, 2000 at 04:05:31AM -0400, Tim Peters wrote:

> Sure feels right to me.  Gripe quick if it doesn't to you.

+1 if it's a compile-time error, +0 if it isn't and won't be made one. The
compile-time error makes it a lot easier to track down the issues, if any.
(Okay, so everyone should have proper unit testing -- not everyone actually
has it ;)

I suspect it would be a compile-time error, but I haven't looked at
compiling string literals yet ;P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Thu Aug  3 11:17:46 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 11:17:46 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>
Message-ID: <398938BA.CA54A98E@lemburg.com>

> 
> searching for literal text:
> 
> searching for "spam" in a string padded with "spaz" (1000 bytes on
> each side of the target):
> 
> string.find     0.112 ms
> sre8.search     0.059
> pre.search      0.122
> 
> unicode.find    0.130
> sre16.search    0.065
> 
> (yes, regular expressions can run faster than optimized C code -- as
> long as we don't take compilation time into account ;-)
> 
> same test, without any false matches:
> 
> string.find     0.035 ms
> sre8.search     0.050
> pre.search      0.116
> 
> unicode.find    0.031
> sre16.search    0.055

Those results are probably due to the fact that string.find
does a brute-force search. If it did a last-match-char-first
search or even Boyer-Moore (which only pays off for long
search targets), it should be a lot faster than [s|p]re.
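For illustration, the last-match-char-first scan with skip table (Boyer-Moore-Horspool flavour) can be sketched like this -- a toy version, not mxTextTools' or anyone's actual implementation:

```python
def horspool_find(text, pat):
    """Boyer-Moore-Horspool sketch: compare the pattern's last character
    first and skip ahead using a bad-character table.  Returns the index
    of the first match, or -1 (same convention as string.find)."""
    m, n = len(pat), len(text)
    if m == 0:
        return 0
    # how far to shift when text[i] mismatches, keyed by character
    skip = {c: m - k - 1 for k, c in enumerate(pat[:-1])}
    i = m - 1
    while i < n:
        if text[i] == pat[-1] and text[i - m + 1:i + 1] == pat:
            return i - m + 1
        i += skip.get(text[i], m)  # unseen character: jump a full pattern
    return -1

# the benchmark's "many false matches" case: "spaz" padding, then "spam"
assert horspool_find("spaz" * 10 + "spam", "spam") == 40
# the "no false matches" case: most text positions are skipped in one hop
assert horspool_find("-" * 100, "spam") == -1
```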

Just for comparison: would you mind running the search 
routines in mxTextTools on the same machine ?

import TextTools
TextTools.find(text, what)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Thu Aug  3 11:55:57 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 11:55:57 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <020901bffccb$b4bf4da0$060210ac@private> <200008022318.SAA04558@cj20424-a.reston1.va.home.com>
Message-ID: <398941AD.F47CA1C1@lemburg.com>

Guido van Rossum wrote:
> 
> > Change the init function name to a new name PythonExtensionInit_ say.
> > Pass in the API version for the extension writer to check. If the
> > version is bad for this extension returns without calling any python
> > functions. Add a return code that is true if compatible, false if not.
> > If compatible the extension can use python functions and report and
> > problems it wishes.
> >
> > int PythonExtensionInit_XXX( int invoking_python_api_version )
> >       {
> >       if( invoking_python_api_version != PYTHON_API_VERSION )
> >               {
> >               /* python will report that the module is incompatible */
> >               return 0;
> >               }
> >
> >       /* setup module for XXX ... */
> >
> >       /* say this extension is compatible with the invoking python */
> >       return 1;
> >       }
> >
> > All 1.5 extensions fail to load on later python 2.0 and later.
> > All 2.0 extensions fail to load on python 1.5.
> >
> > All new extensions work only with python of the same API version.
> >
> > Document that failure to setup a module could mean the extension is
> > incompatible with this version of python.
> >
> > Small code change in python core. But need to tell extension writers
> > what the new interface is and update all extensions within the python
> > CVS tree.
> 
> I sort-of like this idea -- at least at the +0 level.

I sort of dislike the idea ;-)

It introduces needless work for hundreds of extension writers
and effectively prevents binary compatibility across future
versions of Python: not all platforms have the problems of the
Windows platform, and extensions which were compiled against a
different API version may very well still work with the
new Python version -- e.g. the dynamic loader on Linux is
perfectly capable of linking the new Python version against
an extension compiled for the previous Python version.

If all this is really necessary, I'd at least suggest adding macros
emulating the old Py_InitModule() APIs, so that extension writers
don't have to edit their code just to get it recompiled.

BTW, the subject line doesn't have anything to do with the
proposed solutions in this thread... they all crash Python
or the extensions in some way, some nicer, some not so nice ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Thu Aug  3 11:57:22 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 05:57:22 -0400
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <20000803111424.Z266@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMBGNAA.tim_one@email.msn.com>

[Thomas Wouters]
> +1 if it's a compile-time error, +0 if it isn't and won't be
> made one. ...

Quoting back from the original msg:

> ... will raise ValueError at compile-time for an invalid \x escape
                            ^^^^^^^^^^^^^^^

The pseudo-example was taken from a pseudo-interactive prompt, and just as
in a real example at a real interactive prompt, each (pseudo)input line was
(pseudo)compiled one at a time <wink>.





From mal at lemburg.com  Thu Aug  3 12:04:53 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 12:04:53 +0200
Subject: [Python-Dev] Fork on Win32 - was (test_fork1 failing...)
References: <Pine.WNT.4.21.0008021734040.980-100000@loom>
Message-ID: <398943C4.AFECEE36@lemburg.com>

David Ascher wrote:
> 
> >    IIRC ActiveState contributed to Perl a version of fork that works on
> > Win32. Has anyone looked at this? Could it be grabbed for Python? This would
> > help heal one of the more difficult platform rifts. Emulating fork for Win32
> > looks quite difficult to me but if its already done...
> 
> Sigh. Me tired.
> 
> The message I posted a few minutes ago was actually referring to the
> system() work, not the fork() work.  I agree that the fork() emulation
> isn't Pythonic.

What about porting os.kill() to Windows (see my other post
with changed subject line in this thread) ? Wouldn't that
make sense ? (The os.spawn() APIs do return PIDs of spawned
processes, so calling os.kill() to send signals to them
seems like a feasible way to control them.)
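A rough cross-platform sketch of what such an os.kill() port might look like (the win32api/win32con names are from Mark Hammond's Win32 extensions and are assumptions here; Windows has no real signal delivery, so only hard termination is sketched):

```python
import os
import sys

def kill(pid, sig):
    """Send a signal to a process by PID.  On Windows there is no true
    signal delivery, so anything but a liveness probe (sig == 0) is
    approximated by terminating the process outright."""
    if sys.platform == "win32":
        # assumed API: Mark Hammond's Win32 extensions
        import win32api, win32con
        handle = win32api.OpenProcess(win32con.PROCESS_TERMINATE, 0, pid)
        if sig != 0:
            win32api.TerminateProcess(handle, sig)
        win32api.CloseHandle(handle)
    else:
        os.kill(pid, sig)

# signal 0 only checks that the process exists; works on both branches
kill(os.getpid(), 0)
```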

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Thu Aug  3 12:11:24 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 12:11:24 +0200
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" 
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net>
Message-ID: <3989454C.5C9EF39B@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "GM" == Gareth McCaughan <Gareth.McCaughan at pobox.com> writes:
> 
>     GM> Consider the following piece of code, which takes a file
>     GM> and prepares a concordance saying on which lines each word
>     GM> in the file appears. (For real use it would need to be
>     GM> made more sophisticated.)
> 
>     |     line_number = 0
>     |     for line in open(filename).readlines():
>     |       line_number = line_number+1
>     |       for word in map(string.lower, string.split(line)):
>     |         existing_lines = word2lines.get(word, [])   |
>     |         existing_lines.append(line_number)          | ugh!
>     |         word2lines[word] = existing_lines           |
> 
> I've run into this same situation many times myself.  I agree it's
> annoying.  Annoying enough to warrant a change?  Maybe -- I'm not
> sure.
> 
>     GM> I suggest a minor change: another optional argument to
>     GM> "get" so that
> 
>     GM>     dict.get(item,default,flag)
> 
> Good idea, not so good solution.  Let's make it more explicit by
> adding a new method instead of a flag.  I'll use `put' here since this
> seems (in a sense) opposite of get() and my sleep addled brain can't
> think of anything more clever.  Let's not argue about the name of this
> method though -- if Guido likes the extension, he'll pick a good name
> and I go on record as agreeing with his name choice, just to avoid a
> protracted war.
> 
> A trivial patch to UserDict (see below) will let you play with this.
> 
> >>> d = UserDict()
> >>> word = 'hello'
> >>> d.get(word, [])
> []
> >>> d.put(word, []).append('world')
> >>> d.get(word)
> ['world']
> >>> d.put(word, []).append('gareth')
> >>> d.get(word)
> ['world', 'gareth']
> 
> Shouldn't be too hard to add equivalent C code to the dictionary
> object.

The following one-liner already does what you want:

	d[word] = d.get(word, []).append('world')

... and it's in no way more readable than your proposed
.put() line ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From m.favas at per.dem.csiro.au  Thu Aug  3 12:54:05 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 03 Aug 2000 18:54:05 +0800
Subject: [Python-Dev] (s)re crashing in regrtest (was SRE 0.9.8 benchmarks)
Message-ID: <39894F4D.FB11F098@per.dem.csiro.au>

[Guido]
>> Hmm... here's an old problem that's returned (recursion on repeated
>> group matches, I expect):
>> 
>> >>> p=re.compile('(x)*')
>> >>> p
>> <SRE_Pattern object at 0x8127048>
>> >>> p.match(500000*'x')
>> Segmentation fault (core dumped)
>
>Ouch.
>
>Andrew, would you mind adding a test case for that to the re test
>suite?  It's important that this doesn't come back!

In fact, on my machine with the default stacksize of 2048kb, test_re.py
already exercises this bug. (Goes away if I do an "unlimit", of course.)
So testing for this deterministically is always going to be dependent on
the platform. How large do you want to go (reasonably)? - although I
guess core dumps should be avoided...
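For reference, the "unlimit" above corresponds to raising the process stack rlimit; from Python on Unix the same knob can be inspected (and, permissions allowing, raised) with the resource module -- a sketch, not part of the test suite:

```python
import resource

# current soft/hard stack-size limits, in bytes
# (resource.RLIM_INFINITY means "unlimited")
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

# roughly what the shell's "unlimit" does: push the soft limit up to
# the hard limit before exercising a deeply recursive pattern
try:
    resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))
except (ValueError, OSError):
    pass  # not permitted in this environment; leave the limit alone
```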

Mark

-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913



From effbot at telia.com  Thu Aug  3 13:10:24 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 13:10:24 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <398938BA.CA54A98E@lemburg.com>
Message-ID: <00eb01bffd3b$8324fb80$f2a6b5d4@hagrid>

mal wrote:

> Just for compares: would you mind running the search 
> routines in mxTextTools on the same machine ?

> > searching for "spam" in a string padded with "spaz" (1000 bytes on
> > each side of the target):
> > 
> > string.find     0.112 ms

texttools.find    0.080 ms

> > sre8.search     0.059
> > pre.search      0.122
> > 
> > unicode.find    0.130
> > sre16.search    0.065
> > 
> > same test, without any false matches (padded with "-"):
> > 
> > string.find     0.035 ms

texttools.find    0.083 ms

> > sre8.search     0.050
> > pre.search      0.116
> > 
> > unicode.find    0.031
> > sre16.search    0.055
> 
> Those results are probably due to the fact that string.find
> does a brute force search. If it would do a last match char
> first search or even Boyer-Moore (this only pays off for long
> search targets) then it should be a lot faster than [s|p]re.

does the TextTools algorithm work with arbitrary character
set sizes, btw?

</F>




From effbot at telia.com  Thu Aug  3 13:25:45 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 13:25:45 +0200
Subject: [Python-Dev] (s)re crashing in regrtest (was SRE 0.9.8 benchmarks)
References: <39894F4D.FB11F098@per.dem.csiro.au>
Message-ID: <00fc01bffd3d$91a36460$f2a6b5d4@hagrid>

mark favas wrote:
> >> >>> p.match(500000*'x')
> >> Segmentation fault (core dumped)
> >
> >Andrew, would you mind adding a test case for that to the re test
> >suite?  It's important that this doesn't come back!
> 
> In fact, on my machine with the default stacksize of 2048kb, test_re.py
> already exercises this bug. (Goes away if I do an "unlimit", of course.)
> So testing for this deterministically is always going to be dependent on
> the platform. How large do you want to go (reasonably)? - although I
> guess core dumps should be avoided...

afaik, there was no test in the standard test suite that
included run-away recursion...

what test is causing this error?

(adding a print statement to sre._compile should help you
figure that out...)

</F>




From MarkH at ActiveState.com  Thu Aug  3 13:19:50 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 3 Aug 2000 21:19:50 +1000
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1 failing...)
In-Reply-To: <398943C4.AFECEE36@lemburg.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBAEBGDDAA.MarkH@ActiveState.com>

> What about porting os.kill() to Windows (see my other post
> with changed subject line in this thread) ? Wouldn't that
> make sense ? (the os.spawn() APIs do return PIDs of spawned
> processes, so calling os.kill() to send signals to these
> seems like a feasable way to control them)

Signals are a bit of a problem on Windows.  We can terminate the thread
mid-execution, but a clean way of terminating a thread isn't obvious.

I admit I didn't really read the long manpage when you posted it, but is a
terminate-without-prejudice option any good?

Mark.




From MarkH at ActiveState.com  Thu Aug  3 13:34:09 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 3 Aug 2000 21:34:09 +1000
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1 failing...)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBAEBGDDAA.MarkH@ActiveState.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEBGDDAA.MarkH@ActiveState.com>

eek - a bit quick off the mark here ;-]

> Signals are a bit of a problem on Windows.  We can terminate the thread
> mid-execution, but a clean way of terminating a thread isn't obvious.

thread = process - you get the idea!

> terminate-without-prejudice option any good?

really should say

> terminate-without-prejudice only version any good?

Mark.




From m.favas at per.dem.csiro.au  Thu Aug  3 13:35:48 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 03 Aug 2000 19:35:48 +0800
Subject: [Python-Dev] (s)re crashing in regrtest (was SRE 0.9.8 benchmarks)
References: <39894F4D.FB11F098@per.dem.csiro.au> <00fc01bffd3d$91a36460$f2a6b5d4@hagrid>
Message-ID: <39895914.133D52A4@per.dem.csiro.au>

Fredrik Lundh wrote:
> 
> mark favas wrote:
> > In fact, on my machine with the default stacksize of 2048kb, test_re.py
> > already exercises this bug.> 
> afaik, there was no test in the standard test suite that
> included run-away recursion...
> 
> what test is causing this error?
> 
> (adding a print statement to sre._compile should help you
> figure that out...)
> 
> </F>

The stack overflow is caused by the test (in test_re.py):

# Try nasty case that overflows the straightforward recursive
# implementation of repeated groups.
assert re.match('(x)*', 50000*'x').span() == (0, 50000)

(changing 50000 to 18000 works, 19000 overflows...)

-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913



From guido at beopen.com  Thu Aug  3 14:56:38 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 07:56:38 -0500
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: Your message of "Thu, 03 Aug 2000 12:11:24 +0200."
             <3989454C.5C9EF39B@lemburg.com> 
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net>  
            <3989454C.5C9EF39B@lemburg.com> 
Message-ID: <200008031256.HAA06107@cj20424-a.reston1.va.home.com>

> "Barry A. Warsaw" wrote:
> > Good idea, not so good solution.  Let's make it more explicit by
> > adding a new method instead of a flag.

You're learning to channel me. :-)

> > I'll use `put' here since this
> > seems (in a sense) opposite of get() and my sleep addled brain can't
> > think of anything more clever.  Let's not argue about the name of this
> > method though -- if Guido likes the extension, he'll pick a good name
> > and I go on record as agreeing with his name choice, just to avoid a
> > protracted war.

But I'll need input.  My own idea was dict.getput(), but that's ugly
as hell; dict.put() doesn't suggest that it also returns the value.

Protocol: if you have a suggestion for a name for this function, mail
it to me.  DON'T MAIL THE LIST.  (If you mail it to the list, that
name is disqualified.)  Don't explain to me why the name is good -- if
it's good, I'll know; if it needs an explanation, it's not good.  From
the suggestions I'll pick one if I can, and the first person to
suggest it gets a special mention in the implementation.  If I can't
decide, I'll ask the PythonLabs folks to help.

Marc-Andre writes:
> The following one-liner already does what you want:
> 
> 	d[word] = d.get(word, []).append('world')

Are you using a patch to the list object so that append() returns the
list itself?  Or was it just late?  For me, this makes d[word] = None.
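A minimal demonstration of the pitfall:

```python
d = {}
word = "hello"

# the one-liner: append() mutates the list in place but returns None,
# so None is what ends up stored under the key
d[word] = d.get(word, []).append("world")
assert d[word] is None

# getting the intended effect takes the two-step idiom from
# Gareth's original example
d2 = {}
existing = d2.get(word, [])
existing.append("world")
d2[word] = existing
assert d2[word] == ["world"]
```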

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From akuchlin at cnri.reston.va.us  Thu Aug  3 14:06:49 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 3 Aug 2000 08:06:49 -0400
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <002601bffd21$5f23e800$f2a6b5d4@hagrid>; from effbot@telia.com on Thu, Aug 03, 2000 at 10:03:53AM +0200
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com> <20000802224102.A25837@newcnri.cnri.reston.va.us> <002601bffd21$5f23e800$f2a6b5d4@hagrid>
Message-ID: <20000803080649.A27333@newcnri.cnri.reston.va.us>

On Thu, Aug 03, 2000 at 10:03:53AM +0200, Fredrik Lundh wrote:
>readers of the daily URL might have noticed that he posted
>a socket timeout wrapper a few days ago:

Noted; thanks!  I've sent him an e-mail...

--amk



From akuchlin at cnri.reston.va.us  Thu Aug  3 14:14:56 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 3 Aug 2000 08:14:56 -0400
Subject: [Python-Dev] (s)re crashing in regrtest
In-Reply-To: <39895914.133D52A4@per.dem.csiro.au>; from m.favas@per.dem.csiro.au on Thu, Aug 03, 2000 at 07:35:48PM +0800
References: <39894F4D.FB11F098@per.dem.csiro.au> <00fc01bffd3d$91a36460$f2a6b5d4@hagrid> <39895914.133D52A4@per.dem.csiro.au>
Message-ID: <20000803081456.B27333@newcnri.cnri.reston.va.us>

On Thu, Aug 03, 2000 at 07:35:48PM +0800, Mark Favas wrote:
>The stack overflow is caused by the test (in test_re.py):
># Try nasty case that overflows the straightforward recursive
># implementation of repeated groups.

That would be the test I added last night to trip this problem, per
GvR's instructions.  I'll comment out the test for now, so that it can
be restored once the bug is fixed.

--amk



From mal at lemburg.com  Thu Aug  3 14:14:55 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 14:14:55 +0200
Subject: [Python-Dev] Still no new license -- but draft text available
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com> <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com>  
	            <398858BE.15928F47@lemburg.com> <200008022218.RAA04178@cj20424-a.reston1.va.home.com>
Message-ID: <3989623F.2AB4C00C@lemburg.com>

Guido van Rossum wrote:
>
> [...]
>
> > > > Some comments on the new version:
> > >
> > > > > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > > > > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > > > > license to reproduce, analyze, test, perform and/or display publicly,
> > > > > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > > > > alone or in any derivative version, provided, however, that CNRI's
> > > > > License Agreement is retained in Python 1.6b1, alone or in any
> > > > > derivative version prepared by Licensee.
> > > >
> > > > I don't think the latter (retaining the CNRI license alone) is
> > > > possible: you always have to include the CWI license.
> > >
> > > Wow.  I hadn't even noticed this!  It seems you can prepare a
> > > derivative version of the license.  Well, maybe.
> >
> > I think they mean "derivative version of Python 1.6b1", but in
> > court, the above wording could cause serious trouble for CNRI
> 
> You're right of course, I misunderstood you *and* the license.  Kahn
> explains it this way:
> 
> [Kahn]
> | Ok. I take the point being made. The way english works with ellipsis or
> | anaphoric references is to link back to the last anchor point. In the above
> | case, the last referent is Python 1.6b1.
> |
> | Thus, the last phrase refers to a derivative version of Python1.6b1
> | prepared by Licensee. There is no permission given to make a derivative
> | version of the License.
>
> > ... it seems 2.0 can reuse the CWI license after all ;-)
> 
> I'm not sure why you think that: 2.0 is a derivative version and is
> thus bound by the CNRI license as well as by the license that BeOpen
> adds.

If you interpret the above wording in the sense of "preparing
a derivative version of the License Agreement", BeOpen (or
anyone else) could just remove the CNRI License text. I
understand that this is not intended (that's why I put the smiley
there ;-).

> [...] 
>
> > > > > 3. In the event Licensee prepares a derivative work that is based on
> > > > > or incorporates Python 1.6b1or any part thereof, and wants to make the
> > > > > derivative work available to the public as provided herein, then
> > > > > Licensee hereby agrees to indicate in any such work the nature of the
> > > > > modifications made to Python 1.6b1.
> > > >
> > > > In what way would those indications have to be made ? A patch
> > > > or just text describing the new features ?
> > >
> > > Just text.  Bob Kahn told me that the list of "what's new" that I
> > > always add to a release would be fine.
> >
> > Ok, should be made explicit in the license though...
> 
> It's hard to specify this precisely -- in fact, the more precise you
> specify it the more scary it looks and the more likely they are to be
> able to find fault with the details of how you do it.  In this case, I
> believe (and so do lawyers) that vague is good!  If you write "ported
> to the Macintosh" and that's what you did, they can hardly argue with
> you, can they?

True.
 
> > > > What does "make available to the public" mean ? If I embed
> > > > Python in an application and make this application available
> > > > on the Internet for download would this fit the meaning ?
> > >
> > > Yes, that's why he doesn't use the word "publish" -- such an action
> > > would not be considered publication in the sense of the copyright law
> > > (at least not in the US, and probably not according to the Bern
> > > convention) but it is clearly making it available to the public.
> >
> > Ouch. That would mean I'd have to describe all additions,
> > i.e. the embedding application, in most details in order not to
> > breach the terms of the CNRI license.
> 
> No, additional modules aren't modifications to CNRI's work.  A change
> to the syntax to support curly braces is.

Ok, thanks for clarifying this.

(I guess the "vague is good" argument fits here as well.)
 
> > > > > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > > > > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > > > > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > > > > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > > > > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > > > > INFRINGE ANY THIRD PARTY RIGHTS.
> > > > >
> > > > > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > > > > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > > > > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > > > > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > > > > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > > > > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> > > >
> > > > I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> > > > Germany the above text would only be valid after an initial
> > > > 6 month period after installation, AFAIK (this period is
> > > > called "Gewährleistung"). Licenses from other vendors usually
> > > > add some extra license text to limit the liability in this period
> > > > to the carrier on which the software was received by the licensee,
> > > > e.g. the diskettes or CDs.
> > >
> > > I'll mention this to Kahn.
> 
> His response:
> 
> | Guido, I'm not willing to do a study of international law here. If you
> | can have the person identify one country other than the US that does
> | not allow the above limitation or exclusion of liability and provide a
> | copy of the section of their law, I'll be happy to change this to read
> | ".... SOME STATES OR COUNTRIES MAY NOT ALLOW ...." Otherwise, I'd just
> | leave it alone (i.e. as is) for now.
> 
> Please mail this info directly to Kahn at CNRI.Reston.Va.US if you
> believe you have the right information.  (You may CC me.)  Personally,
> I wouldn't worry.  If the German law says that part of a license is
> illegal, it doesn't make it any more or less illegal whether the
> license warns you about this fact.
> 
> I believe that in the US, as a form of consumer protection, some
> states not only disallow general disclaimers, but also require that
> licenses containing such disclaimers notify the reader that the
> disclaimer is not valid in their state, so that's where the language
> comes from.  I don't know about German law.

I haven't found an English version of the German law text,
but this is the title of the law which handles German
business conditions:

"Gesetz zur Regelung des Rechts der Allgemeinen Gesch?ftsbedingungen
AGBG) - Act Governing Standard Business Conditions"
 
The relevant paragraph is no. 11 (10).

I'm not a lawyer, but from what I know:
terms generally excluding liability are invalid; liability
may be limited during the first 6 months after license
agreement and excluded after this initial period.

Anyway, you're right in that the notice about the paragraph
not necessarily applying to the licensee only has informational
character and that it doesn't do any harm otherwise.

> > > > > 6. This License Agreement will automatically terminate upon a material
> > > > > breach of its terms and conditions.
> > > >
> > > > Immediately ? Other licenses usually include a 30-60 day period
> > > > which allows the licensee to take actions. With the above text,
> > > > the license will put the Python copy in question into an illegal
> > > > state *prior* to having even been identified as conflicting with the
> > > > license.
> > >
> > > Believe it or not, this is necessary to ensure GPL compatibility!  An
> > > earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
> > > incompatible.  There's an easy workaround though: you fix your
> > > compliance and download a new copy, which gives you all the same
> > > rights again.
> >
> > Hmm, but what about the 100,000 copies of the embedding application
> > that have already been downloaded -- I would have to force them
> > to redownload the application (or even just a demo of it) in
> > order to reestablish the lawfulness of the copy action.
> 
> It's better not to violate the license.  But do you really think that
> they would go after you immediately if you show good intentions to
> rectify?

I don't intend to violate the license, but customers of 
an application embedding Python will have to agree to the
Python license to be able to legally use the Python engine
embedded in the application -- that is: if the application
unintentionally fails to meet the CNRI license terms,
then the application as a whole immediately becomes
unusable by the customer.

Now just think of an eCommerce application which produces
some $100k USD revenue each day... such a customer wouldn't
like these license terms at all :-(

BTW, I think that section 6. can be removed altogether, if
it doesn't include any reference to such a 30-60 day period:
the permissions set forth in a license are only valid in case
the license terms are adhered to whether it includes such
a section or not.

> > Not that I want to violate the license in any way, but there
> > seem to be quite a few pitfalls in the present text, some of
> > which are not clear at all (e.g. the paragraph 3).
> 
> I've warned Kahn about this effect of making the license bigger, but
> he simply disagrees (and we agree to disagree).  I don't know what
> else I could do about it, apart from putting a FAQ about the license
> on python.org -- which I intend to do.

Good (or bad ? :-()
 
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From akuchlin at cnri.reston.va.us  Thu Aug  3 14:22:44 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 3 Aug 2000 08:22:44 -0400
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: <006601bffd24$e25a9360$f2a6b5d4@hagrid>; from effbot@telia.com on Thu, Aug 03, 2000 at 10:27:39AM +0200
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us> <006601bffd24$e25a9360$f2a6b5d4@hagrid>
Message-ID: <20000803082244.C27333@newcnri.cnri.reston.va.us>

On Thu, Aug 03, 2000 at 10:27:39AM +0200, Fredrik Lundh wrote:
>if it will ever be translated, that is...

I'll agree to take a shot at it (which carries no implication of
actually finishing :) ) post-2.0.  It's silly for all of Tcl, Python,
Perl to grow their own implementations, when a common implementation
could benefit from having 3x the number of eyes looking at it and
optimizing it.

>fwiw, that pattern isn't portable:

No, it isn't; the straightforward implementation of repeated groups is
recursive, and fixing this requires jumping through hoops to make it
nonrecursive (or adopting Python's solution and only recursing up to
some upper limit).  re had to get this right because regex didn't
crash on this pattern, and neither do recent Perls.  The vast bulk of
my patches to PCRE were to fix this problem.
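As a toy illustration of the problem (not sre's actual matcher), a naive recursive match of (x)* uses one stack frame per repetition and blows the stack on long inputs, while the equivalent iterative form runs in constant stack depth:

```python
def star_match_recursive(s, i=0):
    """Naive recursive match of the pattern (x)* -- one stack frame per
    repetition, like the straightforward C implementation."""
    if i < len(s) and s[i] == "x":
        return star_match_recursive(s, i + 1)
    return i  # length of the match

def star_match_iterative(s):
    """The same match with an explicit loop -- constant stack depth."""
    i = 0
    while i < len(s) and s[i] == "x":
        i += 1
    return i

assert star_match_iterative("x" * 500000) == 500000
try:
    star_match_recursive("x" * 500000)
except RecursionError:  # Python raises here instead of dumping core
    pass
```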

--amk



From guido at beopen.com  Thu Aug  3 15:31:16 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 08:31:16 -0500
Subject: [Python-Dev] New SRE core dump (was: SRE 0.9.8 benchmarks)
In-Reply-To: Your message of "Thu, 03 Aug 2000 10:27:39 +0200."
             <006601bffd24$e25a9360$f2a6b5d4@hagrid> 
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us>  
            <006601bffd24$e25a9360$f2a6b5d4@hagrid> 
Message-ID: <200008031331.IAA06319@cj20424-a.reston1.va.home.com>

> andrew wrote:
> 
> > Hmm... here's an old problem that's returned (recursion on repeated
> > group matches, I expect):
> > 
> > >>> p=re.compile('(x)*')
> > >>> p
> > <SRE_Pattern object at 0x8127048>
> > >>> p.match(500000*'x')
> > Segmentation fault (core dumped)

Effbot:
> fwiw, that pattern isn't portable:

Who cares -- it shouldn't dump core!

> ...but sure, I will fix that in 0.9.9 (SRE, not Python -- Christian
> has already taken care of the other one ;-).  but 0.9.9 won't be
> out before the 1.6b1 release...

I assume you are planning to put the backtracking stack back in, as
you mentioned in the checkin message?

> (and to avoid scaring the hell out of the beta testers, it's probably
> better to leave the test out of the regression suite until the bug is
> fixed...)

Even better, is it possible to put a limit on the recursion level
before 1.6b1 is released (tomorrow if we get final agreement on the
license) so at least it won't dump core?  Otherwise you'll get reports
of this from people who write this by accident...
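[Editorial note: the recursion cap Guido asks for can be sketched in Python. This is a hypothetical illustration -- the names (MAX_DEPTH, match_repeat, RecursionLimit) are made up, and SRE's actual matcher is C code -- but it shows the shape of the fix: fail cleanly instead of blowing the stack.]

```python
MAX_DEPTH = 500  # illustrative limit, not SRE's actual value

class RecursionLimit(Exception):
    """Raised instead of overflowing the C stack."""

def match_repeat(text, pos, depth=0):
    # Each repetition of a group like (x)* recurses once; capping the
    # depth makes p.match(500000*'x') raise a catchable error instead
    # of dumping core.
    if depth > MAX_DEPTH:
        raise RecursionLimit("pattern recurses too deeply")
    if pos < len(text) and text[pos] == 'x':
        return match_repeat(text, pos + 1, depth + 1)
    return pos
```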

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Thu Aug  3 14:57:14 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 14:57:14 +0200
Subject: [Python-Dev] Buglist
Message-ID: <20000803145714.B266@xs4all.nl>

Just a little FYI and 'is this okay' message; I've been browsing the buglist
the last few days, doing a quick mark & message sweep over the bugs that I
can understand. I've mostly been closing bugs that look closed, and
assigning them when it's very obvious who it should be assigned to.

Should I be doing this already ? Is the bug-importing 'done', or is Jeremy
still busy with importing and fixing bug status (stati ?) and such ? Is
there something better to use as a guideline than my 'best judgement' ? I
think it's a good idea to 'resolve' most of the bugs on the list, because a
lot of them are really non-issues or no-longer-issues, and the sheer size of
the list prohibits a proper overview of the real issues :P However, it's
entirely possible we're going to miss out on a few bugs this way. I'm trying
my best to be careful, but I think overlooking a few bugs is better than
overlooking all of them because of the size of the list :P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Thu Aug  3 15:05:07 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Thu, 3 Aug 2000 09:05:07 -0400
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <1246814587-81974994@hypernet.com>

[Tim sez]
> The new rule is ...
>...  an \x escape is of the form
> 
>     \xhh
> 
> where h is a hex digit.  That's it.  

> >>> "\x123465" # \x12 -> \022, "3465" left alone
> '\0223465'

Hooray! I got bit often enough by that one ('e') that I forced 
myself to always use the wholly unnatural octal.
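[Editorial note: the new rule is easy to check at the prompt; later Pythons behave the same way for valid escapes.]

```python
# \x consumes exactly two hex digits; everything after them is
# literal text, so "\x123465" is five characters long.
s = "\x123465"
assert s == chr(0x12) + "3465"   # \x12 == '\022'
assert len(s) == 5
```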

god-gave-us-sixteen-fingers-for-a-reason-ly y'rs


- Gordon



From fdrake at beopen.com  Thu Aug  3 15:06:51 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 09:06:51 -0400 (EDT)
Subject: [Python-Dev] printing xrange objects
Message-ID: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>

  At various points, there have been comments that xrange objects
should not print as lists but as xrange objects.  Taking a look at the
implementation, I noticed that if you call repr() (by name or by
backtick syntax), you get "the right thing"; the list representation
comes up when you print the object on a real file object.  The
tp_print slot of the xrange type produces the list syntax.  There is
no tp_str handler, so str(xrange(...)) is the same as
repr(xrange(...)).
  I propose ripping out the tp_print handler completely.  (And I've
already tested my patch. ;)
  Comments?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From moshez at math.huji.ac.il  Thu Aug  3 15:09:40 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 3 Aug 2000 16:09:40 +0300 (IDT)
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008031609130.26290-100000@sundial>

On Thu, 3 Aug 2000, Fred L. Drake, Jr. wrote:

>   I propose ripping out the tp_print handler completely.  (And I've
> already tested my patch. ;)
>   Comments?

+1. Like I always say: less code, less bugs.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From mal at lemburg.com  Thu Aug  3 15:31:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 15:31:34 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <398938BA.CA54A98E@lemburg.com> <00eb01bffd3b$8324fb80$f2a6b5d4@hagrid>
Message-ID: <39897436.E42F1C3C@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> 
> > Just for compares: would you mind running the search
> > routines in mxTextTools on the same machine ?
> 
> > > searching for "spam" in a string padded with "spaz" (1000 bytes on
> > > each side of the target):
> > >
> > > string.find     0.112 ms
> 
> texttools.find    0.080 ms
> 
> > > sre8.search     0.059
> > > pre.search      0.122
> > >
> > > unicode.find    0.130
> > > sre16.search    0.065
> > >
> > > same test, without any false matches (padded with "-"):
> > >
> > > string.find     0.035 ms
> 
> texttools.find    0.083 ms
> 
> > > sre8.search     0.050
> > > pre.search      0.116
> > >
> > > unicode.find    0.031
> > > sre16.search    0.055
> >
> > Those results are probably due to the fact that string.find
> > does a brute force search. If it would do a last match char
> > first search or even Boyer-Moore (this only pays off for long
> > search targets) then it should be a lot faster than [s|p]re.
> 
> does the TextTools algorithm work with arbitrary character
> set sizes, btw?

The find function creates a Boyer-Moore search object
for the search string (on every call). It compares either 1-1
or via a translation table which is applied
to the searched text prior to comparing it to the search
string (this enables things like case-insensitive
search and character sets, but is about 45% slower). Real-life
usage would be to create the search objects once per process
and then reuse them. The Boyer-Moore table calculation takes
some time...
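[Editorial note: the create-once-then-reuse pattern can be illustrated with a minimal Boyer-Moore-Horspool search object. This is a sketch, not mxTextTools' actual C implementation; building the skip table is the costly part, so one object can serve many find() calls.]

```python
class BMHSearch:
    """Minimal Boyer-Moore-Horspool search object: build the skip
    table once in the constructor, then reuse it across searches."""

    def __init__(self, pattern):
        self.pattern = pattern
        m = len(pattern)
        # Characters occurring in pattern[:-1] get a skip based on
        # their rightmost position; all others skip the full length.
        self.skip = {pattern[i]: m - 1 - i for i in range(m - 1)}
        self.default = m

    def find(self, text):
        m, p = len(self.pattern), self.pattern
        i = m - 1
        while i < len(text):
            if text[i - m + 1:i + 1] == p:
                return i - m + 1
            # Shift by the skip value of the last character in the
            # current window (the Horspool simplification).
            i += self.skip.get(text[i], self.default)
        return -1
```

A usage matching the benchmark above would be searching for "spam" in a string padded with "spaz" on each side.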

But to answer your question: mxTextTools is 8-bit throughout.
A Unicode aware version will follow by the end of this year.

Thanks for checking,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Thu Aug  3 15:40:05 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 15:40:05 +0200
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1 
 failing...)
References: <ECEPKNMJLHAPFFJHDOJBMEBGDDAA.MarkH@ActiveState.com>
Message-ID: <39897635.6C9FB82D@lemburg.com>

Mark Hammond wrote:
> 
> eek - a bit quick off the mark here ;-]
> 
> > Signals are a bit of a problem on Windows.  We can terminate the thread
> > mid-execution, but a clean way of terminating a thread isn't obvious.
> 
> thread = process - you get the idea!
> 
> > terminate-without-prejudice option any good?
> 
> really should say
> 
> > terminate-without-prejudice only version any good?

Well for one you can use signals for many other things than
just terminating a process (e.g. to have it reload its configuration
files). That's why os.kill() allows you to specify a signal.

The usual way of terminating a process on Unix from the outside
is to send it a SIGTERM (and if that doesn't work a SIGKILL).
I use this strategy a lot to control runaway client processes
and safely shut them down:

On Unix you can install a signal
handler in the Python program which translates the SIGTERM
signal into a normal Python exception. Sending the signal then
has the same effect as e.g. hitting Ctrl-C in a program: an
exception is raised asynchronously, but it can be handled
properly by Python exception clauses to enable a safe
shutdown of the process.
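[Editorial note: the SIGTERM-to-exception translation described here is a few lines of Python. A minimal Unix-only sketch follows -- the exception name is made up, and os.kill to our own pid stands in for the watchdog; the lack of an equivalent mechanism on Windows is exactly the problem being raised.]

```python
import os
import signal

class ShutdownRequest(Exception):
    """Raised when a SIGTERM arrives (hypothetical name)."""

def _on_sigterm(signum, frame):
    # Translate the asynchronous signal into a normal Python
    # exception, just as Ctrl-C raises KeyboardInterrupt.
    raise ShutdownRequest

signal.signal(signal.SIGTERM, _on_sigterm)

try:
    os.kill(os.getpid(), signal.SIGTERM)   # stand-in for the watchdog
    # ... long-running work would continue here ...
except ShutdownRequest:
    shut_down_cleanly = True               # normal cleanup code runs

assert shut_down_cleanly
```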

For background: the client processes in my application server
can execute arbitrary Python scripts written by users, i.e.
potentially buggy code which could effectively hose the server.
To control this, I use client processes which exec the actual
code, and watch them using a watchdog process. If the processes
don't return anything useful within a certain timeout limit,
the watchdog process sends them a SIGTERM and starts a new
client.

Threads would not support this type of strategy, so I'm looking
for something similar on Windows, Win2k to be more specific.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Thu Aug  3 16:50:26 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 09:50:26 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
In-Reply-To: Your message of "Thu, 03 Aug 2000 14:14:55 +0200."
             <3989623F.2AB4C00C@lemburg.com> 
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com> <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com> <398858BE.15928F47@lemburg.com> <200008022218.RAA04178@cj20424-a.reston1.va.home.com>  
            <3989623F.2AB4C00C@lemburg.com> 
Message-ID: <200008031450.JAA06505@cj20424-a.reston1.va.home.com>

> > > ... it seems 2.0 can reuse the CWI license after all ;-)
> > 
> > I'm not sure why you think that: 2.0 is a derivative version and is
> > thus bound by the CNRI license as well as by the license that BeOpen
> > adds.
> 
> If you interpret the above wording in the sense of "preparing
> a derivative version of the License Agreement", BeOpen (or
> anyone else) could just remove the CNRI License text. I
> understand that this is not intended (that's why I put the smiley
> there ;-).

Please forget this interpretation! :-)

> I haven't found an English version of the German law text,
> but this is the title of the law which handles German
> business conditions:
> 
> "Gesetz zur Regelung des Rechts der Allgemeinen Gesch?ftsbedingungen
> AGBG) - Act Governing Standard Business Conditions"
>  
> The relevant paragraph is no. 11 (10).
> 
> I'm not a lawyer, but from what I know:
> terms generally excluding liability are invalid; liability
> may be limited during the first 6 months after license
> agreement and excluded after this initial period.
> 
> Anyway, you're right in that the notice about the paragraph
> not necessarily applying to the licensee only has informational
> character and that it doesn't do any harm otherwise.

OK, we'll just let this go.

> > It's better not to violate the license.  But do you really think that
> > they would go after you immediately if you show good intentions to
> > rectify?
> 
> I don't intend to violate the license, but customers of 
> an application embedding Python will have to agree to the
> Python license to be able to legally use the Python engine
> embedded in the application -- that is: if the application
> unintentionally fails to meet the CNRI license terms
> then the application as a whole would immediately become
> unusable by the customer.
> 
> Now just think of an eCommerce application which produces
> some $100k USD revenue each day... such a customer wouldn't
> like these license terms at all :-(

That depends.  Unintentional failure to meet the license terms seems
unlikely to me considering that the license doesn't impose a lot of
requirements.  It's vague in its definitions, but I think that works to
your advantage.

> BTW, I think that section 6. can be removed altogether, if
> it doesn't include any reference to such a 30-60 day period:
> the permissions set forth in a license are only valid in case
> the license terms are adhered to whether it includes such
> a section or not.

Try to explain that to a lawyer. :)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Thu Aug  3 15:55:28 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 15:55:28 +0200
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" 
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net>  
	            <3989454C.5C9EF39B@lemburg.com> <200008031256.HAA06107@cj20424-a.reston1.va.home.com>
Message-ID: <398979D0.5AF80126@lemburg.com>

Guido van Rossum wrote:
> 
> Marc-Andre writes:
> > The following one-liner already does what you want:
> >
> >       d[word] = d.get(word, []).append('world')
> 
> Are you using a patch to the list object so that append() returns the
> list itself?  Or was it just late?  For me, this makes d[word] = None.

Ouch... looks like I haven't had enough coffee today. I'll
fix that immediately ;-)

How about making this a method:

def inplace(dict, key, default):
    value = dict.get(key, default)
    dict[key] = value
    return value

>>> d = {}
>>> inplace(d, 'hello', []).append('world')
>>> d
{'hello': ['world']}
>>> inplace(d, 'hello', []).append('world')
>>> d
{'hello': ['world', 'world']}
>>> inplace(d, 'hello', []).append('world')
>>> d
{'hello': ['world', 'world', 'world']}

(Hope I got it right this time ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Thu Aug  3 16:14:13 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 10:14:13 -0400 (EDT)
Subject: [Python-Dev] Buglist
In-Reply-To: <20000803145714.B266@xs4all.nl>
References: <20000803145714.B266@xs4all.nl>
Message-ID: <14729.32309.807363.345594@bitdiddle.concentric.net>

I am done moving old bugs from Jitterbug to SF.  There are still some
new bugs being submitted to Jitterbug, which I'll need to move one at
a time.

In principle, it's okay to mark bugs as closed, as long as you are
*sure* that the bug has been fixed.  If you try to reproduce a bug on
your system and can't, it's not clear that it has been fixed.  It
might be a platform-specific bug, for example.  I would prefer it if
you only closed bugs where you can point to the CVS checkin that fixed
it.

Whenever you fix a bug, you should add a test case to the regression
test that would have caught the bug.  Have you done that for any of
the bugs you've marked as closed?

You should also add a comment at any bug you're closing explaining why
it is closed.

It is good to assign bugs to people -- probably even if we end up
playing hot potato for a while.  If a bug is assigned to you, you
should either try to fix it, diagnose it, or assign it to someone
else.

> I think overlooking a few bugs is better than overlooking all of
> them because of the size of the list :P 

You seem to be arguing that the sheer number of bug reports bothers
you and that it's better to have a shorter list of bugs regardless of
whether they're actually fixed.  Come on! I don't want to overlook any
bugs.

Jeremy



From bwarsaw at beopen.com  Thu Aug  3 16:25:20 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 10:25:20 -0400 (EDT)
Subject: [Python-Dev] Go \x yourself
References: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <14729.32976.819777.292096@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> The new rule is the same as Perl uses for \x escapes in -w
    TP> mode, except that Python will raise ValueError at compile-time
    TP> for an invalid \x escape: an \x escape is of the form

    TP>     \xhh

    TP> where h is a hex digit.  That's it.

+1



From bwarsaw at beopen.com  Thu Aug  3 16:41:10 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 10:41:10 -0400 (EDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" 
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
	<14728.63466.263123.434708@anthem.concentric.net>
	<3989454C.5C9EF39B@lemburg.com>
Message-ID: <14729.33926.145263.296629@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> The following one-liner already does what you want:

    M> 	d[word] = d.get(word, []).append('world')

    M> ... and it's in no way more readable than your proposed
    M> .put() line ;-)

Does that mean it's less readable?  :)

-Barry



From mal at lemburg.com  Thu Aug  3 16:49:01 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 16:49:01 +0200
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" 
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
		<14728.63466.263123.434708@anthem.concentric.net>
		<3989454C.5C9EF39B@lemburg.com> <14729.33926.145263.296629@anthem.concentric.net>
Message-ID: <3989865D.A52964D6@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     M> The following one-liner already does what you want:
> 
>     M>  d[word] = d.get(word, []).append('world')
> 
>     M> ... and it's in no way more readable than your proposed
>     M> .put() line ;-)
> 
> Does that mean it's less readable?  :)

I find these .go_home().get_some_cheese().and_eat()...
constructions rather obscure.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Thu Aug  3 16:49:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 16:49:49 +0200
Subject: [Python-Dev] Buglist
In-Reply-To: <14729.32309.807363.345594@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 03, 2000 at 10:14:13AM -0400
References: <20000803145714.B266@xs4all.nl> <14729.32309.807363.345594@bitdiddle.concentric.net>
Message-ID: <20000803164949.D13365@xs4all.nl>

On Thu, Aug 03, 2000 at 10:14:13AM -0400, Jeremy Hylton wrote:

> In principle, it's okay to mark bugs as closed, as long as you are
> *sure* that the bug has been fixed.  If you try to reproduce a bug on
> your system and can't, it's not clear that it has been fixed.  It
> might be a platform-specific bug, for example.  I would prefer it if
> you only closed bugs where you can point to the CVS checkin that fixed
> it.

This is tricky for some bugreports, as they don't say *anything* about the
platform in question. However, I have been conservative, and haven't done
anything if I didn't either have the same platform as mentioned and could
reproduce the bug with 1.6a2 and/or Python 1.5.2 (very handy to have them
lying around) but not with current CVS, OR could find the CVS checkin that
fixed them. For instance, the incorrect usage of PyMem_Del() in some modules
(bug #110638) *seems* to be fixed, but I can't really test it and the CVS
checkin(s) that seem to fix it don't even mention the bug or the reason for
the change.

> Whenever you fix a bug, you should add a test case to the regression
> test that would have caught the bug.  Have you done that for any of
> the bugs you've marked as closed?

No, because all the bugs I've closed so far are 'obviously fixed', by
someone other than me. I would write one if I fixed the bug myself, I guess.
Also, most of these are more 'issues' rather than 'bugs', like someone
complaining about installing Python without Tcl/Tk and Tkinter not working,
threads misbehaving on some systems (didn't close that one, just added a
remark), etc.

> You should also add a comment at any bug you're closing explaining why
> it is closed.

Of course. I also forward the SF excerpt to the original submitter, since
they are not likely to browse the SF buglist and spot their own bug.

> It is good to assign bugs to people -- probably even if we end up
> playing hot potato for a while.  If a bug is assigned to you, you
> should either try to fix it, diagnose it, or assign it to someone
> else.

Hm, I did that for a few, but it's not very easy to find the right person,
in some cases. Bugs in the 're' module, should they go to amk or to /F ? XML
stuff, should it go to Paul Prescod or some of the other people who seem to
be doing something with XML ? A 'capabilities' list would be pretty neat!

> > I think overlooking a few bugs is better than overlooking all of
> > them because of the size of the list :P 

> You seem to be arguing that the sheer number of bug reports bothers
> you and that it's better to have a shorter list of bugs regardless of
> whether they're actually fixed.  Come on! I don't want to overlook any
> bugs.

No, that wasn't what I meant :P It's just that some bugs are vague, and
*seem* fixed, but are still an issue on some combination of compiler,
libraries, OS, etc. Also, there is the question on whether something is a
bug or a feature, or an artifact of compiler, library or design. A quick
pass over the bugs will either have to draw a firm line somewhere, or keep
most of the bugs and hope someone will look at them.

Having 9 out of 10 bugs waiting in the buglist without anyone looking at
them because it's too vague and everyone thinks not 'their' field of
expertise and expect someone else to look at them, defeats the purpose of
the buglist. But closing those bug reports, explaining the problem and even
forwarding the excerpt to the submitter *might* result in the original
submitter, who still has the bug, forgetting to explain it further,
whereas a couple of hours trying to duplicate the bug might locate it. I
personally just wouldn't want to be the one doing all that effort ;)

Just-trying-to-help-you-do-your-job---not-taking-it-over-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Thu Aug  3 17:00:03 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 17:00:03 +0200
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Aug 03, 2000 at 09:06:51AM -0400
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
Message-ID: <20000803170002.C266@xs4all.nl>

On Thu, Aug 03, 2000 at 09:06:51AM -0400, Fred L. Drake, Jr. wrote:

> There is no tp_str handler, so str(xrange(...)) is the same as
> repr(xrange(...)).
>   I propose ripping out the tp_print handler completely.  (And I've
> already tested my patch. ;)
>   Comments?

+0... I would say 'swap str and repr', because str(xrange) does what
repr(xrange) should do, and the other way 'round:

>>> x = xrange(1000)
>>> repr(x)
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
... ... ... 
... 998, 999)

>>> str(x)
'(xrange(0, 1000, 1) * 1)'

But I don't really care either way.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Thu Aug  3 17:14:57 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 11:14:57 -0400 (EDT)
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <20000803170002.C266@xs4all.nl>
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
	<20000803170002.C266@xs4all.nl>
Message-ID: <14729.35953.19610.61905@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > >>> x = xrange(1000)
 > >>> repr(x)
 > (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
 > ... ... ... 
 > ... 998, 999)
 > 
 > >>> str(x)
 > '(xrange(0, 1000, 1) * 1)'

  What version is this with?  1.5.2 gives me:

Python 1.5.2 (#1, May  9 2000, 15:05:56)  [GCC 2.95.3 19991030 (prerelease)] on linux-i386
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> x = xrange(2)
>>> str(x)
'(xrange(0, 2, 1) * 1)'
>>> repr(x)
'(xrange(0, 2, 1) * 1)'
>>> x
(0, 1)

  The 1.6b1 that's getting itself ready says this:

Python 1.6b1 (#19, Aug  2 2000, 01:11:29)  [GCC 2.95.3 19991030 (prerelease)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
Module readline not available.
>>> x = xrange(2)
>>> str(x)
'(xrange(0, 2, 1) * 1)'
>>> repr(x)
'(xrange(0, 2, 1) * 1)'
>>> x
(0, 1)

  What I'm proposing is:

Python 2.0b1 (#116, Aug  2 2000, 15:35:35)  [GCC 2.95.3 19991030 (prerelease)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> x = xrange(2)
>>> str(x)
'xrange(0, 2, 1)'
>>> repr(x)
'xrange(0, 2, 1)'
>>> x
xrange(0, 2, 1)

  (Where the outer (... * n) is added only when n != 1, 'cause I think
that's just ugly.)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Thu Aug  3 17:30:23 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 17:30:23 +0200
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <14729.35953.19610.61905@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Aug 03, 2000 at 11:14:57AM -0400
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com> <20000803170002.C266@xs4all.nl> <14729.35953.19610.61905@cj42289-a.reston1.va.home.com>
Message-ID: <20000803173023.D266@xs4all.nl>

On Thu, Aug 03, 2000 at 11:14:57AM -0400, Fred L. Drake, Jr. wrote:

> Thomas Wouters writes:
>  > >>> x = xrange(1000)
>  > >>> repr(x)
>  > (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
>  > ... ... ... 
>  > ... 998, 999)
>  > 
>  > >>> str(x)
>  > '(xrange(0, 1000, 1) * 1)'

>   What version is this with?  1.5.2 gives me:
> 
> Python 1.5.2 (#1, May  9 2000, 15:05:56)  [GCC 2.95.3 19991030 (prerelease)] on linux-i386
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> >>> x = xrange(2)
> >>> str(x)
> '(xrange(0, 2, 1) * 1)'
> >>> repr(x)
> '(xrange(0, 2, 1) * 1)'
> >>> x
> (0, 1)

Sorry, my bad. I just did 'x', and assumed it called repr(). I guess my
newbiehood shows in that I thought 'print x' always called 'str(x)'. Like I
replied to Tim this morning, after he caught me in the same kind of
embarrassing thinko:

Sigh, that's what I get for getting up when my GF had to and being at the
office at 8am. Don't mind my postings today, they're likely 99% brainfart.

Seeing as how 'print "range: %s" % x' already used the 'str' and 'repr'
output, I see no reason not to make 'print x' do the same. So +1.

> >>> x
> xrange(0, 2, 1)
> 
>   (Where the outer (... * n) is added only when n != 1, 'cause I think
> that's just ugly.)

Why not remove the first and last argument, if they are respectively 0 and 1?

>>> xrange(100)
xrange(100)
>>> xrange(10,100)
xrange(10, 100)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Thu Aug  3 17:48:28 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 11:48:28 -0400 (EDT)
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <20000803173023.D266@xs4all.nl>
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
	<20000803170002.C266@xs4all.nl>
	<14729.35953.19610.61905@cj42289-a.reston1.va.home.com>
	<20000803173023.D266@xs4all.nl>
Message-ID: <14729.37964.46818.653202@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Sorry, my bad. I just did 'x', and assumed it called repr(). I guess my
 > newbiehood shows in that I thought 'print x' always called 'str(x)'. Like I

  That's the evil beauty of tp_print -- nobody really expects it
because most types don't implement it (and don't need to); I seem to
recall Guido saying it was a performance issue for certain types, but
don't recall the specifics.

 > Why not remove the first and last argument, if they are respectively 0 and 1?

  I agree!  In fact, always removing the last argument if it == 1 is a
good idea as well.  Here's the output from the current patch:

>>> xrange(2)
xrange(2)
>>> xrange(2, 4)
xrange(2, 4)
>>> x = xrange(10, 4, -1)
>>> x
xrange(10, 4, -1)
>>> x.tolist()
[10, 9, 8, 7, 6, 5]
>>> x*3
(xrange(10, 4, -1) * 3)
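[Editorial note: the trimming rule shown above amounts to something like the following -- a Python sketch of the intended formatting, not the actual C patch.]

```python
def xrange_repr(start, stop, step, n=1):
    # Drop the step when it is 1; additionally drop the start when it
    # is 0; wrap in "(... * n)" only when the repeat count n != 1.
    if step != 1:
        base = "xrange(%d, %d, %d)" % (start, stop, step)
    elif start != 0:
        base = "xrange(%d, %d)" % (start, stop)
    else:
        base = "xrange(%d)" % stop
    if n != 1:
        return "(%s * %d)" % (base, n)
    return base
```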



  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From jeremy at beopen.com  Thu Aug  3 18:26:51 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 12:26:51 -0400 (EDT)
Subject: [Python-Dev] Buglist
In-Reply-To: <20000803164949.D13365@xs4all.nl>
References: <20000803145714.B266@xs4all.nl>
	<14729.32309.807363.345594@bitdiddle.concentric.net>
	<20000803164949.D13365@xs4all.nl>
Message-ID: <14729.40267.557470.612144@bitdiddle.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

  >> It is good to assign bugs to people -- probably even if we end up
  >> playing hot potato for a while.  If a bug is assigned to you, you
  >> should either try to fix it, diagnose it, or assign it to someone
  >> else.

  TW> Hm, I did that for a few, but it's not very easy to find the
  TW> right person, in some cases. Bugs in the 're' module, should
  TW> they go to amk or to /F ? XML stuff, should it go to Paul
  TW> Prescod or some of the other people who seem to be doing
  TW> something with XML ? A 'capabilities' list would be pretty neat!

I had the same problem when I was trying to assign bugs.  It is seldom
clear who should be assigned a bug.  I have used two rules when
processing open, uncategorized bugs:

    * If you have a reasonable guess about who to assign a bug to,
    it's better to assign to the wrong person than not to assign at
    all.  If the wrong person gets it, she can assign it to someone
    else. 

    * If you don't know who to assign it to, at least give it a
    category.  That allows someone who feels expert in a category
    (e.g. a Tkinter guru), to easily scan all the unassigned bugs in
    that category.

  >> You seem to be arguing that the sheer number of bug reports
  >> bothers you and that it's better to have a shorter list of bugs
  >> regardless of whether they're actually fixed.  Come on! I don't
  >> want to overlook any bugs.

  TW> No, that wasn't what I meant :P 

Sorry.  I didn't believe you really meant that, but you came off
sounding like you did :-).

  TW> Having 9 out of 10 bugs waiting in the buglist without anyone
  TW> looking at them because it's too vague and everyone thinks not
  TW> 'their' field of expertise and expect someone else to look at
  TW> them, defeats the purpose of the buglist. 

I still don't agree here.  If you're not fairly certain about the bug,
keep it on the list.  I don't see too much harm in having vague, open
bugs on the list.  

  TW>                                           But closing those
  TW> bugreports, explaining the problem and even forwarding the
  TW> excerpt to the submitter *might* result in the original
  TW> submitter, who still has the bug, forgetting to explain it
  TW> further, whereas a couple of hours trying to duplicate the bug
  TW> might locate it. I personally just wouldn't want to be the one
  TW> doing all that effort ;)

You can send mail to the person who reported the bug and ask her for
more details without closing it.

  TW> Just-trying-to-help-you-do-your-job---not-taking-it-over-ly

And I appreciate the help!! The more bugs we have categorized or
assigned, the better.

of-course-actually-fixing-real-bugs-is-good-too-ly y'rs,
Jeremy





From moshez at math.huji.ac.il  Thu Aug  3 18:44:28 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 3 Aug 2000 19:44:28 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
Message-ID: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>

Suppose I'm fixing a bug in the library. I want peer review for my fix,
but I need none for my new "would have caught" test cases. Is it
considered alright to check-in right away the test case, breaking the test
suite, and to upload a patch to SF to fix it? Or should the patch include
the new test cases? 

The XP answer would be "hey, you have to checkin the breaking test case
right away", and I'm inclined to agree.

I really want to break the standard library, just because I'm a sadist --
but seriously, we need tests that break more often, so bugs will be easier
to fix.

waiting-for-fellow-sadists-ly y'rs, Z.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From guido at beopen.com  Thu Aug  3 19:54:55 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 12:54:55 -0500
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Your message of "Thu, 03 Aug 2000 19:44:28 +0300."
             <Pine.GSO.4.10.10008031940420.2575-100000@sundial> 
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial> 
Message-ID: <200008031754.MAA08812@cj20424-a.reston1.va.home.com>

> Suppose I'm fixing a bug in the library. I want peer review for my fix,
> but I need none for my new "would have caught" test cases. Is it
> considered alright to check-in right away the test case, breaking the test
> suite, and to upload a patch to SF to fix it? Or should the patch include
> the new test cases? 
> 
> The XP answer would be "hey, you have to checkin the breaking test case
> right away", and I'm inclined to agree.
> 
> I really want to break the standard library, just because I'm a sadist --
> but seriously, we need tests that break more often, so bugs will be easier
> to fix.

In theory I'm with you.  In practice, each time the test suite breaks,
we get worried mail from people who aren't following the list closely,
did a checkout, and suddenly find that the test suite breaks.  That
just adds noise to the list.  So I'm against it.

-1

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From moshez at math.huji.ac.il  Thu Aug  3 18:55:41 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 3 Aug 2000 19:55:41 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008031754.MAA08812@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008031954110.2575-100000@sundial>

On Thu, 3 Aug 2000, Guido van Rossum wrote:

> In theory I'm with you.  In practice, each time the test suite breaks,
> we get worried mail from people who aren't following the list closely,
> did a checkout, and suddenly find that the test suite breaks.  That
> just adds noise to the list.  So I'm against it.
> 
> -1

In theory, theory and practice shouldn't differ. In practice, they do.
Guido, you're way too much of a realist <1.6 wink>
Oh, well.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From gstein at lyra.org  Thu Aug  3 19:04:01 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 10:04:01 -0700
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 03, 2000 at 07:44:28PM +0300
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
Message-ID: <20000803100401.T19525@lyra.org>

On Thu, Aug 03, 2000 at 07:44:28PM +0300, Moshe Zadka wrote:
> Suppose I'm fixing a bug in the library. I want peer review for my fix,
> but I need none for my new "would have caught" test cases. Is it
> considered alright to check-in right away the test case, breaking the test
> suite, and to upload a patch to SF to fix it? Or should the patch include
> the new test cases?

If you're fixing a bug, then check in *both* pieces and call explicitly for
a peer reviewer (plus the people watching -checkins). If you don't quite fix
the bug, then a second checkin can smooth things out.

Let's not get too caught up in "process", to the exclusion of being
productive about bug fixing.

> The XP answer would be "hey, you have to checkin the breaking test case
> right away", and I'm inclined to agree.
> 
> I really want to break the standard library, just because I'm a sadist --
> but seriously, we need tests that break more often, so bugs will be easier
> to fix.

I really want to see less process and discussion, and more code.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From effbot at telia.com  Thu Aug  3 19:19:03 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 19:19:03 +0200
Subject: [Python-Dev] New SRE core dump (was: SRE 0.9.8 benchmarks)
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us>              <006601bffd24$e25a9360$f2a6b5d4@hagrid>  <200008031331.IAA06319@cj20424-a.reston1.va.home.com>
Message-ID: <007401bffd6e$ed9bbde0$f2a6b5d4@hagrid>

guido wrote:
> > ...but sure, I will fix that in 0.9.9 (SRE, not Python -- Christian
> > has already taken care of the other one ;-).  but 0.9.9 won't be
> > out before the 1.6b1 release...
> 
> I assume you are planning to put the backtracking stack back in, as
> you mentioned in the checkin message?

yup -- but that'll have to wait a few more days...

> > (and to avoid scaring the hell out of the beta testers, it's probably
> > better to leave the test out of the regression suite until the bug is
> > fixed...)
> 
> Even better, is it possible to put a limit on the recursion level
> before 1.6b1 is released (tomorrow if we get final agreement on the
> license) so at least it won't dump core?

shouldn't be too hard, given that I added a "recursion level
counter" in _sre.c revision 2.30.  I just added the necessary
if-statement.

</F>




From gstein at lyra.org  Thu Aug  3 20:39:08 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 11:39:08 -0700
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008031754.MAA08812@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 03, 2000 at 12:54:55PM -0500
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial> <200008031754.MAA08812@cj20424-a.reston1.va.home.com>
Message-ID: <20000803113908.X19525@lyra.org>

On Thu, Aug 03, 2000 at 12:54:55PM -0500, Guido van Rossum wrote:
> > Suppose I'm fixing a bug in the library. I want peer review for my fix,
> > but I need none for my new "would have caught" test cases. Is it
> > considered alright to check-in right away the test case, breaking the test
> > suite, and to upload a patch to SF to fix it? Or should the patch include
> > the new test cases? 
> > 
> > The XP answer would be "hey, you have to checkin the breaking test case
> > right away", and I'm inclined to agree.
> > 
> > I really want to break the standard library, just because I'm a sadist --
> > but seriously, we need tests that break more often, so bugs will be easier
> > to fix.
> 
> In theory I'm with you.  In practice, each time the test suite breaks,
> we get worried mail from people who aren't following the list closely,
> did a checkout, and suddenly find that the test suite breaks.  That
> just adds noise to the list.  So I'm against it.

Tell those people to chill out for a few days and not be so jumpy. You're
talking about behavior that can easily be remedied.

It is a simple statement about the CVS repository: "CVS builds but may not
pass the test suite in certain cases" rather than "CVS is perfect".

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From tim_one at email.msn.com  Thu Aug  3 20:49:02 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 14:49:02 -0400
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>

[Moshe Zadka]
> Suppose I'm fixing a bug in the library. I want peer review
> for my fix, but I need none for my new "would have caught"
> test cases. Is it considered alright to check-in right away
> the test case, breaking the test suite, and to upload a patch
> to SF to fix it? Or should the patch include the new test cases?
>
> The XP answer would be "hey, you have to checkin the breaking
> test case right away", and I'm inclined to agree.

It's abhorrent to me to ever leave the tree in a state where a test is
"expected to fail".  If it's left in a failing state for a brief period, at
best other developers will waste time wondering whether it's due to
something they did.  If it's left in a failing state longer than that,
people quickly stop paying attention to failures at all (the difference
between "all passed" and "something failed" is huge, the differences among 1
or 2 or 3 or ... failures get overlooked, and we've seen over and over that
when 1 failure is allowed to persist, others soon join it).

You can check in an anti-test right away, though:  a test that passes so
long as the code remains broken <wink>.
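Tim's "anti-test" can be sketched roughly like this (buggy_find is a
made-up stand-in for whatever library function is broken):

```python
def buggy_find(text, sub):
    # Stand-in for a broken library function: the pretend bug is
    # that it returns 0 instead of -1 when sub is absent.
    pos = text.find(sub)
    if pos == -1:
        return 0    # should be -1; this is the bug we're pinning down
    return pos

def test_known_bug_still_present():
    # Anti-test: passes only while the bug persists.  The day someone
    # fixes buggy_find, this assertion fails -- a reminder to flip it
    # into the real test.
    assert buggy_find("spam", "eggs") == 0   # correct answer would be -1

test_known_bug_still_present()
```

The point is that the checked-in suite stays green either way: today the
anti-test documents the bug, and the fix that breaks it ships together
with the real test.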





From jeremy at beopen.com  Thu Aug  3 20:58:15 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 14:58:15 -0400 (EDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
	<LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
Message-ID: <14729.49351.574550.48521@bitdiddle.concentric.net>

I'm Tim on this issue.  As officially appointed release manager for
2.0, I set some guidelines for checking in code.  One is that no
checkin should cause the regression test to fail.  If it does, I'll
back it out.

If you didn't review the contribution guidelines when they were posted
on this list, please look at PEP 200 now.

Jeremy



From jeremy at beopen.com  Thu Aug  3 21:00:23 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 15:00:23 -0400 (EDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <14729.49351.574550.48521@bitdiddle.concentric.net>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
	<LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
	<14729.49351.574550.48521@bitdiddle.concentric.net>
Message-ID: <14729.49479.677157.957162@bitdiddle.concentric.net>

>>>>> "JH" == Jeremy Hylton <jeremy at beopen.com> writes:

  JH> I'm Tim on this issue.

Make that "I'm with Tim on this issue."  I'm sure it would be fun to
channel Tim, but I don't have the skills for that.

Jeremy



From guido at beopen.com  Thu Aug  3 22:02:07 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 15:02:07 -0500
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Your message of "Thu, 03 Aug 2000 11:39:08 MST."
             <20000803113908.X19525@lyra.org> 
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial> <200008031754.MAA08812@cj20424-a.reston1.va.home.com>  
            <20000803113908.X19525@lyra.org> 
Message-ID: <200008032002.PAA17349@cj20424-a.reston1.va.home.com>

> Tell those people to chill out for a few days and not be so jumpy. You're
> talking about behavior that can easily be remedied.
> 
> It is a simple statement about the CVS repository: "CVS builds but may not
> pass the test suite in certain cases" rather than "CVS is perfect"

I would agree if it was only the python-dev crowd -- they are easily
trained.  But there are lots of others who check out the tree, so it
would be a continuing education process.  I don't see what good it does.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Thu Aug  3 21:13:08 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 21:13:08 +0200
Subject: [Python-Dev] Breaking Test Cases on Purpose
References: <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
Message-ID: <00d501bffd7e$deb6ece0$f2a6b5d4@hagrid>

moshe:
> > The XP answer would be "hey, you have to checkin the breaking
> > test case right away", and I'm inclined to agree.

tim:
> It's abhorrent to me to ever leave the tree in a state where a test is
> "expected to fail".  If it's left in a failing state for a brief period, at
> best other developers will waste time wondering whether it's due to
> something they did

note that we've just seen this in action, in the SRE crash thread.

Andrew checked in a test that caused the test suite to bomb, and
sent me and Mark F. looking for a non-existent portability bug...

> You can check in an anti-test right away, though:  a test that passes so
> long as the code remains broken <wink>.

which is what the new SRE test script does -- the day SRE supports
unlimited recursion (soon), the test script will complain...

</F>




From tim_one at email.msn.com  Thu Aug  3 21:06:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 15:06:49 -0400
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <14729.49351.574550.48521@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCGENNGNAA.tim_one@email.msn.com>

[Jeremy Hylton]
> I'm Tim on this issue.

Then I'm Jeremy too.  Wow!  I needed a vacation <wink>.





From guido at beopen.com  Thu Aug  3 22:15:26 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 15:15:26 -0500
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Your message of "Thu, 03 Aug 2000 15:00:23 -0400."
             <14729.49479.677157.957162@bitdiddle.concentric.net> 
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial> <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com> <14729.49351.574550.48521@bitdiddle.concentric.net>  
            <14729.49479.677157.957162@bitdiddle.concentric.net> 
Message-ID: <200008032015.PAA17571@cj20424-a.reston1.va.home.com>

>   JH> I'm Tim on this issue.
> 
> Make that "I'm with Tim on this issue."  I'm sure it would be fun to
> channel Tim, but I don't have the skills for that.

Actually, in my attic there's a small door that leads to a portal into
Tim's brain.  Maybe we could get Tim to enter the portal -- it would
be fun to see him lying on a piano in a dress reciting a famous aria.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Thu Aug  3 21:19:18 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 15:19:18 -0400 (EDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008032015.PAA17571@cj20424-a.reston1.va.home.com>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
	<LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
	<14729.49351.574550.48521@bitdiddle.concentric.net>
	<14729.49479.677157.957162@bitdiddle.concentric.net>
	<200008032015.PAA17571@cj20424-a.reston1.va.home.com>
Message-ID: <14729.50614.806442.190962@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

  JH> I'm Tim on this issue.
  >>  Make that "I'm with Tim on this issue."  I'm sure it would be
  >> fun to channel Tim, but I don't have the skills for that.

  GvR> Actually, in my attic there's a small door that leads to a
  GvR> portal into Tim's brain.  Maybe we could get Tim to enter the
  GvR> portal -- it would be fun to see him lying on a piano in a
  GvR> dress reciting a famous aria.

You should have been on the ride from Monterey to the San Jose airport
a couple of weeks ago.  There was no piano, but it was pretty close.

Jeremy



From jeremy at beopen.com  Thu Aug  3 21:31:50 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 15:31:50 -0400 (EDT)
Subject: [Python-Dev] tests for standard library modules
Message-ID: <14729.51366.391122.131492@bitdiddle.concentric.net>

Most of the standard library is untested.

There are 148 top-level Python modules in the standard library, plus a
few packages that contain 50 or 60 more modules.  When we run the
regression test, we only touch 48 of those modules.  Only 18 of the
modules have their own test suite.  The other 30 modules at least get
imported, though sometimes none of the code gets executed.  (The
traceback module is an example.)

I would like to see much better test coverage in Python 2.0.  I would
welcome any test case submissions that improve the coverage of the
standard library.

Skip's trace.py code coverage tool is now available in Tools/scripts.
You can use it to examine how much of a particular module is covered
by existing tests.
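Line-counting tools like trace.py rest on the interpreter's tracing
hook.  A minimal sketch of that mechanism (not trace.py's actual
interface -- the helper names here are mine) looks like:

```python
import sys

def make_counter():
    # counts maps (filename, lineno) -> number of times that line ran
    counts = {}
    def tracer(frame, event, arg):
        if event == "line":
            key = (frame.f_code.co_filename, frame.f_lineno)
            counts[key] = counts.get(key, 0) + 1
        # returning the tracer keeps line tracing active in this frame
        return tracer
    return counts, tracer

def covered(func, *args):
    # Run func under the tracer and return its per-line hit counts.
    counts, tracer = make_counter()
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return counts
```

Anything the real tool adds (summaries, annotated listings, the -O
caveats discussed below) is bookkeeping on top of this hook.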

Jeremy



From guido at beopen.com  Thu Aug  3 22:39:44 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 15:39:44 -0500
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: Your message of "Thu, 03 Aug 2000 15:31:50 -0400."
             <14729.51366.391122.131492@bitdiddle.concentric.net> 
References: <14729.51366.391122.131492@bitdiddle.concentric.net> 
Message-ID: <200008032039.PAA17852@cj20424-a.reston1.va.home.com>

> Most of the standard library is untested.

Indeed.  I would suggest looking at the Tcl test suite.  It's very
thorough!  When I look at many of the test modules we *do* have, I
cringe at how little of the module the test actually covers.  Many
tests (not the good ones!) seem to be content with checking that all
functions in a module *exist*.  Much of this dates back to one
particular period in 1996-1997 when we (at CNRI) tried to write test
suites for all modules -- clearly we were in a hurry! :-(

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Aug  4 00:25:38 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 4 Aug 2000 00:25:38 +0200 (CEST)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <14729.51366.391122.131492@bitdiddle.concentric.net> from "Jeremy Hylton" at Aug 03, 2000 03:31:50 PM
Message-ID: <200008032225.AAA27154@python.inrialpes.fr>

Jeremy Hylton wrote:
> 
> > Skip's trace.py code coverage tool is now available in Tools/scripts.
> You can use it to examine how much of a particular module is covered
> by existing tests.

Hmm. Glancing quickly at trace.py, I see that half of it is guessing
line numbers. The same SET_LINENO problem again. This is unfortunate.
But fortunately <wink>, here's another piece of code, modeled after
its C counterpart, that comes to Skip's rescue and that works with -O.

Example:

>>> import codeutil
>>> co = codeutil.PyCode_Line2Addr.func_code   # some code object
>>> codeutil.PyCode_GetExecLines(co)
[20, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]
>>> codeutil.PyCode_Line2Addr(co, 29)
173
>>> codeutil.PyCode_Addr2Line(co, 173)
29
>>> codeutil.PyCode_Line2Addr(co, 10)
Traceback (innermost last):
  File "<stdin>", line 1, in ?
  File "codeutil.py", line 26, in PyCode_Line2Addr
    raise IndexError, "line must be in range [%d,%d]" % (line, lastlineno)
IndexError: line must be in range [20,36]

etc...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252

------------------------------[ codeutil.py ]-------------------------
import types

def PyCode_Addr2Line(co, addrq):
    assert type(co) == types.CodeType, \
           "1st arg must be a code object, %s given" % type(co).__name__
    if addrq < 0 or addrq > len(co.co_code):
        raise IndexError, "address must be in range [0,%d]" % len(co.co_code)
    addr = 0
    line = co.co_firstlineno
    lnotab = co.co_lnotab
    for i in range(0, len(lnotab), 2):
        addr_incr = ord(lnotab[i])
        line_incr = ord(lnotab[i+1])
        addr = addr + addr_incr
        if (addr > addrq):
            break
        line = line + line_incr
    return line

def PyCode_Line2Addr(co, lineq):
    assert type(co) == types.CodeType, \
           "1st arg must be a code object, %s given" % type(co).__name__
    line = co.co_firstlineno
    lastlineno = PyCode_Addr2Line(co, len(co.co_code))
    if lineq < line or lineq > lastlineno:
        raise IndexError, "line must be in range [%d,%d]" % (line, lastlineno)
    addr = 0
    lnotab = co.co_lnotab
    for i in range(0, len(lnotab), 2):
        if line >= lineq:
            break
        addr_incr = ord(lnotab[i])
        line_incr = ord(lnotab[i+1])
        addr = addr + addr_incr
        line = line + line_incr
    return addr

def PyCode_GetExecLines(co):
    assert type(co) == types.CodeType, \
           "arg must be a code object, %s given" % type(co).__name__
    lastlineno = PyCode_Addr2Line(co, len(co.co_code))
    lines = range(co.co_firstlineno, lastlineno + 1)
    # remove void lines (w/o opcodes): comments, blank/escaped lines
    i = len(lines) - 1
    while i >= 0:
        if lines[i] != PyCode_Addr2Line(co, PyCode_Line2Addr(co, lines[i])):
            lines.pop(i)
        i = i - 1
    return lines



From mwh21 at cam.ac.uk  Fri Aug  4 00:19:51 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 03 Aug 2000 23:19:51 +0100
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Moshe Zadka's message of "Thu, 3 Aug 2000 19:44:28 +0300 (IDT)"
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
Message-ID: <m31z063s3c.fsf@atrus.jesus.cam.ac.uk>

Moshe Zadka <moshez at math.huji.ac.il> writes:

> Suppose I'm fixing a bug in the library. I want peer review for my fix,
> but I need none for my new "would have caught" test cases. Is it
> considered alright to check-in right away the test case, breaking the test
> suite, and to upload a patch to SF to fix it? Or should the patch include
> the new test cases? 
> 
> The XP answer would be "hey, you have to checkin the breaking test case
> right away", and I'm inclined to agree.

I'm not so sure.  I can't find the bit I'm looking for in Beck's
book[1], but ISTR that you have two sorts of test, unit tests and
functional tests.  Unit tests always work, functional tests are more
what you want to work in the future, but may not now.  What goes in
Lib/test is definitely more of the unit test variety, and so if
something in there breaks it's a cause for alarm.  Checking in a test
you know will break just raises blood pressure for no good reason.

Also what if you're hacking on some bit of Python, run the test suite
and it fails?  You worry that you've broken something, when in fact
it's nothing to do with you.

-1. (like everyone else...)

Cheers,
M.

[1] Found it; p. 118 of "Extreme Programming Explained"

-- 
  I'm okay with intellegent buildings, I'm okay with non-sentient
  buildings. I have serious reservations about stupid buildings.
     -- Dan Sheppard, ucam.chat (from Owen Dunn's summary of the year)




From skip at mojam.com  Fri Aug  4 00:21:04 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 3 Aug 2000 17:21:04 -0500 (CDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" 
 method
In-Reply-To: <398979D0.5AF80126@lemburg.com>
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
	<14728.63466.263123.434708@anthem.concentric.net>
	<3989454C.5C9EF39B@lemburg.com>
	<200008031256.HAA06107@cj20424-a.reston1.va.home.com>
	<398979D0.5AF80126@lemburg.com>
Message-ID: <14729.61520.11958.530601@beluga.mojam.com>

    >> How about making this a method:

    >> def inplace(dict, key, default):

    >>     value = dict.get(key, default)
    >>     dict[key] = value
    >>     return value

eh... I don't like these do-two-things-at-once kinds of methods.  I see
nothing wrong with

    >>> dict = {}
    >>> dict['hello'] = dict.get('hello', [])
    >>> dict['hello'].append('world')
    >>> print dict
    {'hello': ['world']}

or

    >>> d = dict['hello'] = dict.get('hello', [])
    >>> d.insert(0, 'cruel')
    >>> print dict
    {'hello': ['cruel', 'world']}

for the obsessively efficiency-minded folks.

Also, we're talking about a method that would generally only be useful when
dictionaries have values which were mutable objects.  Regardless of how
useful instances and lists are, I still find that my predominant day-to-day
use of dictionaries is with strings as keys and values.  Perhaps that's just
the nature of my work.

In short, I don't think anything needs to be changed.

-1 (don't like the concept, so I don't care about the implementation)

Skip



From mal at lemburg.com  Fri Aug  4 00:36:33 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 04 Aug 2000 00:36:33 +0200
Subject: [Python-Dev] Line number tools (tests for standard library modules)
References: <200008032225.AAA27154@python.inrialpes.fr>
Message-ID: <3989F3F1.162A9766@lemburg.com>

Vladimir Marangozov wrote:
> 
> Jeremy Hylton wrote:
> >
> > Skip's trace.py code coverage tool is now available in Tools/scripts.
> > You can use it to examine how much of a particular module is covered
> > by existing tests.
> 
> Hmm. Glancing quickly at trace.py, I see that half of it is guessing
> line numbers. The same SET_LINENO problem again. This is unfortunate.
> But fortunately <wink>, here's another piece of code, modeled after
> its C counterpart, that comes to Skip's rescue and that works with -O.
> 
> Example:
> 
> >>> import codeutil
> >>> co = codeutil.PyCode_Line2Addr.func_code   # some code object
> >>> codeutil.PyCode_GetExecLines(co)
> [20, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]
> >>> codeutil.PyCode_Line2Addr(co, 29)
> 173
> >>> codeutil.PyCode_Addr2Line(co, 173)
> 29
> >>> codeutil.PyCode_Line2Addr(co, 10)
> Traceback (innermost last):
>   File "<stdin>", line 1, in ?
>   File "codeutil.py", line 26, in PyCode_Line2Addr
>     raise IndexError, "line must be in range [%d,%d]" % (line, lastlineno)
> IndexError: line must be in range [20,36]
> 
> etc...

Cool. 

With proper Python style names these utilities
would be nice additions for e.g. codeop.py or code.py.

BTW, I wonder why code.py includes Python console emulations:
there seems to be a naming bug there... I would have
named the module PythonConsole.py and left code.py what
it was previously: a collection of tools dealing with Python
code objects.

--
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From skip at mojam.com  Fri Aug  4 00:53:58 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 3 Aug 2000 17:53:58 -0500 (CDT)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <14729.51366.391122.131492@bitdiddle.concentric.net>
References: <14729.51366.391122.131492@bitdiddle.concentric.net>
Message-ID: <14729.63494.544079.516429@beluga.mojam.com>

    Jeremy> Skip's trace.py code coverage tool is now available in
    Jeremy> Tools/scripts.  You can use it to examine how much of a
    Jeremy> particular module is covered by existing tests.

Yes, though note that in the summary stuff on my web site there are obvious
bugs that I haven't had time to look at.  Sometimes modules are counted
twice.  Other times a module is listed as untested when right above it there
is a test coverage line...  

Skip



From skip at mojam.com  Fri Aug  4 00:59:34 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 3 Aug 2000 17:59:34 -0500 (CDT)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <200008032225.AAA27154@python.inrialpes.fr>
References: <14729.51366.391122.131492@bitdiddle.concentric.net>
	<200008032225.AAA27154@python.inrialpes.fr>
Message-ID: <14729.63830.894657.930184@beluga.mojam.com>

    Vlad> Hmm. Glancing quickly at trace.py, I see that half of it is
    Vlad> guessing line numbers. The same SET_LINENO problem again. This is
    Vlad> unfortunate.  But fortunately <wink>, here's another piece of
    Vlad> code, modeled after its C counterpart, that comes to Skip's rescue
    Vlad> and that works with -O.

Go ahead and check in any changes you see that need doing.  I haven't
fiddled with trace.py much in the past couple of years, so there are some
places that clearly do things differently than currently accepted practice.

(I am going to be up to my ass in alligators pretty much from now through
Labor Day (early September for the furriners among us), so things I thought
I would get to probably will remain undone.  The most important thing is to
fix the list comprehensions patch to force expression tuples to be
parenthesized.  Guido says it's an easy fix, and the grammar changes seem
trivial, but fixing compile.c is beyond my rusty knowledge at the moment.
Someone want to pick this up?)

Skip



From MarkH at ActiveState.com  Fri Aug  4 01:13:06 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 09:13:06 +1000
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1 failing...)
In-Reply-To: <39897635.6C9FB82D@lemburg.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCEDODDAA.MarkH@ActiveState.com>

[Marc writes]
> On Unix you can install a signal
> handler in the Python program which then translates the SIGTERM
> signal into a normal Python exception. Sending the signal then
> causes the same as e.g. hitting Ctrl-C in a program: an
> exception is raised asynchronously, but it can be handled
> properly by the Python exception clauses to enable safe
> shutdown of the process.

I understand this.  This is why I was skeptical that a
"terminate-without-prejudice" only version would be useful.

I _think_ this fairly large email is agreeing that it isn't of much use.
If so, then I am afraid you are on your own :-(

Mark.




From Vladimir.Marangozov at inrialpes.fr  Fri Aug  4 01:27:39 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 4 Aug 2000 01:27:39 +0200 (CEST)
Subject: [Python-Dev] Removing the 16 bit arg limit
Message-ID: <200008032327.BAA27362@python.inrialpes.fr>

I've looked at this and the best compromise solution I ended up with
(before Py3K) is sketched here:

opcode.h:
#define EXTENDED_ARG	135	/* 16 higher bits of the next opcode arg */

ceval.c:
		case EXTENDED_ARG:
			do {
				oparg <<= 16;
				op = NEXTOP();
				oparg += NEXTARG();
			} while (op == EXTENDED_ARG);
			goto dispatch_opcode;

compile.c:
static void
com_addoparg(struct compiling *c, int op, int arg)
{
	if (arg < 0) {
		com_error(c, PyExc_SystemError,
			  "com_addoparg: argument out of range");
	}
	if (op == SET_LINENO) {
		com_set_lineno(c, arg);
		if (Py_OptimizeFlag)
			return;
	}
	if (arg > 0xffff) {
		/* emit the high 16 bits first, so the EXTENDED_ARG
		   case in ceval.c can shift them back into place */
		com_addbyte(c, EXTENDED_ARG);
		com_addint(c, arg >> 16);
	}
	com_addbyte(c, op);
	com_addint(c, arg & 0xffff);
}
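The split-and-reassemble scheme can be sanity-checked with a quick
Python round-trip (the opcode number and helper names here are
illustrative only):

```python
EXTENDED_ARG = 135  # hypothetical opcode number from the sketch above

def encode_arg(op, arg):
    # Split a large argument into (opcode, 16-bit word) pairs: the
    # high 16 bits ride on an EXTENDED_ARG prefix, the low 16 bits on
    # the real opcode.
    words = []
    if arg > 0xffff:
        words.append((EXTENDED_ARG, arg >> 16))
    words.append((op, arg & 0xffff))
    return words

def decode_arg(words):
    # Mirror of the EXTENDED_ARG case in the ceval.c sketch: each
    # prefix shifts the accumulated argument left by 16 bits.
    oparg = 0
    for op, word in words:
        oparg = (oparg << 16) | word
    return op, oparg
```

Round-tripping e.g. `encode_arg(100, 0x12345)` through `decode_arg`
yields `(100, 0x12345)` again, while small arguments still take a
single (op, word) pair.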


But this is only difficulty level 0.

Difficulty level 1 is the jumps and their forward refs & backpatching in
compile.c.

There's no tricky solution to this (due to the absolute jumps). The only
reasonable, long-term useful solution I can think of is to build a table
of all anchors (delimiting the basic blocks of the code), then make a final
pass over the serialized basic blocks and update the anchors (with or
without EXTENDED_ARG jumps depending on the need).

However, I won't even think about it anymore without BDFL & Tim's
approval and strong encouragement <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gward at python.net  Fri Aug  4 03:24:44 2000
From: gward at python.net (Greg Ward)
Date: Thu, 3 Aug 2000 21:24:44 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
Message-ID: <20000803212444.A1237@beelzebub>

Hi all --

for building extensions with non-MS compilers, it sounds like a small
change to PC/config.h is needed.  Rene Liebscher suggests changing

  #ifndef USE_DL_EXPORT
  /* So nobody needs to specify the .lib in their Makefile any more */
  #ifdef _DEBUG
  #pragma comment(lib,"python20_d.lib")
  #else
  #pragma comment(lib,"python20.lib")
  #endif
  #endif /* USE_DL_EXPORT */

to

  #if !defined(USE_DL_EXPORT) && defined(_MSC_VER)
  ...

That way, the convenience pragma will still be there for MSVC users, but
it won't break building extensions with Borland C++.  (As I understand
it, Borland C++ understands the pragma, but then tries to use Python's
standard python20.lib, which of course is only for MSVC.)  Non-MSVC
users will have to explicitly supply the library, but that's OK: the
Distutils does it for them.  (Even with MSVC, because it's too much
bother *not* to specify python20.lib explicitly.)

Does this look like the right change to everyone?  I can check it in
(and on the 1.6 branch too) if it looks OK.

While I have your attention, Rene also suggests the convention of
"bcpp_python20.lib" for the Borland-format lib file, with other
compilers (library formats) supported in future similarly.  Works for me 
-- anyone have any problems with that?

        Greg
-- 
Greg Ward - programmer-at-big                           gward at python.net
http://starship.python.net/~gward/
Know thyself.  If you need help, call the CIA.



From moshez at math.huji.ac.il  Fri Aug  4 03:38:32 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 04:38:32 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <14729.49351.574550.48521@bitdiddle.concentric.net>
Message-ID: <Pine.GSO.4.10.10008040437320.9544-100000@sundial>

On Thu, 3 Aug 2000, Jeremy Hylton wrote:

> I'm Tim on this issue.  As officially appointed release manager for
> 2.0, I set some guidelines for checking in code.  One is that no
> checkin should cause the regression test to fail.  If it does, I'll
> back it out.
> 
> If you didn't review the contribution guidelines when they were posted
> on this list, please look at PEP 200 now.

Actually, I did. The thing is, it seems to me there's a huge difference
between breaking code, and manifesting that the code is wrong.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From moshez at math.huji.ac.il  Fri Aug  4 03:41:12 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 04:41:12 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008032015.PAA17571@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008040440110.9544-100000@sundial>

On Thu, 3 Aug 2000, Guido van Rossum wrote:

> >   JH> I'm Tim on this issue.
> > 
> > Make that "I'm with Tim on this issue."  I'm sure it would be fun to
> > channel Tim, but I don't have the skills for that.
> 
> Actually, in my attic there's a small door that leads to a portal into
> Tim's brain.  Maybe we could get Tim to enter the portal -- it would
> be fun to see him lying on a piano in a dress reciting a famous aria.

I think I need to get out more often. I just realized I think it would
be fun to. Anybody there have a video camera?
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From moshez at math.huji.ac.il  Fri Aug  4 03:45:52 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 04:45:52 +0300 (IDT)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <200008032039.PAA17852@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008040444020.9544-100000@sundial>

On Thu, 3 Aug 2000, Guido van Rossum wrote:

> > Most of the standard library is untested.
> 
> Indeed.  I would suggest looking at the Tcl test suite.  It's very
> thorough!  When I look at many of the test modules we *do* have, I
> cringe at how little of the module the test actually covers.  Many
> tests (not the good ones!) seem to be content with checking that all
> functions in a module *exist*.  Much of this dates back to one
> particular period in 1996-1997 when we (at CNRI) tried to write test
> suites for all modules -- clearly we were in a hurry! :-(

Here's a suggestion for easily getting hints about what test suites to
write: go through the list of open bugs, and write a "would have caught"
test. At worst, we will actually have to fix some bugs <wink>.
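
As a sketch of what such a "would have caught" test might look like (the bug and the behaviour pinned down here are entirely made up for illustration):

```python
# Hypothetical regression test: suppose an old bug report said that
# str.split() misbehaved when handed an empty separator.  The
# "would have caught" test just replays the reported case and pins
# down the behaviour we expect (an empty separator is an error):
def test_empty_separator_rejected():
    try:
        "a b".split("")
    except ValueError:
        return True
    return False

assert test_empty_separator_rejected()
print("would-have-caught test passed")
```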

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Fri Aug  4 04:23:59 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 22:23:59 -0400
Subject: [Python-Dev] snprintf breaks build
Message-ID: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>

Fred checked in a new rangeobject.c with 3 calls to snprintf.  That isn't a
std C function, and the lack of it breaks the build at least under Windows.
In the absence of a checkin supplying snprintf on all platforms within the
next hour, I'll just replace the snprintf calls with something that's
portable.





From MarkH at ActiveState.com  Fri Aug  4 04:27:32 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 12:27:32 +1000
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000803212444.A1237@beelzebub>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>

> Does this look like the right change to everyone?  I can check it in
> (and on the 1.6 branch too) if it looks OK.

I have no problems with this (but am a little confused - see below).

> While I have your attention, Rene also suggests the convention of
> "bcpp_python20.lib" for the Borland-format lib file, with other
> compilers (library formats) supported in future similarly.  Works for me
> -- anyone have any problems with that?

I would prefer python20_bcpp.lib, but that is not an issue.

I am a little confused by the intention, tho.  Wouldn't it make sense to
have Borland builds of the core create a Python20.lib, then we could keep
the pragma in too?

If people want to use Borland for extensions, can't we ask them to use that
same compiler to build the core too?  That would seem to make lots of the
problems go away?

But assuming there are good reasons, I am happy.  It won't bother me for
some time yet ;-) <just deleted a rant about the fact that anyone on
Windows who values their time in more than cents-per-hour would use MSVC,
but deleted it ;->

Sometimes-the-best-things-in-life-arent-free ly,

Mark.




From MarkH at ActiveState.com  Fri Aug  4 04:30:22 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 12:30:22 +1000
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <Pine.GSO.4.10.10008040440110.9544-100000@sundial>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEEKDDAA.MarkH@ActiveState.com>

> Anybody there has a video camera?

Eeeuuugghhh - the concept of Tim's last threatened photo-essay turning into
a video-essay has made me physically ill ;-)

Just-dont-go-there ly,

Mark.




From fdrake at beopen.com  Fri Aug  4 04:34:34 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 22:34:34 -0400 (EDT)
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>
Message-ID: <14730.11194.599976.438416@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Fred checked in a new rangeobject.c with 3 calls to snprintf.  That isn't a
 > std C function, and the lack of it breaks the build at least under Windows.
 > In the absence of a checkin supplying snprintf on all platforms within the
 > next hour, I'll just replace the snprintf calls with something that's
 > portable.

  Hmm.  I think the issue with known existing snprintf()
implementations with Open Source licenses was that they were at least
somewhat contaminating.  I'll switch back to sprintf() until we have a
portable snprintf() implementation.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Fri Aug  4 04:49:32 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 22:49:32 -0400
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <14730.11194.599976.438416@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEPEGNAA.tim_one@email.msn.com>

[Fred]
>   Hmm.  I think the issue with known existing snprintf()
> implementations with Open Source licenses was that they were at
> least somewhat contaminating.  I'll switch back to sprintf()
> until we have a portable snprintf() implementation.

Please don't bother!  Clearly, I've already fixed it on my machine so I
could make progress.  I'll simply check it in.  I didn't like the excessive
cleverness with the fmt vrbl anyway (your compiler may not complain that you
can end up passing more s[n]printf args than the format has specifiers to
convert, but it's a no-no anyway) ....





From tim_one at email.msn.com  Fri Aug  4 04:55:47 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 22:55:47 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000803212444.A1237@beelzebub>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEPEGNAA.tim_one@email.msn.com>

[Greg Ward]
> for building extensions with non-MS compilers, it sounds like a small
> change to PC/config.h is needed.  Rene Liebscher suggests changing
>
>   #ifndef USE_DL_EXPORT
>   /* So nobody needs to specify the .lib in their Makefile any more */
>   #ifdef _DEBUG
>   #pragma comment(lib,"python20_d.lib")
>   #else
>   #pragma comment(lib,"python20.lib")
>   #endif
>   #endif /* USE_DL_EXPORT */
>
> to
>
>   #if !defined(USE_DL_EXPORT) && defined(_MSC_VER)
>   ...
>
> That way, the convenience pragma will still be there for MSVC users, but
> it won't break building extensions with Borland C++.

OK by me.

> ...
> While I have your attention,

You're pushing your luck, Greg <wink>.

> Rene also suggests the convention of "bcpp_python20.lib" for
> the Borland-format lib file, with other compilers (library
> formats) supported in future similarly.  Works for me -- anyone
> have any problems with that?

Nope, but I don't understand anything about how Borland differs from the
real <0.5 wink> Windows compiler, so don't know squat about the issues or
the goals.  If it works for Rene, I give up without a whimper.





From nhodgson at bigpond.net.au  Fri Aug  4 05:36:12 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Fri, 4 Aug 2000 13:36:12 +1000
Subject: [Python-Dev] Library pragma in PC/config.h
References: <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>
Message-ID: <00cf01bffdc5$246867f0$8119fea9@neil>

> But assuming there are good reasons, I am happy.  It wont bother me for
> some time yet ;-) <just deleted a rant about the fact that anyone on
> Windows who values their time in more than cents-per-hour would use MSVC,
> but deleted it ;->

   OK. Better cut my rates. Some people will be pleased ;)

   Borland C++ isn't that bad. With an optimiser and a decent debugger it'd
even be usable as my main compiler. What is good about Borland is that it
produces lots of meaningful warnings.

   I've never regretted ensuring that Scintilla/SciTE build on Windows with
each of MSVC, Borland and GCC. It wasn't much work and real problems have
been found by the extra checks done by Borland.

   You-should-try-it-sometime-ly y'rs,

   Neil




From bwarsaw at beopen.com  Fri Aug  4 05:46:02 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 23:46:02 -0400 (EDT)
Subject: [Python-Dev] snprintf breaks build
References: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>
	<14730.11194.599976.438416@cj42289-a.reston1.va.home.com>
Message-ID: <14730.15482.216054.249627@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake at beopen.com> writes:

    Fred>   Hmm.  I think the issue with known existing snprintf()
    Fred> implementations with Open Source licenses was that they were
    Fred> at least somewhat contaminating.  I'll switch back to
    Fred> sprintf() until we have a portable snprintf()
    Fred> implementation.

In Mailman, I used the one from GNU screen, which is obviously GPL'd.
But Apache also comes with an snprintf implementation which doesn't
have the infectious license.  I don't feel like searching the
archives, but I'd be surprised if Greg Stein /didn't/ suggest this a
while back.

-Barry



From tim_one at email.msn.com  Fri Aug  4 05:54:47 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 23:54:47 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <00cf01bffdc5$246867f0$8119fea9@neil>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEPHGNAA.tim_one@email.msn.com>

[Neil Hodgson]
> ...
>    I've never regretted ensuring that Scintilla/SciTE build on
> Windows with each of MSVC, Borland and GCC. It wasn't much work
> and real problems have been found by the extra checks done by
> Borland.
>
>    You-should-try-it-sometime-ly y'rs,

Indeed, the more compilers the better.  I've long wished that Guido would
leave CNRI, and find some situation in which he could hire people to work on
Python full-time.  If that ever happens, and he hires me, I'd like to do
serious work to free the Windows build config from such total dependence on
MSVC.





From MarkH at ActiveState.com  Fri Aug  4 05:52:58 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 13:52:58 +1000
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <00cf01bffdc5$246867f0$8119fea9@neil>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEEMDDAA.MarkH@ActiveState.com>

>    Borland C++ isn't that bad. With an optimiser and a decent
> debugger it'd even be usable as my main compiler.

>    You-should-try-it-sometime-ly y'rs,

OK - let me know when it has an optimiser and a decent debugger, and is
usable as a main compiler, and I will be happy to ;-)

Only-need-one-main-anything ly,

Mark.




From moshez at math.huji.ac.il  Fri Aug  4 06:30:59 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 07:30:59 +0300 (IDT)
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <14730.11194.599976.438416@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008040728150.10236-100000@sundial>

On Thu, 3 Aug 2000, Fred L. Drake, Jr. wrote:

>   Hmm.  I think the issue with known existing snprintf()
> implementations with Open Source licenses was that they were at least
> somewhat contaminating.  I'll switch back to sprintf() until we have a
> portable snprintf() implementation.

Fred -- in your case, there is no need for sprintf -- a few sizeof(long)s
along the way would make sure that your buffers are large enough.  (For
extreme future-proofing, you might also sizeof() the messages you print)

(Tidbit: since sizeof(long) measures in bytes, and %d prints in decimals,
then a buffer of length sizeof(long) is enough to hold a decimal
representation of a long).

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From greg at cosc.canterbury.ac.nz  Fri Aug  4 06:38:16 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 04 Aug 2000 16:38:16 +1200 (NZST)
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <Pine.GSO.4.10.10008040728150.10236-100000@sundial>
Message-ID: <200008040438.QAA11982@s454.cosc.canterbury.ac.nz>

Moshe Zadka:

> (Tidbit: since sizeof(long) measures in bytes, and %d prints in decimals,
> then a buffer of length sizeof(long) is enough to hold a decimal
> representation of a long).

Pardon? I think you're a bit out in your calculation there!

3*sizeof(long) should be enough, though (unless some weird C
implementation measures sizes in units of more than 8 bits).

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Fri Aug  4 07:22:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 4 Aug 2000 01:22:23 -0400
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <200008040438.QAA11982@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEPJGNAA.tim_one@email.msn.com>

[Moshe Zadka]
> (Tidbit: since sizeof(long) measures in bytes, and %d prints in
> decimals, then a buffer of length sizeof(long) is enough to hold
> a decimal representation of a long).

[Greg Ewing]
> Pardon? I think you're a bit out in your calculation there!
>
> 3*sizeof(long) should be enough, though (unless some weird C
> implementation measures sizes in units of more than 8 bits).

Getting closer, but the sign bit can consume a character all by itself, so
3*sizeof(long) still isn't enough.  To do this correctly and minimally
requires that we implement an arbitrary-precision log10 function, use the
platform MIN/MAX #define's for longs and chars, and malloc the buffers at
runtime.

Note that instead I boosted the buffer sizes in the module from 80 to 250.
That's obviously way more than enough for 64-bit platforms, and "obviously
way more" is the correct thing to do for programmers <wink>.  If one of the
principled alternatives is ever checked in (be it an snprintf or /F's custom
solution (which I like better)), we can go back and use those instead.
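
The bound the thread converges on (3 decimal digits per byte, plus one character for the sign) is easy to sanity-check from Python itself; a quick sketch for 32- and 64-bit longs:

```python
# Each byte contributes at most 3 decimal digits (2**8 == 256), so
# 3*sizeof(long) digits plus one character for a '-' sign bounds the
# decimal form of any long.  Check the worst case (most negative value):
for size_in_bytes in (4, 8):
    bufsize = 3 * size_in_bytes + 1                   # digits + sign
    most_negative = -(2 ** (8 * size_in_bytes - 1))   # C's LONG_MIN
    assert len(str(most_negative)) <= bufsize
    print(size_in_bytes, len(str(most_negative)), bufsize)
```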





From MarkH at ActiveState.com  Fri Aug  4 07:58:52 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 15:58:52 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <200008022318.SAA04558@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com>

[Re forcing all extensions to use PythonExtensionInit_XXX]

> I sort-of like this idea -- at least at the +0 level.

Since this email there have been some strong objections to this.  I too
would weigh in at -1 for this, simply for the amount of work it would cost
me personally!


> Unfortunately we only have two days to get this done for 1.6 -- I plan
> to release 1.6b1 this Friday!  If you don't get to it, prepare a patch
> for 2.0 would be the next best thing.

It is now Friday afternoon for me.  Regardless of the outcome of this, the
patch Fredrik posted recently would still seem reasonable, and not have too
much impact on performance (ie, after locating and loading a .dll/.so, one
function call isn't too bad!):

I've even left his trailing comment, which I agree with too?

Shall this be checked in to the 1.6 and 2.0 trees?

Mark.

Index: Python/modsupport.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/modsupport.c,v
retrieving revision 2.48
diff -u -r2.48 modsupport.c
--- Python/modsupport.c 2000/07/09 03:09:56     2.48
+++ Python/modsupport.c 2000/07/18 07:55:03
@@ -51,6 +51,8 @@
 {
        PyObject *m, *d, *v;
        PyMethodDef *ml;
+       if (!Py_IsInitialized())
+               Py_FatalError("Interpreter not initialized (version mismatch?)");
        if (module_api_version != PYTHON_API_VERSION)
                fprintf(stderr, api_version_warning,
                        name, PYTHON_API_VERSION, name,
                        module_api_version);

"Fatal Python error: Interpreter not initialized" might not be too helpful,
but it's surely better than "PyThreadState_Get: no current thread"...





From tim_one at email.msn.com  Fri Aug  4 09:06:21 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 4 Aug 2000 03:06:21 -0400
Subject: [Python-Dev] FW: submitting patches against 1.6a2
Message-ID: <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com>

Anyone competent with urllib care to check out this fellow's complaint?
Thanks!

-----Original Message-----
From: python-list-admin at python.org
[mailto:python-list-admin at python.org]On Behalf Of Paul Schreiber
Sent: Friday, August 04, 2000 2:20 AM
To: python-list at python.org
Subject: submitting patches against 1.6a2


I patched a number of bugs in urllib.py way back when -- in June, I
think. That was before the BeOpen announcement.

I emailed the patch to patches at python.org. I included the disclaimer. I
made the patch into a context diff.

I didn't hear back from anyone.

Should I resubmit? Where should I send the patch to?



Paul
--
http://www.python.org/mailman/listinfo/python-list





From esr at snark.thyrsus.com  Fri Aug  4 09:47:34 2000
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Fri, 4 Aug 2000 03:47:34 -0400
Subject: [Python-Dev] curses progress
Message-ID: <200008040747.DAA02323@snark.thyrsus.com>

OK, I've added docs for curses.textpad and curses.wrapper.  Did we
ever settle on a final location in the distribution tree for the
curses HOWTO?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

According to the National Crime Survey administered by the Bureau of
the Census and the National Institute of Justice, it was found that
only 12 percent of those who use a gun to resist assault are injured,
as are 17 percent of those who use a gun to resist robbery. These
percentages are 27 and 25 percent, respectively, if they passively
comply with the felon's demands. Three times as many were injured if
they used other means of resistance.
        -- G. Kleck, "Policy Lessons from Recent Gun Control Research,"



From pf at artcom-gmbh.de  Fri Aug  4 09:47:17 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Fri, 4 Aug 2000 09:47:17 +0200 (MEST)
Subject: Vladimir's codeutil.py (was Re: [Python-Dev] tests for standard library modules)
In-Reply-To: <200008032225.AAA27154@python.inrialpes.fr> from Vladimir Marangozov at "Aug 4, 2000  0:25:38 am"
Message-ID: <m13KcCL-000DieC@artcom0.artcom-gmbh.de>

Hi,

Vladimir Marangozov:
> But fortunately <wink>, here's another piece of code, modeled after
> its C counterpart, that comes to Skip's rescue and that works with -O.
[...]
> ------------------------------[ codeutil.py ]-------------------------
[...]

Neat!  This seems to be very useful.
I think this could be added to standard library if it were documented.

Regards, Peter



From thomas at xs4all.net  Fri Aug  4 10:14:56 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 10:14:56 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 04, 2000 at 03:58:52PM +1000
References: <200008022318.SAA04558@cj20424-a.reston1.va.home.com> <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com>
Message-ID: <20000804101456.H266@xs4all.nl>

On Fri, Aug 04, 2000 at 03:58:52PM +1000, Mark Hammond wrote:

> It is now Friday afternoon for me.  Regardless of the outcome of this, the
> patch Fredrik posted recently would still seem reasonable, and not have too
> much impact on performance (ie, after locating and loading a .dll/.so, one
> function call isnt too bad!):

> +       if (!Py_IsInitialized())
> +               Py_FatalError("Interpreter not initialized (version

Wasn't there a problem with this, because the 'Py_FatalError()' would be the
one in the uninitialized library and thus result in the same tstate error ?
Perhaps it needs a separate error message, that avoids the usual Python
cleanup and trickery and just prints the error message and exits ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From MarkH at ActiveState.com  Fri Aug  4 10:20:04 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 18:20:04 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <20000804101456.H266@xs4all.nl>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEFHDDAA.MarkH@ActiveState.com>

> Wasn't there a problem with this, because the 'Py_FatalError()'
> would be the
> one in the uninitialized library and thus result in the same
> tstate error ?
> Perhaps it needs a separate error message, that avoids the usual Python
> cleanup and trickery and just prints the error message and exits ?

I would obviously need to test this, but a cursory look at Py_FatalError()
implies it does not touch the thread lock - simply an fprintf, and an
abort() (and for debug builds on Windows, an offer to break into the
debugger)

Regardless, I'm looking for a comment on the concept, and I will make sure
that whatever I do actually works ;-)

Mark.




From effbot at telia.com  Fri Aug  4 10:30:25 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 4 Aug 2000 10:30:25 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <200008022318.SAA04558@cj20424-a.reston1.va.home.com> <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com> <20000804101456.H266@xs4all.nl>
Message-ID: <012b01bffdee$3dadb020$f2a6b5d4@hagrid>

thomas wrote:
> > +       if (!Py_IsInitialized())
> > +               Py_FatalError("Interpreter not initialized (version
> 
> Wasn't there a problem with this, because the 'Py_FatalError()' would be the
> one in the uninitialized library and thus result in the same tstate error ?

you mean this one:

  Py_FatalError("PyThreadState_Get: no current thread");

> Perhaps it needs a separate error message, that avoids the usual Python
> cleanup and trickery and just prints the error message and exits ?

void
Py_FatalError(char *msg)
{
 fprintf(stderr, "Fatal Python error: %s\n", msg);
#ifdef macintosh
 for (;;);
#endif
#ifdef MS_WIN32
 OutputDebugString("Fatal Python error: ");
 OutputDebugString(msg);
 OutputDebugString("\n");
#ifdef _DEBUG
 DebugBreak();
#endif
#endif /* MS_WIN32 */
 abort();
}

</F>




From ping at lfw.org  Fri Aug  4 10:38:12 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 4 Aug 2000 01:38:12 -0700 (PDT)
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <Pine.LNX.4.10.10008040136490.5008-100000@localhost>

On Thu, 3 Aug 2000, Tim Peters wrote:
> 
> >>> "\x123465" # \x12 -> \022, "3456" left alone
> '\0223456'
> >>> "\x65"
> 'e'
> >>> "\x1"
> ValueError
> >>> "\x\x"
> ValueError
> >>>

I'm quite certain that this should be a SyntaxError, not a ValueError:

    >>> "\x1"
    SyntaxError: two hex digits are required after \x
    >>> "\x\x"
    SyntaxError: two hex digits are required after \x

Otherwise, +1.  Sounds great.


-- ?!ng




From tim_one at email.msn.com  Fri Aug  4 11:26:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 4 Aug 2000 05:26:29 -0400
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <Pine.LNX.4.10.10008040136490.5008-100000@localhost>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEPPGNAA.tim_one@email.msn.com>

[Tim Peters]
> >>> "\x123465" # \x12 -> \022, "3456" left alone
> '\0223456'
> >>> "\x65"
> 'e'
> >>> "\x1"
> ValueError
> >>> "\x\x"
> ValueError
> >>>

[?!ng]
> I'm quite certain that this should be a SyntaxError, not a
> ValueError:
>
>     >>> "\x1"
>     SyntaxError: two hex digits are required after \x
>     >>> "\x\x"
>     SyntaxError: two hex digits are required after \x
>
> Otherwise, +1.  Sounds great.

SyntaxError was my original pick too.  Guido picked ValueError instead
because the corresponding "not enough hex digits" error in Unicode strings
for damaged \u1234 escapes raises UnicodeError today, which is a subclass of
ValueError.

I couldn't care less, and remain +1 either way.  On the chance that the BDFL
may have changed his mind, I've copied him on this msg.  This is your one &
only chance to prevail <wink>.

just-so-long-as-it's-not-XEscapeError-ly y'rs  - tim
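
For reference, the rules under discussion can be sketched on a current CPython (where, for what it's worth, the truncated form is reported when the literal is compiled):

```python
# \x consumes exactly two hex digits; "\x41" is 'A'.
assert "\x41" == "A"

# A truncated escape is rejected at compile time on modern CPython;
# whether it surfaces as SyntaxError or ValueError was the point of debate.
try:
    compile(r'"\x1"', "<example>", "eval")
except SyntaxError as exc:
    print("rejected:", exc)
```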





From mal at lemburg.com  Fri Aug  4 12:03:49 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 04 Aug 2000 12:03:49 +0200
Subject: [Python-Dev] Go \x yourself
References: <LNBBLJKPBEHFEDALKOLCIEPPGNAA.tim_one@email.msn.com>
Message-ID: <398A9505.A88D8F93@lemburg.com>

[Wow, 5:26 in the morning and still (or already) up and running...]

Tim Peters wrote:
> 
> [Tim Peters]
> > >>> "\x123465" # \x12 -> \022, "3456" left alone
> > '\0223456'
> > >>> "\x65"
> > 'e'
> > >>> "\x1"
> > ValueError
> > >>> "\x\x"
> > ValueError
> > >>>
> 
> [?!ng]
> > I'm quite certain that this should be a SyntaxError, not a
> > ValueError:
> >
> >     >>> "\x1"
> >     SyntaxError: two hex digits are required after \x
> >     >>> "\x\x"
> >     SyntaxError: two hex digits are required after \x
> >
> > Otherwise, +1.  Sounds great.
> 
> SyntaxError was my original pick too.  Guido picked ValueError instead
> because the corresponding "not enough hex digits" error in Unicode strings
> for damaged \u1234 escapes raises UnicodeError today, which is a subclass of
> ValueError.
> 
> I couldn't care less, and remain +1 either way.  On the chance that the BDFL
> may have changed his mind, I've copied him on this msg,  This is your one &
> only chance to prevail <wink>.

The reason for Unicode raising a UnicodeError is that the
string is passed through a codec in order to be converted to
Unicode. Codecs raise ValueErrors for encoding errors.

The "\x..." errors should probably be handled in the same
way to assure forward compatibility (they might be passed through
codecs as well in some future Python version in order to
implement source code encodings).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From akuchlin at cnri.reston.va.us  Fri Aug  4 14:45:06 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Fri, 4 Aug 2000 08:45:06 -0400
Subject: [Python-Dev] curses progress
In-Reply-To: <200008040747.DAA02323@snark.thyrsus.com>; from esr@snark.thyrsus.com on Fri, Aug 04, 2000 at 03:47:34AM -0400
References: <200008040747.DAA02323@snark.thyrsus.com>
Message-ID: <20000804084506.B5870@newcnri.cnri.reston.va.us>

On Fri, Aug 04, 2000 at 03:47:34AM -0400, Eric S. Raymond wrote:
>OK, I've added docs for curses.textpad and curses.wrapper.  Did we
>ever settle on a final location in the distribution tree for the
>curses HOWTO?

Fred and GvR thought a separate SF project would be better, so I
created http://sourceforge.net/projects/py-howto .  You've already
been added as a developer, as have Moshe and Fred.  Just check out the
CVS tree (directory, really) and put it in the Doc/ subdirectory of
the Python CVS tree.  Preparations for a checkin mailing list are
progressing, but still not complete.

--amk



From thomas at xs4all.net  Fri Aug  4 15:01:35 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 15:01:35 +0200
Subject: [Python-Dev] PEP 203 Augmented Assignment
In-Reply-To: <200007270559.AAA04753@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Jul 27, 2000 at 12:59:15AM -0500
References: <20000725230322.N266@xs4all.nl> <200007270559.AAA04753@cj20424-a.reston1.va.home.com>
Message-ID: <20000804150134.J266@xs4all.nl>

[Don't be scared, I'm revisiting this thread for a purpose -- this isn't a
time jump ;-)]

On Thu, Jul 27, 2000 at 12:59:15AM -0500, Guido van Rossum wrote:

> I'm making up opcodes -- the different variants of LOAD and STORE
> don't matter.  On the right I'm displaying the stack contents after
> execution of the opcode (push appends to the end).  I'm writing
> 'result' to indicate the result of the += operator.

>   a[i] += b
> 
>       LOAD a			[a]
>       DUP			[a, a]
>       LOAD i			[a, a, i]
>       DUP			[a, a, i, i]
>       ROT3			[a, i, a, i]
>       GETITEM			[a, i, a[i]]
>       LOAD b			[a, i, a[i], b]
>       AUGADD			[a, i, result]
>       SETITEM			[]
> 
> I'm leaving the slice variant out; I'll get to that in a minute.

[ And later you gave an example of slicing using slice objects, rather than
the *SLICE+x opcodes ]

I have two tiny problems with making augmented assignment use the current
LOAD/STORE opcodes in the way Guido pointed out, above. One has to do with
the order of the arguments, and the other with ROT_FOUR. And they're closely
related, too :P

The question is in what order the expression

x += y

is evaluated. 

x = y

evaluates 'y' first, then 'x', but 

x + y

evaluates 'x' first, and then 'y'. 

x = x + y

Would thus evaluate 'x', then 'y', and then 'x' (for storing the result.)
(The problem isn't with single-variable expressions like these examples, of
course, but with expressions with sideeffects.)

I think it makes sense to make '+=' like '+', in that it evaluates the lhs
first. However, '+=' is as much '=' as it is '+', so it also makes sense to
evaluate the rhs first. There are plenty of arguments both ways, and both
sides of my brain have been beating each other with spiked clubs for the
better part of a day now ;) On the other hand, how important is this issue ?
Does Python say anything about the order of argument evaluation ? Does it
need to ?

After making up your mind about the above issue, there's another problem,
and that's the generated bytecode.

If '+=' should be as '=' and evaluate the rhs first, here's what the
bytecode would have to look like for the most complicated case (two-argument
slicing.)

a[b:c] += i

LOAD i			[i]
LOAD a			[i, a]
DUP_TOP			[i, a, a]
LOAD b			[i, a, a, b]
DUP_TOP			[i, a, a, b, b]
ROT_THREE		[i, a, b, a, b]
LOAD c			[i, a, b, a, b, c]
DUP_TOP			[i, a, b, a, b, c, c]
ROT_FOUR		[i, a, b, c, a, b, c]
SLICE+3			[i, a, b, c, a[b:c]]
ROT_FIVE		[a[b:c], i, a, b, c]
ROT_FIVE		[c, a[b:c], i, a, b]
ROT_FIVE		[b, c, a[b:c], i, a]
ROT_FIVE		[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
STORE_SLICE+3		[]

So, *two* new bytecodes, 'ROT_FOUR' and 'ROT_FIVE', just to get the right
operands in the right place.

On the other hand, if the *left* hand side is evaluated first, it would look
like this:

a[b:c] += i

LOAD a			[a]
DUP_TOP			[a, a]
LOAD b			[a, a, b]
DUP_TOP			[a, a, b, b]
ROT_THREE		[a, b, a, b]
LOAD c			[a, b, a, b, c]
DUP_TOP			[a, b, a, b, c, c]
ROT_FOUR		[a, b, c, a, b, c]
SLICE+3			[a, b, c, a[b:c]]
LOAD i			[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
STORE_SLICE+3		[]

A lot shorter, and it only needs ROT_FOUR, not ROT_FIVE. An alternative
solution is to drop ROT_FOUR too, and instead use a DUP_TOPX argument-opcode
that duplicates the top 'x' items:

LOAD a			[a]
LOAD b			[a, b]
LOAD c			[a, b, c]
DUP_TOPX 3		[a, b, c, a, b, c]
SLICE+3			[a, b, c, a[b:c]]
LOAD i			[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
STORE_SLICE+3		[]

I think 'DUP_TOPX' makes more sense than ROT_FOUR, as DUP_TOPX could be used
in the bytecode for 'a[b] += i' and 'a.b += i' as well. (Guido's example
would become something like this:

a[b] += i

LOAD a			[a]
LOAD b			[a, b]
DUP_TOPX 2		[a, b, a, b]
BINARY_SUBSCR		[a, b, a[b]]
LOAD i			[a, b, a[b], i]
INPLACE_ADD		[a, b, result]
STORE_SUBSCR		[]

So, *bytecode* wise, evaluating the lhs of '+=' first is easiest. It
requires a lot more hacking of compile.c, but I think I can manage that.
However, one part of me is still yelling that '+=' should evaluate its
arguments like '=', not '+'. Which part should I lobotomize ? :)
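[A historical footnote: lhs-first is the order CPython ultimately adopted for
augmented assignment. A small standalone sketch (the tracing helper `f` is
illustrative) shows that in an augmented slice assignment the target's pieces
are evaluated before the right-hand side:]

```python
log = []

def f(name, value):
    # Record the order in which operands are evaluated.
    log.append(name)
    return value

a = f('a', [1, 2, 3, 4])
a[f('b', 1):f('c', 3)] += f('i', [9])

assert log == ['a', 'b', 'c', 'i']   # lhs evaluated first, then rhs
assert a == [1, 2, 3, 9, 4]
```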

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug  4 15:08:58 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 15:08:58 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test test_linuxaudiodev.py,1.1,1.2
In-Reply-To: <200008041259.FAA24651@slayer.i.sourceforge.net>; from moshez@users.sourceforge.net on Fri, Aug 04, 2000 at 05:59:43AM -0700
References: <200008041259.FAA24651@slayer.i.sourceforge.net>
Message-ID: <20000804150858.K266@xs4all.nl>

On Fri, Aug 04, 2000 at 05:59:43AM -0700, Moshe Zadka wrote:

> Log Message:
> The only error the test suite skips is currently ImportError -- so that's
> what we raise. If you see a problem with this patch, say so and I'll
> retract.

test_support creates a class 'TestSkipped', which has a docstring that
suggests it can be used in the same way as ImportError. However, it doesn't
work ! Is that intentional ? The easiest fix to make it work is probably
making TestSkipped a subclass of ImportError, rather than Error (which it
is, now.)
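[The fix sketched above is easy to try in isolation; the class names below
mirror test_support but this is a standalone sketch, not the actual module.
Once TestSkipped derives from ImportError, any except clause that already
catches ImportError catches the skip as well:]

```python
class Error(Exception):
    """Base class, roughly what test_support.Error looks like."""

class TestSkipped(ImportError):
    """Raised to skip a test; deriving from ImportError means existing
    'except ImportError' handlers treat it as a skip."""

def run():
    raise TestSkipped('optional module not available')

try:
    run()
except ImportError as exc:
    result = 'skipped: %s' % exc

assert result == 'skipped: optional module not available'
```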

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Fri Aug  4 15:11:38 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 16:11:38 +0300 (IDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test
 test_linuxaudiodev.py,1.1,1.2
In-Reply-To: <20000804150858.K266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008041610180.16446-100000@sundial>

On Fri, 4 Aug 2000, Thomas Wouters wrote:

> On Fri, Aug 04, 2000 at 05:59:43AM -0700, Moshe Zadka wrote:
> 
> > Log Message:
> > The only error the test suite skips is currently ImportError -- so that's
> > what we raise. If you see a problem with this patch, say so and I'll
> > retract.
> 
> test_support creates a class 'TestSkipped', which has a docstring that
> suggests it can be used in the same way as ImportError. However, it doesn't
> work ! Is that intentional ? The easiest fix to make it work is probably
> making TestSkipped a subclass of ImportError, rather than Error (which it
> is, now.)

Thanks for the tip, Thomas! I didn't know about it -- but I just read
the regrtest.py code, and it seemed to be the only exception it catches.
Why not just add test_support.TestSkipped to the exception it catches
when it catches the ImportError?
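[The alternative suggested here, catching both exception types side by side,
needs no inheritance change at all. A minimal standalone sketch (TestSkipped
and run_test are illustrative names, not the actual regrtest.py code):]

```python
class TestSkipped(Exception):
    pass

def run_test(test_func):
    # Treat a skip exactly like a missing import.
    try:
        test_func()
    except (ImportError, TestSkipped) as exc:
        return ('skipped', str(exc))
    return ('passed', None)

def no_audio():
    raise TestSkipped('no audio device')

assert run_test(no_audio) == ('skipped', 'no audio device')
assert run_test(lambda: None) == ('passed', None)
```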

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Fri Aug  4 15:19:31 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 15:19:31 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test test_linuxaudiodev.py,1.1,1.2
In-Reply-To: <Pine.GSO.4.10.10008041610180.16446-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 04, 2000 at 04:11:38PM +0300
References: <20000804150858.K266@xs4all.nl> <Pine.GSO.4.10.10008041610180.16446-100000@sundial>
Message-ID: <20000804151931.L266@xs4all.nl>

On Fri, Aug 04, 2000 at 04:11:38PM +0300, Moshe Zadka wrote:

> > test_support creates a class 'TestSkipped', which has a docstring that
> > suggests it can be used in the same way as ImportError. However, it doesn't
> > work ! Is that intentional ? The easiest fix to make it work is probably
> > making TestSkipped a subclass of ImportError, rather than Error (which it
> > is, now.)

> Thanks for the tip, Thomas! I didn't know about it -- but I just read
> the regrtest.py code, and it seemed to be the only exception it catches.
> Why not just add test_support.TestSkipped to the exception it catches
> when it catches the ImportError?

Right. Done. Now to update all those tests that raise ImportError when they
*mean* 'TestSkipped' :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug  4 15:26:53 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 4 Aug 2000 09:26:53 -0400 (EDT)
Subject: [Python-Dev] curses progress
In-Reply-To: <200008040747.DAA02323@snark.thyrsus.com>
References: <200008040747.DAA02323@snark.thyrsus.com>
Message-ID: <14730.50333.391218.736370@cj42289-a.reston1.va.home.com>

Eric S. Raymond writes:
 > OK, I've added docs for curses.textpad and curses.wrapper.  Did we
 > ever settle on a final location in the distribution tree for the
 > curses HOWTO?

  Andrew is creating a new project on SourceForge.
  I think this is the right thing to do.  We may want to discuss
packaging, to make it easier for users to get to the documentation
they need; this will have to wait until after 1.6.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Fri Aug  4 16:59:35 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 09:59:35 -0500
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: Your message of "Fri, 04 Aug 2000 15:58:52 +1000."
             <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com> 
References: <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com> 
Message-ID: <200008041459.JAA01621@cj20424-a.reston1.va.home.com>

> [Re forcing all extensions to use PythonExtensionInit_XXX]

[GvR]
> > I sort-of like this idea -- at least at the +0 level.

[MH]
> Since this email there have been some strong objections to this.  I too
> would weigh in at -1 for this, simply for the amount of work it would cost
> me personally!

OK.  Dead it is.  -1.

> Shall this be checked in to the 1.6 and 2.0 trees?

Yes, I'll do so.

> "Fatal Python error: Interpreter not initialized" might not be too helpful,
> but it's surely better than "PyThreadState_Get: no current thread"...

Yes.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Aug  4 17:06:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:06:33 -0500
Subject: [Python-Dev] FW: submitting patches against 1.6a2
In-Reply-To: Your message of "Fri, 04 Aug 2000 03:06:21 -0400."
             <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com> 
Message-ID: <200008041506.KAA01874@cj20424-a.reston1.va.home.com>

> Anyone competent with urllib care to check out this fellow's complaint?

It arrived on June 14, so I probably ignored it -- with 1000s of other
messages received while I was on vacation.  This was before we started
using the SF PM.

But I still have his email.  Someone else please look at this!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


Subject: [Patches] urllib.py patch
From: Paul Schreiber <paul at commerceflow.com>
To: patches at python.org
Date: Wed, 14 Jun 2000 16:52:02 -0700
Content-Type: multipart/mixed;
 boundary="------------3EE36A3787159ED881FD3EC3"

This is a multi-part message in MIME format.
--------------3EE36A3787159ED881FD3EC3
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

I confirm that, to the best of my knowledge and belief, this
contribution is free of any claims of third parties under
copyright, patent or other rights or interests ("claims").  To
the extent that I have any such claims, I hereby grant to CNRI a
nonexclusive, irrevocable, royalty-free, worldwide license to
reproduce, distribute, perform and/or display publicly, prepare
derivative versions, and otherwise use this contribution as part
of the Python software and its related documentation, or any
derivative versions thereof, at no cost to CNRI or its licensed
users, and to authorize others to do so.

I acknowledge that CNRI may, at its sole discretion, decide
whether or not to incorporate this contribution in the Python
software and its related documentation.  I further grant CNRI
permission to use my name and other identifying information
provided to CNRI by me for use in connection with the Python
software and its related documentation.

Patch description
-----------------
This addresses four issues:

(1) usernames and passwords in urls with special characters are now
decoded properly. i.e. http://foo%2C:bar at www.whatever.com/

(2) Basic Auth support has been added to HTTPS, like it was in HTTP.

(3) Version 1.92 sent the POSTed data, but did not deal with errors
(HTTP responses other than 200) properly. HTTPS now behaves the same way
HTTP does.

(4) made URL-checking behave the same way with HTTPS as it does with
HTTP (changed == to !=).


Paul Schreiber
--------------3EE36A3787159ED881FD3EC3
Content-Type: text/plain; charset=us-ascii;
 name="urllib-diff-2"
Content-Disposition: inline;
 filename="urllib-diff-2"
Content-Transfer-Encoding: 7bit

*** urllib.old	Tue Jun 13 18:27:02 2000
--- urllib.py	Tue Jun 13 18:33:27 2000
***************
*** 302,316 ****
          def open_https(self, url, data=None):
              """Use HTTPS protocol."""
              import httplib
              if type(url) is type(""):
                  host, selector = splithost(url)
!                 user_passwd, host = splituser(host)
              else:
                  host, selector = url
                  urltype, rest = splittype(selector)
!                 if string.lower(urltype) == 'https':
                      realhost, rest = splithost(rest)
!                     user_passwd, realhost = splituser(realhost)
                      if user_passwd:
                          selector = "%s://%s%s" % (urltype, realhost, rest)
                  #print "proxy via https:", host, selector
--- 302,325 ----
          def open_https(self, url, data=None):
              """Use HTTPS protocol."""
              import httplib
+             user_passwd = None
              if type(url) is type(""):
                  host, selector = splithost(url)
!                 if host:
!                     user_passwd, host = splituser(host)
!                     host = unquote(host)
!                 realhost = host
              else:
                  host, selector = url
                  urltype, rest = splittype(selector)
!                 url = rest
!                 user_passwd = None
!                 if string.lower(urltype) != 'https':
!                     realhost = None
!                 else:
                      realhost, rest = splithost(rest)
!                     if realhost:
!                         user_passwd, realhost = splituser(realhost)
                      if user_passwd:
                          selector = "%s://%s%s" % (urltype, realhost, rest)
                  #print "proxy via https:", host, selector
***************
*** 331,336 ****
--- 340,346 ----
              else:
                  h.putrequest('GET', selector)
              if auth: h.putheader('Authorization: Basic %s' % auth)
+             if realhost: h.putheader('Host', realhost)
              for args in self.addheaders: apply(h.putheader, args)
              h.endheaders()
              if data is not None:
***************
*** 340,347 ****
              if errcode == 200:
                  return addinfourl(fp, headers, url)
              else:
!                 return self.http_error(url, fp, errcode, errmsg, headers)
!   
      def open_gopher(self, url):
          """Use Gopher protocol."""
          import gopherlib
--- 350,360 ----
              if errcode == 200:
                  return addinfourl(fp, headers, url)
              else:
!                 if data is None:
!                     return self.http_error(url, fp, errcode, errmsg, headers)
!                 else:
!                     return self.http_error(url, fp, errcode, errmsg, headers, data)
! 
      def open_gopher(self, url):
          """Use Gopher protocol."""
          import gopherlib
***************
*** 872,878 ****
          _userprog = re.compile('^([^@]*)@(.*)$')
  
      match = _userprog.match(host)
!     if match: return match.group(1, 2)
      return None, host
  
  _passwdprog = None
--- 885,891 ----
          _userprog = re.compile('^([^@]*)@(.*)$')
  
      match = _userprog.match(host)
!     if match: return map(unquote, match.group(1, 2))
      return None, host
  
  _passwdprog = None


--------------3EE36A3787159ED881FD3EC3--


_______________________________________________
Patches mailing list
Patches at python.org
http://www.python.org/mailman/listinfo/patches



From guido at beopen.com  Fri Aug  4 17:11:03 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:11:03 -0500
Subject: [Python-Dev] Go \x yourself
In-Reply-To: Your message of "Fri, 04 Aug 2000 01:38:12 MST."
             <Pine.LNX.4.10.10008040136490.5008-100000@localhost> 
References: <Pine.LNX.4.10.10008040136490.5008-100000@localhost> 
Message-ID: <200008041511.KAA01925@cj20424-a.reston1.va.home.com>

> I'm quite certain that this should be a SyntaxError, not a ValueError:
> 
>     >>> "\x1"
>     SyntaxError: two hex digits are required after \x
>     >>> "\x\x"
>     SyntaxError: two hex digits are required after \x
> 
> Otherwise, +1.  Sounds great.

No, problems with literal interpretations traditionally raise
"runtime" exceptions rather than syntax errors.  E.g.

>>> 111111111111111111111111111111111111
OverflowError: integer literal too large
>>> u'\u123'
UnicodeError: Unicode-Escape decoding error: truncated \uXXXX
>>>

Note that UnicodeError is a subclass of ValueError.
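[That subclass relationship still holds in modern Python, so a handler written
for ValueError also catches escape-decoding errors raised at runtime:]

```python
assert issubclass(UnicodeError, ValueError)

try:
    b'\\u123'.decode('unicode_escape')   # truncated \uXXXX escape
except ValueError as exc:
    caught = type(exc).__name__

assert caught == 'UnicodeDecodeError'
```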

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From moshez at math.huji.ac.il  Fri Aug  4 16:11:00 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 17:11:00 +0300 (IDT)
Subject: [Python-Dev] FW: submitting patches against 1.6a2
In-Reply-To: <200008041506.KAA01874@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008041709450.16446-100000@sundial>

On Fri, 4 Aug 2000, Guido van Rossum wrote:

> > Anyone competent with urllib care to check out this fellow's complaint?
> 
> It arrived on June 14, so I probably ignored it -- with 1000s of other
> messages received while I was on vacation.  This was before we started
> using the SF PM.
> 
> But I still have his email.  Someone else please look at this!

AFAIK, those are the two urllib patches assigned to Jeremy.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From akuchlin at mems-exchange.org  Fri Aug  4 16:13:05 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 4 Aug 2000 10:13:05 -0400
Subject: [Python-Dev] FW: submitting patches against 1.6a2
In-Reply-To: <200008041506.KAA01874@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 04, 2000 at 10:06:33AM -0500
References: <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com> <200008041506.KAA01874@cj20424-a.reston1.va.home.com>
Message-ID: <20000804101305.A11929@kronos.cnri.reston.va.us>

On Fri, Aug 04, 2000 at 10:06:33AM -0500, Guido van Rossum wrote:
>It arrived on June 14, so I probably ignored it -- with 1000s of other
>messages received while I was on vacation.  This was before we started
>using the SF PM.

I think this is SF patch#100880 -- I entered it so it wouldn't get lost.

--amk



From guido at beopen.com  Fri Aug  4 17:26:45 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:26:45 -0500
Subject: [Python-Dev] PEP 203 Augmented Assignment
In-Reply-To: Your message of "Fri, 04 Aug 2000 15:01:35 +0200."
             <20000804150134.J266@xs4all.nl> 
References: <20000725230322.N266@xs4all.nl> <200007270559.AAA04753@cj20424-a.reston1.va.home.com>  
            <20000804150134.J266@xs4all.nl> 
Message-ID: <200008041526.KAA02071@cj20424-a.reston1.va.home.com>

[Thomas]
> The question is in what order the expression
> 
> x += y
> 
> is evaluated. 
> 
> x = y
> 
> evaluates 'y' first, then 'x', but 
> 
> x + y
> 
> evaluates 'x' first, and then 'y'. 
> 
> x = x + y
> 
> Would thus evaluate 'x', then 'y', and then 'x' (for storing the result.)
> (The problem isn't with single-variable expressions like these examples, of
> course, but with expressions with side effects.)

Yes.  And note that the Python reference manual specifies the
execution order (or at least tries to) -- I figured that in a
user-friendly interpreted language, predictability is more important
than some optimizer being able to speed your code up a tiny bit by
rearranging evaluation order.

> I think it makes sense to make '+=' like '+', in that it evaluates the lhs
> first. However, '+=' is as much '=' as it is '+', so it also makes sense to
> evaluate the rhs first. There are plenty of arguments both ways, and both
> sides of my brain have been beating each other with spiked clubs for the
> better part of a day now ;) On the other hand, how important is this issue ?
> Does Python say anything about the order of argument evaluation ? Does it
> need to ?

I say that in x += y, x should be evaluated before y.

> After making up your mind about the above issue, there's another problem,
> and that's the generated bytecode.
[...]
> A lot shorter, and it only needs ROT_FOUR, not ROT_FIVE. An alternative
> solution is to drop ROT_FOUR too, and instead use a DUP_TOPX argument-opcode
> that duplicates the top 'x' items:

Sure.

> However, one part of me is still yelling that '+=' should evaluate its
> arguments like '=', not '+'. Which part should I lobotomize ? :)

That part.  If you see x+=y as shorthand for x=x+y, x gets evaluated
before y anyway!  We're saving the second evaluation of x, not the
first one!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Aug  4 17:46:57 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:46:57 -0500
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: Your message of "Thu, 03 Aug 2000 17:21:04 EST."
             <14729.61520.11958.530601@beluga.mojam.com> 
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net> <3989454C.5C9EF39B@lemburg.com> <200008031256.HAA06107@cj20424-a.reston1.va.home.com> <398979D0.5AF80126@lemburg.com>  
            <14729.61520.11958.530601@beluga.mojam.com> 
Message-ID: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>

[Skip]
> eh... I don't like these do two things at once kind of methods.  I see
> nothing wrong with
> 
>     >>> dict = {}
>     >>> dict['hello'] = dict.get('hello', [])
>     >>> dict['hello'].append('world')
>     >>> print dict
>     {'hello': ['world']}
> 
> or
> 
>     >>> d = dict['hello'] = dict.get('hello', [])
>     >>> d.insert(0, 'cruel')
>     >>> print dict
>     {'hello': ['cruel', 'world']}
> 
> for the obsessively efficiency-minded folks.

Good!  Two lines instead of three, and only two dict lookups in the
latter one.

> Also, we're talking about a method that would generally only be useful when
> dictionaries have values which were mutable objects.  Regardless of how
> useful instances and lists are, I still find that my predominant day-to-day
> use of dictionaries is with strings as keys and values.  Perhaps that's just
> the nature of my work.

Must be.  I have used the above two idioms many times -- a dict of
lists is pretty common.  I believe that the fact that you don't need
it is the reason why you don't like it.

I believe that as long as we agree that

  dict['hello'] += 1

is clearer (less strain on the reader's brain) than

  dict['hello'] = dict['hello'] + 1

we might as well look for a clearer way to spell the above idiom.

My current proposal (violating my own embargo against posting proposed
names to the list :-) would be

  dict.default('hello', []).append('hello')
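[The method proposed here did in fact land in Python 2.0, under the name
dict.setdefault; with it the idiom reads:]

```python
d = {}
d.setdefault('hello', []).append('world')
d.setdefault('hello', []).append('cruel')   # reuses the existing list

assert d == {'hello': ['world', 'cruel']}
```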

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From paul at prescod.net  Fri Aug  4 17:52:11 2000
From: paul at prescod.net (Paul Prescod)
Date: Fri, 04 Aug 2000 11:52:11 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>              <3986794E.ADBB938C@prescod.net>  <200008011820.NAA30284@cj20424-a.reston1.va.home.com> <004d01bffc50$522fa2a0$f2a6b5d4@hagrid>
Message-ID: <398AE6AB.9D8F943B@prescod.net>

Fredrik Lundh wrote:
> 
> ...
> 
> how about letting _winreg export all functions with their
> win32 names, and adding a winreg.py which looks some-
> thing like this:
> 
>     from _winreg import *
> 
>     class Key:
>         ....
> 
>     HKEY_CLASSES_ROOT = Key(...)
>     ...

To me, that would defeat the purpose. Have you looked at the "*"
exported by _winreg? The whole point is to impose some organization on
something that is totally disorganized (because that's how the C module
is).

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From skip at mojam.com  Fri Aug  4 20:07:28 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 4 Aug 2000 13:07:28 -0500 (CDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
	<14728.63466.263123.434708@anthem.concentric.net>
	<3989454C.5C9EF39B@lemburg.com>
	<200008031256.HAA06107@cj20424-a.reston1.va.home.com>
	<398979D0.5AF80126@lemburg.com>
	<14729.61520.11958.530601@beluga.mojam.com>
	<200008041546.KAA02168@cj20424-a.reston1.va.home.com>
Message-ID: <14731.1632.44037.499807@beluga.mojam.com>

    >> Also, we're talking about a method that would generally only be
    >> useful when dictionaries have values which were mutable objects.
    >> Irregardless of how useful instances and lists are, I still find that
    >> my predominant day-to-day use of dictionaries is with strings as keys
    >> and values.  Perhaps that's just the nature of my work.

    Guido> Must be.  I have used the above two idioms many times -- a dict
    Guido> of lists is pretty common.  I believe that the fact that you
    Guido> don't need it is the reason why you don't like it.

I do use lists in dicts as well, it's just that it seems to me that using
strings as values (especially because I use bsddb a lot and often want to
map dictionaries to files) dominates.  The two examples I posted are what
I've used for a long time.  I guess I just don't find them to be big
limitations.

Skip



From barry at scottb.demon.co.uk  Sat Aug  5 01:19:52 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Sat, 5 Aug 2000 00:19:52 +0100
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <01d701bffcd7$46a74a00$f2a6b5d4@hagrid>
Message-ID: <000d01bffe6a$7e4bab60$060210ac@private>

> > Yes indeed once the story of 1.6 and 2.0 is out I expect folks
> > will skip 1.6.   For example, if your win32 stuff is not ported then
> > Python 1.6 is not usable on Windows/NT.
> 
> "not usable"?
> 
> guess you haven't done much cross-platform development lately...

	True. On Unix I have an ISDN status monitor, it depends on
	FreeBSD interfaces and PIL. On Windows I have an SCM
	solution that depends on COM to drive SourceSafe.

	Without Mark's COM support I cannot run any of my code on
	Windows.

> > Change the init function name to a new name PythonExtensionInit_ say.
> > Pass in the API version for the extension writer to check. If the
> > version is bad for this extension returns without calling any python
> 
> huh?  are you seriously proposing to break every single C extension
> ever written -- on each and every platform -- just to trap an error
> message caused by extensions linked against 1.5.2 on your favourite
> platform?

	What makes you think that a crash will not happen under Unix
	when you change the API? You just don't get the Windows crash.

	As this thread has pointed out you have no intention of checking
	for binary compatibility on the API as you move up versions.
 
> > Small code change in python core. But need to tell extension writers
> > what the new interface is and update all extensions within the python
> > CVS tree.
>
> you mean "update the source code for all extensions ever written."

	Yes, I'm aware of the impact.

> -1
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 



From gward at python.net  Sat Aug  5 02:53:09 2000
From: gward at python.net (Greg Ward)
Date: Fri, 4 Aug 2000 20:53:09 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 04, 2000 at 12:27:32PM +1000
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>
Message-ID: <20000804205309.A1013@beelzebub>

On 04 August 2000, Mark Hammond said:
> I would prefer python20_bcpp.lib, but that is not an issue.

Good suggestion: the contents of the library are more important than the 
format.  Rene, can you make this change and include it in your next
patch?  Or did you have some hidden, subtle reason for "bcpp_python20" as 
opposed to "python20_bcpp"?

> I am a little confused by the intention, tho.  Wouldn't it make sense to
> have Borland builds of the core create a Python20.lib, then we could keep
> the pragma in too?
> 
> If people want to use Borland for extensions, can't we ask them to use that
> same compiler to build the core too?  That would seem to make lots of the
> problems go away?

But that requires people to build all of Python from source, which I'm
guessing is a bit more bothersome than building an extension or two from 
source.  Especially since Python is already distributed as a very
easy-to-use binary installer for Windows, but most extensions are not.

Rest assured that we probably won't be making things *completely*
painless for those who do not toe Chairman Bill's party line and insist
on using "non-standard" Windows compilers.  They'll probably have to get
python20_bcpp.lib (or python20_gcc.lib, or python20_lcc.lib) on their
own -- whether downloaded or generated, I don't know.  But the
alternative is to include 3 or 4 python20_xxx.lib files in the standard
Windows distribution, which I think is silly.

> But assuming there are good reasons, I am happy.  It won't bother me for
> some time yet ;-) <just deleted a rant about the fact that anyone on
> Windows who values their time in more than cents-per-hour would use MSVC,
> but deleted it ;->

Then I won't even write my "it's not just about money, it's not even
about features, it's about the freedom to use the software you want to
use no matter what it says in Chairman Bill's book of wisdom" rant.

Windows: the Cultural Revolution of the 90s.  ;-)

        Greg
-- 
Greg Ward - geek-at-large                               gward at python.net
http://starship.python.net/~gward/
What happens if you touch these two wires tog--



From guido at beopen.com  Sat Aug  5 04:27:59 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 21:27:59 -0500
Subject: [Python-Dev] Python 1.6b1 is released!
Message-ID: <200008050227.VAA11161@cj20424-a.reston1.va.home.com>

Python 1.6b1, with the new CNRI open source license, is released today
from the python.org website.  Read all about it:

    http://www.python.org/1.6/

Here's a little background on the new license (also posted on
www.pythonlabs.com):

CNRI has funded Python development for five years and held copyright,
but never placed a CNRI-specific license on the software.  In order to
clarify the licensing, BeOpen.com has been working with CNRI to
produce a new CNRI license.  The result of these discussions (which
included Eric Raymond, Bruce Perens, Richard Stallman and Python
Consortium members) has produced the CNRI Open Source License, under
which Python 1.6b1 has been released.

Bob Weiner, CTO of BeOpen.com, on the result of the licensing
discussions: "Bob Kahn [CNRI's President] worked with us to understand
the particular needs of the Open Source community and Python users.
The result is a very open license."

The new CNRI license was approved by the Python Consortium members, at
a meeting of the Python Consortium on Friday, July 21, 2000 in
Monterey, California.

Eric Raymond, President of the Open Source Initiative (OSI), reports
that OSI's Board of Directors voted to certify the new CNRI license
[modulo minor editing] as fully Open Source compliant.

Richard Stallman, founder of the Free Software Foundation, is in
discussion with CNRI about the new license's compatibility with the
GPL.  We are hopeful that the remaining issues will be resolved in
favor of GPL compatibility before the release of Python 1.6 final.

We would like to thank all who graciously volunteered their time to
help make these results possible: Bob Kahn for traveling out west to
discuss these issues in person; Eric Raymond and Bruce Perens for
their useful contributions to the discussions; Bob Weiner for taking
care of the bulk of the negotiations; Richard Stallman for GNU; and
the Python Consortium representatives for making the consortium
meeting a success!

(And I would personally like to thank Tim Peters for keeping the
newsgroup informed and for significant editing of the text above.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From amk1 at erols.com  Sat Aug  5 06:15:22 2000
From: amk1 at erols.com (A.M. Kuchling)
Date: Sat, 5 Aug 2000 00:15:22 -0400
Subject: [Python-Dev] python-dev summary posted
Message-ID: <200008050415.AAA00811@207-172-146-87.s87.tnt3.ann.va.dialup.rcn.com>

I've posted the python-dev summary for July 16-31 to
comp.lang.python/python-list; interested people can go check it out.

--amk



From just at letterror.com  Sat Aug  5 10:03:33 2000
From: just at letterror.com (Just van Rossum)
Date: Sat, 05 Aug 2000 09:03:33 +0100
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <fc8mosgajj5db74oijjb8e1vbrrvgf0mi5@4ax.com> <bld7joah8z.fsf@bitdiddle.concentric.net> <9b13RLA800i5EwLY@jessikat.fsnet.co.uk> <8mg3au$rtb$1@nnrp1.deja.com>
Message-ID: <398BCA4F.17E23309@letterror.com>

[ CC-d to python-dev from c.l.py ]

Jeremy Hylton wrote:
> It is a conservative response.  JPython is an implementation of Python,
> and compatibility between Python and JPython is important.  It's not
> required for every language feature, of course; you can't load a Java
> class file in C Python.

Jeremy, have you ever *looked* at stackless? Even though it requires
extensive patches in the eval loop, all additional semantics are nicely
hidden in an extension module. The Java argument is a *very* poor one
because of this. No, you can't load a Java class in CPython, and yes,
"import continuation" fails under JPython. So what?

> I'm not sure what you mean by distinguishing between the semantics of
> continuations and the implementation of Stackless Python.  They are
> both issues!  In the second half of my earlier message, I observed that
> we would never add continuations without a PEP detailing their exact
> semantics.  I do not believe such a specification currently exists for
> stackless Python.

That's completely unfair. Stackless has been around *much* longer than
those silly PEPs. It seems stackless isn't in the same league as, say,
"adding @ to the print statement for something that is almost as
conveniently done with a function". I mean, jeez.

> The PEP would also need to document the C interface and how it affects
> people writing extensions and doing embedded work.  Python is a glue
> language and the effects on the glue interface are also important.

The stackless API is 100% backwards compatible. There are (or could/should be)
additional calls for extension writers and embedders that would like
to take advantage of stackless features, but full compatibility is
*there*. To illustrate this: for windows as well as MacOS, there are
DLLs for stackless that you just put in place of the original
Python core DLLs, and *everything* just works.

Christian has done an amazing piece of work, and he's gotten much
praise from the community. I mean, if you *are* looking for a killer
feature to distinguish 1.6 from 2.0, I'd know where to look...

Just



From mal at lemburg.com  Sat Aug  5 11:35:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 05 Aug 2000 11:35:06 +0200
Subject: [Python-Dev] Python 1.6b1 out ?!
Message-ID: <398BDFCA.4D5A262D@lemburg.com>

Strange: either I missed it or Guido chose to release 1.6b1 
in silence, but I haven't seen any official announcement of the
release available from http://www.python.org/1.6/.

BTW, nice holiday, Guido ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Sun Aug  6 01:34:43 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 5 Aug 2000 19:34:43 -0400
Subject: [Python-Dev] Python 1.6b1 out ?!
In-Reply-To: <398BDFCA.4D5A262D@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCECEGOAA.tim_one@email.msn.com>

[M.-A. Lemburg]
> Strange: either I missed it or Guido chose to release 1.6b1
> in silence, but I haven't seen any official announcement of the
> release available from http://www.python.org/1.6/.
>
> BTW, nice holiday, Guido ;-)

There's an announcement at the top of http://www.python.org/, though, and an
announcement about the new license at http://www.pythonlabs.com/.  Guido
also posted to comp.lang.python.  You probably haven't seen the latter if
you use the mailing list gateway, because many mailing lists at python.org
coincidentally got hosed at the same time due to a full disk.  Else your
news server simply hasn't gotten it yet (I saw it come across on
netnews.msn.com, but then Microsoft customers get everything first <wink>).





From thomas at xs4all.net  Sat Aug  5 17:18:30 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 5 Aug 2000 17:18:30 +0200
Subject: [Python-Dev] UNPACK_LIST & UNPACK_TUPLE
Message-ID: <20000805171829.N266@xs4all.nl>

I'm a tad confused about the 'UNPACK_LIST' and 'UNPACK_TUPLE' opcodes. There
doesn't seem to be a difference between the two, yet the way they are
compiled is slightly different (but not much.) I can list all the
differences I can see, but I just don't understand them, and because of that
I'm not sure how to handle them in augmented assignment. UNPACK_LIST just
seems so redundant :)

Wouldn't it make sense to remove the difference between the two, or better
yet, remove UNPACK_LIST (and possibly rename UNPACK_TUPLE to UNPACK_SEQ ?)
We already lost bytecode compatibility anyway!

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Sun Aug  6 01:46:00 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 5 Aug 2000 19:46:00 -0400
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <398BCA4F.17E23309@letterror.com>; from just@letterror.com on Sat, Aug 05, 2000 at 09:03:33AM +0100
References: <fc8mosgajj5db74oijjb8e1vbrrvgf0mi5@4ax.com> <bld7joah8z.fsf@bitdiddle.concentric.net> <9b13RLA800i5EwLY@jessikat.fsnet.co.uk> <8mg3au$rtb$1@nnrp1.deja.com> <398BCA4F.17E23309@letterror.com>
Message-ID: <20000805194600.A7242@thyrsus.com>

Just van Rossum <just at letterror.com>:
> Christian has done an amazing piece of work, and he's gotten much
> praise from the community. I mean, if you *are* looking for a killer
> feature to distinguish 1.6 from 2.0, I'd know where to look...

I must say I agree.  Something pretty similar to Stackless Python is
going to have to happen anyway for the language to make its next major
advance in capability -- generators, co-routining, and continuations.

I also agree that this is a more important debate, and a harder set of
decisions, than the PEPs.  Which means we should start paying attention
to it *now*.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

I don't like the idea that the police department seems bent on keeping
a pool of unarmed victims available for the predations of the criminal
class.
         -- David Mohler, 1989, on being denied a carry permit in NYC



From bwarsaw at beopen.com  Sun Aug  6 01:50:04 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sat, 5 Aug 2000 19:50:04 -0400 (EDT)
Subject: [Python-Dev] Python 1.6b1 out ?!
References: <398BDFCA.4D5A262D@lemburg.com>
	<LNBBLJKPBEHFEDALKOLCCECEGOAA.tim_one@email.msn.com>
Message-ID: <14732.43052.91330.426211@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> There's an announcement at the top of http://www.python.org/,
    TP> though, and an announcement about the new license at
    TP> http://www.pythonlabs.com/.  Guido also posted to
    TP> comp.lang.python.  You probably haven't seen the latter if you
    TP> use the mailing list gateway, because many mailing lists at
    TP> python.org coincidentally got hosed at the same time due to a
    TP> full disk.  Else your news server simply hasn't gotten it yet
    TP> (I saw it come across on netnews.msn.com, but then Microsoft
    TP> customers get everything first <wink>).

And you should soon see the announcement if you haven't already.  All
the mailing lists on python.org should be back online now.  It'll take a
while to clear out the queue, though.

-Barry



From bwarsaw at beopen.com  Sun Aug  6 01:52:05 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sat, 5 Aug 2000 19:52:05 -0400 (EDT)
Subject: [Python-Dev] UNPACK_LIST & UNPACK_TUPLE
References: <20000805171829.N266@xs4all.nl>
Message-ID: <14732.43173.634118.381282@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> I'm a tad confused about the 'UNPACK_LIST' and 'UNPACK_TUPLE'
    TW> opcodes. There doesn't seem to be a difference between the
    TW> two, yet the way they are compiled is slightly different (but
    TW> not much.) I can list all the differences I can see, but I
    TW> just don't understand them, and because of that I'm not sure
    TW> how to handle them in augmented assignment. UNPACK_LIST just
    TW> seems so redundant :)

    TW> Wouldn't it make sense to remove the difference between the
    TW> two, or better yet, remove UNPACK_LIST (and possibly rename
    TW> UNPACK_TUPLE to UNPACK_SEQ ?)  We already lost bytecode
    TW> compatibility anyway!

This is a historical artifact.  I don't remember what version it was,
but at one point there was a difference between

    a, b, c = gimme_a_tuple()

and

    [a, b, c] = gimme_a_list()

That difference was removed, and support was added for any sequence
unpacking.  If changing the bytecode is okay, then there doesn't seem
to be any reason to retain the differences.
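As it happens, the rename suggested in this thread is what later CPython releases did. A small sketch using the modern dis module (so this reflects today's bytecode, not 1.6's) shows both spellings compiling to a single UNPACK_SEQUENCE opcode:

```python
import dis

# Both spellings of sequence unpacking, compiled separately.
tuple_style = compile("a, b, c = seq", "<example>", "exec")
list_style = compile("[a, b, c] = seq", "<example>", "exec")

ops_tuple = [ins.opname for ins in dis.get_instructions(tuple_style)]
ops_list = [ins.opname for ins in dis.get_instructions(list_style)]

# In modern CPython the two forms produce identical opcode streams,
# with one UNPACK_SEQUENCE where UNPACK_LIST/UNPACK_TUPLE used to be.
print(ops_tuple)
```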

-Barry



From jack at oratrix.nl  Sat Aug  5 23:14:08 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Sat, 05 Aug 2000 23:14:08 +0200
Subject: [Python-Dev] New SRE core dump (was: SRE 0.9.8 benchmarks) 
In-Reply-To: Message by "Fredrik Lundh" <effbot@telia.com> ,
	     Thu, 3 Aug 2000 19:19:03 +0200 , <007401bffd6e$ed9bbde0$f2a6b5d4@hagrid> 
Message-ID: <20000805211413.E1224E2670@oratrix.oratrix.nl>

Fredrik,
could you add a PyOS_CheckStack() call to the recursive part of sre
(within #ifdef USE_STACKCHECK, of course)?
I'm getting really really nasty crashes on the Mac if I run the
regression tests...
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 



From jack at oratrix.nl  Sat Aug  5 23:41:15 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Sat, 05 Aug 2000 23:41:15 +0200
Subject: [Python-Dev] strftime()
Message-ID: <20000805214120.A55EEE2670@oratrix.oratrix.nl>

The test_strftime regression test has been failing on the Mac for
ages, and I finally got round to investigating the problem: the
MetroWerks library returns the strings "am" and "pm" for %p but the
regression test expects "AM" and "PM". According to the comments in
the source of the library (long live vendors who provide it! Yeah!)
this is C9X compatibility.

I can of course move the %p to the nonstandard expectations, but maybe 
someone has a better idea?
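For anyone who wants to check what their own platform does, the behaviour is easy to probe from Python; whether %p comes back uppercase, lowercase, or empty is entirely up to the C library and the current locale:

```python
import time

# Probe what the local C library returns for %p at a known hour.
# C99 specifies "am"/"pm" for the C locale; older libraries (and
# the regression test) expect "AM"/"PM", and some locales have no
# AM/PM designators at all.
morning = time.strptime("2000-08-05 09:00", "%Y-%m-%d %H:%M")
designator = time.strftime("%p", morning)
print(repr(designator))
```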
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From bwarsaw at beopen.com  Sun Aug  6 02:12:58 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sat, 5 Aug 2000 20:12:58 -0400 (EDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <fc8mosgajj5db74oijjb8e1vbrrvgf0mi5@4ax.com>
	<bld7joah8z.fsf@bitdiddle.concentric.net>
	<9b13RLA800i5EwLY@jessikat.fsnet.co.uk>
	<8mg3au$rtb$1@nnrp1.deja.com>
	<398BCA4F.17E23309@letterror.com>
	<20000805194600.A7242@thyrsus.com>
Message-ID: <14732.44426.201651.690336@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr at thyrsus.com> writes:

    ESR> I must say I agree.  Something pretty similar to Stackless
    ESR> Python is going to have to happen anyway for the language to
    ESR> make its next major advance in capability -- generators,
    ESR> co-routining, and continuations.

Stackless definitely appeals to me from a coolness factor, though I
don't know how much I'd use those new capabilities that it allows.
The ability to embed Python on hardware where that might otherwise not
be possible without Stackless is also an interesting thing to explore.

    ESR> I also agree that this is a more important debate, and a
    ESR> harder set of decisions, than the PEPs.  Which means we
    ESR> should start paying attention to it *now*.

Maybe a PEP isn't the right venue, but the semantics and externally
visible effects of Stackless need to be documented.  What if JPython
or Python .NET wanted to adopt those same semantics, either by doing
their implementation's equivalent of Stackless or by some other means?
We can't even think about doing that without a clear and complete
specification.

Personally, I don't see Stackless making it into 2.0 and possibly not
2.x.  But I agree it is something to seriously consider for Py3K.

-Barry



From tim_one at email.msn.com  Sun Aug  6 07:07:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 01:07:27 -0400
Subject: [Python-Dev] strftime()
In-Reply-To: <20000805214120.A55EEE2670@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEDOGOAA.tim_one@email.msn.com>

[Jack Jansen]
> The test_strftime regression test has been failing on the Mac for
> ages, and I finally got round to investigating the problem: the
> MetroWerks library returns the strings "am" and "pm" for %p but the
> regression test expects "AM" and "PM". According to the comments in
> the source of the library (long live vendors who provide it! Yeah!)
> this is C9X compatibility.

My copy of a draft C99 std agrees (7.23.3.5) with MetroWerks on this point
(i.e., that %p in the "C" locale becomes "am" or "pm").

> I can of course move the %p to the nonstandard expectations, but maybe
> someone has a better idea?

Not really.  If Python thinks this function is valuable, it "should" offer a
platform-independent implementation, but as nobody has time for that ...





From MarkH at ActiveState.com  Sun Aug  6 07:08:46 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Sun, 6 Aug 2000 15:08:46 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <000d01bffe6a$7e4bab60$060210ac@private>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEHPDDAA.MarkH@ActiveState.com>

[/F]
> > huh?  are you seriously proposing to break every single C extension
> > ever written -- on each and every platform -- just to trap an error
> > message caused by extensions linked against 1.5.2 on your favourite
> > platform?

[Barry]
> 	What makes you think that a crash will not happen under Unix
> 	when you change the API? You just don't get the Windows crash.
>
> 	As this thread has pointed out you have no intention of checking
> 	for binary compatibility on the API as you move up versions.

I intimated the following but did not spell it out, so I will do so here
to clarify.

I was -1 on Barry's solution getting into 1.6, given the time frame.  I
hinted that the solution Guido recently checked in "if
(!Py_IsInitialized()) ..." would not be too great an impact even if Barry's
solution, or one like it, was eventually adopted.

So I think that the adoption of our half-solution (ie, we are really only
forcing a better error message - not even getting a traceback to indicate
_which_ module fails) need not preclude a better solution when we have more
time to implement it...

Mark.




From moshez at math.huji.ac.il  Sun Aug  6 08:23:48 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 6 Aug 2000 09:23:48 +0300 (IDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <20000805194600.A7242@thyrsus.com>
Message-ID: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>

On Sat, 5 Aug 2000, Eric S. Raymond wrote:

> I must say I agree.  Something pretty similar to Stackless Python is
> going to have to happen anyway for the language to make its next major
> advance in capability -- generators, co-routining, and continuations.
> 
> I also agree that this is a more important debate, and a harder set of
> decisions, than the PEPs.  Which means we should start paying attention
> to it *now*.

I tend to disagree. For a while now I've been keeping an eye on the guile
interpreter development (a very cool project, but unfortunately limping
along. It probably will be the .NET of free software, though). In guile,
they were able to implement continuations *without* what we call
stacklessness. Sure, it might look inefficient, but for most applications
(like co-routines) it's actually quite all right. What all that goes to
say is that we should treat Stackless exactly as what it is -- an
implementation detail. Now, that's not putting down Christian's work -- on
the contrary, I think the Python implementation is very important. But
that alone should indicate there's no need for a PEP. I, for one, am for
it, because I happen to think it's a much better implementation. If it
also has the effect of making continuationmodule.c easier to write, well,
that's not an issue in this discussion as far as I'm concerned.

brain-dumping-ly y'rs, Z.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From mal at lemburg.com  Sun Aug  6 10:55:55 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sun, 06 Aug 2000 10:55:55 +0200
Subject: [Python-Dev] Python 1.6b1 out ?!
References: <LNBBLJKPBEHFEDALKOLCCECEGOAA.tim_one@email.msn.com>
Message-ID: <398D281B.E7F118C0@lemburg.com>

Tim Peters wrote:
> 
> [M.-A. Lemburg]
> > Strange: either I missed it or Guido chose to release 1.6b1
> > in silence, but I haven't seen any official announcement of the
> > release available from http://www.python.org/1.6/.
> >
> > BTW, nice holiday, Guido ;-)
> 
> There's an announcement at the top of http://www.python.org/, though, and an
> announcement about the new license at http://www.pythonlabs.com/.  Guido
> also posted to comp.lang.python.  You probably haven't seen the latter if
> you use the mailing list gateway, because many mailing lists at python.org
> coincidentally got hosed at the same time due to a full disk.  Else your
> news server simply hasn't gotten it yet (I saw it come across on
> netnews.msn.com, but then Microsoft customers get everything first <wink>).

I saw the announcement on www.python.org, thanks. (I already
started to miss the usual 100+ Python messages I get into my mailbox
every day ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Sun Aug  6 14:20:56 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sun, 06 Aug 2000 14:20:56 +0200
Subject: [Python-Dev] Pickling using XML as output format
Message-ID: <398D5827.EE8938DD@lemburg.com>

Before starting to reinvent the wheel:

I need a pickle.py compatible module which essentially works
just like pickle.py, but uses XML as output format. I've already
looked at xml_pickle.py (see Parnassus), but this doesn't seem
to handle object references at all. Also, it depends on 
xml.dom which I'd rather avoid.

My idea was to rewrite the format used by pickle in an
XML syntax and then hard-code the DTD into a subclass
of the parser in xmllib.py.

Now, I'm very new to XML, so I may be missing something here...
would this be doable in a fairly sensible way (I'm thinking
of closely sticking to the pickle stream format) ?
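As a rough feasibility sketch (modern-style Python, made-up tag names rather than a real DTD, and only a few builtin types), object references can be preserved the same way pickle's memo does it: emit a <ref> element for any object already seen.

```python
from xml.sax.saxutils import escape

def xml_pickle(obj, memo=None):
    """Serialize obj to an XML string, preserving shared references.

    Hypothetical sketch: only ints, strings, lists and dicts are
    handled, and the element names are invented for illustration.
    """
    if memo is None:
        memo = {}
    oid = id(obj)
    if oid in memo:
        # Already emitted: reference it instead of duplicating it,
        # analogous to pickle's memo (PUT/GET) mechanism.
        return '<ref id="%d"/>' % memo[oid]
    if isinstance(obj, int):
        return '<int>%d</int>' % obj
    if isinstance(obj, str):
        return '<string>%s</string>' % escape(obj)
    memo[oid] = len(memo)
    if isinstance(obj, list):
        body = ''.join(xml_pickle(x, memo) for x in obj)
        return '<list id="%d">%s</list>' % (memo[oid], body)
    if isinstance(obj, dict):
        body = ''.join('<item>%s%s</item>'
                       % (xml_pickle(k, memo), xml_pickle(v, memo))
                       for k, v in obj.items())
        return '<dict id="%d">%s</dict>' % (memo[oid], body)
    raise TypeError("unsupported type: %r" % type(obj))

shared = ["hello"]
doc = xml_pickle([shared, shared])
print(doc)
```

The second occurrence of the shared list comes out as a single <ref> element, so identity survives a round trip in principle.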

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From moshez at math.huji.ac.il  Sun Aug  6 14:46:09 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 6 Aug 2000 15:46:09 +0300 (IDT)
Subject: [Python-Dev] Pickling using XML as output format
In-Reply-To: <398D5827.EE8938DD@lemburg.com>
Message-ID: <Pine.GSO.4.10.10008061544180.20069-100000@sundial>

On Sun, 6 Aug 2000, M.-A. Lemburg wrote:

> Before starting to reinvent the wheel:

Ummmm......I'd wait for some DC (Digital Creations) guy to chime in: I think Zope had
something like that. You might want to ask around on the Zope lists
or search zope.org.

I'm not sure what it has and what it doesn't have, though.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez





From moshez at math.huji.ac.il  Sun Aug  6 15:22:09 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 6 Aug 2000 16:22:09 +0300 (IDT)
Subject: [Python-Dev] Warnings on gcc -Wall
Message-ID: <Pine.GSO.4.10.10008061612490.20069-100000@sundial>

As those of you with a firm eye on python-checkins noticed, I've been
trying to clear the source files (as many of them as I could get to
compile on my setup) from warnings. This is only with gcc -Wall: a future
project of mine is to enable much more warnings and try to clean them too.

There are currently two places where warnings still remain:

 -- readline.c -- readline/history.h is included only on BeOS, and
otherwise prototypes are declared by hand. Does anyone remember why? 

-- ceval.c, in ceval() gcc -Wall (wrongly) complains about opcode and
oparg, which might be used before being initialized. I've had a look at that
code, and I'm certain gcc's flow analysis is simply not good enough.
However, I would like to silence the warning, so I can get used to
building with -Wall -Werror and make sure to mind any warnings. Does
anyone see any problem with putting opcode=0 and oparg=0 near the top?

Any comments welcome, of course.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Sun Aug  6 16:00:26 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 6 Aug 2000 16:00:26 +0200
Subject: [Python-Dev] Warnings on gcc -Wall
In-Reply-To: <Pine.GSO.4.10.10008061612490.20069-100000@sundial>; from moshez@math.huji.ac.il on Sun, Aug 06, 2000 at 04:22:09PM +0300
References: <Pine.GSO.4.10.10008061612490.20069-100000@sundial>
Message-ID: <20000806160025.P266@xs4all.nl>

On Sun, Aug 06, 2000 at 04:22:09PM +0300, Moshe Zadka wrote:

>  -- readline.c -- readline/history.h is included only on BeOS, and
> otherwise prototypes are declared by hand. Does anyone remember why? 

Possibly because old versions of readline don't have history.h ?

> -- ceval.c, in ceval() gcc -Wall (wrongly) complains about opcode and
> oparg which might be used before initialized. I've had a look at that
> code, and I'm certain gcc's flow analysis is simply not good enough.
> However, I would like to silence the warning, so I can get used to
> building with -Wall -Werror and make sure to mind any warnings. Does
> anyone see any problem with putting opcode=0 and oparg=0 near the top?

Actually, I don't think this is true. 'opcode' and 'oparg' get filled inside
the permanent for-loop, but after the check on pending signals and
exceptions. I think it's theoretically possible to have 'things_to_do' on
the first time through the loop, which end up in an exception, thereby
causing the jump to on_error, entering the branch on WHY_EXCEPTION, which
uses oparg and opcode. I'm not sure if initializing opcode/oparg is the
right thing to do, though, but I'm not sure what is, either :-)

As for the checkins, I haven't seen some of the pending checkin-mails pass
by (I did some cleaning up of configure.in last night, for instance, after
the re-indent and grammar change in compile.c that *did* come through.)
Barry (or someone else ;) are those still waiting in the queue, or should we
consider them 'lost' ? 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Sun Aug  6 16:13:10 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 6 Aug 2000 17:13:10 +0300 (IDT)
Subject: [Python-Dev] Warnings on gcc -Wall
In-Reply-To: <20000806160025.P266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008061703040.20069-100000@sundial>

On Sun, 6 Aug 2000, Thomas Wouters wrote:

> On Sun, Aug 06, 2000 at 04:22:09PM +0300, Moshe Zadka wrote:
> 
> >  -- readline.c -- readline/history.h is included only on BeOS, and
> > otherwise prototypes are declared by hand. Does anyone remember why? 
> 
> Possibly because old versions of readline don't have history.h ?

And it did have the history functions? If so, maybe we can include
<readline/readline.h> unconditionally, and switch on the readline version.
If not, I'd just declare support for earlier versions of readline
nonexistent and be over and done with it.

> 'opcode' and 'oparg' get filled inside
> the permanent for-loop, but after the check on pending signals and
> exceptions. I think it's theoretically possible to have 'things_to_do' on
> the first time through the loop, which end up in an exception, thereby
> causing the jump to on_error, entering the branch on WHY_EXCEPTION, which
> uses oparg and opcode. I'm not sure if initializing opcode/oparg is the
> right thing to do, though, but I'm not sure what is, either :-)

Probably initializing them before the "goto on_error" to some dummy value,
then checking for this dummy value in the relevant place. You're right,
of course, I hadn't noticed the goto.

> As for the checkins, I haven't seen some of the pending checkin-mails pass
> by (I did some cleaning up of configure.in last night, for instance, after
> the re-indent and grammar change in compile.c that *did* come through.)
> Barry (or someone else ;) are those still waiting in the queue, or should we
> consider them 'lost' ? 

I got a reject on two e-mails, but I didn't think of saving
them....oooops..well, no matter, most of them were trivial stuff.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tismer at appliedbiometrics.com  Sun Aug  6 16:47:26 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Sun, 06 Aug 2000 16:47:26 +0200
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>
Message-ID: <398D7A7E.2AB1BDF3@appliedbiometrics.com>


Moshe Zadka wrote:
> 
> On Sat, 5 Aug 2000, Eric S. Raymond wrote:
> 
> > I must say I agree.  Something pretty similar to Stackless Python is
> > going to have to happen anyway for the language to make its next major
> > advance in capability -- generators, co-routining, and continuations.
> >
> > I also agree that this is a more important debate, and a harder set of
> > decisions, than the PEPs.  Which means we should start paying attention
> > to it *now*.
> 
> I tend to disagree. For a while now I'm keeping an eye on the guile
> interpreter development (a very cool project, but unfortunately limping
> along. It probably will be the .NET of free software, though). In guile,
> they were able to implement continuations *without* what we call
> stacklessness. Sure, it might look inefficient, but for most applications
> (like co-routines) it's actually quite all right.

Despite the fact that I consider the Guile implementation a pile
of junk code that I would never dig into the way I did with Python*),
you are probably right. Stackless goes a bit too far, in the sense
that it implies abilities for other implementations which are
hard to achieve.

There are in fact other ways to implement coroutines and uthreads.
Stackless happens to achieve all of that and a lot more, and to
be very efficient. Therefore it would be a waste to go back to
a restricted implementation since it exists already. If stackless
didn't go so far, it would probably have been successfully
integrated already. I wanted it all, and luckily I got it all.

On the other hand, there is no need to enforce every Python
implementation to do the full continuation support. In CPython,
continuationmodule.c can be used for such purposes, and it can
be used as a basis for coroutine and generator implementations.
Using Guile's way to implement these would be a possible path
for JPython.
The point is to use only parts of the possibilities and not to
enforce everything for every environment. There is just no point
in shrinking the current implementation down; not even a subset
would be helpful in JPython.

> What all that goes to
> say is that we should treat stackles exactly like it is -- an
> implementation detail. Now, that's not putting down Christian's work -- on
> the contrary, I think the Python implementation is very important. But
> that alone should indicate there's no need for a PEP. I, for one, am for
> it, because I happen to think it's a much better implementation. If it
> also has the effect of making continuationsmodule.c easier to write, well,
> that's not an issue in this discussion as far as I'm concerned.

A possible proposal could be this:

- incorporate Stackless into CPython, but don't demand it
  for every implementation
- implement coroutines and others with Stackless for CPython
  try alternative implementations for JPython if there are users
- do *not* make continuations a standard language feature until
  there is a portable way to get it everywhere

Still, I can't see the point with Java. There are enough
C extensions which are not available for JPython, but it is
still allowed to use them. The same goes for the continuation module: why
does it need to exist for Java in order to be allowed for
CPython?
This is like not implementing new browser features until
they can be implemented on my WAP phone. Nonsense.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com

*) sorry, feel free to disagree, but this was my impression when
   I read the whole code half a year ago.
   This is exactly what I do not want :-)



From moshez at math.huji.ac.il  Sun Aug  6 17:11:21 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 6 Aug 2000 18:11:21 +0300 (IDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <398D7A7E.2AB1BDF3@appliedbiometrics.com>
Message-ID: <Pine.GSO.4.10.10008061807230.20069-100000@sundial>

On Sun, 6 Aug 2000, Christian Tismer wrote:

> On the other hand, there is no need to enforce every Python
> implementation to do the full continuation support. In CPython,
> continuationmodule.c can be used for such purposes, and it can
> be used as a basement for coroutine and generator implementations.
> Using Guile's way to implement these would be a possible path
> for JPython.

Actually, you can't use Guile's way for JPython -- the guile folks
are doing some low-level semi-portable stuff in C...

> - incorporate Stackless into CPython, but don't demand it
>   for every implementation

Again, I want to say I don't think there's any meaning to "for every
implementation" -- Stackless is not part of the language definition,
it's part of the implementation. The whole Java/.NET issue is a red herring.

> - implement coroutines and others with Stackless for CPython

I think that should be done in a third-party module. But hey, if Guido
wants to maintain another module...

> - do *not* make continuations a standard language feature until
>   there is a portable way to get it everywhere

I'd go further and say "do *not* make continuations a standard language
feature" <wink>

> Still, I can't see the point with Java. There are enough
> C extension which are not available for JPython, but it is
> allowed to use them. Same with the continuationmodule, why
> does it need to exist for Java, in order to allow it for
> CPython?

My point exactly.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tismer at appliedbiometrics.com  Sun Aug  6 17:22:39 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Sun, 06 Aug 2000 17:22:39 +0200
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <Pine.GSO.4.10.10008061807230.20069-100000@sundial>
Message-ID: <398D82BF.85D0E5AB@appliedbiometrics.com>


Moshe Zadka wrote:

...
> > - implement coroutines and others with Stackless for CPython
> 
> I think that should be done in a third-party module. But hey, if Guido
> wants to maintain another module...

Right, like now. CPython has the necessary stackless hooks, nuts
and bolts, but nothing else, and no speed impact.

Then it just happens to be *possible* to write such an extension,
and it will be written, but it is not a language feature.

> > - do *not* make continuations a standard language feature until
> >   there is a portable way to get it everywhere
> 
> I'd got further and say "do *not* make continuations a standard language
> feature" <wink>

This was my sentence in the first place, but when reviewing
the message, I could not resist plugging it in again <1.5 wink>

As discussed in a private thread with Just, some continuation
features can only be made "nice" if they are supported by
some language extension. I want to use Python in CS classes
to teach continuations, so I need a backdoor :-)

and-there-will-always-be-a-version-on-my-site-that-goes-
   -beyond-the-standard - ly y'rs  - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com



From bwarsaw at beopen.com  Sun Aug  6 17:49:07 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sun, 6 Aug 2000 11:49:07 -0400 (EDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>
	<398D7A7E.2AB1BDF3@appliedbiometrics.com>
Message-ID: <14733.35059.53619.98300@anthem.concentric.net>

>>>>> "CT" == Christian Tismer <tismer at appliedbiometrics.com> writes:

    CT> Still, I can't see the point with Java. There are enough C
    CT> extensions which are not available for JPython, but it is
    CT> allowed to use them. Same with the continuationmodule, why
    CT> does it need to exist for Java, in order to allow it for
    CT> CPython?  This is like not implementing new browser features
    CT> until they can be implemented on my WAP phone. Nonsense.

It's okay if there are some peripheral modules that are available to
CPython but not JPython (include Python .NET here too), and vice
versa.  That'll just be the nature of things.  But whatever basic
language features Stackless allows one to do /from Python/ must be
documented.  That's the only way we'll be able to do one of these things:

- support the feature a different way in a different implementation
- agree that the feature is part of the Python language definition,
  but possibly not (yet) supported by all implementations.
- define the feature as implementation dependent (so people writing
  portable code know to avoid those features).

-Barry



From guido at beopen.com  Sun Aug  6 19:23:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 12:23:52 -0500
Subject: [Python-Dev] Warnings on gcc -Wall
In-Reply-To: Your message of "Sun, 06 Aug 2000 16:22:09 +0300."
             <Pine.GSO.4.10.10008061612490.20069-100000@sundial> 
References: <Pine.GSO.4.10.10008061612490.20069-100000@sundial> 
Message-ID: <200008061723.MAA14418@cj20424-a.reston1.va.home.com>

>  -- readline.c -- readline/history.h is included only on BeOS, and
> otherwise prototypes are declared by hand. Does anyone remember why? 

Please don't touch that module.  GNU readline is wacky.

> -- ceval.c, in ceval() gcc -Wall (wrongly) complains about opcode and
> oparg which might be used before initialized. I've had a look at that
> code, and I'm certain gcc's flow analysis is simply not good enough.
> However, I would like to silence the warning, so I can get used to
> building with -Wall -Werror and make sure to mind any warnings. Does
> anyone see any problem with putting opcode=0 and oparg=0 near the top?

No problem.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sun Aug  6 19:34:34 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 12:34:34 -0500
Subject: [Python-Dev] strftime()
In-Reply-To: Your message of "Sun, 06 Aug 2000 01:07:27 -0400."
             <LNBBLJKPBEHFEDALKOLCAEDOGOAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCAEDOGOAA.tim_one@email.msn.com> 
Message-ID: <200008061734.MAA14488@cj20424-a.reston1.va.home.com>

> [Jack Jansen]
> > The test_strftime regression test has been failing on the Mac for
> > ages, and I finally got round to investigating the problem: the
> > MetroWerks library returns the strings "am" and "pm" for %p but the
> > regression test expects "AM" and "PM". According to the comments in
> > the source of the library (long live vendors who provide it! Yeah!)
> > this is C9X compatibility.
> 
> My copy of a draft C99 std agrees (7.23.3.5) with MetroWerks on this point
> (i.e., that %p in the "C" locale becomes "am" or "pm").
> 
> > I can of course move the %p to the nonstandard expectations, but maybe
> > someone has a better idea?
> 
> Not really.  If Python thinks this function is valuable, it "should" offer a
> platform-independent implementation, but as nobody has time for that ...

No.  The test is too strict.  It should be fixed.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From just at letterror.com  Sun Aug  6 19:59:42 2000
From: just at letterror.com (Just van Rossum)
Date: Sun, 6 Aug 2000 18:59:42 +0100
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <14733.35059.53619.98300@anthem.concentric.net>
References: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>
 <398D7A7E.2AB1BDF3@appliedbiometrics.com>
Message-ID: <l03102800b5b354bd9114@[193.78.237.132]>

At 11:49 AM -0400 06-08-2000, Barry A. Warsaw wrote:
>It's okay if there are some peripheral modules that are available to
>CPython but not JPython (include Python .NET here too), and vice
>versa.  That'll just be the nature of things.  But whatever basic
>language features Stackless allows one to do /from Python/ must be
>documented.

The things stackless offers are no different from:

- os.open()
- os.popen()
- os.system()
- os.fork()
- threading (!!!)

These things are all doable /from Python/, yet their non-portability seems
hardly an issue for the Python Standard Library.

>That's the only way we'll be able to one of these things:
>
>- support the feature a different way in a different implementation
>- agree that the feature is part of the Python language definition,
>  but possibly not (yet) supported by all implementations.

Honest (but possibly stupid) question: are extension modules part of the
language definition?

>- define the feature as implementation dependent (so people writing
>  portable code know to avoid those features).

It's an optional extension module, so this should be obvious. (As it
happens, it depends on a new and improved implementation of ceval.c, but
this is really beside the point.)

Just

PS: thanks to everybody who kept CC-ing me in this thread; it's much
appreciated as I'm not on python-dev.





From jeremy at alum.mit.edu  Sun Aug  6 20:54:56 2000
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Sun, 6 Aug 2000 14:54:56 -0400
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <20000805194600.A7242@thyrsus.com>
Message-ID: <AJEAKILOCCJMDILAPGJNOEFJCBAA.jeremy@alum.mit.edu>

Eric S. Raymond <esr at thyrsus.com> writes:
>Just van Rossum <just at letterror.com>:
>> Christian has done an amazing piece of work, and he's gotten much
>> praise from the community. I mean, if you *are* looking for a killer
>> feature to distinguish 1.6 from 2.0, I'd know where to look...
>
>I must say I agree.  Something pretty similar to Stackless Python is
>going to have to happen anyway for the language to make its next major
>advance in capability -- generators, co-routining, and continuations.
>
>I also agree that this is a more important debate, and a harder set of
>decisions, than the PEPs.  Which means we should start paying attention
>to it *now*.

The PEPs exist as a way to formalize important debates and hard decisions.
Without a PEP that offers a formal description of the changes, it is hard to
have a reasonable debate.  I would not be comfortable with the specification
for any feature from stackless being the implementation.

Given the current release schedule for Python 2.0, I don't see any
possibility of getting a PEP accepted in time.  The schedule, from PEP 200,
is:

    Tentative Release Schedule
        Aug. 14: All 2.0 PEPs finished / feature freeze
        Aug. 28: 2.0 beta 1
        Sep. 29: 2.0 final

Jeremy





From guido at beopen.com  Sun Aug  6 23:17:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 16:17:33 -0500
Subject: [Python-Dev] math.rint bites the dust
Message-ID: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>

After a brief consult with Tim, I've decided to drop math.rint() --
it's not standard C, can't be implemented in portable C, and its
naive (non-IEEE-754-aware) effect can easily be had in other ways.

If you disagree, speak up now!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Sun Aug  6 22:25:03 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 6 Aug 2000 22:25:03 +0200
Subject: [Python-Dev] math.rint bites the dust
In-Reply-To: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Aug 06, 2000 at 04:17:33PM -0500
References: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>
Message-ID: <20000806222502.S266@xs4all.nl>

On Sun, Aug 06, 2000 at 04:17:33PM -0500, Guido van Rossum wrote:

> After a brief consult with Tim, I've decided to drop math.rint() --
> it's not standard C, can't be implemented in portable C, and its
> naive (non-IEEE-754-aware) effect can easily be had in other ways.

I don't particularly disagree, since I hardly do anything with floating
point numbers, but how can something both not be implementable in portable C
*and* have its effect easily had in other ways?

I also recall someone who was implementing rint() on platforms that didn't
have it... Or did that idea get trashed because it wasn't portable enough?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Mon Aug  7 00:49:06 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Sun, 06 Aug 2000 22:49:06 +0000
Subject: [Python-Dev] bug-fixes in cnri-16-start branch
Message-ID: <398DEB62.789B4C9C@nowonder.de>

I have a question on the right procedure for fixing a simple
bug in the 1.6 release branch.

Bug #111162 appeared because the tests for math.rint() are
already contained in the cnri-16-start revision of test_math.py
while the "try: ... except AttributeError: ..." construct which
was checked in shortly after was not.

Now the correct bugfix is already known (and has been
applied to the main branch). I have updated the test_math.py
file in my working version with "-r cnri-16-start" and
made the changes.

Now I probably should just commit, close the patch
(with an appropriate follow-up) and be happy.

did-I-get-that-right-or-does-something-else-have-to-be-done-ly y'rs
Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From tim_one at email.msn.com  Sun Aug  6 22:54:02 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 16:54:02 -0400
Subject: [Python-Dev] math.rint bites the dust
In-Reply-To: <20000806222502.S266@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEFJGOAA.tim_one@email.msn.com>

[Guido]
> After a brief consult with Tim, I've decided to drop math.rint() --
> it's not standard C, can't be implemented in portable C, and its
> naive (non-IEEE-754-aware) effect can easily be had in other ways.

[Thomas Wouters]
> I don't particularly disagree, since I hardly do anything with floating
> point numbers, but how can something both not be implementable in
> portable C *and* have its effect easily had in other ways?

Can't.  rint is not in standard C, but is in C99, where a conforming
implementation requires paying attention to all the details of the 754 fp
model.  It's a *non* 754-aware version of rint that can be easily had in
other ways (e.g., you can easily write a rint in Python that always rounds to
nearest/even, by building on math.floor and checking the sign bit, but
ignoring the possibilities of infinities, NaNs, the current 754 rounding mode,
and correct treatment of (at least) the 754 inexact and underflow flags --
Python gives no way to get at any of those now, neither does current C, and
a correct rint from the C99 point of view has to deal with all of them).

This is a case where I'm unwilling to support a function at all before it
can be supported correctly; I see no value in exposing current platforms'
divergent and incorrect implementations of rint, not in the math module.
Code that uses them will fail to work at all on some platforms (since rint
is not in today's C, some platforms don't have it), and change meaning over
time as the other platforms move toward C99 compliance.

> I also recall someone who was implementing rint() on platforms
> that didnt have it... Or did that idea get trashed because it wasn't
> portable enough ?

Bingo.

everyone's-welcome-to-write-their-own-incorrect-version<wink>-ly
    y'rs  - tim





From jack at oratrix.nl  Sun Aug  6 22:56:48 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Sun, 06 Aug 2000 22:56:48 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
Message-ID: <20000806205653.B0341E2670@oratrix.oratrix.nl>

Could the defenders of Stackless Python please explain _why_ this is
such a great idea? Just and Christian seem to swear by it, but I'd
like to hear of some simple examples of programming tasks that will be 
programmable in 50% less code with it (or 50% more understandable
code, for that matter).

And, similarly, could the detractors of Stackless Python explain why
it is such a bad idea. A lot of core-pythoneers seem to have
misgivings about it, even though issues of compatibility and
efficiency have been countered many times here by its champions (at
least, it seems that way to a clueless bystander like myself). I'd
really like to be able to take a firm standpoint myself (that's part
of my personality), but I really don't know which one to take at
the moment :-)
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From tim_one at email.msn.com  Sun Aug  6 23:03:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 17:03:23 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFLGOAA.tim_one@email.msn.com>

[Jack Jansen]
> Could the defenders of Stackless Python please explain _why_ this is
> such a great idea? ...

But they already have, and many times.  That's why it needs a PEP:  so we
don't have to endure <wink> the exact same heated discussions multiple times
every year for eternity.

> ...
> And, similarly, could the detractors of Stackless Python explain why
> it is such a bad idea.

Ditto.

if-anyone-hasn't-yet-noticed-98%-of-advocacy-posts-go-straight-
    into-a-black-hole-ly y'rs  - tim





From thomas at xs4all.net  Sun Aug  6 23:05:45 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 6 Aug 2000 23:05:45 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>; from jack@oratrix.nl on Sun, Aug 06, 2000 at 10:56:48PM +0200
References: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <20000806230545.T266@xs4all.nl>

On Sun, Aug 06, 2000 at 10:56:48PM +0200, Jack Jansen wrote:

> Could the defenders of Stackless Python please explain _why_ this is
> such a great idea? Just and Christian seem to swear by it, but I'd
> like to hear of some simple examples of programming tasks that will be 
> programmable in 50% less code with it (or 50% more understandable
> code, for that matter).

That's *continuations*, not Stackless. Stackless itself is just a way of
implementing the Python bytecode eval loop with minimized use of the C
stack. It doesn't change any functionality except the internal dependence on
the C stack (which is limited on some platforms.) Stackless also makes a
number of things possible, like continuations.

Continuations can certainly reduce code, if used properly, and they can make
it a lot more readable if the choice is between continuations or threaded
spaghetti-code. It can, however, make code a lot less readable too, if used
improperly, or when viewed by someone who doesn't grok continuations.

I'm +1 on Stackless, +0 on continuations. (Continuations are cool, and
Pythonic in one sense (stackframes become even firster class citizens ;) but
not easy to learn or get used to.)

> And, similarly, could the detractors of Stackless Python explain why
> it is such a bad idea.

I think my main reservation towards Stackless is the change to ceval.c,
which is likely to be involved (I haven't looked at it, yet) -- but ceval.c
isn't a children's book now, and I think the added complexity (if any) is
worth losing some of the dependencies on the C stack.

fl.0,02-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Mon Aug  7 01:18:22 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Sun, 06 Aug 2000 23:18:22 +0000
Subject: [Python-Dev] math.rint bites the dust
References: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>
Message-ID: <398DF23E.D1DDE196@nowonder.de>

Guido van Rossum wrote:
> 
> After a brief consult with Tim, I've decided to drop math.rint() --
> it's not standard C, can't be implemented in portable C, and its
> naive (non-IEEE-754-aware) effect can easily be had in other ways.

If this is because of Bug #111162, things can be fixed easily.
(as I said in another post just some minutes ago, I just
need to recommit the changes made after cnri-16-start.)

I wouldn't be terribly concerned about its (maybe temporary)
death, though. After learning more about it, I am sure I
want to use round() rather than math.rint().

floating-disap-point-ed-ly y'rs
Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From esr at thyrsus.com  Sun Aug  6 23:59:35 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 17:59:35 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>; from jack@oratrix.nl on Sun, Aug 06, 2000 at 10:56:48PM +0200
References: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <20000806175935.A14138@thyrsus.com>

Jack Jansen <jack at oratrix.nl>:
> Could the defenders of Stackless Python please explain _why_ this is
> such a great idea? Just and Christian seem to swear by it, but I'd
> like to hear of some simple examples of programming tasks that will be 
> programmable in 50% less code with it (or 50% more understandable
> code, for that matter).

My interest in Stackless is that I want to see Icon-style generators,
co-routining, and first-class continuations in Python.  Generators and
co-routining are wrappers around continuations.  Something
functionally equivalent to the Stackless mods is needed to get us
there, because using the processor stack makes it very hard to do
continuations properly.

In their full generality, first-class continuations are hard to think
about and to explain clearly, and I'm not going to try here.  A large
part of Guido's reluctance to introduce them is precisely because they
are so hard to think about; he thinks it's a recipe for trouble to put
stuff in the language that *he* has trouble understanding, let alone
other people.

He has a point, and part of the debate going on in the group that has
been tracking this stuff (Guido, Barry Warsaw, Jeremy Hylton, Fred
Drake, Eric Tiedemann and myself) is whether Python should expose
support for first-class continuations or only "safer" packagings such
as coroutining and generators.  So for the moment just think of
continuations as the necessary primitive to implement coroutining and
generators.

You can think of a generator as a function that, internally, is coded 
as a special kind of loop.  Let's say, for example, that you want a function
that returns successive entries in the list "squares of integers".  In 
Python-with-generators, that would look something like this.

def square_generator():
    i = 1
    while 1:
        yield i**2
        i = i + 1

Calling this function five times in succession would return 1, 4, 9,
16, 25.  Now what would be going on under the hood is that the new primitive
"yield" says "return the given value, and save a continuation of this
function to be run next time the function is called".  The continuation 
saves the program counter and the state of automatic variables (the stack)
to be restored on the next call -- thus, execution effectively resumes just
after the yield statement.

This example probably does not look very interesting.  It's a very trivial
use of the facility.  But now suppose you had an analogous function 
implemented by a code loop that gets an X event and yields the event data!

Suddenly, X programs don't have to look like a monster loop with all the
rest of the code hanging off of them.  Instead, any function in the program
that needs to do stateful input parsing can just say "give me the next event"
and get it.  

In general, what generators let you do is invert control hierarchies
based on stateful loops or recursions.  This is extremely nice for
things like state machines and tree traversals -- you can bundle
the control loop away in a generator, interrupt it, and restart it
without losing your place.
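A tree traversal makes the inversion concrete. Written with the
proposed "yield" primitive (hypothetical syntax at this point; the
helper Node class below is just for illustration), an in-order walk
keeps its place in the suspended frames rather than in an explicit
stack managed by the caller:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node):
    # each recursive frame suspends at its yield and resumes later,
    # so the caller just pulls values one at a time
    if node is not None:
        for x in inorder(node.left):
            yield x
        yield node.value
        for x in inorder(node.right):
            yield x
```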

I want this feature a lot.  Guido has agreed in principle that we ought
to have generators, but there is not yet a well-defined path forward to
them.  Stackless may be the most promising route.

I was going to explain coroutines separately, but I realized while writing
this that the semantics of "yield" proposed above actually gives full
coroutining.  Two functions can ping-pong control back and forth among
themselves while retaining their individual stack states as a pair of
continuations.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"This country, with its institutions, belongs to the people who
inhabit it. Whenever they shall grow weary of the existing government,
they can exercise their constitutional right of amending it or their
revolutionary right to dismember it or overthrow it."
	-- Abraham Lincoln, 4 April 1861



From tim_one at email.msn.com  Mon Aug  7 00:07:45 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 18:07:45 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806175935.A14138@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEGAGOAA.tim_one@email.msn.com>

[ Eric S. Raymond]
> ...
> I want this feature [generators] a lot.  Guido has agreed in principle
> that we ought to have generators, but there is not yet a well-defined
> path forward to them.  Stackless may be the most promising route.

Actually, if we had a PEP <wink>, it would have recorded for all time that
Guido gave a detailed description of how to implement generators with minor
changes to the current code.  It would also record that Steven Majewski had
already done so some 5 or 6 years ago.  IMO, the real reason we don't have
generators already is that they keep getting hijacked by continuations
(indeed, Steven gave up on his patches as soon as he realized he couldn't
extend his approach to continuations).

> I was going to explain coroutines separately, but I realized while
> writing this that the semantics of "yield" proposed above actually
> gives full coroutining.

Well, the Icon semantics for "suspend"-- which are sufficient for Icon's
generators --are not sufficient for Icon's coroutines.  It's for that very
reason that Icon supports generators on all platforms (including JCon, their
moral equivalent of JPython), but supports coroutines only on platforms that
have the magical blob of platform-dependent machine-language cruft needed to
trick out the C stack at coroutine context switches (excepting JCon, where
coroutines are implemented as Java threads).

Coroutines are plain harder.  Generators are just semi-coroutines
(suspend/yield *always* return "to the caller", and that makes life 100x
easier in a conventional eval loop like Python's -- it's still "stack-like",
and the only novel thing needed is a way to resume a suspended frame but
still in call-like fashion).

and-if-we-had-a-pep-every-word-of-this-reply-would-have-been-
    in-it-too<wink>-ly y'rs  - tim





From esr at thyrsus.com  Mon Aug  7 00:51:59 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 18:51:59 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEGAGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sun, Aug 06, 2000 at 06:07:45PM -0400
References: <20000806175935.A14138@thyrsus.com> <LNBBLJKPBEHFEDALKOLCCEGAGOAA.tim_one@email.msn.com>
Message-ID: <20000806185159.A14259@thyrsus.com>

Tim Peters <tim_one at email.msn.com>:
> [ Eric S. Raymond]
> > ...
> > I want this feature [generators] a lot.  Guido has agreed in principle
> > that we ought to have generators, but there is not yet a well-defined
> > path forward to them.  Stackless may be the most promising route.
> 
> Actually, if we had a PEP <wink>, it would have recorded for all time that
> Guido gave a detailed description of how to implement generators with minor
> changes to the current code.  It would also record that Steven Majewski had
> already done so some 5 or 6 years ago. 

Christian Tismer, over to you.  I am not going to presume to initiate
the continuations PEP when there's someone with a Python
implementation and extensive usage experience on the list.  However, I
will help with editing and critiques based on my experience with other
languages that have similar features, if you want.

>                                     IMO, the real reason we don't have
> generators already is that they keep getting hijacked by continuations
> (indeed, Steven gave up on his patches as soon as he realized he couldn't
> extend his approach to continuations).

This report of repeated "hijacking" doesn't surprise me a bit.  In fact,
if I'd thought about it I'd have *expected* it.  We know from experience
with other languages (notably Scheme) that call-with-current-continuation
is the simplest orthogonal primitive that this whole cluster of concepts can
be based on.  Implementors with good design taste are going to keep finding
their way back to it, and they're going to feel incompleteness and pressure
if they can't get there.

This is why I'm holding out for continuation objects and 
call-with-continuation to be an explicit Python builtin. We're going to get
there anyway; best to do it cleanly right away.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"Taking my gun away because I might shoot someone is like cutting my tongue
out because I might yell `Fire!' in a crowded theater."
        -- Peter Venetoklis



From esr at snark.thyrsus.com  Mon Aug  7 01:18:35 2000
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 19:18:35 -0400
Subject: [Python-Dev] Adding a new class to the library?
Message-ID: <200008062318.TAA14335@snark.thyrsus.com>

I have a candidate for admission to the Python class library.  It's a
framework class for writing things like menu trees and object
browsers.  What's the correct approval procedure for such things?

In more detail, it supports manipulating a stack of sequence objects.
Each sequence object has an associated selection point (the currently
selected sequence member) and an associated viewport around it (a
range of indices or sequence members that are considered `visible').

There are methods to manipulate the object stack.  More importantly,
there are functions which move the selection point in the current
object around, and drag the viewport with it.  (This sort of
thing sounds simple, but is tricky for the same reason BitBlt is
tricky -- lots of funky boundary cases.)

I've used this as the framework for implementing the curses menu
interface for CML2.  It is well-tested and stable.  It might also
be useful for implementing other kinds of data browsers in any
situation where the concept of limited visibility around a selection
point makes sense.  Symbolic debuggers is an example that leaps to mind.

I am, of course, willing to fully document it.
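To give a feel for the idea (the names here are hypothetical, not the
actual class, and this sketch handles only a single sequence, not the
object stack):

```python
class Browser:
    """A selection point plus a viewport over one sequence."""

    def __init__(self, seq, height=5):
        self.seq = seq
        self.sel = 0          # currently selected index
        self.top = 0          # first visible index
        self.height = height  # viewport size

    def move(self, delta):
        # move the selection, clamping at the ends, and drag the
        # viewport along so the selection stays visible
        self.sel = max(0, min(len(self.seq) - 1, self.sel + delta))
        if self.sel < self.top:
            self.top = self.sel
        elif self.sel >= self.top + self.height:
            self.top = self.sel - self.height + 1

    def visible(self):
        return self.seq[self.top:self.top + self.height]
```

The clamping and viewport-dragging cases above are exactly where the
funky boundary conditions live.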
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"One of the ordinary modes, by which tyrants accomplish their purposes
without resistance, is, by disarming the people, and making it an
offense to keep arms."
        -- Constitutional scholar and Supreme Court Justice Joseph Story, 1840



From gmcm at hypernet.com  Mon Aug  7 01:34:44 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Sun, 6 Aug 2000 19:34:44 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <1246517606-99838203@hypernet.com>

Jack Jansen wrote:

> Could the defenders of Stackless Python please explain _why_ this
> is such a great idea? Just and Christian seem to swear by it, but
> I'd like to hear of some simple examples of programming tasks
> that will be programmable in 50% less code with it (or 50% more
> understandable code, for that matter).

Here's the complete code for the download of a file (the data 
connection of an FTP server):

    def _doDnStream(self, binary=0):
        mode = 'r'
        if binary:
            mode = mode + 'b'
        f = open(self.cmdconn.filename, mode)
        if self.cmdconn.filepos:
            #XXX check length of file
            f.seek(self.cmdconn.filepos, 0)
        while 1:
            if self.abort:
                break
            data = f.read(8192)
            sz = len(data)
            if sz:
                if not binary:
                    data = '\r\n'.join(data.split('\n'))
                self.write(data)
            if sz < 8192:
                break

[from the base class]
    def write(self, msg):
        while msg:
            sent = self.dispatcher.write(self.sock, msg)
            if sent == 0:
                raise IOError, "unexpected EOF"
            msg = msg[sent:]

Looks like blocking sockets, right? Wrong. That's a fully 
multiplexed socket. About a dozen lines of code (hidden in 
that dispatcher object) mean that I can write async without 
using a state machine. 

stackless-forever-ly y'rs

- Gordon



From guido at beopen.com  Mon Aug  7 03:32:59 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 20:32:59 -0500
Subject: [Python-Dev] Adding a new class to the library?
In-Reply-To: Your message of "Sun, 06 Aug 2000 19:18:35 -0400."
             <200008062318.TAA14335@snark.thyrsus.com> 
References: <200008062318.TAA14335@snark.thyrsus.com> 
Message-ID: <200008070132.UAA16111@cj20424-a.reston1.va.home.com>

> I have a candidate for admission to the Python class library.  It's a
> framework class for writing things like menu trees and object
> browsers.  What's the correct approval procedure for such things?
> 
> In more detail, it supports manipulating a stack of sequence objects.
> Each sequence object has an associated selection point (the currently
> selected sequence member) and an associated viewport around it (a
> range of indices or sequence members that are considered `visible').
> 
> There are methods to manipulate the object stack.  More importantly,
> there are functions which move the selection point in the current
> object around, and drag the viewport with it.  (This sort of
> thing sounds simple, but is tricky for the same reason BitBlt is
> tricky -- lots of funky boundary cases.)
> 
> I've used this as the framework for implementing the curses menu
> interface for CML2.  It is well-tested and stable.  It might also
> be useful for implementing other kinds of data browsers in any
> situation where the concept of limited visibility around a selection
> point makes sense.  Symbolic debuggers is an example that leaps to mind.
> 
> I am, of course, willing to fully document it.

Have a look at the tree widget in IDLE.  That's Tk specific, but I
believe there's a lot of GUI independent concepts in there.  IDLE's
path and object browsers are built on it.  How does this compare?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From esr at thyrsus.com  Mon Aug  7 02:52:53 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 20:52:53 -0400
Subject: [Python-Dev] Adding a new class to the library?
In-Reply-To: <200008070132.UAA16111@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Aug 06, 2000 at 08:32:59PM -0500
References: <200008062318.TAA14335@snark.thyrsus.com> <200008070132.UAA16111@cj20424-a.reston1.va.home.com>
Message-ID: <20000806205253.B14423@thyrsus.com>

Guido van Rossum <guido at beopen.com>:
> Have a look at the tree widget in IDLE.  That's Tk specific, but I
> believe there's a lot of GUI independent concepts in there.  IDLE's
> path and object browsers are built on it.  How does this compare?

Where is this in the CVS tree? I groveled for it but without success.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

To make inexpensive guns impossible to get is to say that you're
putting a money test on getting a gun.  It's racism in its worst form.
        -- Roy Innis, president of the Congress of Racial Equality (CORE), 1988



From greg at cosc.canterbury.ac.nz  Mon Aug  7 03:04:27 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 07 Aug 2000 13:04:27 +1200 (NZST)
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <200008041511.KAA01925@cj20424-a.reston1.va.home.com>
Message-ID: <200008070104.NAA12334@s454.cosc.canterbury.ac.nz>

BDFL:

> No, problems with literal interpretations traditionally raise
> "runtime" exceptions rather than syntax errors.  E.g.

What about using an exception that's a subclass of *both*
ValueError and SyntaxError?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Mon Aug  7 03:16:44 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 21:16:44 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806185159.A14259@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEGEGOAA.tim_one@email.msn.com>

[Tim]
> IMO, the real reason we don't have generators already is that
> they keep getting hijacked by continuations (indeed, Steven gave
> up on his patches as soon as he realized he couldn't extend his
> approach to continuations).

[esr]
> This report of repeated "hijacking" doesn't surprise me a bit.  In
> fact, if I'd thought about it I'd have *expected* it.  We know from
> experience with other languages (notably Scheme) that call-with-
> current-continuation is the simplest orthogonal primitive that this
> whole cluster of concepts can be based on.  Implementors with good
> design taste are going to keep finding their way back to it, and
> they're going to feel incompleteness and pressure if they can't get
> there.

On the one hand, I don't think I know of a language *not* based on Scheme
that has call/cc (or a moral equivalent).  REBOL did at first, but after Joe
Marshal left, Carl Sassenrath ripped it out in favor of a more conventional
implementation.  Even the massive Common Lisp declined to adopt call/cc, the
reasons for which Kent Pitman has posted eloquently and often on
comp.lang.lisp (basically summarized: continuations are, in Kent's
view, "a semantic mess" in the way Scheme exposed them -- btw, people should
look his stuff up, as he has good ideas for cleaning that mess w/o
sacrificing the power (and so the Lisp world splinters yet again?)).  So
call/cc remains "a Scheme thing" to me after all these years, and even there
by far the most common warning in the release notes for a new implementation
is that call/cc doesn't work correctly yet or at all (but, in the meantime,
here are 3 obscure variations that will work in hard-to-explain special
cases ...).  So, ya, we *do* have experience with this stuff, and it sure
ain't all good.

On the other hand, what implementors other than Schemeheads *do* keep
rediscovering is that generators are darned useful and can be implemented
easily without exotic views of the world.  CLU, Icon and Sather all fit in
that box, and their designers wouldn't touch continuations with a 10-foot
thick condom <wink>.

> This is why I'm holding out for continuation objects and
> call-with-continuation to be an explicit Python builtin. We're
> going to get there anyway; best to do it cleanly right away.

This can get sorted out in the PEP.  As I'm sure someone else has screamed
by now (because it's all been screamed before), Stackless and the
continuation module are distinct beasts (although the latter relies on the
former).  It would be a shame if the fact that it makes continuations
*possible* were to be held against Stackless.  It makes all sorts of things
possible, some of which Guido would even like if people stopped throwing
continuations in his face long enough for him to see beyond them <0.5
wink -- but he doesn't like continuations, and probably never will>.





From jeremy at alum.mit.edu  Mon Aug  7 03:39:46 2000
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Sun, 6 Aug 2000 21:39:46 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>

If someone is going to write a PEP, I hope they will explain how the
implementation deals with the various Python C API calls that can call back
into Python.

In the stackless implementation, builtin_apply is a thin wrapper around
builtin_apply_nr.  The wrapper checks the return value from builtin_apply_nr
for Py_UnwindToken.  If Py_UnwindToken is found, it calls
PyEval_Frame_Dispatch. In this case, builtin_apply returns whatever
PyEval_Frame_Dispatch returns; the frame dispatcher just executes stack
frames until it is ready to return.

How does this control flow at the C level interact with a Python API call
like PySequence_Tuple or PyObject_Compare that can start executing Python
code again?  Say there is a Python function call which in turn calls
PySequence_Tuple, which in turn calls a __getitem__ method on some Python
object, which in turn uses a continuation to transfer control.  After the
continuation is called, the Python function will never return and the
PySequence_Tuple call is no longer necessary, but there is still a call to
PySequence_Tuple on the C stack.  How does stackless deal with the return
through this function?

I expect that any C function that may cause Python code to be executed must
be wrapped the way apply was wrapped.  So in the example, PySequence_Tuple
may return Py_UnwindToken.  This adds an extra return condition that every
caller of PySequence_Tuple must check.  Currently, the caller must check for
NULL/exception in addition to a normal return.  With stackless, I assume the
caller would also need to check for "unwinding."

Is this analysis correct? Or is there something I'm missing?

I see that the current source release of stackless does not do anything
special to deal with C API calls that execute Python code.  For example,
PyDict_GetItem calls PyObject_Hash, which could in theory lead to a call on
a continuation, but neither caller nor callee does anything special to
account for the possibility.  Is there some other part of the implementation
that prevents this from being a problem?

Jeremy




From greg at cosc.canterbury.ac.nz  Mon Aug  7 03:50:32 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 07 Aug 2000 13:50:32 +1200 (NZST)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get"
 method
In-Reply-To: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>
Message-ID: <200008070150.NAA12345@s454.cosc.canterbury.ac.nz>

> dict.default('hello', []).append('hello')

Is this new method going to apply to dictionaries only,
or is it to be considered part of the standard mapping
interface?

If the latter, I wonder whether it would be better to
provide a builtin function instead. The more methods
are added to the mapping interface, the more complicated
it becomes to implement an object which fully complies
with the mapping interface. Operations which can be
carried out through the basic interface are perhaps
best kept "outside" the object, in a function or
wrapper object.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From bwarsaw at beopen.com  Mon Aug  7 04:25:54 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sun, 6 Aug 2000 22:25:54 -0400 (EDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get"
 method
References: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>
	<200008070150.NAA12345@s454.cosc.canterbury.ac.nz>
Message-ID: <14734.7730.698860.642851@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    >> dict.default('hello', []).append('hello')

    GE> Is this new method going to apply to dictionaries only,
    GE> or is it to be considered part of the standard mapping
    GE> interface?

I think we've settled on setdefault(), which is more descriptive, even
if it's a little longer.  I have posted SF patch #101102 which adds
setdefault() to both the dictionary object and UserDict (along with
the requisite test suite and doco changes).
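For readers following along, the proposed semantics can be sketched in
pure Python (an illustration only, not the actual patch code):

```python
def setdefault(d, key, default=None):
    """Sketch of dict.setdefault(): return d[key], inserting default first
    if the key is absent."""
    if key not in d:
        d[key] = default
    return d[key]

words = {}
# The returned value is the (possibly freshly inserted) list, so it can
# be mutated in one expression:
setdefault(words, 'hello', []).append('world')
print(words)  # {'hello': ['world']}
```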

-Barry



From pf at artcom-gmbh.de  Mon Aug  7 10:32:00 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Mon, 7 Aug 2000 10:32:00 +0200 (MEST)
Subject: [Python-Dev] Who is the author of lib-tk/Tkdnd.py?
Message-ID: <m13LiKG-000DieC@artcom0.artcom-gmbh.de>

Hi,

I've some ideas (already implemented <0.5 wink>) for
generic Drag'n'Drop in Python/Tkinter applications.  
Before bothering the list here I would like to discuss this with 
the original author of Tkdnd.py.

Thank you for your attention, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
After all, Python is a programming language, not a psychic hotline. --Tim Peters



From mal at lemburg.com  Mon Aug  7 10:57:01 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 07 Aug 2000 10:57:01 +0200
Subject: [Python-Dev] Pickling using XML as output format
References: <Pine.GSO.4.10.10008061544180.20069-100000@sundial>
Message-ID: <398E79DD.3EB21D3A@lemburg.com>

Moshe Zadka wrote:
> 
> On Sun, 6 Aug 2000, M.-A. Lemburg wrote:
> 
> > Before starting to reinvent the wheel:
> 
> Ummmm......I'd wait for some DC guy to chime in: I think Zope had
> something like that. You might want to ask around on the Zope lists
> or search zope.org.
> 
> I'm not sure what it has and what it doesn't have, though.

I'll download the latest beta and check this out.

Thanks for the tip,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug  7 11:15:08 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 07 Aug 2000 11:15:08 +0200
Subject: [Python-Dev] Go \x yourself
References: <200008070104.NAA12334@s454.cosc.canterbury.ac.nz>
Message-ID: <398E7E1C.84D28EA5@lemburg.com>

Greg Ewing wrote:
> 
> BDFL:
> 
> > No, problems with literal interpretations traditionally raise
> > "runtime" exceptions rather than syntax errors.  E.g.
> 
> What about using an exception that's a subclass of *both*
> ValueError and SyntaxError?

What would this buy you ?

Note that the contents of a literal string don't really have
anything to do with syntax. The \x escape sequences are
details of the codecs used for converting those literal
strings to Python string objects.

Perhaps we need a CodecError which is subclass of ValueError
and then make the UnicodeError a subclass of this CodecError ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From artcom0!pf at artcom-gmbh.de  Mon Aug  7 10:14:54 2000
From: artcom0!pf at artcom-gmbh.de (artcom0!pf at artcom-gmbh.de)
Date: Mon, 7 Aug 2000 10:14:54 +0200 (MEST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <14734.7730.698860.642851@anthem.concentric.net> from "Barry A. Warsaw" at "Aug 6, 2000 10:25:54 pm"
Message-ID: <m13Lj9u-000DieC@artcom0.artcom-gmbh.de>

Hi,

Guido:
>     >> dict.default('hello', []).append('hello')

Greg Ewing <greg at cosc.canterbury.ac.nz>:
>     GE> Is this new method going to apply to dictionaries only,
>     GE> or is it to be considered part of the standard mapping
>     GE> interface?
 
Barry A. Warsaw:
> I think we've settled on setdefault(), which is more descriptive, even
> if it's a little longer.  I have posted SF patch #101102 which adds
> setdefault() to both the dictionary object and UserDict (along with
> the requisite test suite and doco changes).

This didn't answer the question raised by Greg Ewing.  As far as I have seen,
the patch doesn't touch 'dbm', 'shelve' and so on.  So from the patch
the answer is "applies to dictionaries only".

What is with the other external mapping types already in the core,
like 'dbm', 'shelve' and so on?

If the patch doesn't add this new method to these other mapping types, 
this fact should at least be documented, similar to the methods 'items()' 
and 'values()' that are already unimplemented in 'dbm':
 """Dbm objects behave like mappings (dictionaries), except that 
    keys and values are always strings.  Printing a dbm object 
    doesn't print the keys and values, and the items() and values() 
    methods are not supported."""

I'm still -1 on the name:  nobody would expect that a method 
called 'setdefault()' will actually return something useful.  Maybe 
it would be better to invent an absolutely obfuscated new name, so 
that everybody is forced to actually *READ* the documentation of this 
method; otherwise nobody will guess what it is supposed to do or, even
worse, how to make clever use of it.

At least it would be a lot more likely that someone becomes curious 
about what a method called 'grezelbatz()' is supposed to do than that someone
will actually look up the documentation of a method called 'setdefault()'.

If the average Python programmer ever starts to use this method 
at all, then I believe it is very likely that we will see him/her
coding:
	dict.setdefault('key', [])
	dict['key'].append('bar')

So I'm also still -1 on the concept.  I'm +0 on Greg's proposal that
it would be better to make this a builtin function that can be applied
to all mapping types.
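The difference between the two idioms can be made concrete with a small
sketch (run on a Python where setdefault() exists):

```python
d = {}

# The two-step style Peter predicts people will write, where the
# return value of setdefault() is ignored:
d.setdefault('key', [])
d['key'].append('bar')

# The chained style that the return value is meant to enable:
d.setdefault('other', []).append('baz')

print(d)  # {'key': ['bar'], 'other': ['baz']}
```

Both styles end up with the same dictionary contents; the debate is only
about which one programmers will actually discover and use.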

Maybe it would be even better to delay this until in Python 3000
builtin types may have become real classes, so that this method may
be inherited by all mapping types from an abstract mapping base class.

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)




From mal at lemburg.com  Mon Aug  7 12:07:09 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 07 Aug 2000 12:07:09 +0200
Subject: [Python-Dev] Pickling using XML as output format
References: <Pine.GSO.4.10.10008061544180.20069-100000@sundial> <398E79DD.3EB21D3A@lemburg.com>
Message-ID: <398E8A4D.CAA87E02@lemburg.com>

"M.-A. Lemburg" wrote:
> 
> Moshe Zadka wrote:
> >
> > On Sun, 6 Aug 2000, M.-A. Lemburg wrote:
> >
> > > Before starting to reinvent the wheel:
> >
> > Ummmm......I'd wait for some DC guy to chime in: I think Zope had
> > something like that. You might want to ask around on the Zope lists
> > or search zope.org.
> >
> > I'm not sure what it has and what it doesn't have, though.
> 
> I'll download the latest beta and check this out.

Ok, Zope has something called ppml.py which aims at converting
Python pickles to XML. It doesn't really pickle directly to XML
though and e.g. uses the Python encoding for various objects.

I guess I'll start hacking away at my own xmlpickle.py
implementation, with the goal of making Python pickles
editable using an XML editor.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tismer at appliedbiometrics.com  Mon Aug  7 12:48:19 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Mon, 07 Aug 2000 12:48:19 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
References: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>
Message-ID: <398E93F3.374B585A@appliedbiometrics.com>


Jeremy Hylton wrote:
> 
> If someone is going to write a PEP, I hope they will explain how the
> implementation deals with the various Python C API calls that can call back
> into Python.

He will.

> In the stackless implementation, builtin_apply is a thin wrapper around
> builtin_apply_nr.  The wrapper checks the return value from builtin_apply_nr
> for Py_UnwindToken.  If Py_UnwindToken is found, it calls
> PyEval_Frame_Dispatch. In this case, builtin_apply returns whatever
> PyEval_Frame_Dispatch returns; the frame dispatcher just executes stack
> frames until it is ready to return.

Correct.

> How does this control flow at the C level interact with a Python API call
> like PySequence_Tuple or PyObject_Compare that can start executing Python
> code again?  Say there is a Python function call which in turn calls
> PySequence_Tuple, which in turn calls a __getitem__ method on some Python
> object, which in turn uses a continuation to transfer control.  After the
> continuation is called, the Python function will never return and the
> PySequence_Tuple call is no longer necessary, but there is still a call to
> PySequence_Tuple on the C stack.  How does stackless deal with the return
> through this function?

Right. What you see here is the incompleteness of Stackless.
To get this "right", I would have to change many
parts of the implementation to allow for continuations
in every (probably even unwanted) place.
I could not do this.

Instead, the situation of these still-occurring recursions
is handled differently. continuationmodule guarantees that,
in the context of recursive interpreter calls, the given
stack order of execution is obeyed. Violations of this
simply cause an exception.

> I expect that any C function that may cause Python code to be executed must
> be wrapped the way apply was wrapped.  So in the example, PySequence_Tuple
> may return Py_UnwindToken.  This adds an extra return condition that every
> caller of PySequence_Tuple must check.  Currently, the caller must check for
> NULL/exception in addition to a normal return.  With stackless, I assume the
> caller would also need to check for "unwinding."

No, nobody else is allowed to return Py_UnwindToken but the few
functions in the builtins implementation and in ceval. The
continuationmodule may produce it since it knows the context
where it is called. eval_code is supposed to be the main place
that checks for this special value.

As said, allowing this in any context would have been a huge
change to the whole implementation, and would probably also
have broken existing extensions which do not expect that
a standard function wants to do a callback.

> Is this analysis correct? Or is there something I'm missing?
> 
> I see that the current source release of stackless does not do anything
> special to deal with C API calls that execute Python code.  For example,
> PyDict_GetItem calls PyObject_Hash, which could in theory lead to a call on
> a continuation, but neither caller nor callee does anything special to
> account for the possibility.  Is there some other part of the implementation
> that prevents this from being a problem?

This is not a problem in itself, since inside the stackless
modification for Python there are no places where unexpected
Py_UnwindTokens or continuations are produced. In that respect it
is a closed system. But with the continuation extension, it is
of course a major problem.

The final solution to the recursive interpreter/continuation
problem was found long after my paper was presented. The idea
is simple, solves everything, and shortened my implementation
substantially:

Whenever a recursive interpreter call takes place, the calling
frame gets a lock flag set. This flag says "this frame is wrapped
in a suspended eval_code call and cannot be a continuation".
continuationmodule always obeys this flag and prevents the
creation of continuations for such frames by raising an
exception. In other words: Stack-like behavior is enforced
in situations where the C stack is involved.

So, a builtin or an extension *can* call a continuation, but
eventually it will have to come back to the calling point.
If not, then one of the locked frames will eventually be touched
in the wrong C stack order. But through reference
counting, this touching will cause an attempt to create
a continuation, and as I said above, that will raise an exception.

This is probably the wrong place to explain it in more detail, but
it doesn't apply to the stackless core at all, which is just
responsible for the necessary support machinery.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com



From paul at prescod.net  Mon Aug  7 14:18:43 2000
From: paul at prescod.net (Paul Prescod)
Date: Mon, 07 Aug 2000 08:18:43 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <39866F8D.FCFA85CB@prescod.net> <1246975873-72274187@hypernet.com>
Message-ID: <398EA923.E5400D2B@prescod.net>

Gordon McMillan wrote:
> 
> ...
> 
> As a practical matter, it looks to me like winreg (under any but
> the most well-informed usage) may well leak handles. If so,
> that would be a disaster. But I don't have time to check it out.

I would be very surprised if that was the case. Perhaps you can outline
your thinking so that *I* can check it out.

I claim that:

_winreg never leaks Windows handles as long as _winreg handle objects are
destroyed.

winreg is written entirely in Python and destroys _winreg handles as
long as winreg key objects are destroyed.

winreg key objects are destroyed as long as there is no cycle.

winreg does not create cycles.

Therefore, winreg does not leak handles. I'm 98% confident of each
assertion...for a total confidence of 92%.
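The arithmetic checks out: four independent claims at 98% confidence each
compound to roughly 92%:

```python
conf = 0.98
total = conf ** 4  # four independent assertions in the chain
print(round(total * 100, 1))  # 92.2
```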
-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From guido at beopen.com  Mon Aug  7 14:38:11 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 07:38:11 -0500
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: Your message of "Mon, 07 Aug 2000 13:50:32 +1200."
             <200008070150.NAA12345@s454.cosc.canterbury.ac.nz> 
References: <200008070150.NAA12345@s454.cosc.canterbury.ac.nz> 
Message-ID: <200008071238.HAA18076@cj20424-a.reston1.va.home.com>

> > dict.default('hello', []).append('hello')
> 
> Is this new method going to apply to dictionaries only,
> or is it to be considered part of the standard mapping
> interface?
> 
> If the latter, I wonder whether it would be better to
> provide a builtin function instead. The more methods
> are added to the mapping interface, the more complicated
> it becomes to implement an object which fully complies
> with the mapping interface. Operations which can be
> carried out through the basic interface are perhaps
> best kept "outside" the object, in a function or
> wrapper object.

The "mapping interface" has no firm definition.  You're free to
implement something without a default() method and call it a mapping.

In Python 3000, where classes and built-in types will be unified, of
course this will be fixed: there will be a "mapping" base class that
implements get() and default() in terms of other, more primitive
operations.
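A sketch of what such a base class might look like (hypothetical, not
Guido's actual design; today's collections.abc.MutableMapping follows
essentially this pattern):

```python
class BasicMapping:
    """Hypothetical mixin: derive get() and setdefault() from the
    primitive __getitem__/__setitem__ operations."""

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

    def setdefault(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            self[key] = default
            return default


class MyMap(BasicMapping):
    """A minimal mapping that only implements the primitives."""

    def __init__(self):
        self._data = {}

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value


m = MyMap()
m.setdefault('x', []).append(1)
print(m['x'])  # [1]
```

Implementers of new mapping types would then get the derived methods for
free, which addresses Greg's worry about the interface growing ever larger.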

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From moshez at math.huji.ac.il  Mon Aug  7 13:45:45 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Mon, 7 Aug 2000 14:45:45 +0300 (IDT)
Subject: [Python-Dev] Minor compilation problem on HP-UX (1.6b1) (fwd)
Message-ID: <Pine.GSO.4.10.10008071444080.4113-100000@sundial>

I've answered him personally about the first part -- but the second part
is interesting (and even troubling)

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez

---------- Forwarded message ----------
Date: Mon, 7 Aug 2000 08:59:30 +0000 (UTC)
From: Eddy De Greef <degreef at imec.be>
To: python-list at python.org
Newsgroups: comp.lang.python
Subject: Minor compilation problem on HP-UX (1.6b1)

Hi,

when I compile version 1.6b1 on HP-UX-10, I get a few compilation errors 
in Python/getargs.c (undefined UCHAR_MAX etc). The following patch fixes this:

------------------------------------------------------------------------------
*** Python/getargs.c.orig       Mon Aug  7 10:19:55 2000
--- Python/getargs.c    Mon Aug  7 10:20:21 2000
***************
*** 8,13 ****
--- 8,14 ----
  #include "Python.h"
  
  #include <ctype.h>
+ #include <limits.h>
  
  
  int PyArg_Parse Py_PROTO((PyObject *, char *, ...));
------------------------------------------------------------------------------

I also have a suggestion to improve the speed on the HP-UX platform. 
By tuning the memory allocation algorithm (see the patch below), it is 
possible to obtain a speed improvement of up to 22% on non-trivial 
Python scripts, especially when lots of (small) objects have to be created. 
I'm aware that platform-specific features are undesirable for a 
multi-platform application such as Python, but 22% is quite a lot
for such a small modification ...
Maybe similar tricks can be used on other platforms too.

------------------------------------------------------------------------------
*** Modules/main.c.orig Mon Aug  7 10:02:09 2000
--- Modules/main.c      Mon Aug  7 10:02:37 2000
***************
*** 83,88 ****
--- 83,92 ----
        orig_argc = argc;       /* For Py_GetArgcArgv() */
        orig_argv = argv;
  
+ #ifdef __hpux
+       mallopt (M_MXFAST, 512);
+ #endif /* __hpux */
+ 
        if ((p = getenv("PYTHONINSPECT")) && *p != '\0')
                inspect = 1;
        if ((p = getenv("PYTHONUNBUFFERED")) && *p != '\0')
------------------------------------------------------------------------------

Regards,

Eddy
-- 
http://www.python.org/mailman/listinfo/python-list




From gmcm at hypernet.com  Mon Aug  7 14:00:10 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Mon, 7 Aug 2000 08:00:10 -0400
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <398EA923.E5400D2B@prescod.net>
Message-ID: <1246472883-102528128@hypernet.com>

Paul Prescod wrote:

> Gordon McMillan wrote:
> > 
> > ...
> > 
> > As a practical matter, it looks to me like winreg (under any
> > but the most well-informed usage) may well leak handles. If so,
> > that would be a disaster. But I don't have time to check it
> > out.
> 
> I would be very surprised if that was the case. Perhaps you can
> outline your thinking so that *I* can check it out.

Well, I saw RegKey.close nowhere referenced. I saw the 
method it calls in _winreg not getting triggered elsewhere. I 
missed that _winreg closes them another way on dealloc.

BTW, not all your hive names exist on every Windows 
platform (or build of _winreg).
 


- Gordon



From jack at oratrix.nl  Mon Aug  7 14:27:59 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 07 Aug 2000 14:27:59 +0200
Subject: [Python-Dev] Minor compilation problem on HP-UX (1.6b1) (fwd) 
In-Reply-To: Message by Moshe Zadka <moshez@math.huji.ac.il> ,
	     Mon, 7 Aug 2000 14:45:45 +0300 (IDT) , <Pine.GSO.4.10.10008071444080.4113-100000@sundial> 
Message-ID: <20000807122800.8D0B1303181@snelboot.oratrix.nl>

> + #ifdef __hpux
> +       mallopt (M_MXFAST, 512);
> + #endif /* __hpux */
> + 

After reading this I went off and actually _read_ the mallopt manpage for the 
first time in my life, and it seems there's quite a few parameters there we 
might want to experiment with. Besides the M_MXFAST there's also M_GRAIN, 
M_BLKSIZ, M_MXCHK and M_FREEHD that could have significant impact on Python 
performance. I know that all the tweaks and tricks I did in the MacPython 
malloc implementation resulted in a speedup of 20% or more (including the 
cache-alignment code in dictobject.c).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From Vladimir.Marangozov at inrialpes.fr  Mon Aug  7 14:59:49 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 14:59:49 +0200 (CEST)
Subject: mallopt (Re: [Python-Dev] Minor compilation problem on HP-UX (1.6b1) (fwd))
In-Reply-To: <20000807122800.8D0B1303181@snelboot.oratrix.nl> from "Jack Jansen" at Aug 07, 2000 02:27:59 PM
Message-ID: <200008071259.OAA22446@python.inrialpes.fr>

Jack Jansen wrote:
> 
> 
> > + #ifdef __hpux
> > +       mallopt (M_MXFAST, 512);
> > + #endif /* __hpux */
> > + 
> 
> After reading this I went off and actually _read_ the mallopt manpage for the 
> first time in my life, and it seems there's quite a few parameters there we 
> might want to experiment with. Besides the M_MXFAST there's also M_GRAIN, 
> M_BLKSIZ, M_MXCHK and M_FREEHD that could have significant impact on Python 
> performance. I know that all the tweaks and tricks I did in the MacPython 
> malloc implementation resulted in a speedup of 20% or more (including the 
> cache-alignment code in dictobject.c).

To start with, try the optional object malloc I uploaded yesterday at SF.
[Patch #101104]

Tweaking mallopt and getting 20% speedup for some scripts is no surprise
at all. For me <wink>. It is not portable though.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From jeremy at beopen.com  Mon Aug  7 15:05:20 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 7 Aug 2000 09:05:20 -0400 (EDT)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEGEGOAA.tim_one@email.msn.com>
References: <20000806185159.A14259@thyrsus.com>
	<LNBBLJKPBEHFEDALKOLCIEGEGOAA.tim_one@email.msn.com>
Message-ID: <14734.46096.366920.827786@bitdiddle.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

  TP> On the one hand, I don't think I know of a language *not* based
  TP> on Scheme that has call/cc (or a moral equivalent).

ML also has call/cc, at least the Concurrent ML variant.

Jeremy



From jeremy at beopen.com  Mon Aug  7 15:10:14 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 7 Aug 2000 09:10:14 -0400 (EDT)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <398E93F3.374B585A@appliedbiometrics.com>
References: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>
	<398E93F3.374B585A@appliedbiometrics.com>
Message-ID: <14734.46390.190481.441065@bitdiddle.concentric.net>

>>>>> "CT" == Christian Tismer <tismer at appliedbiometrics.com> writes:

  >> If someone is going to write a PEP, I hope they will explain how
  >> the implementation deals with the various Python C API calls that
  >> can call back into Python.

  CT> He will.

Good!  You'll write a PEP.

  >> How does this control flow at the C level interact with a Python
  >> API call like PySequence_Tuple or PyObject_Compare that can start
  >> executing Python code again?  Say there is a Python function call
  >> which in turn calls PySequence_Tuple, which in turn calls a
  >> __getitem__ method on some Python object, which in turn uses a
  >> continuation to transfer control.  After the continuation is
  >> called, the Python function will never return and the
PySequence_Tuple call is no longer necessary, but there is still a
  >> call to PySequence_Tuple on the C stack.  How does stackless deal
  >> with the return through this function?

  CT> Right. What you see here is the incompleteness of Stackless.  In
  CT> order to get this "right", I would have to change many parts of
  CT> the implementation, in order to allow for continuations in every
  CT> (probably even unwanted) place.  I could not do this.

  CT> Instead, the situation of these still occurring recursions is
  CT> handled differently. continuationmodule guarantees that, in the
  CT> context of recursive interpreter calls, the given stack order of
  CT> execution is obeyed. Violations of this simply cause an
  CT> exception.

Let me make sure I understand: If I invoke a continuation when there
are extra C stack frames between the mainloop invocation that captured
the continuation and the call of the continuation, the interpreter
raises an exception?

If so, continuations don't sound like they would mix well with C
extension modules and callbacks.  I guess it also could not be used
inside methods that implement operator overloading.  Is there a simple
set of rules that describe the situations where they will not work?

Jeremy



From thomas at xs4all.net  Mon Aug  7 15:07:11 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 7 Aug 2000 15:07:11 +0200
Subject: [Python-Dev] augmented assignment
Message-ID: <20000807150711.W266@xs4all.nl>


I 'finished' the new augmented assignment patch yesterday, following the
suggestions made by Guido about using INPLACE_* bytecodes rather than
special GETSET_* opcodes.

I ended up with 13 new opcodes: INPLACE_* opcodes for the 11 binary
operation opcodes, DUP_TOPX which duplicates a number of stack items instead
of just the topmost item, and ROT_FOUR.

I thought I didn't need ROT_FOUR if we had DUP_TOPX but I hadn't realized
assignment needs the new value at the bottom of the 'stack', and the objects
that are used in the assignment above that. So ROT_FOUR is necessary in the
case of slice-assignment:

a[b:c] += i

LOAD a			[a]
LOAD b			[a, b]
LOAD c			[a, b, c]
DUP_TOPX 3		[a, b, c, a, b, c]
SLICE+3			[a, b, c, a[b:c]]
LOAD i			[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
ROT_FOUR		[result, a, b, c]
STORE_SLICE+3		[]

When (and if) the *SLICE opcodes are removed, ROT_FOUR can, too :)
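For lists, the net effect of the opcode sequence above can be illustrated at the Python level; this is a sketch of the intended semantics of the patch, not of the bytecode itself:

```python
# Semantically, a[b:c] += i loads the slice once, applies the
# in-place add, and stores the result back into the same slice:
# roughly a[b:c] = a[b:c].__iadd__(i) for mutable sequences.
a = [1, 2, 3, 4]
b, c = 1, 3
a[b:c] += [9]
print(a)   # [1, 2, 3, 9, 4]
```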

The patch is 'done' in my opinion, except for two tiny things:

- PyNumber_InPlacePower() takes just two arguments, not three. Three
argument power() does such 'orrible things to coerce all the arguments, and
you can't do augmented-assignment-three-argument-power anyway. If it's added
it would be for the API only, and I'm not sure if it's worth it :P

- I still don't like the '_ab_' names :) I think __inplace_add__ or __iadd__
  is better, but that's just me.

The PEP is also 'done'. Feedback is more than welcome, including spelling
fixes and the like. I've attached the PEP to this mail, for convenience.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: pep-0203.txt
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000807/c2d02a00/attachment.txt>

From guido at beopen.com  Mon Aug  7 16:11:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 09:11:52 -0500
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: Your message of "Mon, 07 Aug 2000 10:14:54 +0200."
             <m13Lj9u-000DieC@artcom0.artcom-gmbh.de> 
References: <m13Lj9u-000DieC@artcom0.artcom-gmbh.de> 
Message-ID: <200008071411.JAA18437@cj20424-a.reston1.va.home.com>

> Guido:
> >     >> dict.default('hello', []).append('hello')
> 
> Greg Ewing <greg at cosc.canterbury.ac.nz>:
> >     GE> Is this new method going to apply to dictionaries only,
> >     GE> or is it to be considered part of the standard mapping
> >     GE> interface?
>  
> Barry A. Warsaw:
> > I think we've settled on setdefault(), which is more descriptive, even
> > if it's a little longer.  I have posted SF patch #101102 which adds
> > setdefault() to both the dictionary object and UserDict (along with
> > the requisite test suite and doco changes).

PF:
> This didn't answer the question raised by Greg Ewing.  As far as I have seen,
> the patch doesn't touch 'dbm', 'shelve' and so on.  So from the patch
> the answer is "applies to dictionaries only".

I replied to Greg Ewing already: it's not part of the required mapping
protocol.

> What is with the other external mapping types already in the core,
> like 'dbm', 'shelve' and so on?
> 
> If the patch doesn't add this new method to these other mapping types, 
> this fact should at least be documented similar to the methods 'items()' 
> and 'values' that are already unimplemented in 'dbm':
>  """Dbm objects behave like mappings (dictionaries), except that 
>     keys and values are always strings.  Printing a dbm object 
>     doesn't print the keys and values, and the items() and values() 
>     methods are not supported."""

Good point.

> I'm still -1 on the name:  Nobody would expect that a method 
> called 'setdefault()' will actually return something useful.  Maybe 
> it would be better to invent an absolutely obfuscated new name, so 
> that everybody is forced to actually *READ* the documentation of this 
> method, or nobody will guess what it is supposed to do or even
> worse: how to make clever use of it.

I don't get your point.  Since when is it a requirement for a method
to convey its full meaning by just its name?  As long as the name
doesn't intuitively contradict the actual meaning it should be fine.

If you read code that does:

	dict.setdefault('key', [])
	dict['key'].append('bar')

you will have no problem understanding this.  There's no need for the
reader to know that this is suboptimal.  (Of course, if you're an
experienced Python user doing a code review, you might know that.  But
it's not needed to understand what goes on.)

Likewise, if you read code like this:

	dict.setdefault('key', []).append('bar')

it doesn't seem hard to guess what it does (under the assumption that
you already know the program works).  After all, there are at most
three things that setdefault() could *possibly* return:

1. None		-- but then the append() wouldn't work

2. dict		-- but append() is not a dict method so wouldn't work either

3. dict['key']	-- this is the only one that makes sense
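The reasoning above is easy to check interactively; a small sketch:

```python
d = {}
# setdefault() stores the default only when the key is missing,
# and always returns d['key'] afterwards.
d.setdefault('key', []).append('bar')
d.setdefault('key', []).append('baz')   # default ignored, list reused
print(d)   # {'key': ['bar', 'baz']}
```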

> At least it would be a lot more likely that someone becomes curious 
> what a method called 'grezelbatz()' is supposed to do, than that someone
> will actually look up the documentation of a method called 'setdefault()'.

Bogus.  This would argue that we should give all methods obscure names.

> If the average Python programmer would ever start to use this method 
> at all, then I believe it is very likely that we will see him/her
> coding:
> 	dict.setdefault('key', [])
> 	dict['key'].append('bar')

And I have no problem with that.  It's still less writing than the
currently common idioms to deal with this!
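For comparison, a sketch of the common idioms that setdefault() shortens (both are more writing than the one-liner):

```python
d = {}

# Idiom 1: look before you leap
if 'key' not in d:        # spelled d.has_key('key') in 2000-era code
    d['key'] = []
d['key'].append('bar')

# Idiom 2: ask forgiveness
try:
    d['key'].append('baz')
except KeyError:
    d['key'] = ['baz']

# The new spelling
d.setdefault('key', []).append('quux')
print(d)   # {'key': ['bar', 'baz', 'quux']}
```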

> So I'm also still -1 on the concept.  I'm +0 on Gregs proposal, that
> it would be better to make this a builtin function, that can be applied
> to all mapping types.

Yes, and let's also make values(), items(), has_key() and get()
builtins instead of methods.  Come on!  Python is an OO language.

> Maybe it would be even better to delay this until in Python 3000
> builtin types may have become real classes, so that this method may
> be inherited by all mapping types from an abstract mapping base class.

Sure, but that's not an argument for not adding it to the dictionary
type today!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jack at oratrix.nl  Mon Aug  7 15:26:40 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 07 Aug 2000 15:26:40 +0200
Subject: mallopt (Re: [Python-Dev] Minor compilation problem on HP-UX 
 (1.6b1) (fwd))
In-Reply-To: Message by Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov) 
 ,
	     Mon, 7 Aug 2000 14:59:49 +0200 (CEST) , <200008071259.OAA22446@python.inrialpes.fr> 
Message-ID: <20000807132641.A60E6303181@snelboot.oratrix.nl>

Don't worry, Vladimir, I hadn't forgotten your malloc stuff:-) It's just that 
if mallopt is available in the standard C library this may be a way to squeeze 
out a couple of extra percent of performance that the admin who installs 
Python needn't be aware of. And I don't think your allocator can be dropped in 
to the standard distribution, because it has the potential problem of 
fragmenting the heap due to multiple malloc packages in one address space (at 
least, that was the problem when I last looked at it, which is admittedly more 
than a year ago).

And about mallopt not being portable: right, but I would assume that something 
like
#ifdef M_MXFAST
	mallopt(M_MXFAST, xxxx);
#endif
shouldn't do any harm if we set xxxx to be a size that will cause 80% or so of 
the python objects to fall into the M_MXFAST category 
(sizeof(PyObject)+sizeof(void *), maybe?). This doesn't sound 
platform-dependent...

Similarly, M_FREEHD sounds like it could speed up Python allocation, but this 
would need to be measured. Python allocation patterns shouldn't be influenced 
too much by platform, so again if this is good on one platform it is probably 
good on all.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From mark at per.dem.csiro.au  Mon Aug  7 21:34:42 2000
From: mark at per.dem.csiro.au (Mark Favas)
Date: Mon, 7 Aug 2000 21:34:42 WST
Subject: [Python-Dev] mallopt (Was: Minor compilation problem on HP-UX (1.6b1))
Message-ID: <200008071334.VAA15707@demperth.per.dem.csiro.au>

To add to Vladimir Marangozov's comments about mallopt, in terms of both
portability and utility (before too much time is expended)...


From tismer at appliedbiometrics.com  Mon Aug  7 15:47:39 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Mon, 07 Aug 2000 15:47:39 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
References: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>
		<398E93F3.374B585A@appliedbiometrics.com> <14734.46390.190481.441065@bitdiddle.concentric.net>
Message-ID: <398EBDFB.4ED9FAE7@appliedbiometrics.com>

[about recursion and continuations]

>   CT> Right. What you see here is the incompleteness of Stackless.  In
>   CT> order to get this "right", I would have to change many parts of
>   CT> the implementation, in order to allow for continuations in every
>   CT> (probably even unwanted) place.  I could not do this.
> 
>   CT> Instead, the situation of these still occurring recursions is
>   CT> handled differently. continuationmodule guarantees that, in the
>   CT> context of recursive interpreter calls, the given stack order of
>   CT> execution is obeyed. Violations of this simply cause an
>   CT> exception.
> 
> Let me make sure I understand: If I invoke a continuation when there
> are extra C stack frames between the mainloop invocation that captured
> the continuation and the call of the continuation, the interpreter
> raises an exception?

Not always. Frames which are not currently bound by an
interpreter acting on them can always be jump targets.
Only those frames which are currently in the middle of
an opcode are forbidden.

> If so, continuations don't sound like they would mix well with C
> extension modules and callbacks.  I guess it also could not be used
> inside methods that implement operator overloading.  Is there a simple
> set of rules that describe the situations where they will not work?

Right. In order to mix well with C callbacks, extra
work is necessary. The C extension module must then play the
same frame-dribbling game as the eval loop does. An example
can be found in the stackless map implementation.
If the C extension does not do so, it restricts execution
order in the way I explained. This is not always needed,
and it is no new requirement for C developers.
Only if they want to support free continuation switching
do they have to implement it.

The simple set of rules where continuations will not work at
the moment is: generally, they do not work across interpreter
recursions. At least these restrictions apply:

- you cannot run an import and jump off to the caller's frame
+ but you can save a continuation in your import and use it
  later, when this recursive interpreter is gone.

- all special class functions are restricted.
+ but you can for instance save a continuation in __init__
  and use it later, when the init recursion has gone.

Reducing all these restrictions is a major task, and there
are situations where it looks impossible without an extra
subinterpreter language. If you look into the implementation
of operators like __add__, you will see that there are
repeated method calls which all may cause other interpreters
to show up. I tried to find a way to roll these functions
out in a restartable way, but it is quite a mess. The
clean way to do it would be to have microcodes, and to allow
for continuations to be caught between them.

this-is-a-stackless-3000-feature - ly y'rs - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com



From Vladimir.Marangozov at inrialpes.fr  Mon Aug  7 16:00:08 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 16:00:08 +0200 (CEST)
Subject: mallopt (Re: [Python-Dev] Minor compilation problem on HP-UX
In-Reply-To: <20000807132641.A60E6303181@snelboot.oratrix.nl> from "Jack Jansen" at Aug 07, 2000 03:26:40 PM
Message-ID: <200008071400.QAA22652@python.inrialpes.fr>

Jack Jansen wrote:
> 
> Don't worry, Vladimir, I hadn't forgotten your malloc stuff:-)

Me? worried about mallocs? :-)

> if mallopt is available in the standard C library this may be a way
> to squeeze out a couple of extra percent of performance that the admin
> who installs Python needn't be aware of.

As long as you're maintaining a Mac-specific port of Python, you can
do this without problems on the Mac port.

> And I don't think your allocator can be dropped in 
> to the standard distribution, because it has the potential problem of 
> fragmenting the heap due to multiple malloc packages in one address
> space (at least, that was the problem when I last looked at it, which
> is admittedly more than a year ago).

Things have changed since then. Mainly on the Python side.
Have a look again.

> 
> And about mallopt not being portable: right, but I would assume that
> something like
> #ifdef M_MXFAST
> 	mallopt(M_MXFAST, xxxx);
> #endif
> shouldn't do any harm if we set xxxx to be a size that will cause 80%
> or so of the python objects to fall into the M_MXFAST category 

Which is exactly what pymalloc does, except that this applies for > 95% of
all allocations.

> (sizeof(PyObject)+sizeof(void *), maybe?). This doesn't sound 
> platform-dependent...

Indeed, I also use this trick to automatically tune the object allocator
for 64-bit platforms. I haven't tested it on such machines as I don't have
access to them, though. But it should work.

> Similarly, M_FREEHD sounds like it could speed up Python allocation,
> but this would need to be measured. Python allocation patterns shouldn't
> be influenced too much by platform, so again if this is good on one
> platform it is probably good on all.

I am against any guesses in this domain. Measures and profiling evidence:
that's it.  Being able to make lazy decisions about Python's mallocs is
our main advantage. Anything else is wild hype <0.3 wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gmcm at hypernet.com  Mon Aug  7 16:20:50 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Mon, 7 Aug 2000 10:20:50 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <14734.46390.190481.441065@bitdiddle.concentric.net>
References: <398E93F3.374B585A@appliedbiometrics.com>
Message-ID: <1246464444-103035791@hypernet.com>

Jeremy wrote:

> >>>>> "CT" == Christian Tismer <tismer at appliedbiometrics.com>
> >>>>> writes:
> 
>   >> If someone is going to write a PEP, I hope they will explain
>   how >> the implementation deals with the various Python C API
>   calls that >> can call back into Python.
> 
>   CT> He will.
> 
> Good!  You'll write a PEP.

Actually, "He" is me. While I speak terrible German, my 
Tismerish is pretty good (Tismerish to English is a *huge* 
jump <wink>).

But I can't figure out what the h*ll is being PEPed. We know 
that continuations / coroutines / generators have great value. 
We know that stackless is not continuations; it's some mods 
(mostly to ceval.c) that enables continuation.c. But the 
questions you're asking (after protesting that you want a 
formal spec, not a reference implementation) are all about 
Christian's implementation of continuation.c. (Well, OK, it's 
whether the stackless mods are enough to allow a perfect 
continuations implementation.)

Assuming that stackless can get along with GC, ceval.c and 
grammar changes (or Christian can make it so), it seems to 
me the PEPable issue is whether the value this can add is 
worth the price of a less linear implementation.

still-a-no-brainer-to-me-ly y'rs

- Gordon



From jack at oratrix.nl  Mon Aug  7 16:23:14 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 07 Aug 2000 16:23:14 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons 
In-Reply-To: Message by Christian Tismer <tismer@appliedbiometrics.com> ,
	     Mon, 07 Aug 2000 15:47:39 +0200 , <398EBDFB.4ED9FAE7@appliedbiometrics.com> 
Message-ID: <20000807142314.C0186303181@snelboot.oratrix.nl>

> > Let me make sure I understand: If I invoke a continuation when there
> > are extra C stack frames between the mainloop invocation that captured
> > the continuation and the call of the continuation, the interpreter
> > raises an exception?
> 
> Not always. Frames which are not currently bound by an
> interpreter acting on them can always be jump targets.
> Only those frames which are currently in the middle of
> an opcode are forbidden.

And how about the reverse? If I'm inside a Python callback from C code, will 
the Python code be able to use continuations? This is important, because there 
are a lot of GUI applications where almost all code is executed within a C 
callback. I'm pretty sure (and otherwise I'll be corrected within 
milliseconds:-) that this is the case for MacPython IDE and PythonWin (don't 
know about Idle).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From jeremy at beopen.com  Mon Aug  7 16:32:35 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 7 Aug 2000 10:32:35 -0400 (EDT)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <1246464444-103035791@hypernet.com>
References: <398E93F3.374B585A@appliedbiometrics.com>
	<1246464444-103035791@hypernet.com>
Message-ID: <14734.51331.820955.54653@bitdiddle.concentric.net>

Gordon,

Thanks for channeling Christian, if that's what writing a PEP on this
entails :-).

I am also a little puzzled about the subject of the PEP.  I think you
should hash it out with Barry "PEPmeister" Warsaw.  There are two
different issues -- the stackless implementation and the new control
structure exposed to programmers (e.g. continuations, coroutines,
iterators, generators, etc.).  It seems plausible to address these in
two different PEPs, possibly in competing PEPs (e.g. coroutines
vs. continuations).

Jeremy



From Vladimir.Marangozov at inrialpes.fr  Mon Aug  7 16:38:32 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 16:38:32 +0200 (CEST)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <1246464444-103035791@hypernet.com> from "Gordon McMillan" at Aug 07, 2000 10:20:50 AM
Message-ID: <200008071438.QAA22748@python.inrialpes.fr>

Gordon McMillan wrote:
> 
> But I can't figure out what the h*ll is being PEPed.
> ...
> Assuming that stackless can get along with GC,

As long as frames are not considered for GC, don't worry about GC.

> ceval.c and grammar changes (or Christian can make it so), it seems to 
> me the PEPable issue is whether the value this can add is 
> worth the price of a less linear implementation.

There's an essay + paper available, slides and an implementation.
What's the problem about formalizing this in a PEP and addressing
the controversial issues + explaining how they are dealt with?

I mean, if you're a convinced long-time Stackless user and everything
is obvious for you, this PEP should try to convince the rest of us -- 
so write it down and ask no more <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tismer at appliedbiometrics.com  Mon Aug  7 16:50:42 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Mon, 07 Aug 2000 16:50:42 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
References: <20000807142314.C0186303181@snelboot.oratrix.nl>
Message-ID: <398ECCC2.957A9F67@appliedbiometrics.com>


Jack Jansen wrote:
> 
> > > Let me make sure I understand: If I invoke a continuation when there
> > > are extra C stack frames between the mainloop invocation that captured
> > > the continuation and the call of the continuation, the interpreter
> > > raises an exception?
> >
> > Not always. Frames which are not currently bound by an
> > interpreter acting on them can always be jump targets.
> > Only those frames which are currently in the middle of
> > an opcode are forbidden.
> 
> And how about the reverse? If I'm inside a Python callback from C code, will
> the Python code be able to use continuations? This is important, because there
> are a lot of GUI applications where almost all code is executed within a C
> callback. I'm pretty sure (and otherwise I'll be corrected within
> milliseconds:-) that this is the case for MacPython IDE and PythonWin (don't
> know about Idle).

Without extra effort, this will be problematic. If C calls back
into Python, not by the trampoline scheme that stackless uses,
but by causing an interpreter recursion, then this interpreter
will be limited. It can jump to any other frame that is not held
by an interpreter on the C stack, but the calling frame of the
C extension for instance is locked. Touching it causes an
exception.
This need not necessarily be a problem. Assume you have one or a
couple of frames sitting around, caught as a continuation.
Your Python callback from C jumps to that continuation and does
something. Afterwards, it returns to the C callback.
Performing some cycles of an idle task may be a use of such
a thing.
But as soon as you want to leave the complete calling chain,
be able to modify it, return to a level above your callback
and such, you need to implement your callback in a different
way.
The scheme is rather simple and can be seen in the stackless
map implementation: You need to be able to store your complete
state information in a frame, and you need to provide an
execute function for your frame. Then you return the magic
Py_UnwindToken, and your prepared frame will be scheduled
like any pure Python function frame.

Summary: By default, C extensions are restricted to stackful
behavior. By giving them a stackless interface, you can
enable it completely for all continuation stuff.
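The "store your state in a frame and provide an execute function" scheme can be sketched as a trampoline in pure Python. The names below (Frame, run, factorial) are illustrative only, not the real Stackless C API; the point is that the scheduler loop replaces C-stack recursion:

```python
# A toy trampoline, analogous to the Py_UnwindToken scheme: instead
# of recursing, a function packages its remaining work in a "frame"
# object, and a flat scheduler loop executes frames one at a time.

class Frame:
    def __init__(self, func, *args):
        self.func, self.args = func, args

    def execute(self):
        return self.func(*self.args)

def run(result):
    # Keep executing frames until a plain value comes back.
    while isinstance(result, Frame):
        result = result.execute()
    return result

def factorial(n, acc=1):
    if n <= 1:
        return acc
    # "Unwind" by returning a frame instead of calling ourselves.
    return Frame(factorial, n - 1, acc * n)

print(run(Frame(factorial, 5)))   # 120
```

Because the C stack never grows, `run(Frame(factorial, 2000))` succeeds where a plain recursive version would hit the recursion limit.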

cheers - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com



From gmcm at hypernet.com  Mon Aug  7 17:28:01 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Mon, 7 Aug 2000 11:28:01 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <200008071438.QAA22748@python.inrialpes.fr>
References: <1246464444-103035791@hypernet.com> from "Gordon McMillan" at Aug 07, 2000 10:20:50 AM
Message-ID: <1246460413-103278281@hypernet.com>

Vladimir Marangozov wrote:
> Gordon McMillan wrote:
> > 
> > But I can't figure out what the h*ll is being PEPed.
> > ...
...
> 
> > ceval.c and grammar changes (or Christian can make it so), it
> > seems to me the PEPable issue is whether the value this can add
> > is worth the price of a less linear implementation.
> 
> There's an essay + paper available, slides and an implementation.

Of which the most up to date is the implementation. The 
slides / docs describe an earlier, more complex scheme.

> What's the problem about formalizing this in a PEP and addressing
> the controversial issues + explaining how they are dealt with?

That's sort of what I was asking. As far as I can tell, what's 
controversial is "continuations". That's not in scope. I would 
like to know what controversial issues there are that *are* in 
scope. 
 
> I mean, if you're a convinced long-time Stackless user and
> everything is obvious for you, this PEP should try to convince
> the rest of us -- so write it down and ask no more <wink>.

That's exactly wrong. If that were the case, I would be forced 
to vote -1 on any addition / enhancement to Python that I 
personally didn't plan on using.

- Gordon



From Vladimir.Marangozov at inrialpes.fr  Mon Aug  7 17:53:15 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 17:53:15 +0200 (CEST)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <1246460413-103278281@hypernet.com> from "Gordon McMillan" at Aug 07, 2000 11:28:01 AM
Message-ID: <200008071553.RAA22891@python.inrialpes.fr>

Gordon McMillan wrote:
> 
> > What's the problem about formalizing this in a PEP and addressing
> > the controversial issues + explaining how they are dealt with?
> 
> That's sort of what I was asking. As far as I can tell, what's 
> controversial is "continuations". That's not in scope. I would 
> like to know what controversial issues there are that *are* in 
> scope. 

Here's the context that might help you figure out what I'd
like to see in this PEP. I wasn't at the last conference, I read
the source and the essay years ago, and I had no idea that the most
up to date thing is the implementation, which, btw, I refuse to look
at again without a clear summary of what this code does to refresh
my memory on the whole subject.

I'd like to see an overview of the changes, their expected impact on
the core, the extensions, and whatever else you judge worthy to write
about.

I'd like to see a summary of the reactions that have been emitted and
what issues are non-issues for you, and which ones are. I'd like to see
a first draft giving me a horizontal view on the subject in its entirety. 
Code examples are welcome, too. I can then start thinking about it
in a more structured way on this basis. I don't have such a basis right
now, because there's no up-to-date document in plain English that
allows me to do that. And without such a document, I won't do it.

it's-simple-<wink>'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From sjoerd at oratrix.nl  Mon Aug  7 18:19:59 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Mon, 07 Aug 2000 18:19:59 +0200
Subject: [Python-Dev] SRE incompatibility
In-Reply-To: Your message of Wed, 05 Jul 2000 01:46:07 +0200.
             <002601bfe612$06e90ec0$f2a6b5d4@hagrid> 
References: <20000704095542.8697B31047C@bireme.oratrix.nl> 
            <002601bfe612$06e90ec0$f2a6b5d4@hagrid> 
Message-ID: <20000807162000.5190631047C@bireme.oratrix.nl>

Is this problem ever going to be solved or is it too hard?
If it's too hard, I can fix xmllib to not depend on this.  This
incompatibility is the only reason I'm still not using sre.

In case you don't remember, the regexp that is referred to is
regexp = '(([a-z]+):)?([a-z]+)$'
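For reference, the behaviour Sjoerd relies on can be checked directly; when the optional "prefix:" group does not participate in the match, both of its groups should be unset, which is what pre returned and what sre was expected to return after the fix:

```python
import re

regexp = '(([a-z]+):)?([a-z]+)$'
m = re.match(regexp, 'smil')
# No colon in 'smil', so the optional '(([a-z]+):)?' group cannot
# participate; groups 1 and 2 must both be None.
print(m.group(0, 1, 2, 3))   # ('smil', None, None, 'smil')
```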

On Wed, Jul 5 2000 "Fredrik Lundh" wrote:

> sjoerd wrote:
> 
> > >>> re.match(regexp, 'smil').group(0,1,2,3)
> > ('smil', None, 's', 'smil')
> > >>> import pre
> > >>> pre.match(regexp, 'smil').group(0,1,2,3)
> > ('smil', None, None, 'smil')
> > 
> > Needless to say, I am relying on the third value being None...
> 
> I've confirmed this (last night's fix should have solved this,
> but it didn't).  I'll post patches as soon as I have them...
> 
> </F>
> 
> 

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From pf at artcom-gmbh.de  Mon Aug  7 10:14:54 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Mon, 7 Aug 2000 10:14:54 +0200 (MEST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <14734.7730.698860.642851@anthem.concentric.net> from "Barry A. Warsaw" at "Aug 6, 2000 10:25:54 pm"
Message-ID: <m13Li3i-000DieC@artcom0.artcom-gmbh.de>

Hi,

Guido:
>     >> dict.default('hello', []).append('hello')

Greg Ewing <greg at cosc.canterbury.ac.nz>:
>     GE> Is this new method going to apply to dictionaries only,
>     GE> or is it to be considered part of the standard mapping
>     GE> interface?
 
Barry A. Warsaw:
> I think we've settled on setdefault(), which is more descriptive, even
> if it's a little longer.  I have posted SF patch #101102 which adds
> setdefault() to both the dictionary object and UserDict (along with
> the requisite test suite and doco changes).

This didn't answer the question raised by Greg Ewing.  As far as I can
see, the patch doesn't touch 'dbm', 'shelve', and so on.  So, judging
from the patch, the answer is "applies to dictionaries only".

What about the other external mapping types already in the core,
like 'dbm', 'shelve', and so on?

If the patch doesn't add this new method to these other mapping types, 
this fact should at least be documented, similar to the methods 'items()' 
and 'values()' that are already unimplemented in 'dbm':
 """Dbm objects behave like mappings (dictionaries), except that 
    keys and values are always strings.  Printing a dbm object 
    doesn't print the keys and values, and the items() and values() 
    methods are not supported."""

I'm still -1 on the name:  Nobody would expect that a method 
called 'setdefault()' will actually return something useful.  Maybe 
it would be better to invent an absolutely obfuscated new name, so 
that everybody is forced to actually *READ* the documentation of this 
method; otherwise nobody will guess what it is supposed to do or, even
worse, how to make clever use of it.

At least it would be a lot more likely that someone becomes curious 
about what a method called 'grezelbatz()' is supposed to do than that
someone will actually look up the documentation of a method called
'setdefault()'.

If the average Python programmer ever starts to use this method 
at all, then I believe it is very likely that we will see him/her
coding:
	dict.setdefault('key', [])
	dict['key'].append('bar')

So I'm also still -1 on the concept.  I'm +0 on Greg's proposal that
it would be better to make this a builtin function that can be applied
to all mapping types.
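For contrast, the two-line pattern above is exactly what the return
value is meant to eliminate; a minimal sketch of the intended idiom:

```python
# The intended idiom: setdefault() both installs the default (on first
# use) and returns the stored value, so lookup and update collapse
# into one expression.
d = {}
d.setdefault('key', []).append('foo')   # 'key' absent: [] stored, then appended
d.setdefault('key', []).append('bar')   # 'key' present: default ignored
print(d)   # -> {'key': ['foo', 'bar']}
```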

Maybe it would be even better to delay this until Python 3000, when
builtin types may have become real classes, so that this method can
be inherited by all mapping types from an abstract mapping base class.

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)



From tim_one at email.msn.com  Mon Aug  7 23:52:18 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 7 Aug 2000 17:52:18 -0400
Subject: Fun with call/cc (was RE: [Python-Dev] Stackless Python - Pros and Cons)
In-Reply-To: <14734.46096.366920.827786@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEILGOAA.tim_one@email.msn.com>

[Tim]
> On the one hand, I don't think I know of a language *not* based
> on Scheme that has call/cc (or a moral equivalent).

[Jeremy Hylton]
> ML also has call/cc, at least the Concurrent ML variant.

So it does!  I've found 3 language lines that have full-blown call/cc (not
counting the early versions of REBOL, since they took it out later), and at
least one web page claiming "that's all, folks":

1. Scheme + derivatives (but not including most Lisps).

2. Standard ML + derivatives (but almost unique among truly
   functional languages):

   http://cm.bell-labs.com/cm/cs/what/smlnj/doc/SMLofNJ/pages/cont.html

   That page is pretty much incomprehensible on its own.  Besides
   callcc (no "/"), SML-NJ also has related "throw", "isolate",
   "capture" and "escape" functions.  At least some of them *appear*
   to be addressing Kent Pitman's specific complaints about the
   excruciating interactions between call/cc and unwind-protect in
   Scheme.

3. Unlambda.  This one is a hoot!  Don't know why I haven't bumped
   into it before:

   http://www.eleves.ens.fr:8080/home/madore/programs/unlambda/
   "Your Functional Programming Language Nightmares Come True"

   Unlambda is a deliberately obfuscated functional programming
   language, whose only data type is function and whose only
   syntax is function application:  no lambdas (or other "special
   forms"), no integers, no lists, no variables, no if/then/else,
   ...  call/cc is spelled with the single letter "c" in Unlambda,
   and the docs note "expressions including c function calls tend
   to be hopelessly difficult to track down.  This was, of course,
   the reason for including it in the language in the first place".

   Not all frivolous, though!  The page goes on to point out that
   writing an interpreter for Unlambda in something other than Scheme
   exposes many of the difficult issues (like implementing call/cc
   in a language that doesn't have any such notion -- which is,
   after all, almost all languages), in a language that's otherwise
   relentlessly simple-minded so doesn't bog you down with
   accidental complexities.

Doesn't mean call/cc sucks, but language designers *have* been avoiding it
in vast numbers -- despite that the Scheme folks have been pushing it (&
pushing it, & pushing it) in every real language they flee to <wink>.
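For readers who have never met it: call/cc hands a program its own
"rest of the computation" as a callable value.  Python cannot express
the full, re-invocable form, but a one-shot, upward-only analogue can
be sketched with exceptions.  This is purely illustrative of the
escape-early use case, not how Stackless works:

```python
# One-shot, upward-only analogue of call/cc, sketched with exceptions.
# Real call/cc continuations can be stored and re-invoked later; this
# toy only supports escaping back "up" to the capture point.

class _Escape(Exception):
    def __init__(self, value):
        self.value = value

def call_with_escape(body):
    """Call body(escape); calling escape(v) aborts straight back here,
    making v the result of the whole call."""
    def escape(value):
        raise _Escape(value)
    try:
        return body(escape)
    except _Escape as e:
        return e.value

def product(nums):
    # Multiply, but bail out immediately if a zero is seen.
    def body(escape):
        result = 1
        for n in nums:
            if n == 0:
                escape(0)        # non-local exit, skips remaining work
            result *= n
        return result
    return call_with_escape(body)

print(product([2, 3, 0, 5]))     # -> 0
print(product([2, 3, 5]))        # -> 30
```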

BTW, lest anyone get the wrong idea, I'm (mostly) in favor of it!  It can't
possibly be sold on any grounds other than that "it works, for real Python
programmers with real programming problems they can't solve in other ways",
though.  Christian has been doing a wonderful (if slow-motion <wink>) job of
building that critical base of real-life users.





From guido at beopen.com  Tue Aug  8 01:03:46 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 18:03:46 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre.py,1.23,1.24 sre_compile.py,1.29,1.30 sre_parse.py,1.29,1.30
In-Reply-To: Your message of "Mon, 07 Aug 2000 13:59:08 MST."
             <200008072059.NAA11904@slayer.i.sourceforge.net> 
References: <200008072059.NAA11904@slayer.i.sourceforge.net> 
Message-ID: <200008072303.SAA31635@cj20424-a.reston1.va.home.com>

> -- reset marks if repeat_one tail doesn't match
>    (this should fix Sjoerd's xmllib problem)

Somebody please add a test case for this to test_re.py !!!
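A sketch of such a test, using the regexp and the expected groups from
Sjoerd's report earlier in the thread:

```python
import re

# Regexp from Sjoerd's report: group 2 is nested inside the optional
# group 1, so when group 1 doesn't participate in the match, group 2
# must be None as well (pre's behavior, which sre should now match).
regexp = '(([a-z]+):)?([a-z]+)$'

m = re.match(regexp, 'smil')
assert m.group(0, 1, 2, 3) == ('smil', None, None, 'smil')
```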

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From esr at thyrsus.com  Tue Aug  8 00:13:02 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:13:02 -0400
Subject: [Python-Dev] Adding library modules to the core
Message-ID: <20000807181302.A27463@thyrsus.com>

A few days ago I asked about the procedure for adding a module to the
Python core library.  I have a framework class for things like menu systems
and symbolic debuggers I'd like to add.

Guido asked if this was similar to the TreeWidget class in IDLE.  I 
investigated and discovered that it is not, and told him so.  I am left
with a couple of related questions:

1. Has anybody got a vote on the menubrowser framework facility I described?

2. Do we have a procedure for vetting modules for inclusion in the stock
distribution?  If not, should we institute one?

3. I am willing to do a pass through the Vaults of Parnassus and other
sources for modules that seem both sufficiently useful and sufficiently
mature to be added.  I have in mind things like mimectl, PIL, and Vladimir's
shared-memory module.  

Now, assuming I do 3, would I need to go through the vote process
on each of these, or can I get a ukase from the BDFL authorizing me to
fold in stuff?

I realize I'm raising questions for which there are no easy answers.
But Python is growing.  The Python social machine needs to adapt to
make such decisions in a more timely and less ad-hoc fashion.  I'm not
attached to being the point person in this process, but somebody's gotta be.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Question with boldness even the existence of a God; because, if there
be one, he must more approve the homage of reason, than that of
blindfolded fear.... Do not be frightened from this inquiry from any
fear of its consequences. If it ends in the belief that there is no
God, you will find incitements to virtue in the comfort and
pleasantness you feel in its exercise...
	-- Thomas Jefferson, in a 1787 letter to his nephew



From esr at thyrsus.com  Tue Aug  8 00:24:03 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:24:03 -0400
Subject: Fun with call/cc (was RE: [Python-Dev] Stackless Python - Pros and Cons)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEILGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Aug 07, 2000 at 05:52:18PM -0400
References: <14734.46096.366920.827786@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCCEILGOAA.tim_one@email.msn.com>
Message-ID: <20000807182403.A27485@thyrsus.com>

Tim Peters <tim_one at email.msn.com>:
> Doesn't mean call/cc sucks, but language designers *have* been avoiding it
> in vast numbers -- despite that the Scheme folks have been pushing it (&
> pushing it, & pushing it) in every real language they flee to <wink>.

Yes, we have.  I haven't participated in conspiratorial huggermugger with
other ex-Schemers, but I suspect we'd all answer pretty much the same way.
Lots of people have been avoiding call/cc not because it sucks but because
the whole area is very hard to think about even if you have the right set
of primitives.
 
> BTW, lest anyone get the wrong idea, I'm (mostly) in favor of it!  It can't
> possibly be sold on any grounds other than that "it works, for real Python
> programmers with real programming problems they can't solve in other ways",
> though.  Christian has been doing a wonderful (if slow-motion <wink>) job of
> building that critical base of real-life users.

And it's now Christian's job to do the next step: supplying up-to-date
documentation on his patch and proposal as a PEP.

Suggestion: In order to satisfy the BDFL's conservative instincts, perhaps
it would be better to break the Stackless patch into two pieces -- one 
that de-stack-izes ceval, and one that implements new language features.
That way we can build a firm base for later exploration.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"Government is not reason, it is not eloquence, it is force; like fire, a
troublesome servant and a fearful master. Never for a moment should it be left
to irresponsible action."
	-- George Washington, in a speech of January 7, 1790



From thomas at xs4all.net  Tue Aug  8 00:23:35 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 00:23:35 +0200
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000807181302.A27463@thyrsus.com>; from esr@thyrsus.com on Mon, Aug 07, 2000 at 06:13:02PM -0400
References: <20000807181302.A27463@thyrsus.com>
Message-ID: <20000808002335.A266@xs4all.nl>

On Mon, Aug 07, 2000 at 06:13:02PM -0400, Eric S. Raymond wrote:

[ You didn't ask for votes on all these, but the best thing I can do is
vote :-]

> 1. Has anybody got a vote on the menubrowser framework facility I described?

+0. I don't see any harm in adding it, but I can't envision a use for it,
myself.

> I have in mind things like mimectl,

+1. A nice complement to the current mime and message handling routines.

> PIL,

+0. The main reason I don't compile PIL myself is that it's such a hassle
to do it each time, so I think adding it would be nice. However, I'm not
sure whether it's doable to add, or whether it would present a lot of
problems for 'strange' platforms and the like.

> and Vladimir's shared-memory module.

+1. Fits very nicely with the mmapmodule, even if it's supported on fewer
platforms.

But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
new PEP, 'enriching the Standard Library' ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Tue Aug  8 00:39:30 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:39:30 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808002335.A266@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 12:23:35AM +0200
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl>
Message-ID: <20000807183930.A27556@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
> new PEP, 'enriching the Standard Library' ?

I think that leads in a sub-optimal direction.  Adding suitable modules
shouldn't be a one-shot or episodic event but a continuous process of 
incorporating the best work the community has done.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"Taking my gun away because I might shoot someone is like cutting my tongue
out because I might yell `Fire!' in a crowded theater."
        -- Peter Venetoklis



From esr at thyrsus.com  Tue Aug  8 00:42:24 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:42:24 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808002335.A266@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 12:23:35AM +0200
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl>
Message-ID: <20000807184224.B27556@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> On Mon, Aug 07, 2000 at 06:13:02PM -0400, Eric S. Raymond wrote:
> 
> [ You didn't ask for votes on all these, but the best thing I can do is
> vote :-]
> 
> > > 1. Has anybody got a vote on the menubrowser framework facility I described?
> 
> +0. I don't see any harm in adding it, but I can't envision a use for it,
> myself.

I'll cheerfully admit that I think it's kind of borderline myself.  It works,
but it teeters on the edge of being too specialized for the core library.  I
might only +0 it myself :-).
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

As with the Christian religion, the worst advertisement for Socialism
is its adherents.
	-- George Orwell 



From thomas at xs4all.net  Tue Aug  8 00:38:39 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 00:38:39 +0200
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000807183930.A27556@thyrsus.com>; from esr@thyrsus.com on Mon, Aug 07, 2000 at 06:39:30PM -0400
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl> <20000807183930.A27556@thyrsus.com>
Message-ID: <20000808003839.Q13365@xs4all.nl>

On Mon, Aug 07, 2000 at 06:39:30PM -0400, Eric S. Raymond wrote:
> Thomas Wouters <thomas at xs4all.net>:
> > But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
> > new PEP, 'enriching the Standard Library' ?

> I think that leads in a sub-optimal direction.  Adding suitable modules
> shouldn't be a one-shot or episodic event but a continuous process of 
> incorporating the best work the community has done.

That depends on what the PEP does. PEPs can do two things (according to the
PEP that covers PEPs :): argue for a new feature/addition to the Python
language, or describe a standard or procedure of some sort. This PEP could
perhaps do both: describe a standard procedure for proposing and accepting a
new module in the library (and probably also removal, though that's a lot
trickier) AND do some catching-up on that process to get a few good modules
into the stdlib before 2.0 goes into a feature freeze (which is next week,
by the way.)

As for the procedure to add a new module, I think someone volunteering to
'adopt' the module and perhaps a few people reviewing it would about do it,
for the average module. Giving people a chance to say 'no!' of course.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Tue Aug  8 00:59:54 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:59:54 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808003839.Q13365@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 12:38:39AM +0200
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl> <20000807183930.A27556@thyrsus.com> <20000808003839.Q13365@xs4all.nl>
Message-ID: <20000807185954.B27636@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> That depends on what the PEP does. PEPs can do two things (according to the
> PEP that covers PEPs :): argue for a new feature/addition to the Python
> language, or describe a standard or procedure of some sort. This PEP could
> perhaps do both: describe a standard procedure for proposing and accepting a
> new module in the library (and probably also removal, though that's a lot
> trickier) AND do some catching-up on that process to get a few good modules
> into the stdlib before 2.0 goes into a feature freeze (which is next week,
> by the way.)
> 
> As for the procedure to add a new module, I think someone volunteering to
> 'adopt' the module and perhaps a few people reviewing it would about do it,
> for the average module. Giving people a chance to say 'no!' of course.

Sounds like my cue to write a PEP.  What's the URL for the PEP on PEPs again?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

See, when the GOVERNMENT spends money, it creates jobs; whereas when the money
is left in the hands of TAXPAYERS, God only knows what they do with it.  Bake
it into pies, probably.  Anything to avoid creating jobs.
	-- Dave Barry



From bwarsaw at beopen.com  Tue Aug  8 00:58:42 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 18:58:42 -0400 (EDT)
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com>
Message-ID: <14735.16162.275037.583897@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr at thyrsus.com> writes:

    ESR> 2. Do we have a procedure for vetting modules for inclusion
    ESR> in the stock distribution?  If not, should we institute one?

Is there any way to use the SourceForge machinery to help here?  The
first step would be to upload a patch so at least the new stuff
doesn't get forgotten, and it's always easy to find the latest version
of the changes.

Also SF has a polling or voting tool, doesn't it?  I know nothing
about it, but perhaps there's some way to leverage it to test the
pulse of the community for any new module (with BDFL veto of course).

-Barry



From esr at thyrsus.com  Tue Aug  8 01:09:39 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 19:09:39 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <14735.16162.275037.583897@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 07, 2000 at 06:58:42PM -0400
References: <20000807181302.A27463@thyrsus.com> <14735.16162.275037.583897@anthem.concentric.net>
Message-ID: <20000807190939.A27730@thyrsus.com>

Barry A. Warsaw <bwarsaw at beopen.com>:
> Is there any way to use the SourceForge machinery to help here?  The
> first step would be to upload a patch so at least the new stuff
> doesn't get forgotten, and it's always easy to find the latest version
> of the changes.

Patch?  Eh?  In most cases, adding a library module will consist of adding
one .py and one .tex, with no changes to existing code.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The price of liberty is, always has been, and always will be blood.  The person
who is not willing to die for his liberty has already lost it to the first
scoundrel who is willing to risk dying to violate that person's liberty.  Are
you free? 
	-- Andrew Ford



From bwarsaw at beopen.com  Tue Aug  8 01:04:39 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 19:04:39 -0400 (EDT)
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com>
	<20000808002335.A266@xs4all.nl>
	<20000807183930.A27556@thyrsus.com>
	<20000808003839.Q13365@xs4all.nl>
	<20000807185954.B27636@thyrsus.com>
Message-ID: <14735.16519.185236.794662@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr at thyrsus.com> writes:

    ESR> Sounds like my cue to write a PEP.  What's the URL for the
    ESR> PEP on PEPs again?

http://python.sourceforge.net/peps/pep-0001.html

-Barry



From bwarsaw at beopen.com  Tue Aug  8 01:06:21 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 19:06:21 -0400 (EDT)
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com>
	<14735.16162.275037.583897@anthem.concentric.net>
	<20000807190939.A27730@thyrsus.com>
Message-ID: <14735.16621.369206.564320@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr at thyrsus.com> writes:

    ESR> Patch?  Eh?  In most cases, adding a library module will
    ESR> consist of adding one .py and one .tex, with no changes to
    ESR> existing code.

And there's no good way to put those into SF?  If the Patch Manager
isn't appropriate, what about the Task Manager (I dunno, I've never
looked at it).  The cool thing about using SF is that there's less of
a chance that this stuff will get buried in an inbox.

-Barry



From guido at beopen.com  Tue Aug  8 02:21:43 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 19:21:43 -0500
Subject: [Python-Dev] bug-fixes in cnri-16-start branch
In-Reply-To: Your message of "Sun, 06 Aug 2000 22:49:06 GMT."
             <398DEB62.789B4C9C@nowonder.de> 
References: <398DEB62.789B4C9C@nowonder.de> 
Message-ID: <200008080021.TAA31766@cj20424-a.reston1.va.home.com>

> I have a question on the right procedure for fixing a simple
> bug in the 1.6 release branch.
> 
> Bug #111162 appeared because the tests for math.rint() are
> already contained in the cnri-16-start revision of test_math.py
> while the "try: ... except AttributeError: ..." construct which
> was checked in shortly after was not.
> 
> Now the correct bugfix is already known (and has been
> applied to the main branch). I have updated the test_math.py
> file in my working version with "-r cnri-16-start" and
> made the changes.
> 
> Now I probably should just commit, close the patch
> (with an appropriate follow-up) and be happy.

That would work, except that I prefer to remove math.rint altogether,
as explained by Tim Peters.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From esr at snark.thyrsus.com  Tue Aug  8 01:31:21 2000
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 19:31:21 -0400
Subject: [Python-Dev] Request for PEP number
Message-ID: <200008072331.TAA27825@snark.thyrsus.com>

In accordance with the procedures in PEP 1, I am applying to initiate PEP 2.  

Proposed title: Procedure for incorporating new modules into the core.

Abstract: This PEP describes review and voting procedures for 
incorporating candidate modules and extensions into the Python core.

Barry, could I get you to create a pep2 at python.org mailing list for
this one?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

That the said Constitution shall never be construed to authorize
Congress to infringe the just liberty of the press or the rights of
conscience; or to prevent the people of the United states who are
peaceable citizens from keeping their own arms...
        -- Samuel Adams, in "Phila. Independent Gazetteer", August 20, 1789



From guido at beopen.com  Tue Aug  8 02:42:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 19:42:40 -0500
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: Your message of "Mon, 07 Aug 2000 18:13:02 -0400."
             <20000807181302.A27463@thyrsus.com> 
References: <20000807181302.A27463@thyrsus.com> 
Message-ID: <200008080042.TAA31856@cj20424-a.reston1.va.home.com>

[ESR]
> 1. Has anybody got a vote on the menubrowser framework facility I described?

Eric, as far as I can tell you haven't shown the code or given a
pointer to it.  I explained to you that your description left me in
the dark as to what it does.  Or did I miss a pointer?  It seems your
module doesn't even have a name!  This is a bad way to start a
discussion about the admission procedure.  Nothing has ever been
accepted into Python before the code was written and shown.

> 2. Do we have a procedure for vetting modules for inclusion in the stock
> distribution?  If not, should we institute one?

Typically, modules get accepted after extensive lobbying and agreement
from multiple developers.  The definition of "developer" is vague, and
I can't give a good rule -- not everybody who has been admitted to the
python-dev list has enough standing to make his opinion count!

Basically, I rely a lot on what various people say, but I have my own
bias about who I trust in what area.  I don't think I'll have to
publish a list of this bias, but one thing is clear: I'm not counting
votes!  Proposals and ideas get approved based on merit, not on how
many people argue for (or against) it.  I want Python to keep its
typical Guido-flavored style, and (apart from the occasional successful
channeling by TP) there's only one way to do that: let me be the final
arbiter.  I'm willing to be the bottleneck, it gives Python the
typical slow-flowing evolution that has served it well over the past
ten years.  (At the same time, I can't read all messages in every
thread on python-dev any more -- that's why substantial ideas need a
PEP to summarize the discussion.)

> 3. I am willing to do a pass through the Vaults of Parnassus and other
> sources for modules that seem both sufficiently useful and sufficiently
> mature to be added.  I have in mind things like mimectl, PIL, and Vladimir's
> shared-memory module.  

I don't know mimectl or Vladimir's module (how does it compare to
mmap?).  Regarding PIL, I believe the problem there is that it is a
large body of code maintained by a third party.  It should become part
of a SUMO release and of binary releases, but I see no advantage in
carrying it along in the core source distribution.

> Now, assuming I do 3, would I need to go through the vote process
> on each of these, or can I get a ukase from the BDFL authorizing me to
> fold in stuff?

Sorry, I don't write blank checks.

> I realize I'm raising questions for which there are no easy answers.
> But Python is growing.  The Python social machine needs to adapt to
> make such decisions in a more timely and less ad-hoc fashion.  I'm
> not attached to being the point person in this process, but
> somebody's gotta be.

Watch out though: if we open the floodgates now we may seriously
deteriorate the quality of the standard library, without doing much
good.

I'd much rather see an improved Vaults of Parnassus (where every
module uses distutils and installation becomes trivial) than a
fast-track process for including new code in the core.

That said, I think writing a bunch of thoughts up as a PEP makes a lot
of sense!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From esr at thyrsus.com  Tue Aug  8 03:23:34 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 21:23:34 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008080042.TAA31856@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Mon, Aug 07, 2000 at 07:42:40PM -0500
References: <20000807181302.A27463@thyrsus.com> <200008080042.TAA31856@cj20424-a.reston1.va.home.com>
Message-ID: <20000807212333.A27996@thyrsus.com>

Guido van Rossum <guido at beopen.com>:
> [ESR]
> > 1. Has anybody got a vote on the menubrowser framework facility I described?
> 
> Eric, as far as I can tell you haven't shown the code or given a
> pointer to it.  I explained to you that your description left me in
> the dark as to what it does.  Or did I miss a pointer?  It seems your
> module doesn't even have a name!  This is a bad way to start a
> discussion about the admission procedure.  Nothing has ever been
> accepted into Python before the code was written and shown.

Easily fixed.  Code's in an enclosure.
 
> > 2. Do we have a procedure for vetting modules for inclusion in the stock
> > distribution?  If not, should we institute one?
> 
> Typically, modules get accepted after extensive lobbying and agreement
> from multiple developers.  The definition of "developer" is vague, and
> I can't give a good rule -- not everybody who has been admitted to the
> python-dev list has enough standing to make his opinion count!

Understood, and I assume one of those insufficient-standing people is
*me*, given my short tenure on the list, and I cheerfully accept that.
The real problem I'm going after here is that this vague rule won't
scale well.

> Basically, I rely a lot on what various people say, but I have my own
> bias about who I trust in what area.  I don't think I'll have to
> publish a list of this bias, but one thing is clear: I;m not counting
> votes! 

I wasn't necessarily expecting you to.  I can't imagine writing a
procedure in which the BDFL doesn't retain a veto.

> I don't know mimectl or Vladimir's module (how does it compare to
> mmap?).

Different, as Thomas Wouters has already observed.  Vladimir's module is more
oriented towards supporting semaphores and exclusion.  At one point many months
ago, before Vladimir was on the list, I looked into it as a way to do exclusion
locking for shared shelves.  Vladimir and I even negotiated a license change
with INRIA so Python could use it.  That was my first pass at sharable 
shelves; it foundered on problems with the BSDDB 1.85 API.  But shm would
still be worth having in the core library, IMO.

The mimectl module supports classes for representing MIME objects that
include MIME-structure-sensitive mutation operations.  Very strong candidate
for inclusion, IMO.

> > Now, assuming I do 3, would I need to go through the vote process
> > on each of these, or can I get a ukase from the BDFL authorizing me to
> > fold in stuff?
> 
> Sorry, I don't write blank checks.

And I wasn't expecting one.  I'll write up some thoughts about this in the PEP.
 
> > I realize I'm raising questions for which there are no easy answers.
> > But Python is growing.  The Python social machine needs to adapt to
> > make such decisions in a more timely and less ad-hoc fashion.  I'm
> > not attached to being the point person in this process, but
> > somebody's gotta be.
> 
> Watch out though: if we open the floodgates now we may seriously
> deteriorate the quality of the standard library, without doing much
> good.

The alternative is to end up with a Perl-like Case of the Missing Modules,
where lots of things Python writers should be able to count on as standard
builtins can't realistically be used, because the users they deliver to
aren't going to want to go through a download step.
 
> I'd much rather see an improved Vaults of Parnassus (where every
> module uses distutils and installation becomes trivial) than a
> fast-track process for including new code in the core.

The trouble is that I flat don't believe in this solution.  It works OK
for developers, who will be willing to do extra download steps -- but it
won't fly with end-user types.

> That said, I think writing a bunch of thoughts up as a PEP makes a lot
> of sense!

I've applied to initiate PEP 2.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Hoplophobia (n.): The irrational fear of weapons, correctly described by 
Freud as "a sign of emotional and sexual immaturity".  Hoplophobia, like
homophobia, is a displacement symptom; hoplophobes fear their own
"forbidden" feelings and urges to commit violence.  This would be
harmless, except that they project these feelings onto others.  The
sequelae of this neurosis include irrational and dangerous behaviors
such as passing "gun-control" laws and trashing the Constitution.
-------------- next part --------------
# menubrowser.py -- framework class for abstract browser objects

from sys import stderr

class MenuBrowser:
    "Support abstract browser operations on a stack of indexable objects."
    def __init__(self, debug=0, errout=stderr):
        self.page_stack = []
        self.selection_stack = []
        self.viewbase_stack = []
        self.viewport_height = 0
        self.debug = debug
        self.errout = errout

    def match(self, a, b):
        "Browseable-object comparison."
        return a == b

    def push(self, browseable, selected=None):
        "Push a browseable object onto the location stack."
        if self.debug:
            self.errout.write("menubrowser.push(): pushing %s=@%d, selection=%s\n" % (browseable, id(browseable), `selected`))
        selnum = 0
        if selected is None:
            if self.debug:
                self.errout.write("menubrowser.push(): selection defaulted\n")
        else:
            for i in range(len(browseable)):
                selnum = len(browseable) - i - 1
                if self.match(browseable[selnum], selected):
                     break
            if self.debug:
                self.errout.write("menubrowser.push(): selection set to %d\n" % (selnum))
        self.page_stack.append(browseable)
        self.selection_stack.append(selnum)
        self.viewbase_stack.append(selnum - selnum % self.viewport_height)
        if self.debug:
            object = self.page_stack[-1]
            selection = self.selection_stack[-1]
            viewbase = self.viewbase_stack[-1]
            self.errout.write("menubrowser.push(): pushed %s=@%d->%d, selection=%d, viewbase=%d\n" % (object, id(object), len(self.page_stack), selection, viewbase))

    def pop(self):
        "Pop a browseable object off the location stack."
        if not self.page_stack:
            if self.debug:
                self.errout.write("menubrowser.pop(): stack empty\n")
            return None
        else:
            item = self.page_stack[-1]
            self.page_stack = self.page_stack[:-1]
            self.selection_stack = self.selection_stack[:-1]
            self.viewbase_stack = self.viewbase_stack[:-1]
            if self.debug:
                if len(self.page_stack) == 0:
                    self.errout.write("menubrowser.pop(): stack is empty.")
                else:
                    self.errout.write("menubrowser.pop(): new level %d, object=@%d, selection=%d, viewbase=%d\n" % (len(self.page_stack), id(self.page_stack[-1]), self.selection_stack[-1], self.viewbase_stack[-1]))
            return item

    def stackdepth(self):
        "Return the current stack depth."
        return len(self.page_stack)

    def list(self):
        "Return all elements of the current object that ought to be visible."
        if not self.page_stack:
            return None
        object = self.page_stack[-1]
        selection = self.selection_stack[-1]
        viewbase = self.viewbase_stack[-1]

        if self.debug:
            self.errout.write("menubrowser.list(): stack level %d. object @%d, listing %s\n" % (len(self.page_stack)-1, id(object), object[viewbase:viewbase+self.viewport_height]))

        # This requires a slice method
        return object[viewbase:viewbase+self.viewport_height]

    def top(self):
        "Return the top-of-stack menu"
        if self.debug >= 2:
            self.errout.write("menubrowser.top(): level=%d, @%d\n" % (len(self.page_stack)-1,id(self.page_stack[-1])))
        return self.page_stack[-1]

    def selected(self):
        "Return the currently selected element in the top menu."
        object = self.page_stack[-1]
        selection = self.selection_stack[-1]
        if self.debug:
            self.errout.write("menubrowser.selected(): at %d, object=@%d, %s\n" % (len(self.page_stack)-1, id(object), self.selection_stack[-1]))
        return object[selection]

    def viewbase(self):
        "Return the viewport base of the current menu."
        object = self.page_stack[-1]
        selection = self.selection_stack[-1]
        base = self.viewbase_stack[-1]
        if self.debug:
            self.errout.write("menubrowser.viewbase(): at level=%d, object=@%d, %d\n" % (len(self.page_stack)-1, id(object), base,))
        return base

    def thumb(self):
        "Return top and bottom boundaries of a thumb scaled to the viewport."
        object = self.page_stack[-1]
        windowscale = float(self.viewport_height) / float(len(object))
        thumb_top = self.viewbase() * windowscale
        thumb_bottom = thumb_top + windowscale * self.viewport_height - 1
        return (thumb_top, thumb_bottom)

    def move(self, delta=1, wrap=0):
        "Move the selection on the current item downward."
        if delta == 0:
            return
        object = self.page_stack[-1]
        oldloc = self.selection_stack[-1]

        # Change the selection.  Requires a length method
        if oldloc + delta in range(len(object)):
            newloc = oldloc + delta
        elif wrap:
            newloc = (oldloc + delta) % len(object)
        elif delta > 0:
            newloc = len(object) - 1
        else:
            newloc = 0
        self.selection_stack[-1] = newloc

        # When the selection is moved out of the viewport, move the viewbase
        # just far enough to track it.
        oldbase = self.viewbase_stack[-1]
        if newloc in range(oldbase, oldbase + self.viewport_height):
            pass
        elif newloc < oldbase:
            self.viewbase_stack[-1] = newloc
        else:
            self.scroll(newloc - (oldbase + self.viewport_height) + 1)

        if self.debug:
            self.errout.write("menubrowser.down(): at level=%d, object=@%d, old selection=%d, new selection = %d, new base = %d\n" % (len(self.page_stack)-1, id(object), oldloc, newloc, self.viewbase_stack[-1]))

        return (oldloc != newloc)

    def scroll(self, delta=1, wrap=0):
        "Scroll the viewport up or down in the current option."
        if self.debug:
            self.errout.write("menubrowser.scroll(): delta=%d\n" % (delta,))
        object = self.page_stack[-1]
        if not wrap:
            oldbase = self.viewbase_stack[-1]
            if delta > 0 and oldbase+delta > len(object)-self.viewport_height:
                return
            elif delta < 0 and oldbase + delta < 0:
                return
        self.viewbase_stack[-1] = (self.viewbase_stack[-1] + delta) % len(object)

    def dump(self):
        "Dump the whole stack of objects."
        self.errout.write("Viewport height: %d\n" % (self.viewport_height,))
        for i in range(len(self.page_stack)):
            self.errout.write("Page: %d\n" % (i,))
            self.errout.write("Selection: %d\n" % (self.selection_stack[i],))
            self.errout.write(`self.page_stack[i]` + "\n");

    def next(self, wrap=0):
        return self.move(1, wrap)

    def previous(self, wrap=0):
        return self.move(-1, wrap)

    def page_down(self):
        return self.move(2*self.viewport_height-1)

    def page_up(self):
        return self.move(-(2*self.viewport_height-1))

if __name__ == '__main__': 
    import cmd, string, readline

    def itemgen(prefix, count):
        return map(lambda x, pre=prefix: pre + `x`, range(count))

    testobject = MenuBrowser()
    testobject.viewport_height = 6
    testobject.push(itemgen("A", 11))

    class browser(cmd.Cmd):
        def __init__(self):
            self.wrap = 0
            self.prompt = "browser> "

        def preloop(self):
            print "%d: %s (%d) in %s" %  (testobject.stackdepth(), testobject.selected(), testobject.viewbase(), testobject.list())

        def postloop(self):
            print "Goodbye."

        def postcmd(self, stop, line):
            self.preloop()
            return stop

        def do_quit(self, line):
            return 1

        def do_exit(self, line):
            return 1

        def do_EOF(self, line):
            return 1

        def do_list(self, line):
            testobject.dump()

        def do_n(self, line):
            testobject.next()

        def do_p(self, line):
            testobject.previous()

        def do_pu(self, line):
            testobject.page_up()

        def do_pd(self, line):
            testobject.page_down()

        def do_up(self, line):
            if string.strip(line):
                n = string.atoi(line)
            else:
                n = 1
            testobject.move(-n, self.wrap)

        def do_down(self, line):
            if string.strip(line):
                n = string.atoi(line)
            else:
                n = 1
            testobject.move(n, self.wrap)

        def do_s(self, line):
            if string.strip(line):
                n = string.atoi(line)
            else:
                n = 1
            testobject.scroll(n, self.wrap)

        def do_pop(self, line):
            testobject.pop()

        def do_gen(self, line):
            tokens = string.split(line)
            testobject.push(itemgen(tokens[0], string.atoi(tokens[1])))

        def do_dump(self, line):
            testobject.dump()

        def do_wrap(self, line):
            self.wrap = 1 - self.wrap
            if self.wrap:
                print "Wrap is now on."
            else:
                print "Wrap is now off."

        def emptyline(self):
            pass

    browser().cmdloop()

From MarkH at ActiveState.com  Tue Aug  8 03:36:24 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 8 Aug 2000 11:36:24 +1000
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000807181302.A27463@thyrsus.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMELLDDAA.MarkH@ActiveState.com>

> Guido asked if this was similar to the TreeWidget class in IDLE.  I
> investigated and discovered that it is not, and told him so.  I am left
> with a couple of related questions:
>
> 1. Has anybody got a vote on the menubrowser framework facility I
> described?

I would have to give it a -1.  It probably should only be a -0, but I
dropped it a level in the interests of keeping the library small and
relevant.

In a nutshell, it is proposed as a "framework class for abstract browser
objects", but I don't see how.  It looks like a reasonable framework for a
particular kind of browser built for a text based system.  I can not see
how a GUI browser could take advantage of it.

For example:
* How does a "page" concept make sense in a high-res GUI?  Why do we have a
stack of pages?
* What is a "viewport height" - is that a measure of pixels?  If not, what
font are you assuming?  (sorry - obviously rhetorical, given my "text only"
comments above.)
* How does a "thumb position" relate to scroll bars that existing GUI
widgets almost certainly have.

etc.

While I am sure you find it useful, I don't see how it helps anyone else,
so I don't see how it qualifies as a standard module.

If it is designed as part of a "curses" package, then I would be +0 - I
would happily defer to your (or someone else's) judgement regarding its
relevance in that domain.

Obviously, there is a reasonable chance I am missing the point....

Mark.




From bwarsaw at beopen.com  Tue Aug  8 04:34:18 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 22:34:18 -0400 (EDT)
Subject: [Python-Dev] Request for PEP number
References: <200008072331.TAA27825@snark.thyrsus.com>
Message-ID: <14735.29098.168698.86981@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr at snark.thyrsus.com> writes:

    ESR> In accordance with the procedures in PEP 1, I am applying to
    ESR> initiate PEP 2.

    ESR> Proposed title: Procedure for incorporating new modules into
    ESR> the core.

    ESR> Abstract: This PEP will describe review and voting
    ESR> procedures for incorporating candidate modules and extensions
    ESR> into the Python core.

Done.

    ESR> Barry, could I get you to create a pep2 at python.org mailing
    ESR> list for this one?

We decided not to create separate mailing lists for each PEP.

-Barry



From greg at cosc.canterbury.ac.nz  Tue Aug  8 05:08:48 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 08 Aug 2000 15:08:48 +1200 (NZST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A
 small proposed change to dictionaries' "get" method...)
In-Reply-To: <m13Lj9u-000DieC@artcom0.artcom-gmbh.de>
Message-ID: <200008080308.PAA12740@s454.cosc.canterbury.ac.nz>

artcom0!pf at artcom-gmbh.de:
>	dict.setdefault('key', [])
>	dict['key'].append('bar')

I would agree with this more if it said

   dict.setdefault([])
   dict['key'].append('bar')

But I have a problem with all of these proposals: they require
implicitly making a copy of the default value, which violates
the principle that Python never copies anything unless you
tell it to. The default "value" should really be a thunk, not
a value, e.g.

   dict.setdefault(lambda: [])
   dict['key'].append('bar')

or

   dict.get_or_add('key', lambda: []).append('bar')

But I don't really like that, either, because lambdas look
ugly to me, and I don't want to see any more builtin
constructs that more-or-less require their use.

I keep thinking that the solution to this lies somewhere
in the direction of short-circuit evaluation techniques and/or
augmented assignment, but I can't quite see how yet.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From esr at thyrsus.com  Tue Aug  8 05:30:03 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 23:30:03 -0400
Subject: [Python-Dev] Request for PEP number
In-Reply-To: <14735.29098.168698.86981@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 07, 2000 at 10:34:18PM -0400
References: <200008072331.TAA27825@snark.thyrsus.com> <14735.29098.168698.86981@anthem.concentric.net>
Message-ID: <20000807233003.A28267@thyrsus.com>

Barry A. Warsaw <bwarsaw at beopen.com>:
>     ESR> Barry, could I get you to create a pep2 at python.org mailing
>     ESR> list for this one?
> 
> We decided not to create separate mailing lists for each PEP.

OK, where should discussion take place?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

A ``decay in the social contract'' is detectable; there is a growing
feeling, particularly among middle-income taxpayers, that they are not
getting back, from society and government, their money's worth for
taxes paid. The tendency is for taxpayers to try to take more control
of their finances ..
	-- IRS Strategic Plan, (May 1984)



From tim_one at email.msn.com  Tue Aug  8 05:44:05 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 7 Aug 2000 23:44:05 -0400
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <200008080308.PAA12740@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com>

> artcom0!pf at artcom-gmbh.de:
> >	dict.setdefault('key', [])
> >	dict['key'].append('bar')
>

[Greg Ewing]
> I would agree with this more if it said
>
>    dict.setdefault([])
>    dict['key'].append('bar')

Ha!  I *told* Guido people would think that's the proper use of something
named setdefault <0.9 wink>.

> But I have a problem with all of these proposals: they require
> implicitly making a copy of the default value, which violates
> the principle that Python never copies anything unless you
> tell it to.

But they don't.  The occurrence of an, e.g., [] literal in Python source
*always* leads to a fresh list being created whenever the line of code
containing it is executed.  That behavior is guaranteed by the Reference
Manual.  In that respect

    dict.get('hi', [])
or
    dict.getorset('hi', []).append(42)  # getorset is my favorite

is exactly the same as

    x = []

No copy of anything is made; the real irritation is that because arguments
are always evaluated, we end up mucking around allocating an empty list
regardless of whether it's needed; which you partly get away from via your:

> The default "value" should really be a thunk, not
> a value, e.g.
>
>    dict.setdefault(lambda: [])
>    dict['key'].append('bar')
>
> or
>
>    dict.get_or_add('key', lambda: []).append('bar')

except that lambda is also an executable expression and so now we end up
creating an anonymous function dynamically regardless of whether it's
needed.

> But I don't really like that, either, because lambdas look
> ugly to me, and I don't want to see any more builtin
> constructs that more-or-less require their use.

Ditto.

> I keep thinking that the solution to this lies somewhere
> in the direction of short-circuit evaluation techniques and/or
> augmented assignment, but I can't quite see how yet.

If new *syntax* were added, the compiler could generate short-circuiting
code.  Guido will never go for this <wink>, but to make it concrete, e.g.,

    dict['key']||[].append('bar')
    count[word]||0 += 1
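For comparison, the closest spelling that did eventually land in the standard library uses a factory callable: `collections.defaultdict` (added in Python 2.5, well after this thread) calls its thunk only on missing keys, giving the short-circuit behavior without new syntax:

```python
from collections import defaultdict

# int() is the thunk: called only when a key is missing, yielding 0.
count = defaultdict(int)
for word in "the quick the lazy the".split():
    count[word] += 1  # the spirit of the hypothetical count[word]||0 += 1
print(count["the"])  # 3

# list as the factory covers the dict['key']||[].append('bar') case.
groups = defaultdict(list)
groups["key"].append("bar")
print(groups["key"])  # ['bar']
```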

I found that dict.get(...) already confused my brain at times because my
*eyes* want to stop at "[]" when scanning code for dict references.
".get()" just doesn't stick out as much; setdefault/default/getorset won't
either.

can't-win-ly y'rs  - tim





From esr at thyrsus.com  Tue Aug  8 05:55:14 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 23:55:14 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMELLDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Tue, Aug 08, 2000 at 11:36:24AM +1000
References: <20000807181302.A27463@thyrsus.com> <ECEPKNMJLHAPFFJHDOJBMELLDDAA.MarkH@ActiveState.com>
Message-ID: <20000807235514.C28267@thyrsus.com>

Mark Hammond <MarkH at ActiveState.com>:
> For example:
> * How does a "page" concept make sense in a high-res GUI?  Why do we have a
> stack of pages?
> * What is a "viewport height" - is that a measure of pixels?  If not, what
> font are you assuming?  (sorry - obviously rhetorical, given my "text only"
> comments above.)
> * How does a "thumb position" relate to scroll bars that existing GUI
> widgets almost certainly have.

It's not designed for use with graphical browsers.  Here are three contexts
that could use it:

* A menu tree being presented through a window or viewport (this is how it's
  being used now).

* A symbolic debugger that can browse text around a current line.

* A database browser for a sequential record-based file format.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Under democracy one party always devotes its chief energies
to trying to prove that the other party is unfit to rule--and
both commonly succeed, and are right... The United States
has never developed an aristocracy really disinterested or an
intelligentsia really intelligent. Its history is simply a record
of vacillations between two gangs of frauds. 
	--- H. L. Mencken



From tim_one at email.msn.com  Tue Aug  8 06:52:20 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 8 Aug 2000 00:52:20 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008080042.TAA31856@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEJJGOAA.tim_one@email.msn.com>

[Guido]
> ...
> Nothing has ever been accepted into Python before the code
> was written and shown.

C'mon, admit it:  you were sooooo appalled by the thread that led to the
creation of tabnanny.py that you decided at once it would end up in the
distribution, just so you could justify skipping all the dozens of tedious
long messages in which The Community developed The General Theory of
Tab-Space Equivalence ab initio.  It was just too much of a
stupid-yet-difficult hack to resist <wink>.

> ...
> I want Python to keep its typical Guido-flavored style,

So do most of us, most of the time.  Paradoxically, it may be easier to
stick to that as Python's popularity zooms beyond the point where it's even
*conceivable* that "votes" make any sense.

> and (apart from the occasional successful channeling by TP) there's
> only one way to do that: let me be the final arbiter.

Well, there's only one *obvious* way to do it.  That's what keeps it
Pythonic.

> I'm willing to be the bottleneck, it gives Python the typical slow-
> flowing evolution that has served it well over the past ten years.

Except presumably for 2.0, where we decided at the last second to change
large patches from "postponed" to "gotta have it".  Consistency is the
hobgoblin ...

but-that's-pythonic-too-ly y'rs  - tim





From moshez at math.huji.ac.il  Tue Aug  8 07:42:30 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Tue, 8 Aug 2000 08:42:30 +0300 (IDT)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808002335.A266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008080836470.1417-100000@sundial>

On Tue, 8 Aug 2000, Thomas Wouters wrote:

> > PIL,
> 
> +0. The main reason I don't compile PIL myself is because it's such a hassle
> to do it each time, so I think adding it would be nice. However, I'm not
> sure if it's doable to add, whether it would present a lot of problems for
> 'strange' platforms and the like.
> 
> > and Vladimir's shared-memory module.
> 
> +1. Fits very nicely with the mmapmodule, even if it's supported on less
> platforms.
> 
> But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
> new PEP, 'enriching the Standard Library' ?

PIL is definitely in the 206 PEP. The others are not yet there. Please
note that what's needed is a central database of "useful modules" shipped
as distutils .tar.gz archives (or maybe .zip, now that Python has
zipfile.py) plus a simple tool to download and install them. The main
reason for the "batteries included" PEP is reliance on external
libraries, which do not mesh as well with the distutils.

expect-a-change-of-direction-in-the-pep-ly y'rs, Z.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From pf at artcom-gmbh.de  Tue Aug  8 10:00:29 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Tue, 8 Aug 2000 10:00:29 +0200 (MEST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com> from Tim Peters at "Aug 7, 2000 11:44: 5 pm"
Message-ID: <m13M4JJ-000DieC@artcom0.artcom-gmbh.de>

Hi Tim!

Tim Peters:
[...]
> Ha!  I *told* Guido people would think that's the proper use of something
> named setdefault <0.9 wink>.
[...]
>     dict.getorset('hi', []).append(42)  # getorset is my favorite

'getorset' is a *MUCH* better name compared to 'default' or 'setdefault'. 

Regards, Peter



From R.Liebscher at gmx.de  Tue Aug  8 11:26:47 2000
From: R.Liebscher at gmx.de (Rene Liebscher)
Date: Tue, 08 Aug 2000 11:26:47 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub>
Message-ID: <398FD257.CFDC3B74@gmx.de>

Greg Ward wrote:
> 
> On 04 August 2000, Mark Hammond said:
> > I would prefer python20_bcpp.lib, but that is not an issue.
> 
> Good suggestion: the contents of the library are more important than the
> format.  Rene, can you make this change and include it in your next
> patch?  Or did you have some hidden, subtle reson for "bcpp_python20" as
> opposed to "python20_bcpp"?
OK, it is no problem to change it.
> 
> > I am a little confused by the intention, tho.  Wouldnt it make sense to
> > have Borland builds of the core create a Python20.lib, then we could keep
> > the pragma in too?
> >
> > If people want to use Borland for extensions, can't we ask them to use that
> > same compiler to build the core too?  That would seem to make lots of the
> > problems go away?
> 
> But that requires people to build all of Python from source, which I'm
> guessing is a bit more bothersome than building an extension or two from
> source.  Especially since Python is already distributed as a very
> easy-to-use binary installer for Windows, but most extensions are not.
> 
> Rest assured that we probably won't be making things *completely*
> painless for those who do not toe Chairman Bill's party line and insist
> on using "non-standard" Windows compilers.  They'll probably have to get
> python20_bcpp.lib (or python20_gcc.lib, or python20_lcc.lib) on their
> own -- whether downloaded or generated, I don't know.  But the
> alternative is to include 3 or 4 python20_xxx.lib files in the standard
> Windows distribution, which I think is silly.
(GCC uses libpython20.a)
It is not necessary to include the libraries for all compilers. The only
thing that is necessary is a def-file for the library. Every compiler I
know has a program to create an import library from a def-file.
BCC55 can even convert python20.lib into its own format. (The program is
called "coff2omf". BCC55 uses the OMF format for its libraries, which is
different from MSVC's COFF format. (This answers your question, Tim?))
Maybe there should be a file in the distribution which explains what to do
if someone wants to use another compiler, especially how to build an import
library for this compiler, or at least some general information about what
you need to do.
( or should it be included in the 'Ext' documentation. )

kind regards

Rene Liebscher



From Vladimir.Marangozov at inrialpes.fr  Tue Aug  8 12:00:35 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 8 Aug 2000 12:00:35 +0200 (CEST)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 07, 2000 07:42:40 PM
Message-ID: <200008081000.MAA29344@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> I don't know mimectl or Vladimir's module (how does it compare to
> mmap?).

To complement ESR:

- written 3 years ago
- exports a file-like interface, defines 2 object types: shm & sem
- resembles buffer but lacks the slice interface.
- has all sysV shared memory bells & whistles + native semaphore support

http://sirac.inrialpes.fr/~marangoz/python/shm

Technically, mmap is often built on top of shared memory OS facilities.
Adding slices + Windows code for shared mem & semaphores + a simplified
unified interface might be a plan.
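That layering is visible from Python itself: the stdlib mmap module can create an anonymous mapping (pass -1 for the fd) which, on POSIX systems, is shareable with forked children. A minimal sketch of that idea, not of the shm module's API:

```python
import mmap

# Anonymous shared mapping: no backing file; on POSIX the pages are
# inherited across fork(), so parent and child see the same bytes.
buf = mmap.mmap(-1, 4096)

buf.write(b"hello from shared memory")
buf.seek(0)
print(buf.read(24))  # b'hello from shared memory'

buf.close()
```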

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From mal at lemburg.com  Tue Aug  8 12:46:25 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 08 Aug 2000 12:46:25 +0200
Subject: [Python-Dev] Adding library modules to the core
References: <200008081000.MAA29344@python.inrialpes.fr>
Message-ID: <398FE501.FFF09FAE@lemburg.com>

Vladimir Marangozov wrote:
> 
> Guido van Rossum wrote:
> >
> > I don't know mimectl or Vladimir's module (how does it compare to
> > mmap?).
> 
> To complement ESR:
> 
> - written 3 years ago
> - exports a file-like interface, defines 2 object types: shm & sem
> - resembles buffer but lacks the slice interface.
> - has all sysV shared memory bells & whistles + native semaphore support
> 
> http://sirac.inrialpes.fr/~marangoz/python/shm
> 
> Technically, mmap is often built on top of shared memory OS facilities.
> Adding slices + Windows code for shared mem & semaphores + a simplified
> unified interface might be a plan.

I would be +1 if you could get it to work on Windows, +0
otherwise.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From R.Liebscher at gmx.de  Tue Aug  8 13:41:12 2000
From: R.Liebscher at gmx.de (Rene Liebscher)
Date: Tue, 08 Aug 2000 13:41:12 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de>
Message-ID: <398FF1D8.A91A8C02@gmx.de>

Rene Liebscher wrote:
> 
> Greg Ward wrote:
> >
> > On 04 August 2000, Mark Hammond said:
> > > I would prefer python20_bcpp.lib, but that is not an issue.
> >
> > Good suggestion: the contents of the library are more important than the
> > format.  Rene, can you make this change and include it in your next
> > patch?  Or did you have some hidden, subtle reson for "bcpp_python20" as
> > opposed to "python20_bcpp"?
> OK, it is no problem to change it.
I forgot to ask which name you would like for debug libraries:

	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"

Maybe we should use "bcpp_python20_d.lib", and use the name schema which
I suggested first.


kind regards
 
Rene Liebscher



From skip at mojam.com  Tue Aug  8 15:24:06 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 8 Aug 2000 08:24:06 -0500 (CDT)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <m13M4JJ-000DieC@artcom0.artcom-gmbh.de>
References: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com>
	<m13M4JJ-000DieC@artcom0.artcom-gmbh.de>
Message-ID: <14736.2550.586217.758500@beluga.mojam.com>

    >> dict.getorset('hi', []).append(42)  # getorset is my favorite

    Peter> 'getorset' is a *MUCH* better name compared to 'default' or
    Peter> 'setdefault'.

Shouldn't that be getorsetandget?  After all, it doesn't just set or get:
it gets, but if it's undefined, it sets, then gets.

I know I'll be shouted down, but I still vote against a method that both
sets and gets dict values.  I don't think the abbreviation in the source is
worth the obfuscation of the code.

Skip




From akuchlin at mems-exchange.org  Tue Aug  8 15:31:29 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 8 Aug 2000 09:31:29 -0400
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <14736.2550.586217.758500@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 08, 2000 at 08:24:06AM -0500
References: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com> <m13M4JJ-000DieC@artcom0.artcom-gmbh.de> <14736.2550.586217.758500@beluga.mojam.com>
Message-ID: <20000808093129.A18519@kronos.cnri.reston.va.us>

On Tue, Aug 08, 2000 at 08:24:06AM -0500, Skip Montanaro wrote:
>I know I'll be shouted down, but I still vote against a method that both
>sets and gets dict values.  I don't think the abbreviation in the source is
>worth the obfuscation of the code.

-1 from me, too.  A shortcut that only saves a line or two of code
isn't worth the obscurity of the name.

("Ohhh, I get it.  Back on that old minimalism kick, Andrew?"

"Not back on it.  Still on it.")

--amk




From effbot at telia.com  Tue Aug  8 17:10:28 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 8 Aug 2000 17:10:28 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <ECEPKNMJLHAPFFJHDOJBMEHPDDAA.MarkH@ActiveState.com>
Message-ID: <00c901c0014a$cc7c9be0$f2a6b5d4@hagrid>

mark wrote:
> So I think that the adoption of our half-solution (ie, we are really only
> forcing a better error message - not even getting a traceback to indicate
> _which_ module fails)

note that the module name is available to the Py_InitModule4
module (for obvious reasons ;-), so it's not that difficult to
improve the error message.

how about:

...

static char not_initialized_error[] =
"ERROR: Module %.200s loaded an uninitialized interpreter!\n\
  This Python has API version %d, module %.200s has version %d.\n";

...

    if (!Py_IsInitialized()) {
        char message[500];
        sprintf(message, not_initialized_error, name, PYTHON_API_VERSION,
            name, module_api_version);
        Py_FatalError(message);
    }

</F>




From pf at artcom-gmbh.de  Tue Aug  8 16:48:32 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Tue, 8 Aug 2000 16:48:32 +0200 (MEST)
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <14736.2550.586217.758500@beluga.mojam.com> from Skip Montanaro at "Aug 8, 2000  8:24: 6 am"
Message-ID: <m13MAgC-000DieC@artcom0.artcom-gmbh.de>

Hi,

Tim Peters:
>     >> dict.getorset('hi', []).append(42)  # getorset is my favorite
> 
>     Peter> 'getorset' is a *MUCH* better name compared to 'default' or
>     Peter> 'setdefault'.
 
Skip Montanaro:
> Shouldn't that be getorsetandget?  After all, it doesn't just set or get it
> gets, but if it's undefined, it sets, then gets.

That would defeat the main purpose of this method: abbreviation.
This name is simply too long.

> I know I'll be shouted down, but I still vote against a method that both
> sets and gets dict values.  I don't think the abbreviation in the source is
> worth the obfuscation of the code.

Yes.  
But I got the impression that Patch#101102 can't be avoided any more.  
So in this situation Tim's '.getorset()' is the lesser of two evils 
compared to '.default()' or '.setdefault()'.

BTW: 
I think the "informal" mapping interface should get more
explicit documentation.  The language reference only mentions the
'len()' builtin function and indexing.  But the section about mappings
contains the sentence: "The extension modules dbm, gdbm, bsddb provide
additional examples of mapping types."

On the other hand section "2.1.6 Mapping Types" of the library reference
says: "The following operations are defined on mappings ..." and then
lists all methods including 'get()', 'update()', 'copy()' ...

Unfortunately only a small subset of these methods actually works on
a dbm mapping:

>>> import dbm
>>> d = dbm.open("piff", "c")
>>> d.get('foo', [])
Traceback (innermost last):
  File "<stdin>", line 1, in ?
  AttributeError: get
>>> d.copy()
Traceback (innermost last):
  File "<stdin>", line 1, in ?
  AttributeError: copy
   
That should be documented.
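In the meantime, a portable workaround is easy to sketch.  Assuming only
that the mapping raises KeyError for a missing key (which dbm objects do),
a dict-style get() can be emulated without the method; the helper name
here is illustrative, not part of any module:

```python
def mapping_get(m, key, default=None):
    # Emulate dict.get() for minimal mappings (such as dbm objects)
    # that support item access but lack the get() method.
    try:
        return m[key]
    except KeyError:
        return default
```

This works uniformly on real dicts and on the dbm-style mappings above.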

Regards, Peter



From trentm at ActiveState.com  Tue Aug  8 17:18:12 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Tue, 8 Aug 2000 08:18:12 -0700
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <398FF1D8.A91A8C02@gmx.de>; from R.Liebscher@gmx.de on Tue, Aug 08, 2000 at 01:41:12PM +0200
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de>
Message-ID: <20000808081811.A10965@ActiveState.com>

On Tue, Aug 08, 2000 at 01:41:12PM +0200, Rene Liebscher wrote:
> I forgot to ask which name you would like for debug libraries
> 
> 	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"
> 
> maybe we should use "bcpp_python20_d.lib", and use the name schema
> which I suggested first.

Python20 is most important, so it should go first. Then I suppose it is
debatable whether 'd' or 'bcpp' should come first. My preference is
"python20_bcpp_d.lib" because this would maintain the pattern that the
basename of debug-built libs, etc. end in "_d".

Generally speaking this would give a name spec of

python<version>(_<metadata>)*(_d)?.lib
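A hypothetical checker for that spec (the regex and function name are
illustrative only; note that an ambiguous trailing "_d" component is
resolved in favor of the debug suffix):

```python
import re

# python<version>(_<metadata>)*(_d)?.lib
LIB_NAME = re.compile(r"^python(\d+)((?:_[A-Za-z0-9]+)*?)(_d)?\.lib$")

def parse_lib_name(name):
    m = LIB_NAME.match(name)
    if not m:
        return None
    version, metadata, debug = m.groups()
    # metadata is "_bcpp", "_bcpp_foo", or ""; split out the components
    return version, [p for p in metadata.split("_") if p], bool(debug)
```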


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From thomas at xs4all.net  Tue Aug  8 17:22:17 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 17:22:17 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000808081811.A10965@ActiveState.com>; from trentm@ActiveState.com on Tue, Aug 08, 2000 at 08:18:12AM -0700
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de> <20000808081811.A10965@ActiveState.com>
Message-ID: <20000808172217.G266@xs4all.nl>

On Tue, Aug 08, 2000 at 08:18:12AM -0700, Trent Mick wrote:
> On Tue, Aug 08, 2000 at 01:41:12PM +0200, Rene Liebscher wrote:
> > I forgot to ask which name you would like for debug libraries

> > 	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"

> > may be we should use "bcpp_python20_d.lib", and use the name schema
> > which I suggested first.

> Python20 is most important so it should go first.

To clarify something Rene said earlier (I appear to have deleted that mail
even though I had intended to reply to it :P) 'gcc' names its libraries
'libpython<version>.{so,a}' because that's the UNIX convention: libraries
are named 'lib<name>.<libtype>', where libtype is '.a' for static libraries
and '.so' for dynamic (ELF, in any case) ones, and you link with -l<name>,
without the 'lib' in front of it. The 'lib' is UNIX-imposed, not something
gcc or Guido made up.
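In other words, the convention Thomas describes can be summed up as
(a sketch; the helper is illustrative, not part of any build tool):

```python
# UNIX convention (imposed by the platform, not by gcc or Guido):
# a library <name> lives on disk as lib<name>.a (static) or
# lib<name>.so (shared ELF), and is linked with -l<name>.
def unix_lib_names(name):
    return {
        "static": "lib%s.a" % name,
        "shared": "lib%s.so" % name,
        "linkflag": "-l%s" % name,
    }
```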

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From trentm at ActiveState.com  Tue Aug  8 17:26:03 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Tue, 8 Aug 2000 08:26:03 -0700
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000808172217.G266@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 05:22:17PM +0200
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de> <20000808081811.A10965@ActiveState.com> <20000808172217.G266@xs4all.nl>
Message-ID: <20000808082603.B10965@ActiveState.com>

On Tue, Aug 08, 2000 at 05:22:17PM +0200, Thomas Wouters wrote:
> On Tue, Aug 08, 2000 at 08:18:12AM -0700, Trent Mick wrote:
> > On Tue, Aug 08, 2000 at 01:41:12PM +0200, Rene Liebscher wrote:
> > > I forgot to ask which name you would like for debug libraries
> 
> > > 	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"
> 
> > > may be we should use "bcpp_python20_d.lib", and use the name schema
> > > which I suggested first.
> 
> > Python20 is most important so it should go first.
> 
> To clarify something Rene said earlier (I appear to have deleted that mail
> eventhough I had intended to reply to it :P) 'gcc' names its libraries
> 'libpython<version>.{so,a}' because that's the UNIX convention: libraries
> are named 'lib<name>.<libtype>', where libtype is '.a' for static libraries
> and '.so' for dynamic (ELF, in any case) ones, and you link with -l<name>,
> without the 'lib' in front of it. The 'lib' is UNIX-imposed, not something
> gcc or Guido made up.
> 

Yes, you are right. I was being a Windows bigot there for an email. :)


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From thomas at xs4all.net  Tue Aug  8 17:35:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 17:35:24 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000808082603.B10965@ActiveState.com>; from trentm@ActiveState.com on Tue, Aug 08, 2000 at 08:26:03AM -0700
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de> <20000808081811.A10965@ActiveState.com> <20000808172217.G266@xs4all.nl> <20000808082603.B10965@ActiveState.com>
Message-ID: <20000808173524.H266@xs4all.nl>

On Tue, Aug 08, 2000 at 08:26:03AM -0700, Trent Mick wrote:

[ Discussion about what to call the Borland version of python20.dll:
  bcpp_python20.dll or python20_bcpp.dll. Rene brought up that gcc calls
  "its" library libpython.so, and Thomas points out that that isn't Python's
  decision. ]

> Yes, you are right. I was being a Windows bigot there for an email. :)

And rightly so ! :) I think the 'python20_bcpp' name is more Windows-like,
and if there is some area in which Python should try to stay as platform
specific as possible, it's platform specifics such as libraries :)

Would Windows users(*), when seeing 'bcpp_python20.dll', be thinking "this is a
bcpp specific library of python20", or would they be thinking "this is a
bcpp library for use with python20" ? I'm more inclined to think the second,
myself :-)

*) And the 'user' in this context is the extension-writer and
python-embedder, of course.
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Tue Aug  8 17:46:55 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Tue, 8 Aug 2000 11:46:55 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008081000.MAA29344@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Aug 08, 2000 at 12:00:35PM +0200
References: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> <200008081000.MAA29344@python.inrialpes.fr>
Message-ID: <20000808114655.C29686@thyrsus.com>

Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr>:
> Guido van Rossum wrote:
> > 
> > I don't know mimectl or Vladimir's module (how does it compare to
> > mmap?).
> 
> To complement ESR:
> 
> - written 3 years ago
> - exports a file-like interface, defines 2 object types: shm & sem
> - resembles buffer but lacks the slice interface.
> - has all sysV shared memory bells & whistles + native semaphore support
> 
> http://sirac.inrialpes.fr/~marangoz/python/shm
> 
> Technically, mmap is often built on top of shared memory OS facilities.
> Adding slices + Windows code for shared mem & semaphores + a simplified
> unified interface might be a plan.

Vladimir, I suggest that the most useful thing you could do to advance
the process at this point would be to document shm in core-library style.

At the moment, core Python has nothing (with the weak and nonportable 
exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
shm would address a real gap in the language.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Are we at last brought to such a humiliating and debasing degradation,
that we cannot be trusted with arms for our own defence?  Where is the
difference between having our arms in our own possession and under our
own direction, and having them under the management of Congress?  If
our defence be the *real* object of having those arms, in whose hands
can they be trusted with more propriety, or equal safety to us, as in
our own hands?
        -- Patrick Henry, speech of June 9 1788



From tim_one at email.msn.com  Tue Aug  8 17:46:00 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 8 Aug 2000 11:46:00 -0400
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <14736.2550.586217.758500@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEKNGOAA.tim_one@email.msn.com>

[Skip Montanaro, on .getorset]
> Shouldn't that be getorsetandget?  After all, it doesn't just set
> or get it gets, but if it's undefined, it sets, then gets.

It's mnemonic enough for me.  You can take comfort in that Guido seems to
like "default" better, and is merely incensed by arguments about names
<wink>.

> I know I'll be shouted down, but I still vote against a method that both
> sets and gets dict values.  I don't think the abbreviation in the
> source is worth the obfuscation of the code.

So this is at least your second vote, while I haven't voted at all?  I
protest.

+1 from me.  I'd use it a lot.  Yes, I'm one of those who probably has more
dicts mapping to lists than to strings, and

    if dict.has_key(thing):
        dict[thing].append(newvalue)
    else:
        dict[thing] = [newvalue]

litters my code -- talk about obfuscated!  Of course I know shorter ways to
spell that, but I find them even more obscure than the above.  Easing a
common operation is valuable, firmly in the tradition of the list.extend(),
list.pop(), dict.get(), 3-arg getattr() and no-arg "raise" extensions.  The
*semantics* are clear and non-controversial and frequently desired, they're
simply clumsy to spell now.
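The four-line has_key/append dance above collapses to a single line with
the method under discussion (setdefault returns the existing value, or
stores and returns the default):

```python
d = {}
for thing, newvalue in [("a", 1), ("a", 2), ("b", 3)]:
    # One-line spelling of the has_key/append pattern:
    d.setdefault(thing, []).append(newvalue)
```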

The usual ploy in cases like this is to add the new gimmick and call it
"experimental".  Phooey.  Add it or don't.

for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim





From guido at beopen.com  Tue Aug  8 18:51:27 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 08 Aug 2000 11:51:27 -0500
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: Your message of "Tue, 08 Aug 2000 11:46:55 -0400."
             <20000808114655.C29686@thyrsus.com> 
References: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> <200008081000.MAA29344@python.inrialpes.fr>  
            <20000808114655.C29686@thyrsus.com> 
Message-ID: <200008081651.LAA01319@cj20424-a.reston1.va.home.com>

> At the moment, core Python has nothing (with the weak and nonportable 
> exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> shm would address a real gap in the language.

If it also works on Windows.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Tue Aug  8 17:58:27 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 8 Aug 2000 17:58:27 +0200 (CEST)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808114655.C29686@thyrsus.com> from "Eric S. Raymond" at Aug 08, 2000 11:46:55 AM
Message-ID: <200008081558.RAA30190@python.inrialpes.fr>

Eric S. Raymond wrote:
>
> At the moment, core Python has nothing (with the weak and nonportable
> exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> shm would address a real gap in the language.

There's a Semaphore class in Lib/threading.py. Are there any problems
with it? I haven't used it, but threading.py has thread mutexes and
semaphores on top of them, so as long as you don't need IPC, they should
be fine. Or am I missing something?
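For reference, the threading.Semaphore Vladimir mentions works like this;
it coordinates threads within one process only, with no cross-process
visibility, which is exactly the gap Eric is pointing at:

```python
import threading

sem = threading.Semaphore(2)               # allow at most 2 holders

got_first = sem.acquire(blocking=False)    # slot 1 of 2
got_second = sem.acquire(blocking=False)   # slot 2 of 2
got_third = sem.acquire(blocking=False)    # denied: both slots taken
sem.release()                              # give one slot back
got_again = sem.acquire(blocking=False)    # a slot is free again
```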

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From esr at thyrsus.com  Tue Aug  8 18:07:15 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Tue, 8 Aug 2000 12:07:15 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008081558.RAA30190@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Aug 08, 2000 at 05:58:27PM +0200
References: <20000808114655.C29686@thyrsus.com> <200008081558.RAA30190@python.inrialpes.fr>
Message-ID: <20000808120715.A29873@thyrsus.com>

Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr>:
> Eric S. Raymond wrote:
> >
> > At the moment, core Python has nothing (with the weak and nonportable
> > exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> > shm would address a real gap in the language.
> 
> There's a Semaphore class in Lib/threading.py. Are there any problems
> with it? I haven't used it, but threading.py has thread mutexes and
> semaphores on top of them, so as long as you don't need IPC, they should
> be fine. Or am I missing something?

If I'm not mistaken, that's semaphores across a thread bundle within
a single process. It's semaphores visible across processes that I 
don't think we currently have a facility for.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The people cannot delegate to government the power to do anything
which would be unlawful for them to do themselves.
	-- John Locke, "A Treatise Concerning Civil Government"



From esr at thyrsus.com  Tue Aug  8 18:07:58 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Tue, 8 Aug 2000 12:07:58 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008081651.LAA01319@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Tue, Aug 08, 2000 at 11:51:27AM -0500
References: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> <200008081000.MAA29344@python.inrialpes.fr> <20000808114655.C29686@thyrsus.com> <200008081651.LAA01319@cj20424-a.reston1.va.home.com>
Message-ID: <20000808120758.B29873@thyrsus.com>

Guido van Rossum <guido at beopen.com>:
> > At the moment, core Python has nothing (with the weak and nonportable 
> > exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> > shm would address a real gap in the language.
> 
> If it also works on Windows.

As usual, I expect Unix to lead and Windows to follow.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Government is actually the worst failure of civilized man. There has
never been a really good one, and even those that are most tolerable
are arbitrary, cruel, grasping and unintelligent.
	-- H. L. Mencken 



From guido at beopen.com  Tue Aug  8 19:18:49 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 08 Aug 2000 12:18:49 -0500
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: Your message of "Tue, 08 Aug 2000 11:46:00 -0400."
             <LNBBLJKPBEHFEDALKOLCIEKNGOAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCIEKNGOAA.tim_one@email.msn.com> 
Message-ID: <200008081718.MAA01681@cj20424-a.reston1.va.home.com>

Enough said.  I've checked it in and closed Barry's patch.  Since
'default' is a Java reserved word, I decided that that would not be a
good name for it after all, so I stuck with setdefault().

> for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim

Hm.  Predictably, I'm worried about adding 'as' as a reserved word.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Tue Aug  8 18:17:01 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 8 Aug 2000 18:17:01 +0200
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com><14735.16162.275037.583897@anthem.concentric.net><20000807190939.A27730@thyrsus.com> <14735.16621.369206.564320@anthem.concentric.net>
Message-ID: <001101c00155$cc86ad00$f2a6b5d4@hagrid>

barry wrote:
> And there's no good way to put those into SF?  If the Patch Manager
> isn't appropriate, what about the Task Manager (I dunno, I've never
> looked at it).  The cool thing about using SF is that there's less of
> a chance that this stuff will get buried in an inbox.

why not just switch it on, and see what happens.  I'd prefer
to get a concise TODO list on the login page, rather than having
to look in various strange places (like PEP-160 and PEP-200 ;-)

</F>




From gmcm at hypernet.com  Tue Aug  8 19:51:51 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 8 Aug 2000 13:51:51 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808120715.A29873@thyrsus.com>
References: <200008081558.RAA30190@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Aug 08, 2000 at 05:58:27PM +0200
Message-ID: <1246365382-108994225@hypernet.com>

Eric Raymond wrote:
> Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr>:

> > There's a Semaphore class in Lib/threading.py. Are there any
> > problems with it? I haven't used it, but threading.py has
> > thread mutexes and semaphores on top of them, so as long as you
> > don't need IPC, they should be fine. Or am I missing something?
> 
> If I'm not mistaken, that's semaphores across a thread bundle
> within a single process. It's semaphores visible across processes
> that I don't think we currently have a facility for. 

There's the interprocess semaphore / mutex stuff in 
win32event... oh, never mind...

- Gordon



From ping at lfw.org  Tue Aug  8 22:29:52 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 8 Aug 2000 13:29:52 -0700 (PDT)
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <m13MAgC-000DieC@artcom0.artcom-gmbh.de>
Message-ID: <Pine.LNX.4.10.10008081256050.497-100000@skuld.lfw.org>

On Tue, 8 Aug 2000, Peter Funk wrote:
> 
> Unfortunately only a small subset of these methods actually works on
> a dbm mapping:
> 
> >>> import dbm
> >>> d = dbm.open("piff", "c")
> >>> d.get('foo', [])
> Traceback (innermost last):
>   File "<stdin>", line 1, in ?
>   AttributeError: get

I just got burned (again!) because neither the cgi.FieldStorage()
nor the cgi.FormContentDict() support .get().

I've submitted a patch that adds FieldStorage.get() and makes
FormContentDict a subclass of UserDict (the latter nicely eliminates
almost all of the code in FormContentDict).

(I know it says we're supposed to use FieldStorage, but i rarely if
ever need to use file-upload forms, so SvFormContentDict() is still
by far the most useful to me of the 17 different form implementations
<wink> in the cgi module, i don't care what anyone says...)

By the way, when/why did all of the documentation at the top of
cgi.py get blown away?


-- ?!ng

"All models are wrong; some models are useful."
    -- George Box




From effbot at telia.com  Tue Aug  8 22:46:15 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 8 Aug 2000 22:46:15 +0200
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
References: <Pine.LNX.4.10.10008081256050.497-100000@skuld.lfw.org>
Message-ID: <015901c00179$b718cba0$f2a6b5d4@hagrid>

ping wrote:
> By the way, when/why did all of the documentation at the top of
> cgi.py get blown away?

    Date: Thu, 3 Aug 2000 13:57:47 -0700
    From: Jeremy Hylton <jhylton at users.sourceforge.net>
    To: python-checkins at python.org
    Subject: [Python-checkins] CVS: python/dist/src/Lib cgi.py,1.48,1.49

    Update of /cvsroot/python/python/dist/src/Lib
    In directory slayer.i.sourceforge.net:/tmp/cvs-serv2916

    Modified Files:
     cgi.py 
    Log Message:
    Remove very long doc string (it's all in the docs)
    Modify parse_qsl to interpret 'a=b=c' as key 'a' and value 'b=c'
    (which matches Perl's CGI.pm) 

</F>




From tim_one at email.msn.com  Wed Aug  9 06:57:02 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 9 Aug 2000 00:57:02 -0400
Subject: [Python-Dev] Task Manager on SourceForge
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com>

Under the "what the heck" theory, I enabled the Task Manager on the Python
project -- beware the 6-hour delay!  Created two "subprojects" in it, P1.6
and P2, for tasks generally related to finishing the Python 1.6 and 2.0
releases, respectively.

Don't know anything more about it.  It appears you can set up a web of tasks
under a "subproject", with fields for who's assigned, percent complete,
status, hours of work, priority, start & end dates, and a list of tasks each
task depends on.

If anyone can think of a use for it, be my guest <wink>.

I *suspect* everyone already has admin privileges for the Task Manager, but
can't be sure.  Today I couldn't fool either Netscape or IE5 into displaying
the user-permissions Admin page correctly.  Everyone down to "lemburg" does
have admin privs for TaskMan, but from the middle of MAL's line on down
it's all empty space for me.





From greg at cosc.canterbury.ac.nz  Wed Aug  9 07:27:24 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 09 Aug 2000 17:27:24 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
Message-ID: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz>

I think I've actually found a syntax for lockstep
iteration that looks reasonable (or at least not
completely unreasonable) and is backward compatible:

   for (x in a, y in b):
      ...

Not sure what the implications are for the parser
yet.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From MarkH at ActiveState.com  Wed Aug  9 08:39:30 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Wed, 9 Aug 2000 16:39:30 +1000
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEOGDDAA.MarkH@ActiveState.com>

>    for (x in a, y in b):
>       ...

Hmmm.  Until someone smarter than me shoots it down for some obvious reason
<wink>, it certainly appeals to me.

My immediate reaction _is_ lockstep iteration, and that is the first time I
can say that.  Part of the reason is that it looks like a tuple unpack,
which I think of as a "lockstep/parallel/atomic" operation...

Mark.




From jack at oratrix.nl  Wed Aug  9 10:31:27 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 09 Aug 2000 10:31:27 +0200
Subject: [Python-Dev] A question for the Python Secret Police
Message-ID: <20000809083127.7FFF6303181@snelboot.oratrix.nl>

A question for the Python Secret Police (or P. Inquisition, or whoever 
else:-).

Is the following morally allowed:

package1/mod.py:
class Foo:
    def method1(self):
        ...

package2/mod.py:
from package1.mod import *

class Foo(Foo):
    def method2(self):
        ...

(The background is that the modules are machine-generated and contain
AppleEvent classes. There's a large set of standard classes, such as
Standard_Suite, and applications can signal that they implement
Standard_Suite with a couple of extensions to it. So, in the
Application-X Standard_Suite I'd like to import everything from the
standard Standard_Suite and override/add those methods that are
specific to Application X)
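For what it's worth, the pattern is legal and does what Jack wants: the
base-class list is evaluated before the new name is bound, so 'Foo' inside
the parentheses still refers to the old class.  A minimal sketch:

```python
class Foo:
    def method1(self):
        return "base"

class Foo(Foo):            # subclasses the old Foo, then rebinds the name
    def method2(self):
        return "extension"
```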
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++






From tim_one at email.msn.com  Wed Aug  9 09:15:02 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 9 Aug 2000 03:15:02 -0400
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <200008081718.MAA01681@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>

[Tim]
>> for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim

[Guido]
> Hm.  Predictably, I'm worried about adding 'as' as a reserved word.

But it doesn't need to be, right?  That is, change the stuff following
'import' in

    'from' dotted_name 'import' ('*' | NAME (',' NAME)*)

to

    ('*' | NAME [NAME NAME] (',' NAME [NAME NAME])*)

and verify that whenever the 3-NAME form triggers that the middle of the
NAMEs is exactly "as".  The grammar in the Reference Manual can still
advertise it as a syntactic constraint; if a particular implementation
happens to need to treat it as a semantic constraint due to parser
limitations (and CPython specifically would), the user will never know it.

It doesn't interfere with using "as" a regular NAME elsewhere.  Anyone
pointing out that the line

    from as import as as as

would then be legal will be shot.  Fortran had no reserved words of any
kind, and nobody abused that in practice.  Users may be idiots, but they're
not infants <wink>.
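For reference, the spelling Tim is arguing for (which did land, in
Python 2.0) reads like this; string.capwords is just a stand-in example:

```python
# 'as' rebinds the imported object to a different local name:
from string import capwords as cw
import string as stringmod
```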





From thomas at xs4all.net  Wed Aug  9 10:42:32 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 10:42:32 +0200
Subject: [Python-Dev] Task Manager on SourceForge
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 09, 2000 at 12:57:02AM -0400
References: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com>
Message-ID: <20000809104232.I266@xs4all.nl>

On Wed, Aug 09, 2000 at 12:57:02AM -0400, Tim Peters wrote:

> Don't know anything more about it.  It appears you can set up a web of tasks
> under a "subproject", with fields for who's assigned, percent complete,
> status, hours of work, priority, start & end dates, and a list of tasks each
> task depends on.

Well, it seems mildly useful... It's missing some things that would make it
fairly useful (per-subtask and per-project todo-lists, where you can say 'I
need help with this' and such things, the ability to attach patches to
subtasks (which would be useful for 'my' task of adding augmented
assignment ;) and probably more) but I can imagine why SF didn't include all
that (yet) -- it's a lot of work to do right, and I'm not sure SF has
many projects of the size that need a project manager like this ;)

But unless Guido and the rest of the PyLab team want to keep an overview of
what us overseas or at least other-state lazy bums are doing by trusting us
to keep a webpage up to date rather than informing the mailing list, I don't
think we'll see much use for it. If you *do* want such an overview, it might
be useful. In which case I'll send out some RFE's on my wishes for the
project manager ;)
 
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From ping at lfw.org  Wed Aug  9 11:37:07 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Wed, 9 Aug 2000 02:37:07 -0700 (PDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>

On Wed, 9 Aug 2000, Greg Ewing wrote:
> 
>    for (x in a, y in b):
>       ...

It looks nice, but i'm pretty sure it won't fly.  (x in a, y in b)
is a perfectly valid expression.  For compatibility the parser must
also accept

    for (x, y) in list_of_pairs:

and since the thing after the open-paren can be arbitrarily long,
how is the parser to know whether the lockstep form has been invoked?

Besides, i think Guido has Pronounced quite firmly on zip().

I would much rather petition now to get indices() and irange() into
the built-ins... please pretty please?
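For reference, Greg's lockstep loop spelled with the zip() builtin that
Guido Pronounced on (zip() arrived in Python 2.0):

```python
a = [1, 2, 3]
b = "xyz"

# zip() truncates to the shorter sequence and yields pairs in lockstep
pairs = [(x, y) for x, y in zip(a, b)]
```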


-- ?!ng

"All models are wrong; some models are useful."
    -- George Box




From thomas at xs4all.net  Wed Aug  9 13:06:45 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 13:06:45 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEOGDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Wed, Aug 09, 2000 at 04:39:30PM +1000
References: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz> <ECEPKNMJLHAPFFJHDOJBKEOGDDAA.MarkH@ActiveState.com>
Message-ID: <20000809130645.J266@xs4all.nl>

On Wed, Aug 09, 2000 at 04:39:30PM +1000, Mark Hammond wrote:

> >    for (x in a, y in b):
> >       ...

> Hmmm.  Until someone smarter than me shoots it down for some obvious reason
> <wink>, it certainly appeals to me.

The only objection I can bring up is that parentheses are almost always
optional in Python, and this kind of violates that. Suddenly the presence of
parentheses changes the entire expression, not just the grouping of it. Oh,
and there is the question of whether 'for (x in a):' is allowed, too (it
isn't, currently.)

I'm not entirely sure that the parser will swallow this, however, because
'for (x in a, y in b) in z:' *is* valid syntax... so it might be ambiguous.
Then again, it can probably be worked around. It might not be too pretty,
but it can be worked around ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Wed Aug  9 13:29:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 13:29:13 +0200
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 09, 2000 at 03:15:02AM -0400
References: <200008081718.MAA01681@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>
Message-ID: <20000809132913.K266@xs4all.nl>

[Tim]
> for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim

[Guido]
> Hm.  Predictably, I'm worried about adding 'as' as a reserved word.

[Tim]
> But it doesn't need to be, right?  That is, change the stuff following
> 'import' in
>     'from' dotted_name 'import' ('*' | NAME (',' NAME)*)
> to
>     ('*' | NAME [NAME NAME] (',' NAME [NAME NAME])*)

I'm very, very much +1 on this.  The fact that (for example) 'from' is a
reserved word bothers me no end. If no one is going to comment anymore on
range literals or augmented assignment, I might just tackle this ;)

> Anyone pointing out that the line
>     from as import as as as
> would then be legal will be shot. 

"Cool, that would make 'from from import as as as' a legal sta"<BANG>

Damned American gun laws ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Wed Aug  9 14:30:43 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 07:30:43 -0500
Subject: [Python-Dev] Task Manager on SourceForge
In-Reply-To: Your message of "Wed, 09 Aug 2000 00:57:02 -0400."
             <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com> 
Message-ID: <200008091230.HAA23379@cj20424-a.reston1.va.home.com>

> Under the "what the heck" theory, I enabled the Task Manager on the Python
> project -- beware the 6-hour delay!  Created two "subprojects" in it, P1.6
> and P2, for tasks generally related to finishing the Python 1.6 and 2.0
> releases, respectively.

Beauuuutiful!

> Don't know anything more about it.  It appears you can set up a web of tasks
> under a "subproject", with fields for who's assigned, percent complete,
> status, hours of work, priority, start & end dates, and a list of tasks each
> task depends on.
> 
> If anyone can think of a use for it, be my guest <wink>.

I played with it a bit.  I added three tasks under 1.6 that need to be
done.

> I *suspect* everyone already has admin privileges for the Task Manager, but
> can't be sure.  Today I couldn't fool either Netscape or IE5 into displaying
> the user-permissions Admin page correctly.  Everyone down to "lemburg" does
> have admin privs for TaskMan, but from the middle of MAL's line on on down
> it's all empty space for me.

That must be a Windows limitation on how many popup menus you can
have.  Stupid Windows :-) !  This looks fine on Linux in Netscape (is
there any other browser? :-), and indeed the permissions are set
correctly.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From guido at beopen.com  Wed Aug  9 14:42:49 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 07:42:49 -0500
Subject: [Python-Dev] A question for the Python Secret Police
In-Reply-To: Your message of "Wed, 09 Aug 2000 10:31:27 +0200."
             <20000809083127.7FFF6303181@snelboot.oratrix.nl> 
References: <20000809083127.7FFF6303181@snelboot.oratrix.nl> 
Message-ID: <200008091242.HAA23451@cj20424-a.reston1.va.home.com>

> A question for the Python Secret Police (or P. Inquisition, or whoever 
> else:-).

That would be the Namespace Police in this case.

> Is the following morally allowed:
> 
> package1/mod.py:
> class Foo:
>     def method1(self):
>         ...
> 
> package2/mod.py:
> from package1.mod import *
> 
> class Foo(Foo):
>     def method2(self):
>         ...

I see no problem with this.  It's totally well-defined and I don't
expect I'll ever have a reason to disallow it.  Future picky compilers
or IDEs might warn about a redefined name, but I suppose you can live
with that given that it's machine-generated.

> (The background is that the modules are machine-generated and contain
> AppleEvent classes. There's a large set of standard classes, such as
> Standard_Suite, and applications can signal that they implement
> Standard_Suite with a couple of extensions to it. So, in the
> Application-X Standard_Suite I'd like to import everything from the
> standard Standard_Suite and override/add those methods that are
> specific to Application X)

That actually looks like a *good* reason to do exactly what you
propose.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From guido at beopen.com  Wed Aug  9 14:49:43 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 07:49:43 -0500
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: Your message of "Wed, 09 Aug 2000 02:37:07 MST."
             <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> 
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> 
Message-ID: <200008091249.HAA23481@cj20424-a.reston1.va.home.com>

> On Wed, 9 Aug 2000, Greg Ewing wrote:
> > 
> >    for (x in a, y in b):
> >       ...

No, for exactly the reasons Ping explained.  Let's give this a rest okay?

> I would much rather petition now to get indices() and irange() into
> the built-ins... please pretty please?

I forget what indices() was -- is it the moral equivalent of keys()?
That's range(len(s)), I don't see a need for a new function.  In fact
I think indices() would reduce readability because you have to guess
what it means.  Everybody knows range() and len(); not everybody will
know indices() because it's not needed that often.

If irange(s) is zip(range(len(s)), s), I see how that's a bit
unwieldy.  In the past there were syntax proposals, e.g. ``for i
indexing s''.  Maybe you and Just can draft a PEP?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Wed Aug  9 14:58:00 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 09 Aug 2000 14:58:00 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com>
Message-ID: <39915558.A68D7792@lemburg.com>

Guido van Rossum wrote:
> 
> > On Wed, 9 Aug 2000, Greg Ewing wrote:
> > >
> > >    for (x in a, y in b):
> > >       ...
> 
> No, for exactly the reasons Ping explained.  Let's give this a rest okay?
> 
> > I would much rather petition now to get indices() and irange() into
> > the built-ins... please pretty please?
> 
> I forget what indices() was -- is it the moreal equivalent of keys()?

indices() and irange() are both builtins which originated from
mx.Tools. See:

	http://starship.python.net/crew/lemburg/mxTools.html

* indices(object) is the same as tuple(range(len(object))) - only faster
and using a more intuitive and less convoluted name.

* irange(object[,indices]) (in its mx.Tools version) creates
a tuple of tuples (index, object[index]). indices defaults
to indices(object) if not given; otherwise only the indexes
found in indices are used to create the tuple -- and
this even works with arbitrary keys, since the PyObject_GetItem()
API is used.

Typical use is:

for i,value in irange(sequence):
    sequence[i] = value + 1


In practice I found that I could always use irange() where indices()
would have been used, since I typically need the indexed
sequence object anyway.
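For reference, a minimal pure-Python sketch of the two builtins as
described above (the mx.Tools originals are implemented in C):

```python
# Minimal pure-Python sketches of the mx.Tools builtins described
# above; the real versions are written in C and faster.
def indices(obj):
    return tuple(range(len(obj)))

def irange(obj, idx=None):
    # idx defaults to all indices; arbitrary keys also work,
    # since plain item access is used.
    if idx is None:
        idx = indices(obj)
    return tuple((i, obj[i]) for i in idx)

print(irange(['a', 'b']))  # ((0, 'a'), (1, 'b'))
```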

> That's range(len(s)), I don't see a need for a new function.  In fact
> I think indices() would reduce readability because you have to guess
> what it means.  Everybody knows range() and len(); not everybody will
> know indices() because it's not needed that often.
> 
> If irange(s) is zip(range(len(s)), s), I see how that's a bit
> unwieldy.  In the past there were syntax proposals, e.g. ``for i
> indexing s''.  Maybe you and Just can draft a PEP?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From nowonder at nowonder.de  Wed Aug  9 17:19:02 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Wed, 09 Aug 2000 15:19:02 +0000
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
References: <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>
Message-ID: <39917666.87C823E9@nowonder.de>

Tim Peters wrote:
> 
> But it doesn't need to be, right?  That is, change the stuff following
> 'import' in
> 
>     'from' dotted_name 'import' ('*' | NAME (',' NAME)*)
> 
> to
> 
>     ('*' | NAME [NAME NAME] (',' NAME [NAME NAME])*)

What about doing the same for the regular import?

import_stmt: 'import' dotted_name [NAME NAME] (',' dotted_name [NAME
NAME])* | 'from' dotted_name 'import' ('*' | NAME (',' NAME)*)

"import as as as"-isn't-that-impressive-though-ly y'rs
Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From just at letterror.com  Wed Aug  9 17:01:18 2000
From: just at letterror.com (Just van Rossum)
Date: Wed, 9 Aug 2000 16:01:18 +0100
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008091249.HAA23481@cj20424-a.reston1.va.home.com>
References: Your message of "Wed, 09 Aug 2000 02:37:07 MST."            
 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
Message-ID: <l03102802b5b71c40f9fc@[193.78.237.121]>

At 7:49 AM -0500 09-08-2000, Guido van Rossum wrote:
>In the past there were syntax proposals, e.g. ``for i
>indexing s''.  Maybe you and Just can draft a PEP?

PEP:            1716099-3
Title:          Index-enhanced sequence iteration
Version:        $Revision: 1.1 $
Owner:          Someone-with-commit-rights
Python-Version: 2.0
Status:         Incomplete

Introduction

    This PEP proposes a way to more conveniently iterate over a
    sequence and its indices.

Features

    It adds an optional clause to the 'for' statement:

        for <index> indexing <element> in <seq>:
            ...

    This is equivalent to (see the zip() PEP):

        for <index>, <element> in zip(range(len(seq)), seq):
            ...

    Except no new list is created.

Mechanism

    The index of the current element in a for-loop already
    exists in the implementation; however, it is not reachable
    from Python. The new 'indexing' keyword merely exposes the
    internal counter.

Implementation

    Implementation should be trivial for anyone named Guido,
    Tim or Thomas. Justs better not try.

Advantages:

    Less code needed for this common operation, which is
    currently most often written as:

        for index in range(len(seq)):
            element = seq[index]
            ...

Disadvantages:

    It will break that one person's code that uses "indexing"
    as a variable name.

Copyright

    This document has been placed in the public domain.

Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:





From thomas at xs4all.net  Wed Aug  9 18:15:39 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 18:15:39 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>; from just@letterror.com on Wed, Aug 09, 2000 at 04:01:18PM +0100
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <20000809181539.M266@xs4all.nl>

On Wed, Aug 09, 2000 at 04:01:18PM +0100, Just van Rossum wrote:

> PEP:            1716099-3
> Title:          Index-enhanced sequence iteration
> Version:        $Revision: 1.1 $
> Owner:          Someone-with-commit-rights

I'd be willing to adopt this PEP, if the other two PEPs on my name don't
need extensive rewrites anymore.

> Features
> 
>     It adds an optional clause to the 'for' statement:
> 
>         for <index> indexing <element> in <seq>:

Ever since I saw the implementation of FOR_LOOP I've wanted this, but I
never could think up a backwards compatible and readable syntax for it ;P

> Disadvantages:

>     It will break that one person's code that uses "indexing"
>     as a variable name.

This needn't be true, if it's done in the same way as Tim proposed the
'from from import as as as' syntax change ;)

for_stmt: 'for' exprlist [NAME exprlist] 'in' testlist ':' suite ['else' ':' suite]

If the 5th subnode of the expression is 'in', the 3rd should be 'indexing'
and the 4th would be the variable to assign the index number to. If it's
':', the loop is index-less.

(this is just a quick and dirty example; 'exprlist' is probably not the
right subnode for the indexing variable, because it can't be a tuple or
anything like that.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From skip at mojam.com  Wed Aug  9 18:40:27 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 9 Aug 2000 11:40:27 -0500 (CDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <20000809181539.M266@xs4all.nl>
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
	<200008091249.HAA23481@cj20424-a.reston1.va.home.com>
	<l03102802b5b71c40f9fc@[193.78.237.121]>
	<20000809181539.M266@xs4all.nl>
Message-ID: <14737.35195.31385.867664@beluga.mojam.com>

    >> Disadvantages:

    >> It will break that one person's code that uses "indexing" as a
    >> variable name.

    Thomas> This needn't be true, if it's done in the same way as Tim
    Thomas> proposed the 'from from import as as as' syntax change ;)

Could this be extended to many/most/all current instances of keywords in
Python?  As Tim pointed out, Fortran has no keywords.  It annoys me that I
(for example) can't define a method named "print".

Skip




From nowonder at nowonder.de  Wed Aug  9 20:49:53 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Wed, 09 Aug 2000 18:49:53 +0000
Subject: [Python-Dev] cannot commit 1.6 changes
Message-ID: <3991A7D0.4D2479C7@nowonder.de>

I have taken care of removing all occurrences of math.rint()
from the 1.6 sources. The commit worked fine for the Doc,
Include and Module directory, but cvs won't let me commit
the changes to config.h.in, configure.in, configure:

cvs server: sticky tag `cnri-16-start' for file `config.h.in' is not a
branch
cvs server: sticky tag `cnri-16-start' for file `configure' is not a
branch
cvs server: sticky tag `cnri-16-start' for file `configure.in' is not a
branch
cvs [server aborted]: correct above errors first!

What am I missing?

confused-ly y'rs Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From esr at thyrsus.com  Wed Aug  9 20:03:21 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Wed, 9 Aug 2000 14:03:21 -0400
Subject: [Python-Dev] Un-stalling Berkeley DB support
Message-ID: <20000809140321.A836@thyrsus.com>

I'm still interested in getting support for the version 3 Berkeley DB
into the core.  This is one of my top three Python priorities currently, along
with drafting PEP2 and overhauling the curses HOWTO.  (I'd sure like to see
shm get in, too, but that's blocked on Vladimir writing suitable
documentation.)

I'd like to get the necessary C extension in before 2.0 freeze, if
possible.  I've copied its author.  Again, the motivation here is to make
shelving transactional, with useful read-many/write-once guarantees.
Thousands of CGI programmers would thank us for this.

When we last discussed this subject, there was general support for the
functionality, but a couple of people went "bletch!" about SWIG-generated
code (there was unhappiness about pointers being treated as strings).

Somebody said something about having SWIG patches to address this.  Is this
the only real issue with SWIG-generated code?  If so, we can pursue two paths:
(1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
extension that meets our cleanliness criteria, and (2) press the SWIG guy 
to incorporate these patches in his next release.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"The best we can hope for concerning the people at large is that they be
properly armed."
        -- Alexander Hamilton, The Federalist Papers at 184-188



From akuchlin at mems-exchange.org  Wed Aug  9 20:09:55 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 9 Aug 2000 14:09:55 -0400
Subject: [Python-Dev] py-howto project now operational
Message-ID: <20000809140955.C4838@kronos.cnri.reston.va.us>

I've just gotten around to setting up the checkin list for the Python
HOWTO project on SourceForge (py-howto.sourceforge.net), so the
project is now fully operational.  People who want to update the
HOWTOs, such as ESR with the curses HOWTO, can now go ahead and make
changes.

And this is the last you'll hear about the HOWTOs on python-dev;
please use the Doc-SIG mailing list (doc-sig at python.org) for further
discussion of the HOWTOs.

--amk




From thomas at xs4all.net  Wed Aug  9 20:28:54 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 20:28:54 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <14737.35195.31385.867664@beluga.mojam.com>; from skip@mojam.com on Wed, Aug 09, 2000 at 11:40:27AM -0500
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]> <20000809181539.M266@xs4all.nl> <14737.35195.31385.867664@beluga.mojam.com>
Message-ID: <20000809202854.N266@xs4all.nl>

On Wed, Aug 09, 2000 at 11:40:27AM -0500, Skip Montanaro wrote:

>     >> Disadvantages:

>     >> It will break that one person's code that uses "indexing" as a
>     >> variable name.

>     Thomas> This needn't be true, if it's done in the same way as Tim
>     Thomas> proposed the 'from from import as as as' syntax change ;)

> Could this be extended to many/most/all current instances of keywords in
> Python?  As Tim pointed out, Fortran has no keywords.  It annoys me that I
> (for example) can't define a method named "print".

No. I just (in the trainride from work to home ;) wrote a patch that adds
'from x import y as z' and 'import foo as fee', and came to the conclusion
that we can't make 'from' a non-reserved word, for instance. Because if we
change

'from' dotted_name 'import' NAME*

into

NAME dotted_name 'import' NAME*

the parser won't know how to parse other expressions that start with NAME,
like 'NAME = expr' or 'NAME is expr'. I know this because I tried it and it
didn't work :-) So we can probably make most names that are *part* of a
statement non-reserved words, but not those that uniquely identify a
statement. That doesn't leave many words, except perhaps for the 'in' in
'for' -- but 'in' is already a reserved word for other purposes ;)

As for the patch that adds 'as' (as a non-reserved word) to both imports,
I'll upload it to SF after I rewrite it a bit ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From bckfnn at worldonline.dk  Wed Aug  9 21:43:58 2000
From: bckfnn at worldonline.dk (Finn Bock)
Date: Wed, 09 Aug 2000 19:43:58 GMT
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <20000809202854.N266@xs4all.nl>
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]> <20000809181539.M266@xs4all.nl> <14737.35195.31385.867664@beluga.mojam.com> <20000809202854.N266@xs4all.nl>
Message-ID: <3991acc4.10990753@smtp.worldonline.dk>

[Skip Montanaro]
> Could this be extended to many/most/all current instances of keywords in
> Python?  As Tim pointed out, Fortran has no keywords.  It annoys me that I
> (for example) can't define a method named "print".

[Thomas Wouters]
>No. I just (in the trainride from work to home ;) wrote a patch that adds
>'from x import y as z' and 'import foo as fee', and came to the conclusion
>that we can't make 'from' a non-reserved word, for instance. Because if we
>change
>
>'from' dotted_name 'import' NAME*
>
>into
>
>NAME dotted_name 'import' NAME*
>
>the parser won't know how to parse other expressions that start with NAME,
>like 'NAME = expr' or 'NAME is expr'. I know this because I tried it and it
>didn't work :-) So we can probably make most names that are *part* of a
>statement non-reserved words, but not those that uniquely identify a
>statement. That doesn't leave much words, except perhaps for the 'in' in
>'for' -- but 'in' is already a reserved word for other purposes ;)

Just a datapoint.

JPython goes a bit further in its attempt to unreserve reserved words in
certain cases:

- after "def"
- after a dot "."
- after "import"
- after "from" (in an import stmt)
- and as argument names

This allow JPython to do:

   from from import x
   def def(): pass
   x.exec(from=1, to=2)


This feature was added to ease JPython's integration with existing Java
libraries. IIRC it was remarked that CPython could also make use of such
a feature when integrating with e.g. Tk or COM.


regards,
finn



From nascheme at enme.ucalgary.ca  Wed Aug  9 22:11:04 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Wed, 9 Aug 2000 14:11:04 -0600
Subject: [Python-Dev] test_fork1 on SMP? (was Re: [Python Dev] test_fork1 failing --with-threads (for some people)...)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEBDGNAA.tim_one@email.msn.com>; from Tim Peters on Mon, Jul 31, 2000 at 04:42:50AM -0400
References: <14724.22554.818853.722906@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCOEBDGNAA.tim_one@email.msn.com>
Message-ID: <20000809141104.A10805@keymaster.enme.ucalgary.ca>

On Mon, Jul 31, 2000 at 04:42:50AM -0400, Tim Peters wrote:
> It's a baffler!  AFAIK, nobody yet has thought of a way that a fork can
> screw up the state of the locks in the *parent* process (it must be easy to
> see how they can get screwed up in a child, because two of us already did
> <wink>).

If I add Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS around fork()
in posixmodule then the child is the process which always seems to hang.
The child is hanging at:

#0  0x4006d58b in __sigsuspend (set=0xbf7ffac4)
    at ../sysdeps/unix/sysv/linux/sigsuspend.c:48
#1  0x4001f1a0 in pthread_cond_wait (cond=0x8264e1c, mutex=0x8264e28)
    at restart.h:49
#2  0x806f3c3 in PyThread_acquire_lock (lock=0x8264e18, waitflag=1)
    at thread_pthread.h:311
#3  0x80564a8 in PyEval_RestoreThread (tstate=0x8265a78) at ceval.c:178
#4  0x80bf274 in posix_fork (self=0x0, args=0x8226ccc) at ./posixmodule.c:1659
#5  0x8059460 in call_builtin (func=0x82380e0, arg=0x8226ccc, kw=0x0)
    at ceval.c:2376
#6  0x8059378 in PyEval_CallObjectWithKeywords (func=0x82380e0, arg=0x8226ccc, 
    kw=0x0) at ceval.c:2344
#7  0x80584f2 in eval_code2 (co=0x8265e98, globals=0x822755c, locals=0x0, 
    args=0x8226cd8, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, 
    owner=0x0) at ceval.c:1682
#8  0x805974b in call_function (func=0x8264ddc, arg=0x8226ccc, kw=0x0)
    at ceval.c:2498
#9  0x805936b in PyEval_CallObjectWithKeywords (func=0x8264ddc, arg=0x8226ccc, 
    kw=0x0) at ceval.c:2342
#10 0x80af26a in t_bootstrap (boot_raw=0x8264e00) at ./threadmodule.c:199
#11 0x4001feca in pthread_start_thread (arg=0xbf7ffe60) at manager.c:213

Since there is only one thread in the child this should not be
happening.  Can someone explain this?  I have tested this on both an SMP
Linux machine and a UP Linux machine.
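A minimal sketch of the pattern under discussion (not the actual
test_fork1 code): fork() called from a spawned thread.

```python
# Minimal sketch of the pattern under discussion (not the actual
# test_fork1): calling fork() from a spawned thread.
import os
import threading

def forker(results):
    pid = os.fork()
    if pid == 0:
        os._exit(0)              # child: exit immediately
    wpid, status = os.waitpid(pid, 0)
    results.append(status)       # parent thread: record exit status

results = []
t = threading.Thread(target=forker, args=(results,))
t.start()
t.join()
print(results)  # [0] when the child exits cleanly
```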

   Neil



From thomas at xs4all.net  Wed Aug  9 22:27:50 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 22:27:50 +0200
Subject: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd)
Message-ID: <20000809222749.O266@xs4all.nl>

For those of you not on the patches list, here's the summary of the patch I
just uploaded to SF. In short, it adds "import x as y" and "from module
import x as y", in the way Tim proposed this morning. (Probably late last
night for most of you.)

----- Forwarded message from noreply at sourceforge.net -----

This patch adds the oft-proposed 'import as' syntax, to both 'import module'
and 'from module import ...', but without making 'as' a reserved word (by
using the technique Tim Peters proposed on python-dev.)

'import spam as egg' is a very simple patch to compile.c, which doesn't need
changes to the VM, but 'from spam import dog as meat' needs a new bytecode,
which this patch calls 'FROM_IMPORT_AS'. The bytecode loads an object from a
module onto the stack, so a STORE_NAME can store it later. This can't be
done by the normal FROM_IMPORT opcode, because it needs to take the special
case of '*' into account. Also, because it uses 'STORE_NAME', it's now
possible to mix 'import' and 'global', like so:

global X
from foo import X as X

The patch still generates the old code for

from foo import X

(without 'as') mostly to save on bytecode size, and for the 'compatibility'
with mixing 'global' and 'from .. import'... I'm not sure what's the best
thing to do.

The patch doesn't include a test suite or documentation, yet.

-------------------------------------------------------
For more info, visit:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101135&group_id=5470

----- End forwarded message -----
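Both forms from the patch are easy to demonstrate (the syntax later
landed in Python 2.0; the modules and names here are just examples):

```python
# Demonstration of the two forms added by the patch; the module
# and names chosen here are only examples.
import os.path as osp
from math import sqrt as root

print(osp.basename('a/b'))  # b
print(root(16.0))           # 4.0
```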

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From greg at mad-scientist.com  Wed Aug  9 22:27:33 2000
From: greg at mad-scientist.com (Gregory P . Smith)
Date: Wed, 9 Aug 2000 13:27:33 -0700
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809140321.A836@thyrsus.com>; from esr@thyrsus.com on Wed, Aug 09, 2000 at 02:03:21PM -0400
References: <20000809140321.A836@thyrsus.com>
Message-ID: <20000809132733.C2019@mad-scientist.com>

On Wed, Aug 09, 2000 at 02:03:21PM -0400, Eric S. Raymond wrote:
> 
> When we last discussed this subject, there was general support for the
> functionality, but a couple of people went "bletch!" about SWIG-generated
> code (there was unhappiness about pointers being treated as strings).
> 
> Somebody said something about having SWIG patches to address this.  Is this
> the only real issue with SWIG-generated code?  If so, we can pursue two paths:
> (1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
> extension that meets our cleanliness criteria, and (2) press the SWIG guy 
> to incorporate these patches in his next release.

I'm not surprised to see the "bletch!" for SWIG's string/pointer things,
they are technically gross.  Anyone know what SWIG v1.3a3 does (v1.3
is a total rewrite from v1.1)?  py-bsddb3 as distributed was built
using SWIG v1.1-883.  In the meantime, if someone knows of a version of
SWIG that does this better, try using it to build bsddb3 (just pass a
SWIG=/usr/spam/eggs/bin/swig to the Makefile).  If you run into problems,
send them and a copy of that swig my way.

I'll take a quick look at SWIG v1.3alpha3 here and see what that does.

Greg



From skip at mojam.com  Wed Aug  9 22:41:57 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 9 Aug 2000 15:41:57 -0500 (CDT)
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809132733.C2019@mad-scientist.com>
References: <20000809140321.A836@thyrsus.com>
	<20000809132733.C2019@mad-scientist.com>
Message-ID: <14737.49685.902542.576229@beluga.mojam.com>

>>>>> "Greg" == Gregory P Smith <greg at mad-scientist.com> writes:

    Greg> On Wed, Aug 09, 2000 at 02:03:21PM -0400, Eric S. Raymond wrote:
    >> 
    >> When we last discussed this subject, there was general support for
    >> the functionality, but a couple of people went "bletch!" about
    >> SWIG-generated code (there was unhappiness about pointers being
    >> treated as strings).
    ...
    Greg> I'm not surprised to see the "bletch!" for SWIG's string/pointer
    Greg> things, they are technically gross.

We're talking about a wrapper around a single smallish library (probably <
20 exposed functions), right?  Seems to me that SWIG is the wrong tool to
use here.  It's for wrapping massive libraries automatically.  Why not just
recode the current SWIG-generated module manually?

What am I missing?

-- 
Skip Montanaro, skip at mojam.com, http://www.mojam.com/, http://www.musi-cal.com/
"To get what you want you must commit yourself for sometime" - fortune cookie



From nascheme at enme.ucalgary.ca  Wed Aug  9 22:49:25 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Wed, 9 Aug 2000 14:49:25 -0600
Subject: [Python-Dev] Re: [Patches] [Patch #101135] 'import x as y' and 'from x import y as z'
In-Reply-To: <200008092014.NAA08040@delerium.i.sourceforge.net>; from noreply@sourceforge.net on Wed, Aug 09, 2000 at 01:14:52PM -0700
References: <200008092014.NAA08040@delerium.i.sourceforge.net>
Message-ID: <20000809144925.A11242@keymaster.enme.ucalgary.ca>

On Wed, Aug 09, 2000 at 01:14:52PM -0700, noreply at sourceforge.net wrote:
> Patch #101135 has been updated. 
> 
> Project: 
> Category: core (C code)
> Status: Open
> Summary: 'import x as y' and 'from x import y as z'

+1.  This is much more useful and clear than setdefault (which I was -1
on, not that it matters).

  Neil



From esr at thyrsus.com  Wed Aug  9 23:03:51 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Wed, 9 Aug 2000 17:03:51 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101135] 'import x as y' and 'from x import y as z'
In-Reply-To: <20000809144925.A11242@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Wed, Aug 09, 2000 at 02:49:25PM -0600
References: <200008092014.NAA08040@delerium.i.sourceforge.net> <20000809144925.A11242@keymaster.enme.ucalgary.ca>
Message-ID: <20000809170351.A1550@thyrsus.com>

Neil Schemenauer <nascheme at enme.ucalgary.ca>:
> On Wed, Aug 09, 2000 at 01:14:52PM -0700, noreply at sourceforge.net wrote:
> > Patch #101135 has been updated. 
> > 
> > Project: 
> > Category: core (C code)
> > Status: Open
> > Summary: 'import x as y' and 'from x import y as z'
> 
> +1.  This is much more useful and clear than setdefault (which I was -1
> on, not that it matters).

I'm +0 on this.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The most foolish mistake we could possibly make would be to permit 
the conquered Eastern peoples to have arms.  History teaches that all 
conquerors who have allowed their subject races to carry arms have 
prepared their own downfall by doing so.
        -- Hitler, April 11 1942, revealing the real agenda of "gun control"



From greg at mad-scientist.com  Wed Aug  9 23:16:39 2000
From: greg at mad-scientist.com (Gregory P . Smith)
Date: Wed, 9 Aug 2000 14:16:39 -0700
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809140321.A836@thyrsus.com>; from esr@thyrsus.com on Wed, Aug 09, 2000 at 02:03:21PM -0400
References: <20000809140321.A836@thyrsus.com>
Message-ID: <20000809141639.D2019@mad-scientist.com>

On Wed, Aug 09, 2000 at 02:03:21PM -0400, Eric S. Raymond wrote:
> 
> When we last discussed this subject, there was general support for the
> functionality, but a couple of people went "bletch!" about SWIG-generated
> code (there was unhappiness about pointers being treated as strings).
> 
> Somebody said something about having SWIG patches to address this.  Is this
> the only real issue with SWIG-generated code?  If so, we can pursue two paths:
> (1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
> extension that meets our cleanliness criteria, and (2) press the SWIG guy 
> to incorporate these patches in his next release.

Out of curiosity, I just made a version of py-bsddb3 that uses SWIG
v1.3alpha3 instead of SWIG v1.1-883.  It looks like 1.3a3 is still
using strings for pointerish things.  One thing to note that may calm
some people's sense of "eww gross, pointer strings" is that programmers
should never see them.  They are "hidden" behind the Python shadow class.
The pointer strings are only contained within the shadow object's "this"
member.

example:

  >>> from bsddb3.db import *
  >>> e = DbEnv()
  >>> e
  <C DbEnv instance at _807eea8_MyDB_ENV_p>
  >>> e.this
  '_807eea8_MyDB_ENV_p'

Anyway, for anyone curious, the updated version using the more
recent SWIG is on the py-bsddb3 web site:

http://electricrain.com/greg/python/bsddb3/


Greg




From guido at beopen.com  Thu Aug 10 00:29:58 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 17:29:58 -0500
Subject: [Python-Dev] cannot commit 1.6 changes
In-Reply-To: Your message of "Wed, 09 Aug 2000 18:49:53 GMT."
             <3991A7D0.4D2479C7@nowonder.de> 
References: <3991A7D0.4D2479C7@nowonder.de> 
Message-ID: <200008092229.RAA24802@cj20424-a.reston1.va.home.com>

> I have taken care of removing all occurences of math.rint()
> from the 1.6 sources. The commit worked fine for the Doc,
> Include and Module directory, but cvs won't let me commit
> the changes to config.h.in, configure.in, configure:
> 
> cvs server: sticky tag `cnri-16-start' for file `config.h.in' is not a
> branch
> cvs server: sticky tag `cnri-16-start' for file `configure' is not a
> branch
> cvs server: sticky tag `cnri-16-start' for file `configure.in' is not a
> branch
> cvs [server aborted]: correct above errors first!
> 
> What am I missing?

The error message is right.  Somehow whoever set those tags on those
files did not make them branch tags.  I think it was Fred, but I
don't know why he did that.  The quickest way to fix this
is to issue the command

  cvs tag -F -b -r <revision> cnri-16-start <file>

for each file, where <revision> is the revision where the tag should
be and <file> is the file.  Note that -F means "force" (otherwise you
get a complaint because the tag is already defined) and -b means
"branch" which makes the tag a branch tag.  I *believe* that branch
tags are recognized because they have the form
<major>.<minor>.0.<branch> but I'm not sure this is documented.
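The <major>.<minor>.0.<branch> form can be checked mechanically; a small
sketch (just an illustration of the rule, not anything CVS itself uses):

```python
def is_branch_tag_revision(rev):
    # CVS branch tags point at "magic" revisions whose second-to-last
    # component is 0, e.g. "1.4.0.2" for branch 2 off revision 1.4.
    parts = rev.split(".")
    return len(parts) >= 4 and len(parts) % 2 == 0 and parts[-2] == "0"

print(is_branch_tag_revision("1.4.0.2"))   # True: branch tag
print(is_branch_tag_revision("1.4"))       # False: ordinary revision
```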

I already did this for you for these three files!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug 10 00:43:35 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 17:43:35 -0500
Subject: [Python-Dev] test_fork1 on SMP? (was Re: [Python Dev] test_fork1 failing --with-threads (for some people)...)
In-Reply-To: Your message of "Wed, 09 Aug 2000 14:11:04 CST."
             <20000809141104.A10805@keymaster.enme.ucalgary.ca> 
References: <14724.22554.818853.722906@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCOEBDGNAA.tim_one@email.msn.com>  
            <20000809141104.A10805@keymaster.enme.ucalgary.ca> 
Message-ID: <200008092243.RAA24914@cj20424-a.reston1.va.home.com>

> If I add Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS around fork()
> in posixmodule then the child is the process which always seems to hang.

I first thought that the lock should be released around the fork too,
but later I realized that that was exactly wrong: if you release the
lock before you fork, another thread will likely grab the lock before
you fork; then in the child the lock is held by that other thread but
that thread doesn't exist, so when the main thread tries to get the
lock back it hangs in the Py_END_ALLOW_THREADS.
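The hazard can be demonstrated at the Python level too.  This is a
minimal POSIX-only sketch of the same shape of deadlock (a toy demo,
not the posixmodule code): a lock held by another thread at fork time
stays locked in the child, where that thread no longer exists.

```python
import os
import threading
import time

lock = threading.Lock()

def worker():
    # this thread holds the lock across the moment of the fork
    with lock:
        time.sleep(1.0)

t = threading.Thread(target=worker)
t.start()
time.sleep(0.2)              # give the worker time to grab the lock

pid = os.fork()
if pid == 0:
    # In the child only the forking thread survives, but the lock is
    # still in the locked state the worker left it in.  A blocking
    # acquire here would hang forever -- the same shape as the
    # Py_END_ALLOW_THREADS hang described above.  Probe non-blockingly
    # instead and report via the exit status.
    os._exit(0 if not lock.acquire(False) else 1)

_, status = os.waitpid(pid, 0)
print("child saw lock held:", status == 0)
t.join()
```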

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From ping at lfw.org  Thu Aug 10 00:06:15 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Wed, 9 Aug 2000 15:06:15 -0700 (PDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008091249.HAA23481@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org>

On Wed, 9 Aug 2000, Guido van Rossum wrote:
> I forget what indices() was -- is it the moral equivalent of keys()?

Yes, it's range(len(s)).

> If irange(s) is zip(range(len(s)), s), I see how that's a bit
> unwieldy.  In the past there were syntax proposals, e.g. ``for i
> indexing s''.  Maybe you and Just can draft a PEP?

In the same vein as zip(), i think it's much easier to just toss in
a couple of built-ins than try to settle on a new syntax.  (I already
uploaded a patch to add indices() and irange() to the built-ins,
immediately after i posted my first message on this thread.)
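For reference, both proposed built-ins are tiny when sketched in pure
Python (these were names from Ping's patch, never actual built-ins; the
list() wrapping stands in for the list-returning built-ins of the time):

```python
def indices(s):
    # the "moral equivalent of keys()" for sequences: range(len(s))
    return list(range(len(s)))

def irange(s):
    # index/element pairs, i.e. zip(range(len(s)), s)
    return list(zip(range(len(s)), s))

print(indices("abc"))   # [0, 1, 2]
print(irange("abc"))    # [(0, 'a'), (1, 'b'), (2, 'c')]
```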

Surely a PEP isn't required for a couple of built-in functions that
are simple and well understood?  You can just call thumbs-up or
thumbs-down and be done with it.


-- ?!ng

"All models are wrong; some models are useful."
    -- George Box




From klm at digicool.com  Thu Aug 10 00:05:57 2000
From: klm at digicool.com (Ken Manheimer)
Date: Wed, 9 Aug 2000 18:05:57 -0400 (EDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <Pine.LNX.4.21.0008091739020.1282-100000@korak.digicool.com>

On Wed, 9 Aug 2000, Just van Rossum wrote:

> PEP:            1716099-3
> Title:          Index-enhanced sequence iteration
> [...]
>     It adds an optional clause to the 'for' statement:
> 
>         for <index> indexing <element> in <seq>:
>             ...
> [...]
> Disadvantages:
> 
>     It will break that one person's code that uses "indexing"
>     as a variable name.

      It creates a new 'for' variant, increasing challenge for beginners 
      (and the befuddled, like me) of tracking the correct syntax.

I could see that disadvantage being justified by a more significant change
- lockstep iteration would qualify, for me (though zip() already
circumvents this drawback).  List comprehensions have that weight, and analogize
elegantly against the existing slice syntax.  I don't think the 'indexing'
benefits are of that order, not enough so to double the number of 'for'
forms, even if there are some performance gains over the (syntactically
equivalent) zip(), so, sorry, but i'm -1.

Ken
klm at digicool.com




From klm at digicool.com  Thu Aug 10 00:13:37 2000
From: klm at digicool.com (Ken Manheimer)
Date: Wed, 9 Aug 2000 18:13:37 -0400 (EDT)
Subject: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import
 y as z' (fwd)
In-Reply-To: <20000809222749.O266@xs4all.nl>
Message-ID: <Pine.LNX.4.21.0008091808390.1282-100000@korak.digicool.com>

On Wed, 9 Aug 2000, Thomas Wouters wrote:

> For those of you not on the patches list, here's the summary of the patch I
> just uploaded to SF. In short, it adds "import x as y" and "from module
> import x as y", in the way Tim proposed this morning. (Probably late last
> night for most of you.)

I guess the criterion i used in my thumbs down on 'indexing' is very
subjective, because i would say the added functionality of 'import x as y'
*does* satisfy my added-functionality test, and i'd be +1.  (I think the
determining thing is the ability to avoid name collisions without any
gross shuffle.)

I also really like the non-keyword basis for the, um, keyword.

Ken
klm at digicool.com




From guido at beopen.com  Thu Aug 10 01:14:19 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 18:14:19 -0500
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: Your message of "Wed, 09 Aug 2000 15:06:15 MST."
             <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org> 
References: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org> 
Message-ID: <200008092314.SAA25157@cj20424-a.reston1.va.home.com>

> On Wed, 9 Aug 2000, Guido van Rossum wrote:
> > I forget what indices() was -- is it the moral equivalent of keys()?

[Ping]
> Yes, it's range(len(s)).
> 
> > If irange(s) is zip(range(len(s)), s), I see how that's a bit
> > unwieldy.  In the past there were syntax proposals, e.g. ``for i
> > indexing s''.  Maybe you and Just can draft a PEP?
> 
> In the same vein as zip(), i think it's much easier to just toss in
> a couple of built-ins than try to settle on a new syntax.  (I already
> uploaded a patch to add indices() and irange() to the built-ins,
> immediately after i posted my first message on this thread.)
> 
> Surely a PEP isn't required for a couple of built-in functions that
> are simple and well understood?  You can just call thumbs-up or
> thumbs-down and be done with it.

-1 for indices

-0 for irange

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Thu Aug 10 00:15:10 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 00:15:10 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>; from just@letterror.com on Wed, Aug 09, 2000 at 04:01:18PM +0100
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <20000810001510.P266@xs4all.nl>

On Wed, Aug 09, 2000 at 04:01:18PM +0100, Just van Rossum wrote:

> Features

>     It adds an optional clause to the 'for' statement:
> 
>         for <index> indexing <element> in <seq>:
>             ...

> Implementation
> 
>     Implementation should be trivial for anyone named Guido,
>     Tim or Thomas.

Well, to justify that vote of confidence <0.4 wink> I wrote a quick hack
that adds this to Python for loops. It can be found on SF, patch #101138.
It's small, but it works. I'll iron out any bugs if there's enough positive
feelings towards this kind of syntax change.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Thu Aug 10 00:22:55 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 00:22:55 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008092314.SAA25157@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 09, 2000 at 06:14:19PM -0500
References: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org> <200008092314.SAA25157@cj20424-a.reston1.va.home.com>
Message-ID: <20000810002255.Q266@xs4all.nl>

On Wed, Aug 09, 2000 at 06:14:19PM -0500, Guido van Rossum wrote:

> -1 for indices
> 
> -0 for irange

The same for me, though I prefer 'for i indexing x in l' over 'irange()'. 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From beazley at schlitz.cs.uchicago.edu  Thu Aug 10 00:34:16 2000
From: beazley at schlitz.cs.uchicago.edu (David M. Beazley)
Date: Wed,  9 Aug 2000 17:34:16 -0500 (CDT)
Subject: [Python-Dev] Python-Dev digest, Vol 1 #737 - 17 msgs
In-Reply-To: <20000809221115.AC4E61D182@dinsdale.python.org>
References: <20000809221115.AC4E61D182@dinsdale.python.org>
Message-ID: <14737.55249.87871.538988@schlitz.cs.uchicago.edu>

python-dev-request at python.org writes:
 > 
 > I'd like to get the necessary C extension in before 2.0 freeze, if
 > possible.  I've copied its author.  Again, the motivation here is to make
 > shelving transactional, with useful read-many/write-once guarantees.
 > Thousands of CGI programmers would thank us for this.
 > 
 > When we last discussed this subject, there was general support for the
 > functionality, but a couple of people went "bletch!" about SWIG-generated
 > code (there was unhappiness about pointers being treated as strings).
 > 
 > Somebody said something about having SWIG patches to address this.  Is this
 > the only real issue with SWIG-generated code?  If so, we can pursue
 > two paths:

Well, as the guilty party on the SWIG front, I can say that the
current development version of SWIG is using CObjects instead of
strings (well, actually I lie---you have to compile the wrappers with
-DSWIG_COBJECT_TYPES to turn that feature on).  Just as a general
aside on this topic, I did a number of experiments comparing the
performance of using CObjects vs. the gross string-pointer hack about 6
months ago.  Strangely enough, there was virtually no difference in
runtime performance, and if I recall correctly, the string hack might
have even been just a little bit faster. Go figure :-).

Overall, the main difference between SWIG1.3 and SWIG1.1 is in runtime 
performance of the wrappers as well as various changes to reduce the
amount of wrapper code.   However, 1.3 is also very much an alpha release
right now---if you're going to use that, make sure you thoroughly test 
everything.

On the subject of the Berkeley DB module, I would definitely like to 
see a module for this.  If there is anything I can do to either modify
the behavior of SWIG or to build an extension module by hand, let me know.

Cheers,

Dave





From MarkH at ActiveState.com  Thu Aug 10 01:03:19 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 10 Aug 2000 09:03:19 +1000
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <14737.35195.31385.867664@beluga.mojam.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com>

[Skip laments...]
> Could this be extended to many/most/all current instances of
> keywords in Python?  As Tim pointed out, Fortran has no
> keywords.  It annoys me that I (for example) can't define
> a method named "print".

Sometimes it is worse than annoying!

In the COM and CORBA worlds, it can be a showstopper - if an external
object happens to expose a method or property named after a Python keyword,
then you simply can not use it!

This has led to COM support having to check _every_ attribute name it sees
externally, and mangle it if a keyword.

Stronger support exists in .NET.  The .NET framework explicitly dictates
that a compliant language _must_ have a way of overriding its own keywords
when calling external methods (it was either that, or try and dictate a
union of reserved words they can ban)

E.g., C# allows a keyword to be escaped with an "@" prefix (VB surrounds
the keyword with square brackets instead).  I.e., I believe something like:

object.@if

would work in C# to provide access to an attribute named "if"

Unfortunately, Python COM is a layer on top of CPython, and Python .NET
still uses the CPython parser - so in neither of these cases is there a
simple hack I can use to work around it at the parser level.

Needless to say, as this affects the 2 major technologies I work with
currently, I would like an official way to work around Python keywords!

Mark.




From guido at beopen.com  Thu Aug 10 02:12:59 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 19:12:59 -0500
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: Your message of "Thu, 10 Aug 2000 09:03:19 +1000."
             <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com> 
References: <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com> 
Message-ID: <200008100012.TAA25968@cj20424-a.reston1.va.home.com>

> [Skip laments...]
> > Could this be extended to many/most/all current instances of
> > keywords in Python?  As Tim pointed out, Fortran has no
> > keywords.  It annoys me that I (for example) can't define
> > a method named "print".
> 
> Sometimes it is worse than annoying!
> 
> In the COM and CORBA worlds, it can be a showstopper - if an external
> object happens to expose a method or property named after a Python keyword,
> then you simply can not use it!
> 
> This has led to COM support having to check _every_ attribute name it sees
> externally, and mangle it if a keyword.
> 
> Stronger support exists in .NET.  The .NET framework explicitly dictates
> that a compliant language _must_ have a way of overriding its own keywords
> when calling external methods (it was either that, or try and dictate a
> union of reserved words they can ban)
> 
> E.g., C# allows a keyword to be escaped with an "@" prefix (VB surrounds
> the keyword with square brackets instead).  I.e., I believe something like:
> 
> object.@if
> 
> would work in C# to provide access to an attribute named "if"
> 
> Unfortunately, Python COM is a layer on top of CPython, and Python .NET
> still uses the CPython parser - so in neither of these cases is there a
> simple hack I can use to work around it at the parser level.
> 
> Needless to say, as this affects the 2 major technologies I work with
> currently, I would like an official way to work around Python keywords!

The JPython approach should be added to CPython.  This effectively
turns off keywords directly after ".", "def" and in a few other
places.
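As a toy model of that trick (hypothetical helper, nothing like the real
tokenizer): a word only counts as a keyword when it is not directly
preceded by "." or "def".

```python
import keyword

def classify(tokens):
    """Toy model of the JPython trick: a word is treated as a keyword
    only when the preceding token is not '.' or 'def'."""
    out, prev = [], None
    for tok in tokens:
        if keyword.iskeyword(tok) and prev not in (".", "def"):
            out.append(("KEYWORD", tok))
        else:
            out.append(("NAME" if tok.isidentifier() else "OP", tok))
        prev = tok
    return out

print(classify(["obj", ".", "if"]))
# 'if' after '.' is demoted to a plain NAME
print(classify(["if", "x"]))
# 'if' at statement start stays a KEYWORD
```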

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From MarkH at ActiveState.com  Thu Aug 10 01:17:35 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 10 Aug 2000 09:17:35 +1000
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008092314.SAA25157@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEBFDEAA.MarkH@ActiveState.com>

Guido commented yesterday that he doesn't tally votes (yay), but obviously
he still issues them!  It made me think of a Dutch Crocodile Dundee on a
visit to New York, muttering to his harassers as he whips something out
from under his clothing...

> -1 for indices

"You call that a -1,  _this_ is a -1"

:-)

[Apologies to anyone who hasn't seen the knife scene in the aforementioned
movie ;-]

Mark.




From MarkH at ActiveState.com  Thu Aug 10 01:21:33 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 10 Aug 2000 09:21:33 +1000
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <200008100012.TAA25968@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com>

[Guido]
> The JPython approach should be added to CPython.  This effectively
> turns off keywords directly after ".", "def" and in a few other
> places.

Excellent.  I saw a reference to this after I sent my mail.

I'd be happy to help, in a long, drawn out, when-I-find-time kind of way.
What is the general strategy - is it simply to maintain a state in the
parser?  Where would I start to look into?

Mark.




From guido at beopen.com  Thu Aug 10 02:36:30 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 19:36:30 -0500
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: Your message of "Thu, 10 Aug 2000 09:21:33 +1000."
             <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com> 
References: <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com> 
Message-ID: <200008100036.TAA26235@cj20424-a.reston1.va.home.com>

> [Guido]
> > The JPython approach should be added to CPython.  This effectively
> > turns off keywords directly after ".", "def" and in a few other
> > places.
> 
> Excellent.  I saw a reference to this after I sent my mail.
> 
> I'd be happy to help, in a long, drawn out, when-I-find-time kind of way.
> What is the general strategy - is it simply to maintain a state in the
> parser?  Where would I start to look into?
> 
> Mark.

Alas, I'm not sure how easy it will be.  The parser generator will
probably have to be changed to allow you to indicate not to do a
reserved-word lookup at certain points in the grammar.  I don't know where
to start. :-(

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From moshez at math.huji.ac.il  Thu Aug 10 03:12:59 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 10 Aug 2000 04:12:59 +0300 (IDT)
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809141639.D2019@mad-scientist.com>
Message-ID: <Pine.GSO.4.10.10008100411500.26961-100000@sundial>

On Wed, 9 Aug 2000, Gregory P . Smith wrote:

> Out of curiosity, I just made a version of py-bsddb3 that uses SWIG
> v1.3alpha3 instead of SWIG v1.1-883.  It looks like 1.3a3 is still
> using strings for pointerish things.  One thing to note that may calm
> some peoples sense of "eww gross, pointer strings" is that programmers
> should never see them.  They are "hidden" behind the python shadow class.
> The pointer strings are only contained within the shadow objects "this"
> member.

It's not "ewww gross", it's "dangerous!". This makes Python "not safe",
since users can access random memory locations.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From moshez at math.huji.ac.il  Thu Aug 10 03:28:00 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 10 Aug 2000 04:28:00 +0300 (IDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com>
Message-ID: <Pine.GSO.4.10.10008100425430.26961-100000@sundial>

On Thu, 10 Aug 2000, Mark Hammond wrote:

> Sometimes it is worse than annoying!
> 
> In the COM and CORBA worlds, it can be a showstopper - if an external
> object happens to expose a method or property named after a Python keyword,
> then you simply can not use it!

How about this (simple, but relatively unannoying) convention:

To COM name:
	- remove last "_", if any


From greg at cosc.canterbury.ac.nz  Thu Aug 10 03:29:38 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 13:29:38 +1200 (NZST)
Subject: [Python-Dev] A question for the Python Secret Police
In-Reply-To: <20000809083127.7FFF6303181@snelboot.oratrix.nl>
Message-ID: <200008100129.NAA13775@s454.cosc.canterbury.ac.nz>

Jack Jansen <jack at oratrix.nl>:

> Is the following morally allowed:
>   class Foo(Foo):

Well, the standard admonitions against 'import *' apply.
Whether using 'import *' or not, though, in the interests 
of clarity I think I would write it as

   class Foo(package1.mod.Foo):

On the other hand, the funkiness factor of it does
have a certain appeal!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 10 03:56:55 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 13:56:55 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
Message-ID: <200008100156.NAA13782@s454.cosc.canterbury.ac.nz>

> It looks nice, but i'm pretty sure it won't fly.

It will! Try it:

>>> for (x in a, y in b):
  File "<stdin>", line 1
    for (x in a, y in b):
                        ^
SyntaxError: invalid syntax

> how is the parser to know whether the lockstep form has been
> invoked?

The parser doesn't have to know as long as the compiler can
tell, and clearly one of them can.

> Besides, i think Guido has Pronounced quite firmly on zip().

That won't stop me from gently trying to change his mind
one day. The existence of zip() doesn't preclude something
more elegant being adopted in a future version.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 10 04:12:08 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 14:12:08 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <20000809130645.J266@xs4all.nl>
Message-ID: <200008100212.OAA13789@s454.cosc.canterbury.ac.nz>

Thomas Wouters <thomas at xs4all.net>:

> The only objection I can bring up is that parentheses are almost always
> optional, in Python, and this kind of violates it.

They're optional around tuple constructors, but this is not
a tuple constructor.

The parentheses around function arguments aren't optional
either, and nobody complains about that.

> 'for (x in a, y in b) in z:' *is* valid syntax...

But it's not valid Python:

>>> for (x in a, y in b) in z:
...   print x,y
... 
SyntaxError: can't assign to operator

> It might not be too pretty, but it can be worked around ;)

It wouldn't be any uglier than what's currently done with
the LHS of an assignment, which is parsed as a general
expression and treated specially later on.

There's-more-to-the-Python-syntax-than-what-it-says-in-
the-Grammar-file-ly,

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 10 04:19:32 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 14:19:32 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <200008100219.OAA13793@s454.cosc.canterbury.ac.nz>

Just van Rossum <just at letterror.com>:

>        for <index> indexing <element> in <seq>:

The idea is good, but I don't like this particular syntax much. It
seems to be trying to do too much at once, giving you both an index
and an element.  Also, the wording reminds me unpleasantly of COBOL
for some reason.

Some time ago I suggested

   for <index> over <sequence>:

as a way of getting hold of the index, and as a direct
replacement for 'for i in range(len(blarg))' constructs.
It could also be used for lockstep iteration applications,
e.g.

   for i over a:
      frobulate(a[i], b[i])

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 10 04:23:50 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 14:23:50 +1200 (NZST)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <200008100036.TAA26235@cj20424-a.reston1.va.home.com>
Message-ID: <200008100223.OAA13796@s454.cosc.canterbury.ac.nz>

BDFL:

> The parser generator will probably have to be changed to allow you to
> indicate not to do a reserved-word lookup at certain points in the grammar.

Isn't it the scanner which recognises reserved words?

In that case, just make it not do that for the next
token after certain tokens.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From billtut at microsoft.com  Thu Aug 10 05:24:11 2000
From: billtut at microsoft.com (Bill Tutt)
Date: Wed, 9 Aug 2000 20:24:11 -0700 
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka
	!)
Message-ID: <58C671173DB6174A93E9ED88DCB0883D0A611B@red-msg-07.redmond.corp.microsoft.com>

The parser actually recognizes keywords atm.

We could change that so that each keyword is a token.
Then you would have something like:

keyword_allowed_name: KEY1 | KEY2 | KEY3 | ... | KEYN | NAME
and then tweak func_def like so:
func_def:  DEF keyword_allowed_name parameters ':' suite

I haven't pondered whether or not this would cause the DFA to fall into a
degenerate case.

Wondering where the metagrammer source file went to,
Bill


 -----Original Message-----
From: 	Greg Ewing [mailto:greg at cosc.canterbury.ac.nz] 
Sent:	Wednesday, August 09, 2000 7:24 PM
To:	python-dev at python.org
Subject:	Re: [Python-Dev] Python keywords (was Lockstep iteration -
eureka!)

BDFL:

> The parser generator will probably have to be changed to allow you to
> indicate not to do a reserved-word lookup at certain points in the grammar.

Isn't it the scanner which recognises reserved words?

In that case, just make it not do that for the next
token after certain tokens.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
http://www.python.org/mailman/listinfo/python-dev



From guido at beopen.com  Thu Aug 10 06:44:45 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 23:44:45 -0500
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka !)
In-Reply-To: Your message of "Wed, 09 Aug 2000 20:24:11 MST."
             <58C671173DB6174A93E9ED88DCB0883D0A611B@red-msg-07.redmond.corp.microsoft.com> 
References: <58C671173DB6174A93E9ED88DCB0883D0A611B@red-msg-07.redmond.corp.microsoft.com> 
Message-ID: <200008100444.XAA27348@cj20424-a.reston1.va.home.com>

> The parser actually recognizes keywords atm.
> 
> We could change that so that each keyword is a token.
> Then you would have something like:
> 
> keyword_allowed_name: KEY1 | KEY2 | KEY3 | ... | KEYN | NAME
> and then tweak func_def like so:
> func_def:  DEF keyword_allowed_name parameters ':' suite
> 
> I haven't pondered whether or not this would cause the DFA to fall into a
> degenerate case.

This would be a good and simple approach.

> Wondering where the metagrammer source file went to,

It may not have existed; I may have handcrafted the metagrammar.c
file.

I believe the metagrammar was something like this:

MSTART: RULE*
RULE: NAME ':' RHS
RHS: ITEM ('|' ITEM)*
ITEM: (ATOM ['*' | '?'])+
ATOM: NAME | STRING | '(' RHS ')' | '[' RHS ']'

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From nowonder at nowonder.de  Thu Aug 10 09:02:12 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 07:02:12 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135] 
 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl>
Message-ID: <39925374.59D974FA@nowonder.de>

Thomas Wouters wrote:
> 
> For those of you not on the patches list, here's the summary of the patch I
> just uploaded to SF. In short, it adds "import x as y" and "from module
> import x as y", in the way Tim proposed this morning. (Probably late last
> night for most of you.)

-1 on the implementation. Although it looked okay on a first visual
   inspection, it builds a segfaulting python executable on linux:
      make distclean && ./configure && make test
   segfaults when first time starting python to run regrtest.py.
   Reversing the patch and doing a simple 'make test' has everything
   running again.

+1 on the idea, though. It just seems sooo natural. My first
   reaction before applying the patch was testing if Python
   did not already do this <0.25 wink - really did it>

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From nowonder at nowonder.de  Thu Aug 10 09:21:13 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 07:21:13 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch 
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de>
Message-ID: <399257E9.E399D52D@nowonder.de>

Peter Schneider-Kamp wrote:
> 
> -1 on the implementation. Although it looked okay on a first visual
>    inspection, it builds a segfaulting python executable on linux:
>       make distclean && ./configure && make test
>    segfaults when first time starting python to run regrtest.py.
>    Reversing the patch and doing a simple 'make test' has everything
>    running again.

Also note the following problems:

nowonder at mobility:~/python/python/dist/src > ./python
Python 2.0b1 (#12, Aug 10 2000, 07:17:46)  [GCC 2.95.2 19991024
(release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> from string import join
Speicherzugriffsfehler  [German for "segmentation fault"]
nowonder at mobility:~/python/python/dist/src > ./python
Python 2.0b1 (#12, Aug 10 2000, 07:17:46)  [GCC 2.95.2 19991024
(release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> from string import join as j
  File "<stdin>", line 1
    from string import join as j
                             ^
SyntaxError: invalid syntax
>>>  

I think the problem is in compile.c, but that's just my bet.

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Thu Aug 10 07:24:19 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 07:24:19 +0200
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd))
In-Reply-To: <39925374.59D974FA@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 07:02:12AM +0000
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de>
Message-ID: <20000810072419.A17171@xs4all.nl>

On Thu, Aug 10, 2000 at 07:02:12AM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:
> > 
> > For those of you not on the patches list, here's the summary of the patch I
> > just uploaded to SF. In short, it adds "import x as y" and "from module
> > import x as y", in the way Tim proposed this morning. (Probably late last
> > night for most of you.)

> -1 on the implementation. Although it looked okay on a first visual
>    inspection, it builds a segfaulting python executable on linux:
>       make distclean && ./configure && make test
>    segfaults the first time python is started to run regrtest.py.
>    Reversing the patch and doing a simple 'make test' has everything
>    running again.

Try running 'make' in 'Grammar/' first. None of my patches that touch
Grammar include the changes to graminit.h and graminit.c, because they can
be quite lengthy (in the order of several thousand lines, in this case, if
I'm not mistaken.) So the same goes for the 'indexing for', 'range literal'
and 'augmented assignment' patches ;)

If it still goes crashy crashy after you re-make the grammar, I'll, well,
I'll, I'll make Baldrick eat one of his own dirty socks ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Thu Aug 10 09:37:44 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 07:37:44 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch 
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl>
Message-ID: <39925BC8.17CD051@nowonder.de>

Thomas Wouters wrote:
> 
> If it still goes crashy crashy after you re-make the grammar, I'll, well,
> I'll, I'll make Baldrick eat one of his own dirty socks ;)

I just found that out for myself. The SyntaxError in the
second example led the way ...

Sorry for the hassle, but next time please remind me that
I have to remake the grammar.

+1 on the implementation now.

perversely-minded-note:
What about 'from string import *, join as j'?
I think that would make sense, but as we are not fond of
the star in any case maybe we don't need that.

Peter

P.S.: I'd like to see Baldrick do that. What the heck is
      a Baldrick? I am longing for breakfast, so I hope
      I can eat it. Mjam.
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Thu Aug 10 07:55:10 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 07:55:10 +0200
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd))
In-Reply-To: <39925BC8.17CD051@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 07:37:44AM +0000
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de>
Message-ID: <20000810075510.B17171@xs4all.nl>

On Thu, Aug 10, 2000 at 07:37:44AM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:

> > If it still goes crashy crashy after you re-make the grammar, I'll, well,
> > I'll, I'll make Baldrick eat one of his own dirty socks ;)

> I just found that out for myself. The SyntaxError in the
> second example led the way ...

> Sorry for the hassle, but next time please remind me that
> I have to remake the grammar.

It was late last night, and I have to force myself not to write essays when
submitting a patch in the first place ;-P How about we fix the dependencies
so that the grammar gets re-made when necessary ? Or is there a good reason
not to do that ?

> perversely-minded-note:
> What about 'from string import *, join as j'?
> I think that would make sense, but as we are not fond of
> the star in any case maybe we don't need that.

'join as j' ? What would it do ? Import all symbols from 'string' into a
new namespace 'j' ? How about you do 'import string as j' instead ? It means
you will still be able to do 'j._somevar', which you probably wouldn't in
your example, but I don't think that's enough reason :P

> P.S.: I'd like to see Baldrick do that. What the heck is
>       a Baldrick? I am longing for breakfast, so I hope
>       I can eat it. Mjam.

Sorry :) They've been doing re-runs of Blackadder (1st through 4th, they're
nearly done) on one of the belgian channels, and it happens to be one of my
favorite comedy shows ;) It's a damned sight funnier than Crocodile Dundee,
hey, Mark ? <nudge> <nudge> <wink> <wink> :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Thu Aug 10 10:10:13 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 08:10:13 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch 
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de> <20000810075510.B17171@xs4all.nl>
Message-ID: <39926365.909B2835@nowonder.de>

Thomas Wouters wrote:
> 
> 'join as j' ? What would it do ? Import all symbols from 'string' into a
> new namespace 'j' ? How about you do 'import string as j' instead ? It means
> you will still be able to do 'j._somevar', which you probably wouldn't in
> your example, but I don't think that's enough reason :P

Okay, your misunderstanding of the semantics I had in mind is
reason enough <0.5 wink>.

from string import *, join as j
(or equivalently)
from string import join as j, *

would (in my book) import all "public" symbols from string
and assign j = join.

Assume we have a Tkinter app (where all the tutorials
do a 'from Tkinter import *') and we don't like
'createtimerhandle'. Then the following would give
us tk_timer instead, while still importing all the stuff
from Tkinter under their regular names:

from Tkinter import *, createtimerhandle as tk_timer

An even better way of doing this would be if it not
only gave you another name but also did not import
the original one. In this example our expression
would import all the symbols from Tkinter but would
rename createtimerhandle to tk_timer. That way you
could still use * when you have a namespace clash. E.g.:

from Tkinter import *, mainloop as tk_mainloop

def mainloop():
  <do some really useful stuff calling tk_mainloop()>

if __name__ == '__main__':
  mainloop()

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Thu Aug 10 08:23:16 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 08:23:16 +0200
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd))
In-Reply-To: <39926365.909B2835@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 08:10:13AM +0000
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de> <20000810075510.B17171@xs4all.nl> <39926365.909B2835@nowonder.de>
Message-ID: <20000810082316.C17171@xs4all.nl>

On Thu, Aug 10, 2000 at 08:10:13AM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:

> > 'join as j' ? What would it do ? Import all symbols from 'string' into a
> > new namespace 'j' ? How about you do 'import string as j' instead ? It means
> > you will still be able to do 'j._somevar', which you probably wouldn't in
> > your example, but I don't think that's enough reason :P

> Okay, your misunderstanding of the semantics I had in mind is
> reason enough <0.5 wink>.

> from string import *, join as j
> (or equivalently)
> from string import join as j, *

Ahh, like that :) Well, I'd say 'no'. "from module import *" has only one
legitimate use, as far as I'm concerned, and that's taking over all symbols
without prejudice, to encapsulate another module. It shouldn't be used in
code that attempts to stay readable, so 'import join as j' is insanity ;-)
If you really must do the above, do it in two import statements.

> An even better way of doing this would be if it not
> only gave you another name but also did not import
> the original one. In this example our expression
> would import all the symbols from Tkinter but would
> rename createtimerhandle to tk_timer. That way you
> could still use * when you have a namespace clash. E.g.:

No, that isn't possible. You can't pass a list of names to 'FROM_IMPORT *'
to omit loading them. (That's also the reason the patch needs a new opcode,
because you can't pass both the name to be imported from a module and the
name it should be stored at, to the FROM_IMPORT bytecode :)
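In present-day CPython the split described here is visible with the dis
module: the import pushes the attribute and a separate store binds the
alias. Opcode names have changed since the FROM_IMPORT era, so this is a
modern sketch, not the 2.0 bytecode under discussion:

```python
import dis

# "from os import path as p" compiles (in modern CPython) to
# IMPORT_NAME / IMPORT_FROM followed by a STORE_NAME for the alias.
code = compile("from os import path as p", "<example>", "exec")
dis.dis(code)
```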

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Thu Aug 10 10:52:31 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 08:52:31 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch 
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de> <20000810075510.B17171@xs4all.nl> <39926365.909B2835@nowonder.de> <20000810082316.C17171@xs4all.nl>
Message-ID: <39926D4F.83CAE9C2@nowonder.de>

Thomas Wouters wrote:
> 
> On Thu, Aug 10, 2000 at 08:10:13AM +0000, Peter Schneider-Kamp wrote:
> > An even better way of doing this would be if it not
> > only gave you another name but also did not import
> > the original one. In this example our expression
> > would import all the symbols from Tkinter but would
> > rename createtimerhandle to tk_timer. That way you
> > could still use * when you have a namespace clash. E.g.:
> 
> No, that isn't possible. You can't pass a list of names to 'FROM_IMPORT *'
> to omit loading them. (That's also the reason the patch needs a new opcode,
> because you can't pass both the name to be imported from a module and the
> name it should be stored at, to the FROM_IMPORT bytecode :)

Yes, it is possible. But as you correctly point out, not
without some serious changes to compile.c and ceval.c.

As we both agree (trying to channel you) that it is not worth
it to make 'from ... import *' more usable, I think we should stop
this discussion before somebody thinks we seriously want
to do this.

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From mal at lemburg.com  Thu Aug 10 10:36:07 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 10 Aug 2000 10:36:07 +0200
Subject: [Python-Dev] Un-stalling Berkeley DB support
References: <20000809140321.A836@thyrsus.com>
Message-ID: <39926977.F8495AAD@lemburg.com>

"Eric S. Raymond" wrote:
> [Berkeley DB 3]
> When we last discussed this subject, there was general support for the
> functionality, but a couple of people went "bletch!" about SWIG-generated
> code (there was unhappiness about pointers being treated as strings).

AFAIK, recent versions of SWIG now make proper use of PyCObjects
to store pointers. Don't know how well this works though: I've
had a report that the new support can cause core dumps.
 
> Somebody said something about having SWIG patches to address this.  Is this
> the only real issue with SWIG-generated code?  If so, we can pursue two paths:
> (1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
> extension that meets our cleanliness criteria, and (2) press the SWIG guy
> to incorporate these patches in his next release.

Perhaps these patches are what I was talking about above ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From sjoerd at oratrix.nl  Thu Aug 10 12:59:06 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Thu, 10 Aug 2000 12:59:06 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib mailbox.py,1.20,1.21
In-Reply-To: Your message of Wed, 09 Aug 2000 20:05:30 -0700.
             <200008100305.UAA05018@slayer.i.sourceforge.net> 
References: <200008100305.UAA05018@slayer.i.sourceforge.net> 
Message-ID: <20000810105907.713B331047C@bireme.oratrix.nl>

On Wed, Aug 9 2000 Guido van Rossum wrote:

>           files = os.listdir(self.dirname)
> !         list = []
>           for f in files:
>               if pat.match(f):
> !                 list.append(f)
> !         list = map(long, list)
> !         list.sort()

Isn't this just:
	list = os.listdir(self.dirname)
	list = filter(pat.match, list)
	list = map(long, list)
	list.sort()

Or even shorter:
	list = map(long, filter(pat.match, os.listdir(self.dirname)))
	list.sort()
(Although I can and do see the advantage of the slightly longer
version.)
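The two spellings are indeed equivalent; a self-contained check, with
'pat' and the file list made up for the sketch and int standing in for
1.x's long:

```python
import re

pat = re.compile(r"\d+$")             # hypothetical message-name pattern
files = ["1", "10", "2", "README"]    # stand-in for os.listdir(self.dirname)

# The explicit loop from the checkin:
explicit = []
for f in files:
    if pat.match(f):
        explicit.append(f)
explicit = list(map(int, explicit))
explicit.sort()

# Sjoerd's condensed filter/map form:
condensed = list(map(int, filter(pat.match, files)))
condensed.sort()

assert explicit == condensed == [1, 2, 10]
```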

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From gward at mems-exchange.org  Thu Aug 10 14:38:02 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Thu, 10 Aug 2000 08:38:02 -0400
Subject: [Python-Dev] Adding library modules to the core
Message-ID: <20000810083802.A7912@ludwig.cnri.reston.va.us>

[hmmm, this bounced 'cause the root partition on python.org was
full... let's try again, shall we?]

On 07 August 2000, Eric S. Raymond said:
> A few days ago I asked about the procedure for adding a module to the
> Python core library.  I have a framework class for things like menu systems
> and symbolic debuggers I'd like to add.
> 
> Guido asked if this was similar to the TreeWidget class in IDLE.  I 
> investigated and discovered that it is not, and told him so.  I am left
> with a couple of related questions:

Well, I just ploughed through this entire thread, and no one came up
with an idea I've been toying with for a while: the Python Advanced
Library.

This would be the place for well-known, useful, popular, tested, robust,
stable, documented module collections that are just too big or too
specialized to go in the core.  Examples: PIL, mxDateTime, mxTextTools,
mxODBC, ExtensionClass, ZODB, and anything else that I use in my daily
work and wish that we didn't have to maintain separate builds of.  ;-)

Obviously this would be most useful as an RPM/Debian package/Windows
installer/etc., so that non-developers could be told, "You need to
install Python 1.6 and the Python Advanced Library 1.0 from ..." and
that's *it*.

Thoughts?  Candidates for admission?  Proposed requirements for admission?

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From gward at mems-exchange.org  Thu Aug 10 15:47:48 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Thu, 10 Aug 2000 09:47:48 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <Pine.GSO.4.10.10008101557580.1582-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 10, 2000 at 04:00:51PM +0300
References: <20000810083802.A7912@ludwig.cnri.reston.va.us> <Pine.GSO.4.10.10008101557580.1582-100000@sundial>
Message-ID: <20000810094747.C7912@ludwig.cnri.reston.va.us>

[cc'd to python-dev, since I think this belongs out in the open: Moshe,
if you really meant to keep this private, go ahead and slap my wrist]

On 10 August 2000, Moshe Zadka said:
> Greg, this sounds very close to PEP-206. Please let me know if you see
> any useful collaboration with it.

They're definitely related, and I think we're trying to address the same
problem -- but in a different way.

If I read the PEP (http://python.sourceforge.net/peps/pep-0206.html)
correctly, you want to fatten the standard Python distribution
considerably, first by adding lots of third-party C libraries to it, and
second by adding lots of third-party Python libraries ("module
distributions") to it.  This has the advantage of making all these
goodies immediately available in a typical Python installation.  But it
has a couple of serious disadvantages:
  * makes Python even harder to build and install; why should I have
    to build half a dozen major C libraries just to get a basic
    Python installation working?
  * all these libraries are redundant on modern free Unices -- at
    least the Linux distributions that I have experience with all
    include zlib, Tcl/Tk, libjpeg, and ncurses out of the box.
    Including copies of them with Python throws out one of the advantages
    of having all these installed as shared libraries, namely that
    there only has to be one copy of each in memory.
  * tell me again: what was the point of the Distutils if we just
    throw "everything useful" into the standard distribution?

Anyways, my idea -- the Python Advanced Library -- is to make all of
these goodies available as a single download, *separate* from Python
itself.  It could well be that the Advanced Library would be larger
than the Python distribution.  (Especially if Tcl/Tk migrates from the
standard Windows installer to the Advanced Library.)

Advantages:
  * keeps the standard distribution relatively small and focussed;
    IMHO the "big framework" libraries (PIL, NumPy, etc.) don't
    belong in the standard library.  (I think there could someday
    be a strong case for moving Tkinter out of the standard library
    if the Advanced Library really takes off.)
  * relieves licensing problems in the Python distribution; if something
    can't be included with Python for licence reasons, then put
    it in the Advanced Library
  * can have variations on the PAL for different platforms.  Eg. could
    have an RPM or Debian package that just requires libjpeg,
    libncurses, libtcl, libtk etc. for the various Linuces, and an
    enormous installer with separate copies of absolutely everything for
    Windows
  * excellent test case for the Distutils ;-)
  * great acronym: the Python Advanced Library is your PAL.

Sounds worth a PEP to me; I think it should be distinct from (and in
competition with!) PEP 206.

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From moshez at math.huji.ac.il  Thu Aug 10 16:09:23 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 10 Aug 2000 17:09:23 +0300 (IDT)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000810094747.C7912@ludwig.cnri.reston.va.us>
Message-ID: <Pine.GSO.4.10.10008101707240.1582-100000@sundial>

On Thu, 10 Aug 2000, Greg Ward wrote:

> Sounds worth a PEP to me; I think it should be distinct from (and in
> competition with!) PEP 206.

That's sort of why I wanted to keep this off Python-Dev: I don't think
so (I don't really want competing PEPs), I'd rather we hashed out our
differences in private and come up with a unified PEP to save everyone
on Python-Dev a lot of time. 

So let's keep the conversation off python-dev until we either reach
a consensus or agree to disagree.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From mal at lemburg.com  Thu Aug 10 16:28:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 10 Aug 2000 16:28:34 +0200
Subject: [Python-Dev] Adding library modules to the core
References: <Pine.GSO.4.10.10008101707240.1582-100000@sundial>
Message-ID: <3992BC12.BFA16AAC@lemburg.com>

Moshe Zadka wrote:
> 
> On Thu, 10 Aug 2000, Greg Ward wrote:
> 
> > Sounds worth a PEP to me; I think it should be distinct from (and in
> > competition with!) PEP 206.
> 
> That's sort of why I wanted to keep this off Python-Dev: I don't think
> so (I don't really want competing PEPs), I'd rather we hashed out our
> differences in private and come up with a unified PEP to save everyone
> on Python-Dev a lot of time.
> 
> So let's keep the conversation off python-dev until we either reach
> a consensus or agree to disagree.

Just a side note: As I recall Guido is not willing to include
all these third party tools to the core distribution, but rather
to a SUMO Python distribution, which then includes Python +
all those nice goodies available to the Python Community.

Maintaining this SUMO distribution should, IMHO, be left to
a commercial entity like e.g. ActiveState or BeOpen to ensure
quality and robustness -- this is not an easy job, believe me.
I've tried something like this before: it was called Python
PowerTools and should still be available at:

  http://starship.python.net/crew/lemburg/PowerTools-0.2.zip

I never got far, though, due to the complexity of getting
all that Good Stuff under one umbrella.

Perhaps you ought to retarget your PEP 206, Moshe ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From moshez at math.huji.ac.il  Thu Aug 10 16:30:40 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 10 Aug 2000 17:30:40 +0300 (IDT)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <3992BC12.BFA16AAC@lemburg.com>
Message-ID: <Pine.GSO.4.10.10008101729280.17061-100000@sundial>

On Thu, 10 Aug 2000, M.-A. Lemburg wrote:

> Just a side note: As I recall Guido is not willing to include
> all these third party tools to the core distribution, but rather
> to a SUMO Python distribution, which then includes Python +
> all those nice goodies available to the Python Community.

Yes, that's correct. 

> Maintaining this SUMO distribution should, IMHO, be left to
> a commercial entity like e.g. ActiveState or BeOpen to ensure
> quality and robustness -- this is not an easy job, believe me.

Well, I'm hoping that distutils will make this easier.

> Perhaps you ought to retarget you PEP206, Moshe ?!

I'm sorry -- I'm too foolhardy. 

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From nowonder at nowonder.de  Thu Aug 10 19:00:14 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 17:00:14 +0000
Subject: [Python-Dev] 2nd thought: fully qualified host names
Message-ID: <3992DF9E.BF5A080C@nowonder.de>

Hi Guido!

After submitting the patch to smtplib, I got a bad feeling
about only trying to get the FQDN for the localhost case.

Shouldn't _get_fqdn_hostname() try to get the FQDN
for every argument passed? Currently it does so only
for len(name) == 0

I think (but couldn't immediately find a reference) it
is required by some RFC. There is at least an internet
draft by the IETF that says it is required
and a lot of references (mostly from postfix) to some
RFC, too.

Of course, automatically trying to get the fully
qualified domain name would mean that the programmer
loses some flexibility (by losing responsibility).

If that is a problem I would make _get_fqdn_hostname
a public function (and choose a better name). helo()
and ehlo() could still call it for the local host case.

or-should-I-just-leave-things-as-they-are-ly y'rs
Peter

P.S.: I am cc'ing the list so everyone and Thomas can
      rush in and provide their RFC knowledge.
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From guido at beopen.com  Thu Aug 10 18:14:20 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 10 Aug 2000 11:14:20 -0500
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: Your message of "Thu, 10 Aug 2000 17:00:14 GMT."
             <3992DF9E.BF5A080C@nowonder.de> 
References: <3992DF9E.BF5A080C@nowonder.de> 
Message-ID: <200008101614.LAA28785@cj20424-a.reston1.va.home.com>

> Hi Guido!
> 
> After submitting the patch to smtplib, I got a bad feeling
> about only trying to get the FQDN for the localhost case.
> 
> Shouldn't _get_fqdn_hostname() try to get the FQDN
> for every argument passed? Currently it does so only
> for len(name) == 0
> 
> I think (but couldn't immediately find a reference) it
> is required by some RFC. There is at least an internet
> draft by the IETF that says it is required
> and a lot of references (mostly from postfix) to some
> RFC, too.
> 
> Of course, automatically trying to get the fully
> qualified domain name would mean that the programmer
> loses some flexibility (by losing responsibility).
> 
> If that is a problem I would make _get_fqdn_hostname
> a public function (and choose a better name). helo()
> and ehlo() could still call it for the local host case.
> 
> or-should-I-just-leave-things-as-they-are-ly y'rs
> Peter
> 
> P.S.: I am cc'ing the list so everyone and Thomas can
>       rush in and provide their RFC knowledge.

Good idea -- I don't know anything about SMTP!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Thu Aug 10 17:40:26 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 17:40:26 +0200
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: <200008101614.LAA28785@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 10, 2000 at 11:14:20AM -0500
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com>
Message-ID: <20000810174026.D17171@xs4all.nl>

On Thu, Aug 10, 2000 at 11:14:20AM -0500, Guido van Rossum wrote:

> > After submitting the patch to smtplib, I got a bad feeling
> > about only trying to get the FQDN for the localhost case.
> > for len(name) == 0

> > I think (but couldn't immediately find a reference) it
> > is required by some RFC. There is at least an internet
> > draft by the IETF that says it is required
> > and a lot of references (mostly from postfix) to some
> > RFC, too.

If this is for helo() and ehlo(), screw it. No sane mailer, technician or
abuse desk employee pays any attention what so ever to the HELO message,
except possibly for debugging.

The only use I've ever had for the HELO message is with clients that set up a
WinGate or similar braindead port-forwarding service on their dial-in
machine, and then buy one of our products, batched-SMTP. They then get their
mail passed to them via SMTP when they dial in... except that these
*cough*users*cough* redirect their SMTP port to *our* smtp server, creating
a confusing mail loop. We first noticed that because their server connected
to our server using *our* HELO message ;)

> > If that is a problem I would make _get_fqdn_hostname
> > a public function (and choose a better name). helo()
> > and ehlo() could still call it for the local host case.

I don't think this is worth the trouble. Assembling a FQDN is tricky at
best, and it's not needed in that many cases. (Sometimes you can break
something by trying to FQDN a name and getting it wrong ;) Where would this
function be used ? In SMTP chats ? Not necessary. A 'best guess' is enough
-- the remote SMTP server won't listen to you anyway, and will record the
IP address and its reverse DNS entry in the mail logs. Mailers that rely on
the HELO message are (rightly!) considered insecure, spam-magnets, and are a
thankfully dying race.

Of course, if anyone else needs a FQDN, it might be worth exposing this
algorithm.... but smtplib doesn't seem like the proper place ;P
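(The algorithm did end up exposed outside smtplib: later Pythons carry a
general version as socket.getfqdn(). A minimal sketch of the HELO-style
usage:

```python
import socket

# With no argument, getfqdn() tries to qualify the local hostname via
# gethostbyaddr() aliases, falling back to the bare name if that fails.
print("HELO", socket.getfqdn())

# An explicit name is qualified the same way:
print(socket.getfqdn("localhost"))
```

Whether the result is truly fully qualified still depends on the local
resolver configuration, which is Thomas's point about 'best guess'.)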

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Thu Aug 10 20:13:04 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 18:13:04 +0000
Subject: [Python-Dev] open 'Accepted' patches
Message-ID: <3992F0B0.C8CBF85B@nowonder.de>

Changing the patch view at sf to 'Accepted' in order to find
my patch, I was surprised by the amount of patches that have
been accepted and are still lying around. In an insane attack
of self-destructiveness I decided to bring up the issue<wink>.

I know there can be a lot of issues with patches relative to
another patch etc., but letting them rot won't improve the
situation. "Checked in they should be." <PYoda> If there
are still problems with them or they have already been
checked in, the status should at least be 'Postponed',
'Out of Date', 'Rejected', 'Open' or 'Closed'.

Here is a list of the open 'Accepted' patches that have had
no comment for more than a week and which are not obviously
checked in yet (those that are, I have closed):

patch# | summary                             | last comment
-------+-------------------------------------+--------------
100510 | largefile support for Win64 (and...)| 2000-Jul-31
100511 | test largefile support (test_lar...)| 2000-Jul-31
100851 | traceback.py, with unicode except...| 2000-Aug-01
100874 | Better error message with Unbound...| 2000-Jul-26
100955 | ptags, eptags: regex->re, 4 char ...| 2000-Jul-26
100978 | Minor updates for BeOS R5           | 2000-Jul-25
100994 | Allow JPython to use more tests     | 2000-Jul-27

If I should review, adapt and/or check in some of these,
please tell me which ones.

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Thu Aug 10 18:30:10 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 18:30:10 +0200
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: <200008101614.LAA28785@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 10, 2000 at 11:14:20AM -0500
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com>
Message-ID: <20000810183010.E17171@xs4all.nl>

On Thu, Aug 10, 2000 at 11:14:20AM -0500, Guido van Rossum wrote:

> > P.S.: I am cc'ing the list so everyone and Thomas can
> >       rush in and provide their RFC knowledge.

Oh, I forgot to point out: I have some RFC knowledge, but decided not to use
it in the case of the HELO message ;) I do have a lot of hands-on experience
with SMTP, and I know for a fact that very few MUAs that talk SMTP send an FQDN
in the HELO message. I think that sending the FQDN when we can (like we do,
now) is a good idea, but I don't see a reason to force the HELO message to
be a FQDN. 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Thu Aug 10 18:43:41 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 10 Aug 2000 19:43:41 +0300 (IDT)
Subject: [Python-Dev] open 'Accepted' patches
In-Reply-To: <3992F0B0.C8CBF85B@nowonder.de>
Message-ID: <Pine.GSO.4.10.10008101941220.19610-100000@sundial>

(Meta: it seems every now and again a developer has a fit of neurosis. I
think this is a good thing)

On Thu, 10 Aug 2000, Peter Schneider-Kamp wrote:

> patch# | summary                             | last comment
> -------+-------------------------------------+--------------
...
> 100955 | ptags, eptags: regex->re, 4 char ...| 2000-Jul-26

This is the only one I actually know about: Jeremy, Guido has approved it,
I assigned it to you for final eyeballing -- shouldn't be *too* hard to
check it in...
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From DavidA at ActiveState.com  Thu Aug 10 18:47:54 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Thu, 10 Aug 2000 09:47:54 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] FYI: Software Carpentry winners announced
Message-ID: <Pine.WNT.4.21.0008100945480.1052-100000@loom>

I wanted to make sure that everyone here knew that the Software Carpentry
winners were announced, and that our very own Ping won in the Track
category.  Winners in the Config and Build categories were Linday Todd
(SapCat) and Steven Knight (sccons) respectively.  Congrats to all.

--david

http://software-carpentry.codesourcery.com/entries/second-round/results.html




From trentm at ActiveState.com  Thu Aug 10 18:50:15 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 10 Aug 2000 09:50:15 -0700
Subject: [Python-Dev] open 'Accepted' patches
In-Reply-To: <3992F0B0.C8CBF85B@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 06:13:04PM +0000
References: <3992F0B0.C8CBF85B@nowonder.de>
Message-ID: <20000810095015.A28562@ActiveState.com>

On Thu, Aug 10, 2000 at 06:13:04PM +0000, Peter Schneider-Kamp wrote:
> 
> Here is a list of the open 'Accepted' patches that have had
> no comment for more than a week and which are not obviously
> checked in yet (those that are, I have closed):
> 
> patch# | summary                             | last comment
> -------+-------------------------------------+--------------
> 100510 | largefile support for Win64 (and...)| 2000-Jul-31
> 100511 | test largefile support (test_lar...)| 2000-Jul-31

These two are mine. For a while I just thought that they had been checked in.
Guido poked me to check them in a week or so ago and I will this week.


Trent


-- 
Trent Mick
TrentM at ActiveState.com



From nowonder at nowonder.de  Fri Aug 11 01:29:28 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 23:29:28 +0000
Subject: [Python-Dev] 2nd thought: fully qualified host names
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl>
Message-ID: <39933AD8.B8EF5D59@nowonder.de>

Thomas Wouters wrote:
> 
> If this is for helo() and ehlo(), screw it. No sane mailer, technician or
> abuse desk employee pays any attention what so ever to the HELO message,
> except possibly for debugging.

Well, there are some MTAs (like Postfix) that seem to care. Postfix has
an option called "reject_non_fqdn_hostname" with the following description:

"""
Reject the request when the hostname in the client HELO (EHLO) command is not in 
fully-qualified domain form, as required by the RFC. The non_fqdn_reject_code
specifies the response code to rejected requests (default: 504)."""

The submitter of the bug which was addressed by the patch I checked in had
a problem with Mailman and a Postfix installation that seemed to have this
option turned on.

What I am proposing for smtplib is to send every name given to
helo (or ehlo) through the guessing framework of gethostbyaddr()
if possible. Could this hurt anything?

> Of course, if anyone else needs a FQDN, it might be worth exposing this
> algorithm.... but smtplib doesn't seem like the proper place ;P

Agreed. Where could it go?

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From nowonder at nowonder.de  Fri Aug 11 01:34:38 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 23:34:38 +0000
Subject: [Python-Dev] 2nd thought: fully qualified host names
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810183010.E17171@xs4all.nl>
Message-ID: <39933C0E.7A84D6E2@nowonder.de>

Thomas Wouters wrote:
> 
> Oh, I forgot to point out: I have some RFC knowledge, but decided not to use
> it in the case of the HELO message ;) I do have a lot of hands-on experience
> with SMTP, and I know for a fact very little MUA that talk SMTP send a FQDN
> in the HELO message. I think that sending the FQDN when we can (like we do,
> now) is a good idea, but I don't see a reason to force the HELO message to
> be a FQDN.

I don't want to force anything. I think it's time for some
code to speak for itself, rather than me trying to
speak for it <0.8 wink>:

def _get_fqdn_hostname(name):
    name = string.strip(name)
    if len(name) == 0:
        name = socket.gethostname()
    try:
        hostname, aliases, ipaddrs = socket.gethostbyaddr(name)
    except socket.error:
        pass
    else:
        aliases.insert(0, hostname)
        for name in aliases:
            if '.' in name:
                break
        else:
            name = hostname
    return name

This is the same function as the one I checked into
smtplib.py, except that the try block is also executed
for names with len(name) != 0.

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From bckfnn at worldonline.dk  Fri Aug 11 00:17:47 2000
From: bckfnn at worldonline.dk (Finn Bock)
Date: Thu, 10 Aug 2000 22:17:47 GMT
Subject: [Python-Dev] Freezing unicode codecs.
Message-ID: <3993287a.1852013@smtp.worldonline.dk>

While porting the unicode API and the encoding modules to JPython I came
across a problem which may also (or maybe not) exists in CPython.

jpythonc is a compiler for JPython which tries to track dependencies
between modules in an attempt to detect which modules an application or
applet uses. I have the impression that some of the freeze tools for
CPython do something similar.

A call to unicode("abc", "cp1250") and "abc".encode("cp1250") will cause
the encodings.cp1250 module to be loaded as a side effect. The freeze
tools will have a hard time figuring this out by scanning the Python
source.


For JPython I'm leaning towards making it a requirement that the
encodings must be loaded explicitly from somewhere in the application. Adding


   import encodings.cp1250

somewhere in the application will allow jpythonc to include this python
module in the frozen application.
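A bytecode-level dependency scan of the kind jpythonc and CPython's freeze
tools perform can be sketched with the stdlib modulefinder module (the
file name here is made up):

```python
import os
import tempfile
import textwrap
from modulefinder import ModuleFinder

# A tiny example application: the dynamic codec lookup is invisible to a
# bytecode scanner, while the explicit import is visible.
source = textwrap.dedent("""\
    data = "abc".encode("cp1250")  # loads encodings.cp1250 only at runtime
    import encodings.cp1250        # same module, statically visible
""")

path = os.path.join(tempfile.mkdtemp(), "app.py")
with open(path, "w") as f:
    f.write(source)

finder = ModuleFinder()
finder.run_script(path)  # scans imports in bytecode; does not run the code
print("encodings.cp1250" in finder.modules)  # True
```

Only the explicit import shows up in the scan, which is exactly why an
explicit import somewhere in the application helps the freeze tools.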

How does CPython solve this?


PS. The latest release of the JPython errata have full unicode support
and includes the "sre" module and unicode codecs.

    http://sourceforge.net/project/filelist.php?group_id=1842


regards,
finn



From thomas at xs4all.net  Fri Aug 11 00:50:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 00:50:13 +0200
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: <39933AD8.B8EF5D59@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 11:29:28PM +0000
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl> <39933AD8.B8EF5D59@nowonder.de>
Message-ID: <20000811005013.F17171@xs4all.nl>

On Thu, Aug 10, 2000 at 11:29:28PM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:

> > If this is for helo() and ehlo(), screw it. No sane mailer, technician or
> > abuse desk employee pays any attention what so ever to the HELO message,
> > except possibly for debugging.

> Well, there are some MTAs (like Postfix) that seem to care. Postfix has
> an option called "reject_non_fqdn_hostname" with the following description:

> """
> Reject the request when the hostname in the client HELO (EHLO) command is not in 
> fully-qualified domain form, as required by the RFC. The non_fqdn_reject_code
> specifies the response code to rejected requests (default: 504)."""

> The submittor of the bug which was addressed by the patch I checked in had
> a problem with mailman and a postfix program that seemed to have this option
> turned on.

Fine, the patch addresses that. When the hostname passed to smtplib is ""
(which is the default), it should be turned into a FQDN. I agree. However,
if someone passed in a name, we do not know if they even *want* the name
turned into a FQDN. In the face of ambiguity, refuse the temptation to
guess.

Turning on this Postfix feature (which is completely in line with the
Postfix philosophy, and I applaud Wietse(*) for supplying it ;) is a tricky decision at
best. Like I said in the other email, there are a *lot* of MUAs and MTAs and
other throw-away-programs-gone-commercial that don't speak proper SMTP, and
don't even pretend to send a FQDN. Most Windows clients send the machine's
netbios name, for crying out loud. Turning this on would break all those
clients, and more. I'm not too worried about it breaking Python scripts that
are explicitly setting the HELO response -- those scripts are probably doing
it for a reason.

To note, I haven't seen software that uses smtplib that supplies its own
HELO message, except for a little script I saw that was *explicitly*
setting the HELO message in order to test the SMTP server on the other end.
That instance would certainly have been broken by rewriting the name into an
FQDN.

> > Of course, if anyone else needs a FQDN, it might be worth exposing this
> > algorithm.... but smtplib doesn't seem like the proper place ;P

> Agreed. Where could it go?

On second thought, I can't imagine anyone needing such a function outside of
smtplib. FQDNs are nice for reporting URIs to the outside world, but for
connecting to a certain service you simply pass the hostname you got (which
can be an IP address) through to the OS-provided network layer. Kind of like
not doing type checking on the objects passed to your function, but instead
assuming they conform to an interface and will either work correctly or fail
obviously when used as objects of a certain type.

So, make it an exposed function on smtplib, for those people who don't want
to set the HELO message to "", but do want it to be rewritten into a FQDN.

(*) Amazing how all good software came to be through Dutch people. Even
Linux: if it wasn't for Tanenbaum, it wouldn't be what it is today :-)

PS: I'm talking as a sysadmin for a large ISP here, not as a user-friendly
library-implementor. We won't be able to turn on this postfix feature for
many, many years, and I wouldn't advise anyone who expects mail to be sent
from the internet to a postfix machine to enable it, either. But if your
mailserver is internal-only, or with fixed entrypoints that are running
reliable software, I can imagine people turning it on. It would please me no
end if we could turn this on! I spend on average an hour a day closing
customer accounts and helping them find out why their mailserver sucks. And
I still discover new mailserver software and new ways for them to suck; it's
really amazing ;)

that-PS-was-a-bit-long-for-a-signoff-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Fri Aug 11 02:44:06 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 10 Aug 2000 20:44:06 -0400
Subject: Keyword abuse  (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <14737.35195.31385.867664@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOECAGPAA.tim_one@email.msn.com>

[Skip Montanaro]
> Could this be extended to many/most/all current instances of
> keywords in Python?  As Tim pointed out, Fortran has no keywords.
> It annoys me that I (for example) can't define a method named "print".

This wasn't accidental in Fortran, though:  X3J3 spent many tedious hours
fiddling the grammar to guarantee it was always possible.  Python wasn't
designed with this in mind, and e.g. there's no meaningful way to figure out
whether

    raise

is an expression or a "raise stmt" in the absence of keywords.  Fortran is
very careful to make sure such ambiguities can't arise.

A *reasonable* thing is to restrict global keywords to special tokens that
can begin a line.  There's real human and machine parsing value in being
able to figure out what *kind* of stmt a line represents from its first
token.  So letting "print" be a variable name too would, IMO, really suck.

But after that, I don't think users have any problem understanding that
different stmt types can have different syntax.  For example, if "@" has a
special meaning in "print" statements, big deal.  Nobody splits a spleen over
seeing

    a   b, c, d

when "a" happens to be "exec" or "print" today, despite that most stmts
don't allow that syntax, and even between "exec" and "print" it has very
different meanings.  Toss in "global", "del" and "import" too for other
twists on what the "b, c, d" part can look like and mean.

As far as I'm concerned, each stmt type can have any syntax it damn well
likes!   Identifiers with special meaning *after* a keyword-introduced stmt
can usually be anything at all without making them global keywords (be it
"as" after "import", or "indexing" after "for", or ...).  The only thing
Python is missing then is a lisp stmt <wink>:

    lisp (setq a (+ a 1))

Other than that, the JPython hack looks cool too.

Note that SSKs (stmt-specific keywords) present a new problem to colorizers
(or moral equivalents like font-lock), and to other tools that do more than
a trivial parse.

the-road-to-p3k-has-toll-booths-ly y'rs  - tim





From tim_one at email.msn.com  Fri Aug 11 02:44:08 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 10 Aug 2000 20:44:08 -0400
Subject: PEP praise (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCAECBGPAA.tim_one@email.msn.com>

[Ka-Ping Yee]
> ...
> Surely a PEP isn't required for a couple of built-in functions that
> are simple and well understood?  You can just call thumbs-up or
> thumbs-down and be done with it.

Only half of that is true, and even then only partially:  if the verdict is
thumbs-up, *almost* cool, except that newcomers delight in pestering "but
how come it wasn't done *my* way instead?".  You did a bit of that yourself
in your day, you know <wink>.  We're hoping the stream of newcomers never
ends, but the group of old-timers willing and able to take an hour or two to
explain the past in detail is actually dwindling (heck, you can count the
Python-Dev members chipping in on Python-Help with a couple of fingers, and
if anything fewer still active on c.l.py).

If it's thumbs-down, in the absence of a PEP it's much worse:  it will just
come back again, and again, and again, and again.  The sheer repetition in
these endlessly recycled arguments all but guarantees that most old-timers
ignore these threads completely.

A prime purpose of the PEPs is to be the community's collective memory, pro
or con, so I don't have to be <wink>.  You surely can't believe this is the
first time these particular functions have been pushed for core adoption!?
If not, why do we need to have the same arguments all over again?  It's not
because we're assholes, nor because there's anything truly new here;
it's simply because a mailing list has no coherent memory.

Not so much as a comma gets changed in an ANSI or ISO std without an
elaborate pile of proposal paperwork and formal reviews.  PEPs are a very
lightweight mechanism compared to that.  And it would take you less time to
write a PEP for this than I alone spent reading the 21 msgs waiting for me
in this thread today.  Multiply the savings by billions <wink>.

world-domination-has-some-scary-aspects-ly y'rs  - tim





From Vladimir.Marangozov at inrialpes.fr  Fri Aug 11 03:59:30 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 03:59:30 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
Message-ID: <200008110159.DAA09540@python.inrialpes.fr>

I'm looking at preventing core dumps due to recursive calls. With
simple nested call counters for every function in object.c, limited to
500 levels of recursion, I think this works okay for repr, str and
print. It solves most of the complaints, like:

class Crasher:
	def __str__(self): print self

print Crasher()

With such protection, instead of a core dump, we'll get an exception:

RuntimeError: Recursion too deep


So far, so good. 500 nested calls to repr, str or print are likely
to be programming bugs. Now I wonder whether it's a good idea to do
the same thing for getattr and setattr, to avoid crashes like:

class Crasher:
	def __getattr__(self, x): return self.x 

Crasher().bonk

Solving this the same way is likely to slow things down a bit, but
would prevent the crash. OTOH, in a complex object hierarchy with
tons of delegation and/or lookup dispatching, 500 nested calls is
probably not enough. Or am I wondering too much? Opinions?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From bwarsaw at beopen.com  Fri Aug 11 05:00:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 10 Aug 2000 23:00:32 -0400 (EDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
References: <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com>
	<200008100036.TAA26235@cj20424-a.reston1.va.home.com>
Message-ID: <14739.27728.960099.342321@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    GvR> Alas, I'm not sure how easy it will be.  The parser generator
    GvR> will probably have to be changed to allow you to indicate not
    GvR> to do a resword lookup at certain points in the grammar.  I
    GvR> don't know where to start. :-(

Yet another reason why it would be nice to (eventually) merge the
parsing technology in CPython and JPython.

i-don't-wanna-work-i-jes-wanna-bang-on-my-drum-all-day-ly y'rs,
-Barry



From MarkH at ActiveState.com  Fri Aug 11 08:15:00 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 11 Aug 2000 16:15:00 +1000
Subject: [Python-Dev] Patches and checkins for 1.6
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>

I would like a little guidance on how to handle patches during this 1.6
episode.

My understanding of CVS tells me that 1.6 has forked from the main
development tree.  Any work done in the 1.6 branch will need to also be
done in the main branch.  Is this correct?

If so, it means that all patches assigned to me need to be applied and
tested twice, which involves completely refetching the entire tree, and
rebuilding the world?

Given that 1.6 appears to be mainly an exercise in posturing by CNRI, is it
reasonable that I hold some patches off while I'm working with 1.6, and
check them in when I move back to the main branch?  Surely no one will
stick with 1.6 in the long (or even medium) term, once all active
development of that code ceases?

Of course, this wouldn't include critical bugs, but no one is mad enough to
assign them to me anyway <wink>

Confused-and-in-need-of-a-machine-upgrade ly,

Mark.




From tim_one at email.msn.com  Fri Aug 11 08:48:56 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 11 Aug 2000 02:48:56 -0400
Subject: [Python-Dev] Patches and checkins for 1.6
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIECPGPAA.tim_one@email.msn.com>

[Mark Hammond]
> I would like a little guidance on how to handle patches during this
> 1.6 episode.
>
> My understanding of CVS tells me that 1.6 has forked from the
> main development tree.  Any work done in the 1.6 branch will need
> to also be done in the main branch.  Is this correct?

Don't look at me -- I first ran screaming in terror from CVS tricks more
than a decade ago, and haven't looked back.  OTOH, I don't know of *any*
work done in the 1.6 branch yet that needs also to be done in the 2.0
branch.  Most of what Fred Drake has been doing is in the other direction,
and the rest has been fixing buglets unique to 1.6.

> If so, it means that all patches assigned to me need to be applied
> and tested twice, which involves completely refetching the entire
> tree, and rebuilding the world?

Patches with new features should *not* go into the 1.6 branch at all!  1.6
is meant to reflect only work that CNRI has clear claims to, plus whatever
bugfixes are needed to make that a good release.  Actual cash dollars for
Unicode development were funneled through CNRI, and that's why the Unicode
features are getting backstitched into it.  They're unique, though.

> Given that 1.6 appears to be mainly an exercise in posturing by
> CNRI,

Speaking on behalf of BeOpen PythonLabs, 1.6 is a milestone in Python
development, worthy of honor, praise and repeated downloading by all.  We at
BeOpen PythonLabs regret the unfortunate misconceptions that have arisen
about its true nature, and fully support CNRI's wise decision to force a
release of Python 1.6 in the public interest.

> is it reasonable that I hold some patches off while I'm working
> with 1.6, and check them in when I move back to the main branch?

I really don't know what you're doing.  If you find a bug in 1.6 that's also
a bug in 2.0, it should go without saying that we'd like that fixed ASAP in
2.0 as well.  But since that went without saying, and you seem to be saying
something else, I'm not sure what you're asking.  If you're asking whether
you're allowed to maximize your own efficiency, well, only Guido can force
you to do something self-damaging <wink>.

> Surely no one will stick with 1.6 in the long (or even
> medium) term, once all active development of that code ceases?

Active development of the 1.6 code has already ceased, far as I can tell.
Maybe some more Unicode patches?  Other than that, just bugfixes as needed.
It's down to a trickle.  We're aiming for a quick beta cycle on 1.6b1, and--
last I heard, and barring scads of fresh bug reports --intending to release
1.6 final next.  Then bugs opened against 1.6 will be answered by "fixed in
2.0".

> Of course, this wouldn't include critical bugs, but no one is mad
> enough to assign them to me anyway <wink>
>
> Confused-and-in-need-of-a-machine-upgrade ly,

And we'll buy you one, too, if you promise to use it to fix the test_fork1
family of bugs on SMP Linux boxes!

don't-forget-that-patches-to-1.6-still-need-cnri-release-forms!-
    and-that-should-clarify-everything-ly y'rs  - tim





From gstein at lyra.org  Fri Aug 11 09:07:29 2000
From: gstein at lyra.org (Greg Stein)
Date: Fri, 11 Aug 2000 00:07:29 -0700
Subject: [Python-Dev] Patches and checkins for 1.6
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 11, 2000 at 04:15:00PM +1000
References: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>
Message-ID: <20000811000729.M19525@lyra.org>

On Fri, Aug 11, 2000 at 04:15:00PM +1000, Mark Hammond wrote:
>...
> If so, it means that all patches assigned to me need to be applied and
> tested twice, which involves completely refetching the entire tree, and
> rebuilding the world?

Just fetch two trees.

c:\src16
c:\src20

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From mal at lemburg.com  Fri Aug 11 10:04:48 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 11 Aug 2000 10:04:48 +0200
Subject: [Python-Dev] Freezing unicode codecs.
References: <3993287a.1852013@smtp.worldonline.dk>
Message-ID: <3993B3A0.28500B22@lemburg.com>

Finn Bock wrote:
> 
> While porting the unicode API and the encoding modules to JPython I came
> across a problem which may also (or maybe not) exists in CPython.
> 
> jpythonc is a compiler for jpython which try to track dependencies
> between modules in an attempt to detect which modules an application or
> applet uses. I have the impression that some of the freeze tools for
> CPython does something similar.
> 
> A call to unicode("abc", "cp1250") and "abc".encode("cp1250") will cause
> the encoding.cp1250 module to be loaded as a side effect. The freeze
> tools will have a hard time figuring this out by scanning the python
> source.
> 
> For JPython I'm leaning towards making it a requirement that the
> encodings must be loading explicit from somewhere in application. Adding
> 
>    import encoding.cp1250
> 
> somewhere in the application will allow jpythonc to include this python
> module in the frozen application.
> 
> How does CPython solve this?

It doesn't. The design of the codec registry is such that it
uses search functions which then locate and load the codecs.
These search functions can implement whatever scheme they desire
for the lookup and also with respect to loading the codec, e.g. they
could get the data from a ZIP archive.

This design was chosen to allow drop-in configuration of the
Python codecs. Applications can easily add new codecs to the
registry by registering a new search function (and without
having to copy files into the encodings Lib subdir).
 
When it comes to making an application freezable, I'd suggest
adding explicit imports to some freeze support module in the
application. There are other occasions where this is needed
too, e.g. for packages using lazy import of modules such
as mx.DateTime.

This module would then make sure freeze.py finds the right
modules to include in its output.

> PS. The latest release of the JPython errata have full unicode support
> and includes the "sre" module and unicode codecs.
> 
>     http://sourceforge.net/project/filelist.php?group_id=1842

Cool :-)
 
> regards,
> finn
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From nowonder at nowonder.de  Fri Aug 11 12:29:04 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Fri, 11 Aug 2000 10:29:04 +0000
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl>
Message-ID: <3993D570.7578FE71@nowonder.de>

After sleeping over it, I noticed that at least
BaseHTTPServer and ftplib also use a similar
algorithm to get a fully qualified domain name.

Together with smtplib there are four occurrences
of the algorithm (2 in BaseHTTPServer). I think
it would be good to have not four, but one
implementation.

First I thought it could be socket.get_fqdn(),
but it seems a bit troublesome to write it in C.

Should this go somewhere? If yes, where should
it go?

I'll happily prepare a patch as soon as I know
where to put it.
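(As a sketch of the consolidated helper: in today's standard library this
ended up in the socket module as socket.getfqdn(), which falls back to the
local host's name for an empty argument:)

```python
import socket

# socket.getfqdn() is the stdlib's one-place implementation of the
# algorithm discussed here; an empty name falls back to the local host,
# and gethostbyaddr() aliases are searched for a dotted name.
name = socket.getfqdn("")
print(name)  # e.g. 'host.example.com'
```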

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From moshez at math.huji.ac.il  Fri Aug 11 10:40:08 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 11 Aug 2000 11:40:08 +0300 (IDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <3993D570.7578FE71@nowonder.de>
Message-ID: <Pine.GSO.4.10.10008111136390.27824-100000@sundial>

On Fri, 11 Aug 2000, Peter Schneider-Kamp wrote:

> First I thought it could be socket.get_fqdn(),
> but it seems a bit troublesome to write it in C.
> 
> Should this go somewhere?

Yes. We need some OnceAndOnlyOnce mentality here...

> If yes, where should
> it go?

Good question. You'll notice that SimpleHTTPServer imports shutil for
copyfileobj, because I had no good answer to a similar question. GS seems
to think "put it somewhere" is a good enough answer. I think I might
agree.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From barry at scottb.demon.co.uk  Fri Aug 11 13:42:11 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Fri, 11 Aug 2000 12:42:11 +0100
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008110159.DAA09540@python.inrialpes.fr>
Message-ID: <000401c00389$2fa577b0$060210ac@private>

Why not set a limit in the interpreter? Fixing this for every call in object.c
seems like a lot of hard work and will always leave holes.

For embedding Python, being able to control the recursion depth of the interpreter
is very useful. I would want to be able to set, from C, the max call depth limit
and the current call depth limit. I'd expect Python to set a min call depth limit.

		Barry


> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Vladimir Marangozov
> Sent: 11 August 2000 03:00
> To: Python core developers
> Subject: [Python-Dev] Preventing recursion core dumps
> 
> 
> 
> I'm looking at preventing core dumps due to recursive calls. With
> simple nested call counters for every function in object.c, limited to
> 500 levels deep recursions, I think this works okay for repr, str and
> print. It solves most of the complaints, like:
> 
> class Crasher:
> 	def __str__(self): print self
> 
> print Crasher()
> 
> With such protection, instead of a core dump, we'll get an exception:
> 
> RuntimeError: Recursion too deep
> 
> 
> So far, so good. 500 nested calls to repr, str or print are likely
> to be programming bugs. Now I wonder whether it's a good idea to do
> the same thing for getattr and setattr, to avoid crashes like:
> 
> class Crasher:
> 	def __getattr__(self, x): return self.x 
> 
> Crasher().bonk
> 
> Solving this the same way is likely to slow things down a bit, but
> would prevent the crash. OTOH, in a complex object hierarchy with
> tons of delegation and/or lookup dispatching, 500 nested calls is
> probably not enough. Or am I wondering too much? Opinions?
> 
> -- 
>        Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
> http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 



From guido at beopen.com  Fri Aug 11 14:47:09 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 07:47:09 -0500
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: Your message of "Fri, 11 Aug 2000 03:59:30 +0200."
             <200008110159.DAA09540@python.inrialpes.fr> 
References: <200008110159.DAA09540@python.inrialpes.fr> 
Message-ID: <200008111247.HAA03687@cj20424-a.reston1.va.home.com>

> I'm looking at preventing core dumps due to recursive calls. With
> simple nested call counters for every function in object.c, limited to
> 500 levels deep recursions, I think this works okay for repr, str and
> print. It solves most of the complaints, like:
> 
> class Crasher:
> 	def __str__(self): print self
> 
> print Crasher()
> 
> With such protection, instead of a core dump, we'll get an exception:
> 
> RuntimeError: Recursion too deep
> 
> 
> So far, so good. 500 nested calls to repr, str or print are likely
> to be programming bugs. Now I wonder whether it's a good idea to do
> the same thing for getattr and setattr, to avoid crashes like:
> 
> class Crasher:
> 	def __getattr__(self, x): return self.x 
> 
> Crasher().bonk
> 
> Solving this the same way is likely to slow things down a bit, but
> would prevent the crash. OTOH, in a complex object hierarchy with
> tons of delegation and/or lookup dispatching, 500 nested calls is
> probably not enough. Or am I wondering too much? Opinions?

In your examples there's recursive Python code involved.  There's
*already* a generic recursion check for that, but the limit is too
high (the latter example segfaults for me too, while a simple def f():
f() gives a RuntimeError).

It seems better to tune the generic check than to special-case str,
repr, and getattr.
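The generic check in question is the interpreter's recursion limit, which can be exercised and tuned from Python itself. A minimal sketch (the limit value here is illustrative, not a recommendation):

```python
import sys

# Tuning the generic recursion check rather than special-casing
# individual functions: lower the interpreter-wide limit so Python
# raises RuntimeError before the C stack overflows.
sys.setrecursionlimit(2000)

def f():
    f()

try:
    f()
except RuntimeError:  # later Pythons raise RecursionError, a subclass
    print("recursion limit hit cleanly, no core dump")
```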

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Aug 11 14:55:29 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 07:55:29 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi
 ng amount of data sent.
Message-ID: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>

I just noticed this.  Is this true?  Shouldn't we change send() to
raise an error instead of returning a small number?  (The number of
bytes written can be an attribute of the exception.)

Don't look at me for implementing this, sorry, no time...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

------- Forwarded Message

Date:    Thu, 10 Aug 2000 16:39:48 -0700
From:    noreply at sourceforge.net
To:      scott at chronis.pobox.com, 0 at delerium.i.sourceforge.net,
	 python-bugs-list at python.org
Subject: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi
	  ng amount of data sent.

Bug #111620, was updated on 2000-Aug-10 16:39
Here is a current snapshot of the bug.

Project: Python
Category: Library
Status: Open
Resolution: None
Bug Group: None
Priority: 5
Summary: lots of use of send() without verifying amount of data sent.

Details: a quick grep of the standard python library (below) shows that there
is lots of unchecked use of the send() function.  Every unix system I've ever
used states that send() returns the number of bytes sent, which can be less
than the length of the string.  Using socket.send(s) without verifying that
the return value is equal to the length of s is careless and can result in
loss of data.

I just submitted a patch for smtplib's use of send(), have patched a piece of
Zope the same way, and get the feeling that it's becoming standard to call
send() without checking that the amount of data sent is the intended amount.
While this is OK for a quick script, I don't feel it's OK for library code or
anything that might be used in production.

scott

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=111620&group_id=5470

_______________________________________________
Python-bugs-list maillist  -  Python-bugs-list at python.org
http://www.python.org/mailman/listinfo/python-bugs-list

------- End of Forwarded Message




From gmcm at hypernet.com  Fri Aug 11 14:32:44 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 08:32:44 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of d
In-Reply-To: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>
Message-ID: <1246125329-123433164@hypernet.com>

[bug report] 
> Details: a quick grep of the standard python library (below)
> shows that there is lots of unchecked use of the send() 
> function.
[Guido]
> I just noticed this.  Is this true?  Shouldn't we change send()
> to raise an error instead of returning a small number?  (The
> number of bytes written can be an attribute of the exception.)

No way! You'd break 90% of my sockets code! People who 
don't want to code proper sends / recvs can use that sissy 
makefile junk.

- Gordon



From thomas at xs4all.net  Fri Aug 11 14:31:43 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 14:31:43 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of data sent.
In-Reply-To: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 11, 2000 at 07:55:29AM -0500
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>
Message-ID: <20000811143143.G17171@xs4all.nl>

On Fri, Aug 11, 2000 at 07:55:29AM -0500, Guido van Rossum wrote:

> I just noticed this.  Is this true?  Shouldn't we change send() to
> raise an error instead of returning a small number?  (The number of
> bytes written can be an attribute of the exception.)

This would break a lot of code. (probably all that use send, with or without
return-code checking.) I would propose a 'send_all' or some such instead,
which would keep sending until either a real error occurs, or all data is
sent (possibly with a timeout ?). And most uses of send could be replaced by
send_all, both in the std. library and in user code.
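A sketch of such a send_all, looping until every byte is accepted (the standard library later gained socket.sendall(), which does this in C; the zero-return check here is an assumption about how a broken connection would surface):

```python
def send_all(sock, data):
    # send() may legitimately accept fewer bytes than it was given,
    # so keep calling it until all data is sent or a real error occurs.
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise IOError("connection broken")
        total += sent
    return total
```

Any object with a send() method works, so it drops in wherever a plain socket is used today.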

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From Vladimir.Marangozov at inrialpes.fr  Fri Aug 11 14:39:36 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 14:39:36 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111247.HAA03687@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 11, 2000 07:47:09 AM
Message-ID: <200008111239.OAA15818@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> It seems better to tune the generic check than to special-case str,
> repr, and getattr.

Right. This would be a step forward, at least for recursive Python code
(which is the most common complaint).  Reducing the current value
by half, i.e. setting MAX_RECURSION_DEPTH = 5000 works for me (Linux & AIX)

Agreement on 5000?

Doesn't solve the problem for C code (extensions) though...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Fri Aug 11 15:19:38 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 15:19:38 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <000401c00389$2fa577b0$060210ac@private> from "Barry Scott" at Aug 11, 2000 12:42:11 PM
Message-ID: <200008111319.PAA16192@python.inrialpes.fr>

Barry Scott wrote:
> 
> Why not set a limit in the interpreter? Fixing this for every call in object.c
> seems like a lot of hard work and will always leave holes.

Indeed.

> 
> For embedding Python, being able to control the recursion depth of the
> interpreter is very useful. I would want to be able to set, from C, the
> max call depth limit and the current call depth limit.

Apart from exporting MAX_RECURSION_DEPTH as a variable (Py_MaxRecursionDepth),
I don't see what you mean by a current call depth limit.

> I'd expect Python to set a min call depth limit.

I don't understand this. Could you elaborate?
Are you implying the introduction of a public function
(ex. Py_SetRecursionDepth) that does some value checks?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From paul at prescod.net  Fri Aug 11 15:19:05 2000
From: paul at prescod.net (Paul Prescod)
Date: Fri, 11 Aug 2000 08:19:05 -0500
Subject: [Python-Dev] Lockstep iteration - eureka!
References: Your message of "Wed, 09 Aug 2000 02:37:07 MST."            
		 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
		 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <3993FD49.C7E71108@prescod.net>

Just van Rossum wrote:
> 
> ...
>
>        for <index> indexing <element> in <seq>:
>            ...

 
Let me throw out another idea. What if sequences just had .items()
methods?

j=range(0,10)

for index, element in j.items():
    ...

While we wait for the sequence "base class" we could provide helper
functions that makes the implementation of both eager and lazy versions
easier.
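The eager and lazy helpers could be sketched like this (the lazy version assumes generator support; Python 2.3 eventually added the built-in enumerate() for exactly this pattern):

```python
def eager_items(seq):
    """Eager version: build the whole (index, element) list up front."""
    return [(i, seq[i]) for i in range(len(seq))]

def lazy_items(seq):
    """Lazy version: yield (index, element) pairs one at a time."""
    i = 0
    for element in seq:
        yield i, element
        i += 1
```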

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From guido at beopen.com  Fri Aug 11 16:19:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 09:19:33 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of data sent.
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:31:43 +0200."
             <20000811143143.G17171@xs4all.nl> 
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>  
            <20000811143143.G17171@xs4all.nl> 
Message-ID: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>

> > I just noticed this.  Is this true?  Shouldn't we change send() to
> > raise an error instead of returning a small number?  (The number of
> > bytes written can be an attribute of the exception.)
> 
> This would break a lot of code. (probably all that use send, with or without
> return-code checking.) I would propose a 'send_all' or some such instead,
> which would keep sending until either a real error occurs, or all data is
> sent (possibly with a timeout ?). And most uses of send could be replaced by
> send_all, both in the std. library and in user code.

Really?!?!

I just read the man page for send() (Red Hat linux 6.1) and it doesn't
mention sending fewer than all bytes at all.  In fact, while it says
that the return value is the number of bytes sent, it at least
*suggests* that it will return an error whenever not everything can be
sent -- even in non-blocking mode.

Under what circumstances can send() return a smaller number?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From paul at prescod.net  Fri Aug 11 15:25:27 2000
From: paul at prescod.net (Paul Prescod)
Date: Fri, 11 Aug 2000 08:25:27 -0500
Subject: [Python-Dev] Winreg update
Message-ID: <3993FEC7.4E38B4F1@prescod.net>

I am in transit so I don't have time for a lot of back and forth email
relating to winreg. It also seems that there are a lot of people (let's
call them "back seat coders") who have vague ideas of what they want but
don't want to spend a bunch of time in a long discussion about registry
arcana. Therefore I am endeavouring to make it as easy and fast to
contribute to the discussion as possible.

I'm doing this through a Python Module Proposal format. This can also
serve as the basis of documentation.

This is really easy so I want
some real feedback this time. Distutils people, this means you! Mark! I
would love to hear Bill Tutt, Greg Stein and anyone else who claims some
knowledge of Windows!

If you're one of the people who has asked for winreg in the core then
you should respond. It isn't (IMO) sufficient to put in a hacky API to
make your life easier. You need to give something to get something. You
want windows registry support in the core -- fine, let's do it properly.

Even people with a minimal understanding of the registry should be able
to contribute: the registry isn't rocket surgery. I'll include a short
primer in this email.

All you need to do is read this email and comment on whether you agree
with the overall principle and then give your opinion on fifteen
possibly controversial issues. The "overall principle" is to steal
shamelessly from Microsoft's new C#/VB/OLE/Active-X/CRL API instead of
innovating for Python. That allows us to avoid starting the debate from
scratch. It also eliminates the feature that Mark complained about
(which was a Python-specific innovation).

The fifteen issues are mostly extensions to the API to make it easier
(convenience extensions) or more powerful (completeness extensions).
Many of them are binary: "do this, don't do that." Others are choices:
e.g. "Use tuples", "Use lists", "Use an instance".

I will try to make sense of the various responses. Some issues will have
strong consensus and I'll close those quickly. Others will require more
(hopefully not much!) discussion.

Windows Registry Primer:
========================

There are things called "keys". They aren't like Python keys so don't
think of them that way. Keys have a list of subkeys indexed by name.
Keys also have a list of "values". Values have names. Every value has a
type. In some type-definition syntax:

key is (name: string, 
     subkeys: (string : key), 
     values: (string : value ))

value is ( name: string,
       type: enumeration,
       data: (depends on enumeration) )

That's the basic model. There are various helper facilities provided by
the APIs, but really, the model is as above.
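The same model rendered as plain Python data, purely for illustration (key and value names below are made up):

```python
# Illustrative only: a key has a name, named subkeys, and named,
# typed values -- nothing more.
python_key = {
    "name": "Python",
    "subkeys": {},                                   # string -> key
    "values": {                                      # string -> value
        "InstallPath": {"type": "REG_SZ", "data": u"C:\\Python20"},
        "Version":     {"type": "REG_DWORD", "data": 20},
    },
}
software_key = {
    "name": "SOFTWARE",
    "subkeys": {"Python": python_key},
    "values": {},
}
```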

=========================================================================
Python Module Proposal
Title: Windows registry
Version: $Revision: 1.0$
Owner: paul at prescod.net (Paul Prescod)
Python-Version: 2.0
Status: Incomplete

Overview

    It is convenient for Windows users to know that a Python module to
    access the registry is always available whenever Python is installed
    on Windows.  This is especially useful for installation programs.
    There is a Windows registry module from the win32 extensions to
    Python. It is based directly on the original Microsoft APIs. This
    means that there are many backwards compatibility hacks, "reserved"
    parameters and other legacy features that are not interesting to
    most Python programmers. Microsoft is moving to a higher level API
    for languages other than C, as part of Microsoft's Common Runtime
    Library (CRL) initiative. This newer, higher level API serves as
    the basis for the module described herein.

    This higher level API would be implemented in Python and based upon 
    the low-level API. They would not be in competition: a user would 
    choose based on their preferences and needs.

Module Exports

    These are taken directly from the Common Runtime Library:

    ClassesRoot     The Windows Registry base key HKEY_CLASSES_ROOT.
    CurrentConfig   The Windows Registry base key HKEY_CURRENT_CONFIG.
    CurrentUser     The Windows Registry base key HKEY_CURRENT_USER.
    LocalMachine    The Windows Registry base key HKEY_LOCAL_MACHINE.
    DynData         The Windows Registry base key HKEY_DYN_DATA.
    PerformanceData The Windows Registry base key HKEY_PERFORMANCE_DATA.
    Users           The Windows Registry base key HKEY_USERS.

    RegistryKey     Registry key class (important class in module)

RegistryKey class Data Members

    These are taken directly from the Common Runtime Library:

    Name            Retrieves the name of the key. 
                    [Issue: full path or just name within parent?]
    SubKeyCount     Retrieves the count of subkeys.
    ValueCount      Retrieves the count of values in the key.

RegistryKey Methods

    These are taken directly from the Common Runtime Library:

    Close()
        Closes this key and flushes it to disk if the contents have 
        been modified.

    CreateSubKey( subkeyname )
        Creates a new subkey or opens an existing subkey.

     [Issue: SubKey_full_path]: Should it be possible to create a subkey 
        deeply:
        >>> LocalMachine.CreateSubKey( r"foo\bar\baz" )

        Presumably the result of this issue would also apply to every
        other method that takes a subkey parameter.

        It is not clear what the CRL API says yet (Mark?). If it says
        "yes" then we would follow it of course. If it says "no" then
        we could still consider the feature as an extension.

       [Yes] allow subkey parameters to be full paths
       [No]  require them to be a single alphanumeric name, no slashes

    DeleteSubKey( subkeyname )
        Deletes the specified subkey. To delete subkeys and all their 
        children (recursively), use DeleteSubKeyTree.

    DeleteSubKeyTree( subkeyname )
        Recursively deletes a subkey and any child subkeys. 

    DeleteValue( valuename )
        Deletes the specified value from this key.

    __cmp__( other )
	Determines whether the specified key is the same key as the
	current key.

    GetSubKeyNames()
        Retrieves an array of strings containing all the subkey names.

    GetValue( valuename )
        Retrieves the specified value.

     Registry types are converted according to the following table:

         REG_NONE: None
         REG_SZ: UnicodeType
         REG_MULTI_SZ: [UnicodeType, UnicodeType, ...]
         REG_DWORD: IntegerType
         REG_DWORD_LITTLE_ENDIAN: IntegerType
         REG_DWORD_BIG_ENDIAN: IntegerType
         REG_EXPAND_SZ: Same as REG_SZ
         REG_RESOURCE_LIST: Same as REG_BINARY
         REG_FULL_RESOURCE_DESCRIPTOR: Same as REG_BINARY
         REG_RESOURCE_REQUIREMENTS_LIST: Same as REG_BINARY
         REG_LINK: Same as REG_BINARY??? [Issue: more info needed!]

         REG_BINARY: StringType or array.array( 'c' )

     [Issue: REG_BINARY Representation]:
         How should binary data be represented as Python data?

         [String] The win32 module uses "string".
         [Array] I propose that an array of bytes would be better.

         One benefit of "array" is that it allows SetValue to detect
         string data as REG_SZ and array.array('c') as REG_BINARY.

    [Issue: Type getting method]
         Should there be a companion method called GetType that fetches 
         the type of a registry value? Otherwise client code would not
         be able to distinguish between (e.g.) REG_SZ and 
         REG_SZ_BINARY.

         [Yes] Add GetType( string )
         [No]  Do not add GetType

    GetValueNames()
        Retrieves a list of strings containing all the value names.

    OpenRemoteBaseKey( machinename, name )
        Opens a new RegistryKey that represents the requested key on a 
        foreign machine.

    OpenSubKey( subkeyname )
        Retrieves a subkey.

    SetValue( keyname, value )
        Sets the specified value

	Types are automatically mapped according to the following
	algorithm:

          None: REG_NONE
          String: REG_SZ
          UnicodeType: REG_SZ
          [UnicodeType, UnicodeType, ...]: REG_MULTI_SZ
          [StringType, StringType, ...]: REG_MULTI_SZ
          IntegerType: REG_DWORD
          array.array('c'): REG_BINARY
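The mapping algorithm could be sketched as a plain function (the function name is hypothetical; the type names are the proposal's strings; array typecode 'b' stands in for 'c', which only exists in Python 2):

```python
import array

def guess_reg_type(value):
    """Sketch of SetValue's automatic type mapping."""
    if value is None:
        return "REG_NONE"
    if isinstance(value, str):
        return "REG_SZ"
    if isinstance(value, list):
        return "REG_MULTI_SZ"
    if isinstance(value, int):
        return "REG_DWORD"
    if isinstance(value, array.array):
        return "REG_BINARY"
    raise TypeError("no registry mapping for %r" % (value,))
```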

       [Issue: OptionalTypeParameter]

          Should there be an optional parameter that allows you to
          specify the type explicitly? Presume that the types are 
          constants in the winreg modules (perhaps strings or 
          integers).

          [Yes] Allow other types to be specified
          [No]  People who want more control should use the underlying 
                win32 module.

Proposed Extensions

    The API above is a direct transliteration of the .NET API. It is
    somewhat underpowered and not entirely Pythonic. It is a good start
    as a basis for consensus, however, and these proposed extensions
    can be voted up or down individually.

    Two extensions are just the convenience functions (OpenRemoteKey
    and the top-level functions). Other extensions attempt to extend
    the API to support ALL features of the underlying API so that users
    never have to switch from one API to another to get a particular
    feature.

    Convenience Extension: OpenRemoteKey

        It is not clear to me why Microsoft restricts remote key opening
        to base keys. Why does it not allow a full path like this:

        >>> winreg.OpenRemoteKey( "machinename", 
                             r"HKEY_LOCAL_MACHINE\SOFTWARE\Python" )

        [Issue: Add_OpenRemoteKey]:
              [Yes] add OpenRemoteKey
              [No]  do not add

        [Issue: Remove_OpenRemoteBaseKey]
              [Remove] It's redundant!
              [Retain] For backwards compatibility

    Convenience Extension: Top-level Functions

        A huge number of registry-manipulating programs treat the
        registry namespace as "flat" and go directly to the interesting
        registry key.  These top-level functions allow the Python user
        to skip the OO key objects and go directly to what they want:

        key=OpenKey( keypath, machinename=None )
        key=CreateKey( keypath, machinename=None )
        DeleteKey( keypath, machinename=None )
        val=GetValue( keypath, valname, machinename=None )
        SetValue( keypath, valname, valdata, machinename=None )

        [Yes] Add these functions
        [No] Do not add
        [Variant] I like the idea but would change the function
                  signatures


    Completeness Extension: Type names

        If the type extensions are added to SetValue and GetValue then
        we need to decide how to represent types. It is fairly clear
        that they should be represented as constants in the module. The
        names of those constants could be the cryptic (but standard)
        Microsoft names or more descriptive, conventional names.

	Microsoft Names:

            REG_NONE
            REG_SZ
            REG_EXPAND_SZ
            REG_BINARY
            REG_DWORD
            REG_DWORD_LITTLE_ENDIAN
            REG_DWORD_BIG_ENDIAN
            REG_LINK
            REG_MULTI_SZ
            REG_RESOURCE_LIST
            REG_FULL_RESOURCE_DESCRIPTOR
            REG_RESOURCE_REQUIREMENTS_LIST

	Proposed Descriptive Names:

            NONE
            STRING
            EXPANDABLE_TEMPLATE_STRING
            BINARY_DATA
            INTEGER
            LITTLE_ENDIAN_INTEGER
            BIG_ENDIAN_INTEGER
            LINK
            STRING_LIST
            RESOURCE_LIST
            FULL_RESOURCE_DESCRIPTOR
            RESOURCE_REQUIREMENTS_LIST
             
        We could also allow both. One set would be aliases for the
        other.

        [Issue: TypeNames]:
            [MS Names]: Use the Microsoft names
            [Descriptive Names]: Use the more descriptive names
            [Both]: Use both

    Completeness Extension: Type representation

        No matter what the types are called, they must have values.

	The simplest thing would be to use the integers provided by the
	Microsoft header files.  Unfortunately integers are not at all
	self-describing so getting from the integer value to something
	human readable requires some sort of switch statement or mapping.
 
        An alternative is to use strings and map them internally to the 
        Microsoft integer constants.

        A third option is to use object instances. These instances would
        be useful for introspection and would have the following 
        attributes:

            msname (e.g. REG_SZ)
            friendlyname (e.g. String)
            msinteger (e.g. 6 )

        They would have only the following method:

            def __repr__( self ):
                "Return a useful representation of the type object"
                return "<RegType %d: %s %s>" % \
                  (self.msinteger, self.msname, self.friendlyname )

        A final option is a tuple with the three attributes described
        above.
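The instance option could look like this in full (the REG_SZ pairing of names is from the proposal's own example; the msinteger value is illustrative):

```python
class RegType:
    """One object per registry type, carrying all three
    introspective attributes described above."""
    def __init__(self, msname, friendlyname, msinteger):
        self.msname = msname
        self.friendlyname = friendlyname
        self.msinteger = msinteger
    def __repr__(self):
        return "<RegType %d: %s %s>" % (
            self.msinteger, self.msname, self.friendlyname)

REG_SZ = RegType("REG_SZ", "String", 1)
```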

        [Issue: Type_Representation]:
            [Integers]: Use Microsoft integers
            [Strings]: Use string names
            [Instances]: Use object instances with three introspective 
                         attributes
            [Tuples]: Use 3-tuples

    Completeness Extension: Type Namespace

        Should the types be declared in the top level of the module 
        (and thus show up in a "dir" or "from winreg import *") or 
        should they live in their own dictionary, perhaps called 
        "types" or "regtypes". They could also be attributes of some 
        instance.

        [Issue: Type_Namespace]:
            [Module]: winreg.REG_SZ
            [Dictionary]: winreg.types["REG_SZ"]
            [Instance]: winreg.types.REG_SZ

    Completeness Extension: Saving/Loading Keys

        The underlying win32 registry API allows the loading and saving
        of keys to filenames. Therefore these could be implemented
        easily as methods:

            def save( self, filename ):
                "Save a key to a filename"
                _winreg.SaveKey( self.keyobj, filename )

            def load( self, subkey, filename ):
                "Load a key from a filename"
                return _winreg.RegLoadKey( self.handle, subkey, 
                                           filename )

            >>> key.OpenSubKey("Python").save( "Python.reg" )
            >>> key.load( "Python", "Python.reg" )

        [Issue: Save_Load_Keys]
            [Yes] Support the saving and loading of keys
            [No]  Do not add these methods

    Completeness Extension: Security Access Flags

        The underlying win32 registry API allows security flags to be
        applied to the OpenKey method. The flags are:

             "KEY_ALL_ACCESS"
             "KEY_CREATE_LINK"
             "KEY_CREATE_SUB_KEY"
             "KEY_ENUMERATE_SUB_KEYS"
             "KEY_EXECUTE"
             "KEY_NOTIFY"
             "KEY_QUERY_VALUE"
             "KEY_READ"
             "KEY_SET_VALUE"

        These are not documented in the underlying API but should be for
        this API. This documentation would be derived from the Microsoft
        documentation. They would be represented as integer or string
        constants in the Python API and used something like this:

        key=key.OpenKey( subkeyname, winreg.KEY_READ )

        [Issue: Security_Access_Flags]
             [Yes] Allow the specification of security access flags.
             [No]  Do not allow this specification.

        [Issue: Security_Access_Flags_Representation]
             [Integer] Use the Microsoft integers
             [String]  Use string values
             [Tuples] Use (string, integer) tuples
             [Instances] Use instances with "name", "msinteger"
                         attributes

        [Issue: Security_Access_Flags_Location]
             [Top-Level] winreg.KEY_READ
             [Dictionary] winreg.flags["KEY_READ"]
             [Instance] winreg.flags.KEY_READ

    Completeness Extension: Flush

        The underlying win32 registry API has a flush method for keys.
        The documentation is as follows:

            """Writes all the attributes of a key to the registry.

            It is not necessary to call RegFlushKey to change a key.
            Registry changes are flushed to disk by the registry using
            its lazy flusher.  Registry changes are also flushed to
            disk at system shutdown.  Unlike \function{CloseKey()}, the
            \function{FlushKey()} method returns only when all the data
            has been written to the registry.  An application should
            only call \function{FlushKey()} if it requires absolute
            certainty that registry changes are on disk."""

    If all completeness extensions are implemented, the author believes
    that this API will be as complete as the underlying API so
    programmers can choose which to use based on familiarity rather 
    than feature-completeness.


-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From guido at beopen.com  Fri Aug 11 16:28:09 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 09:28:09 -0500
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:39:36 +0200."
             <200008111239.OAA15818@python.inrialpes.fr> 
References: <200008111239.OAA15818@python.inrialpes.fr> 
Message-ID: <200008111428.JAA04464@cj20424-a.reston1.va.home.com>

> > It seems better to tune the generic check than to special-case str,
> > repr, and getattr.
> 
> Right. This would be a step forward, at least for recursive Python code
> (which is the most common complaint).  Reducing the current value
> by half, i.e. setting MAX_RECURSION_DEPTH = 5000 works for me (Linux & AIX)
> 
> Agreement on 5000?

No, the __getattr__ example still dumps core for me.  With 4000 it is
fine, but this indicates that this is totally the wrong approach: I
can change the available stack size with ulimit -s and cause a core
dump anyway.  Or there could be a longer path through the C code where
more C stack is used per recursion.

We could set the maximum to 1000 and assume a "reasonable" stack size,
but that doesn't make me feel comfortable either.

It would be good if there was a way to sense the remaining available
stack, even if it wasn't portable.  Any Linux experts out there?
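One non-portable approach on Linux would be to read the process stack rlimit and derive a recursion limit from it; the bytes-per-frame figure below is a guess, not a measurement:

```python
import resource

# Read the stack size limit for this process and estimate how many
# recursion levels it can safely hold.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
if soft == resource.RLIM_INFINITY:
    soft = 8 * 1024 * 1024          # assume a common 8 MB default
frames = soft // 2048               # assume ~2 KB of C stack per level
print("suggested recursion limit:", frames)
```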

> Doesn't solve the problem for C code (extensions) though...

That wasn't what started this thread.  Bugs in extensions are just that.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From gvwilson at nevex.com  Fri Aug 11 15:39:38 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Fri, 11 Aug 2000 09:39:38 -0400 (EDT)
Subject: [Python-Dev] PEP 0211: Linear Algebra Operators
Message-ID: <Pine.LNX.4.10.10008110936390.13482-200000@akbar.nevex.com>

Hi, everyone.  Please find attached the latest version of PEP-0211,
"Adding New Linear Algebra Operators to Python".  As I don't have write
access to the CVS repository, I'd be grateful if someone could check this
in for me.  Please send comments directly to me (gvwilson at nevex.com); I'll
summarize, update the PEP, and re-post.

Thanks,
Greg
-------------- next part --------------
PEP: 211
Title: Adding New Linear Algebra Operators to Python
Version: $Revision$
Owner: gvwilson at nevex.com (Greg Wilson)
Python-Version: 2.1
Created: 15-Jul-2000
Status: Draft
Post-History:


Introduction

    This PEP describes a proposal to add linear algebra operators to
    Python 2.0.  It discusses why such operators are desirable, and
    alternatives that have been considered and discarded.  This PEP
    summarizes discussions held in mailing list forums, and provides
    URLs for further information, where appropriate.  The CVS revision
    history of this file contains the definitive historical record.


Proposal

    Add a single new infix binary operator '@' ("across"), and
    corresponding special methods "__across__()" and "__racross__()".
    This operator will perform mathematical matrix multiplication on
    NumPy arrays, and generate cross-products when applied to built-in
    sequence types.  No existing operator definitions will be changed.
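How the new operator might dispatch can be sketched in plain Python (the __across__/__racross__ names are the PEP's; the dispatch logic and the Seq class are assumptions for illustration -- Python 3.5 eventually added '@' via __matmul__ instead):

```python
def across(a, b):
    """Stand-in for the proposed '@' operator's dispatch."""
    if hasattr(a, "__across__"):
        return a.__across__(b)
    if hasattr(b, "__racross__"):
        return b.__racross__(a)
    raise TypeError("unsupported operand types for @")

class Seq:
    """Sequence-like class: '@' yields the cross-product,
    as the proposal describes for built-in sequence types."""
    def __init__(self, items):
        self.items = list(items)
    def __across__(self, other):
        return [(x, y) for x in self.items for y in other.items]
```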


Background

    Computers were invented to do arithmetic, as was the first
    high-level programming language, Fortran.  While Fortran was a
    great advance on its machine-level predecessors, there was still a
    very large gap between its syntax and the notation used by
    mathematicians.  The most influential effort to close this gap was
    APL [1]:

        The language [APL] was invented by Kenneth E. Iverson while at
        Harvard University. The language, originally titled "Iverson
        Notation", was designed to overcome the inherent ambiguities
        and points of confusion found when dealing with standard
        mathematical notation. It was later described in 1962 in a
        book simply titled "A Programming Language" (hence APL).
        Towards the end of the sixties, largely through the efforts of
        IBM, the computer community gained its first exposure to
        APL. Iverson received the Turing Award in 1980 for this work.

    APL's operators supported both familiar algebraic operations, such
    as vector dot product and matrix multiplication, and a wide range
    of structural operations, such as stitching vectors together to
    create arrays.  Its notation was exceptionally cryptic: many of
    its symbols did not exist on standard keyboards, and expressions
    had to be read right to left.

    Most subsequent work on numerical languages, such as Fortran-90,
    MATLAB, and Mathematica, has tried to provide the power of APL
    without the obscurity.  Python's NumPy [2] has most of the
    features that users of such languages expect, but these are
    provided through named functions and methods, rather than
    overloaded operators.  This makes NumPy clumsier than its
    competitors.

    One way to make NumPy more competitive is to provide greater
    syntactic support in Python itself for linear algebra.  This
    proposal therefore examines the requirements that new linear
    algebra operators in Python must satisfy, and proposes a syntax
    and semantics for those operators.


Requirements

    The most important requirement is that there be minimal impact on
    the existing definition of Python.  The proposal must not break
    existing programs, except possibly those that use NumPy.

    The second most important requirement is to be able to do both
    elementwise and mathematical matrix multiplication using infix
    notation.  The nine cases that must be handled are:

        |5 6| *   9   = |45 54|      MS: matrix-scalar multiplication
        |7 8|           |63 72|

          9   * |5 6| = |45 54|      SM: scalar-matrix multiplication
                |7 8|   |63 72|

        |2 3| * |4 5| = |8 15|       VE: vector elementwise multiplication


        |2 3| *  |4|  =   23         VD: vector dot product
                 |5|

         |2|  * |4 5| = | 8 10|      VO: vector outer product
         |3|            |12 15|

        |1 2| * |5 6| = | 5 12|      ME: matrix elementwise multiplication
        |3 4|   |7 8|   |21 32|

        |1 2| * |5 6| = |19 22|      MM: mathematical matrix multiplication
        |3 4|   |7 8|   |43 50|

        |1 2| * |5 6| = |19 22|      VM: vector-matrix multiplication
                |7 8|

        |5 6| *  |1|  =   |17|       MV: matrix-vector multiplication
        |7 8|    |2|      |23|

    Note that 1-dimensional vectors are treated as rows in VM, as
    columns in MV, and as both in VD and VO.  Both are special cases
    of 2-dimensional matrices (1xN and Nx1 respectively).  It may
    therefore be reasonable to define the new operator only for
    2-dimensional arrays, and provide an easy (and efficient) way for
    users to convert 1-dimensional structures to 2-dimensional.
    The behavior of a new multiplication operator applied to built-in
    types may then:

    (a) be a parsing error (possible only if a constant is one of the
        arguments, since names are untyped in Python);

    (b) generate a runtime error; or

    (c) be derived by plausible extension from its behavior in the
        two-dimensional case.
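    To pin down the intended semantics, the ME and MM cases can be
    sketched in plain Python on nested lists (the helper names below
    are invented for illustration only):

```python
def elementwise_mul(a, b):
    # ME: multiply corresponding entries of two equally-shaped matrices.
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def matrix_mul(a, b):
    # MM: mathematical matrix product; len(a[0]) must equal len(b).
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# elementwise_mul(A, B) reproduces the ME example above,
# matrix_mul(A, B) the MM example.
```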

    Third, syntactic support should be considered for three other
    operations:

                         T
    (a) transposition:  A   => A[j, i] for A[i, j]

                         -1
    (b) inverse:        A   => A' such that A' * A = I (the identity matrix)

    (c) solution:       A/b => x  such that A * x = b
                        A\b => x  such that x * A = b

    With regard to (c), it is worth noting that the two syntaxes used
    were invented by programmers, not mathematicians.  Mathematicians
    do not have a standard, widely-used notation for matrix solution.
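    For the 2x2 case, the meaning of "A/b" above can be made concrete
    with Cramer's rule (the "solve2" helper is hypothetical, shown only
    to pin down the semantics):

```python
def solve2(A, b):
    # Solve A * x = b for a 2x2 matrix A, assuming det(A) != 0.
    (a11, a12), (a21, a22) = A
    b1, b2 = b
    det = a11 * a22 - a12 * a21
    return [(b1 * a22 - b2 * a12) / det,
            (a11 * b2 - a21 * b1) / det]
```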

    It is also worth noting that dozens of matrix inversion and
    solution algorithms are widely used.  MATLAB and its kin bind
    their inversion and/or solution operators to one which is
    reasonably robust in most cases, and require users to call
    functions or methods to access others.

    Fourth, confusion between Python's notation and those of MATLAB
    and Fortran-90 should be avoided.  In particular, mathematical
    matrix multiplication (case MM) should not be represented as '.*',
    since:

    (a) MATLAB uses prefix-'.' forms to mean 'elementwise', and raw
        forms to mean "mathematical" [4]; and

    (b) even if the Python parser can be taught how to handle dotted
        forms, '1.*A' will still be visually ambiguous [4].

    One anti-requirement is that new operators are not needed for
    addition, subtraction, bitwise operations, and so on, since
    mathematicians already treat them elementwise.


Proposal:

    The meanings of all existing operators will be unchanged.  In
    particular, 'A*B' will continue to be interpreted elementwise.
    This takes care of the cases MS, SM, VE, and ME, and ensures
    minimal impact on existing programs.

    A new operator '@' (pronounced "across") will be added to Python,
    along with two special methods, "__across__()" and
    "__racross__()", with the usual semantics.
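    Since Python has no such hook today, the intended dispatch can be
    sketched with a plain function standing in for the '@' operator
    (everything below is illustrative, not part of the proposal itself):

```python
def across(a, b):
    # Dispatch as the proposed '@' would: try the left operand's
    # __across__, then fall back to the right operand's __racross__.
    if hasattr(a, "__across__"):
        return a.__across__(b)
    if hasattr(b, "__racross__"):
        return b.__racross__(a)
    raise TypeError("unsupported operand types for @")

class Matrix:
    def __init__(self, rows):
        self.rows = rows
    def __across__(self, other):
        # Mathematical matrix product (the MM case).
        b = other.rows
        return Matrix([[sum(row[k] * b[k][j] for k in range(len(b)))
                        for j in range(len(b[0]))]
                       for row in self.rows])
```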

    NumPy will overload "@" to perform mathematical multiplication of
    arrays where shapes permit, and to throw an exception otherwise.
    The matrix class's implementation of "@" will treat built-in
    sequence types as if they were column vectors.  This takes care of
    the cases MM and MV.

    An attribute "T" will be added to the NumPy array type, such that
    "m.T" is:

    (a) the transpose of "m" for a 2-dimensional array

    (b) the 1xN matrix transpose of "m" if "m" is a 1-dimensional
        array; or

    (c) a runtime error for an array with rank >= 3.

    This attribute will alias the memory of the base object.  NumPy's
    "transpose()" function will be extended to turn built-in sequence
    types into row vectors.  This takes care of the VM, VD, and VO
    cases.  We propose an attribute because:

    (a) the resulting notation is similar to the 'superscript T' (at
        least, as similar as ASCII allows), and

    (b) it signals that the transposition aliases the original object.

    No new operators will be defined to mean "solve a set of linear
    equations", or "invert a matrix".  Instead, NumPy will define a
    value "inv", which will be recognized by the exponentiation
    operator, such that "A ** inv" is the inverse of "A".  This is
    similar in spirit to NumPy's existing "newaxis" value.
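    A minimal sketch of the "** inv" idea, assuming a sentinel object
    recognized by __pow__ (the class and names are invented here; the
    2x2 inverse uses the adjugate/determinant formula):

```python
inv = object()  # sentinel value, in the spirit of NumPy's "newaxis"

class Mat2:
    def __init__(self, rows):
        self.rows = rows
    def __pow__(self, p):
        if p is inv:
            # 2x2 inverse: swap diagonal, negate off-diagonal, divide by det.
            (a, b), (c, d) = self.rows
            det = a * d - b * c
            return Mat2([[d / det, -b / det],
                         [-c / det, a / det]])
        raise TypeError("only ** inv is sketched here")
```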

    (Optional) When applied to sequences, the operator will return a
    list of tuples containing the cross-product of their elements in
    left-to-right order:

    >>> [1, 2] @ (3, 4)
    [(1, 3), (1, 4), (2, 3), (2, 4)]

    >>> [1, 2] @ (3, 4) @ (5, 6)
    [(1, 3, 5), (1, 3, 6), 
     (1, 4, 5), (1, 4, 6),
     (2, 3, 5), (2, 3, 6),
     (2, 4, 5), (2, 4, 6)]

    This will require the same kind of special support from the parser
    as chained comparisons (such as "a<b<c<=d").  However, it would
    permit the following:

    >>> for (i, j) in [1, 2] @ [3, 4]:
    >>>     print i, j
    1 3
    1 4
    2 3
    2 4

    as a short-hand for the common nested loop idiom:

    >>> for i in [1, 2]:
    >>>    for j in [3, 4]:
    >>>        print i, j
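    The proposed sequence behavior can be emulated today with an
    ordinary function (the name "cross" is invented for illustration):

```python
def cross(*seqs):
    # Build the left-to-right cross-product of any number of sequences,
    # mimicking the chained '@' examples above.
    result = [()]
    for seq in seqs:
        result = [t + (x,) for t in result for x in seq]
    return result
```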

    Response to the 'lockstep loop' questionnaire [5] indicated that
    newcomers would be comfortable with this (so comfortable, in fact,
    that most of them interpreted most multi-loop 'zip' syntaxes [6]
    as implementing single-stage nesting).


Alternatives:

    01. Don't add new operators --- stick to functions and methods.

    Python is not primarily a numerical language.  It is not worth
    complexifying the language for this special case --- NumPy's
    success is proof that users can and will use functions and methods
    for linear algebra.

    On the positive side, this maintains Python's simplicity.  Its
    weakness is that support for real matrix multiplication (and, to a
    lesser extent, other linear algebra operations) is frequently
    requested, as functional forms are cumbersome for lengthy
    formulas, and do not respect the operator precedence rules of
    conventional mathematics.  In addition, the method form is
    asymmetric in its operands.

    02. Introduce prefixed forms of existing operators, such as "@*"
        or "~*", or use boxed forms, such as "[*]" or "%*%".

    There are (at least) three objections to this.  First, either form
    seems to imply that all operators exist in both forms.  This
    introduces more new entities than the problem merits, and would
    require the addition of many new overloadable methods, such as
    __at_mul__.

    Second, while it is certainly possible to invent semantics for
    these new operators for built-in types, this would be a case of
    the tail wagging the dog, i.e. of letting the existence of a
    feature "create" a need for it.

    Finally, the boxed forms make human parsing more complex, e.g.:

        A[*] = B    vs.    A[:] = B

    03. (From Moshe Zadka [7], and also considered by Huaiyu Zhu [8]
        in his proposal [9]) Retain the existing meaning of all
        operators, but create a behavioral accessor for arrays, such
        that:

            A * B

        is elementwise multiplication (ME), but:

            A.m() * B.m()

        is mathematical multiplication (MM).  The method "A.m()" would
        return an object that aliased A's memory (for efficiency), but
        which had a different implementation of __mul__().
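    In outline, the accessor could look like this (a sketch on nested
    lists; a real implementation would alias the Numeric array's memory
    rather than copy it):

```python
class MathView:
    # Wraps a matrix so that '*' means the mathematical product.
    # The wrapped rows are shared with the original, not copied.
    def __init__(self, rows):
        self.rows = rows
    def __mul__(self, other):
        b = other.rows
        return [[sum(self.rows[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))]
                for i in range(len(self.rows))]

def m(rows):
    # Stands in for the A.m() method of the text.
    return MathView(rows)
```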

    The advantage of this method is that it has no effect on the
    existing implementation of Python: changes are localized in the
    Numeric module.  The disadvantages are:

    (a) The semantics of "A.m() * B", "A + B.m()", and so on would
        have to be defined, and there is no "obvious" choice for them.

    (b) Aliasing objects to trigger different operator behavior feels
        less Pythonic than either calling methods (as in the existing
        Numeric module) or using a different operator.  This PEP is
        primarily about look and feel, and about making Python more
        attractive to people who are not already using it.

    04. (From a proposal [9] by Huaiyu Zhu [8]) Introduce a "delayed
        inverse" attribute, similar to the "transpose" attribute
        advocated in the third part of this proposal.  The expression
        "a.I" would be a delayed handle on the inverse of the matrix
        "a", which would be evaluated in context as required.  For
        example, "a.I * b" and "b * a.I" would solve sets of linear
        equations, without actually calculating the inverse.

    The main drawback of this proposal is its reliance on lazy
    evaluation, and even more on "smart" lazy evaluation (i.e. the
    operation performed depends on the context in which the evaluation
    is done).  The BDFL has so far resisted introducing LE into
    Python.


Related Proposals

    0203 :  Augmented Assignments

            If new operators for linear algebra are introduced, it may
            make sense to introduce augmented assignment forms for
            them.

    0207 :  Rich Comparisons

            It may become possible to overload comparison operators
            such as '<' so that an expression such as 'A < B' returns
            an array, rather than a scalar value.

    0209 :  Adding Multidimensional Arrays

            Multidimensional arrays are currently an extension to
            Python, rather than a built-in type.


Acknowledgments:

    I am grateful to Huaiyu Zhu [8] for initiating this discussion,
    and for some of the ideas and terminology included below.


References:

    [1] http://www.acm.org/sigapl/whyapl.htm
    [2] http://numpy.sourceforge.net
    [3] PEP-0203.txt "Augmented Assignments".
    [4] http://bevo.che.wisc.edu/octave/doc/octave_9.html#SEC69
    [5] http://www.python.org/pipermail/python-dev/2000-July/013139.html
    [6] PEP-0201.txt "Lockstep Iteration"
    [7] Moshe Zadka is 'moshez at math.huji.ac.il'.
    [8] Huaiyu Zhu is 'hzhu at users.sourceforge.net'
    [9] http://www.python.org/pipermail/python-list/2000-August/112529.html


Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

From fredrik at pythonware.com  Fri Aug 11 15:55:01 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 11 Aug 2000 15:55:01 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>             <20000811143143.G17171@xs4all.nl>  <200008111419.JAA03948@cj20424-a.reston1.va.home.com>
Message-ID: <016d01c0039b$bfb99a40$0900a8c0@SPIFF>

guido wrote:
> I just read the man page for send() (Red Hat linux 6.1) and it doesn't
> mention sending fewer than all bytes at all.  In fact, while it says
> that the return value is the number of bytes sent, it at least
> *suggests* that it will return an error whenever not everything can be
> sent -- even in non-blocking mode.
> 
> Under what circumstances can send() return a smaller number?

never, it seems:

    The length of the message to be sent is specified by the
    length argument. If the message is too long to pass through
    the underlying protocol, send() fails and no data is transmitted.

    Successful completion of a call to send() does not guarantee
    delivery of the message. A return value of -1 indicates only
    locally-detected errors.

    If space is not available at the sending socket to hold the message
    to be transmitted and the socket file descriptor does not have
    O_NONBLOCK set, send() blocks until space is available. If space
    is not available at the sending socket to hold the message to be
    transmitted and the socket file descriptor does have O_NONBLOCK
    set, send() will fail.

    (from SUSv2)

iow, it either blocks or fails.

</F>




From fredrik at pythonware.com  Fri Aug 11 16:01:17 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 11 Aug 2000 16:01:17 +0200
Subject: [Python-Dev] Preventing recursion core dumps
References: <000401c00389$2fa577b0$060210ac@private>
Message-ID: <018a01c0039c$9f1949b0$0900a8c0@SPIFF>

barry wrote:
> For embedding Python, being able to control the recursion depth of the
> interpreter is very useful. I would want to be able to set, from C, the
> max call depth limit and the current call depth limit. I'd expect Python
> to set a min call depth limit.

+1 (on concept, at least).

</F>




From thomas at xs4all.net  Fri Aug 11 16:08:51 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 16:08:51 +0200
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111428.JAA04464@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 11, 2000 at 09:28:09AM -0500
References: <200008111239.OAA15818@python.inrialpes.fr> <200008111428.JAA04464@cj20424-a.reston1.va.home.com>
Message-ID: <20000811160851.H17171@xs4all.nl>

On Fri, Aug 11, 2000 at 09:28:09AM -0500, Guido van Rossum wrote:

> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

getrlimit and getrusage do what you want, I think. getrusage() fills a
struct rusage:


            struct rusage
            {
                 struct timeval ru_utime; /* user time used */
                 struct timeval ru_stime; /* system time used */
                 long ru_maxrss;          /* maximum resident set size */
                 long ru_ixrss;      /* integral shared memory size */
                 long ru_idrss;      /* integral unshared data size */
                 long ru_isrss;      /* integral unshared stack size */
                 long ru_minflt;          /* page reclaims */
                 long ru_majflt;          /* page faults */
                 long ru_nswap;      /* swaps */
                 long ru_inblock;         /* block input operations */
                 long ru_oublock;         /* block output operations */
                 long ru_msgsnd;          /* messages sent */
                 long ru_msgrcv;          /* messages received */
                 long ru_nsignals;        /* signals received */
                 long ru_nvcsw;      /* voluntary context switches */
                 long ru_nivcsw;          /* involuntary context switches */
            };

and you can get the actual stack limit with getrlimit(). The availability of
getrusage/getrlimit is already checked by configure, and there's the
resource module which wraps those functions and structs for Python code.
Note that Linux isn't likely to be a problem; most Linux distributions have
liberal limits to start with (still the 'single-user OS' ;)

BSDI, for instance, has very strict default limits -- the standard limits
aren't even enough to start 'pine' on a few MB of mailbox. (But BSDI has
rusage/rlimit, so we can 'autodetect' this.)
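For reference, querying the stack limit from Python code through the
resource module looks roughly like this (Unix-only; a sketch, not tested
on every platform):

```python
import resource

# Soft (current) and hard (maximum) stack size limits, in bytes;
# either may be resource.RLIM_INFINITY.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("soft stack limit:",
      "unlimited" if soft == resource.RLIM_INFINITY else soft)
```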

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Fri Aug 11 16:13:13 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 11 Aug 2000 17:13:13 +0300 (IDT)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111428.JAA04464@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008111708210.3449-100000@sundial>

On Fri, 11 Aug 2000, Guido van Rossum wrote:

> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

I'm far from an expert, but I might have an idea. The question is: must
this work for embedded versions of Python, or can I fool around with
main()?

Here's the approach:

 - In main(), get the address of some local variable. Call this
        min
 - Call getrlimit() to get the stack size. Call max = min + <stack size>.
 - When checking for "too much recursion", take the address of a local 
   variable and compare it against max. If it's higher, stop.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From just at letterror.com  Fri Aug 11 17:14:40 2000
From: just at letterror.com (Just van Rossum)
Date: Fri, 11 Aug 2000 16:14:40 +0100
Subject: [Python-Dev] Preventing recursion core dumps
Message-ID: <l03102808b5b9c74f316e@[193.78.237.168]>

> > Agreement on 5000?
>
> No, the __getattr__ example still dumps core for me.  With 4000 it is
> fine, but this indicates that this is totally the wrong approach: I
> can change the available stack size with ulimit -s and cause a core
> dump anyway.  Or there could be a longer path through the C code where
> more C stack is used per recursion.
>
> We could set the maximum to 1000 and assume a "reasonable" stack size,
> but that doesn't make me feel comfortable either.
>
> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

Gordon, how's that Stackless PEP coming along?

Sorry, I couldn't resist ;-)

Just





From thomas at xs4all.net  Fri Aug 11 16:21:09 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 16:21:09 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <016d01c0039b$bfb99a40$0900a8c0@SPIFF>; from fredrik@pythonware.com on Fri, Aug 11, 2000 at 03:55:01PM +0200
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF>
Message-ID: <20000811162109.I17171@xs4all.nl>

On Fri, Aug 11, 2000 at 03:55:01PM +0200, Fredrik Lundh wrote:
> guido wrote:
> > I just read the man page for send() (Red Hat linux 6.1) and it doesn't
> > mention sending fewer than all bytes at all.  In fact, while it says
> > that the return value is the number of bytes sent, it at least
> > *suggests* that it will return an error whenever not everything can be
> > sent -- even in non-blocking mode.

> > Under what circumstances can send() return a smaller number?

> never, it seems:

[snip manpage]

Indeed. I didn't actually check the story, since Guido was apparently
convinced of its validity. I was just operating under the assumption that
send() did behave like write(). I won't blindly believe Guido anymore! :)

Someone set the patch to 'rejected' and tell the submittor that 'send'
doesn't return the number of bytes written ;-P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From Vladimir.Marangozov at inrialpes.fr  Fri Aug 11 16:32:45 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 16:32:45 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <Pine.GSO.4.10.10008111708210.3449-100000@sundial> from "Moshe Zadka" at Aug 11, 2000 05:13:13 PM
Message-ID: <200008111432.QAA16648@python.inrialpes.fr>

Moshe Zadka wrote:
> 
> On Fri, 11 Aug 2000, Guido van Rossum wrote:
> 
> > It would be good if there was a way to sense the remaining available
> > stack, even if it wasn't portable.  Any Linux experts out there?
> 
> I'm far from an expert, but I might have an idea. The question is: must
> this work for embedded versions of Python, or can I fool around with
> main()?

Probably not main(), but Py_Initialize() for sure.

> 
> Here's the approach:
> 
>  - In main(), get the address of some local variable. Call this
>         min
>  - Call getrlimit() to get the stack size. Call max = min + <stack size>.
>  - When checking for "too much recursion", take the address of a local 
>    variable and compare it against max. If it's higher, stop.

Sounds good. If getrlimit is not available, we can always fallback to
some (yet to be computed) constant, i.e. the current state.

[Just]
> Gordon, how's that Stackless PEP coming along?
> Sorry, I couldn't resist ;-)

Ah, in this case, we'll get a memory error after filling the whole disk
with frames <wink>

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From akuchlin at mems-exchange.org  Fri Aug 11 16:33:35 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 11 Aug 2000 10:33:35 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <20000811162109.I17171@xs4all.nl>; from thomas@xs4all.net on Fri, Aug 11, 2000 at 04:21:09PM +0200
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF> <20000811162109.I17171@xs4all.nl>
Message-ID: <20000811103335.B20646@kronos.cnri.reston.va.us>

On Fri, Aug 11, 2000 at 04:21:09PM +0200, Thomas Wouters wrote:
>Someone set the patch to 'rejected' and tell the submittor that 'send'
>doesn't return the number of bytes written ;-P

What about reviving the idea of raising an exception, then?

--amk



From moshez at math.huji.ac.il  Fri Aug 11 16:40:10 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 11 Aug 2000 17:40:10 +0300 (IDT)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111432.QAA16648@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008111736300.3449-100000@sundial>

On Fri, 11 Aug 2000, Vladimir Marangozov wrote:

> Moshe Zadka wrote:
> > 
> > On Fri, 11 Aug 2000, Guido van Rossum wrote:
> > 
> > > It would be good if there was a way to sense the remaining available
> > > stack, even if it wasn't portable.  Any Linux experts out there?
> > 
> > I'm far from an expert, but I might have an idea. The question is: must
> > this work for embedded versions of Python, or can I fool around with
> > main()?
> 
> Probably not main(), but Py_Initialize() for sure.

Py_Initialize() isn't good enough -- I can put an upper bound on the
difference between "min" and the top of the stack: I can't do so
for the call to Py_Initialize(). Well, I probably can in some *really*
ugly way. I'll have to think about it some more.

> Sounds good. If getrlimit is not available, we can always fallback to
> some (yet to be computed) constant, i.e. the current state.

Well, since Guido asked for a non-portable Linuxish way, I think we
can assume getrusage() is there.

[Vladimir]
> Ah, in this case, we'll get a memory error after filling the whole disk
> with frames <wink>

Which is great! Python promises to always throw an exception....

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Fri Aug 11 16:43:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 16:43:49 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <20000811103335.B20646@kronos.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Fri, Aug 11, 2000 at 10:33:35AM -0400
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF> <20000811162109.I17171@xs4all.nl> <20000811103335.B20646@kronos.cnri.reston.va.us>
Message-ID: <20000811164349.J17171@xs4all.nl>

On Fri, Aug 11, 2000 at 10:33:35AM -0400, Andrew Kuchling wrote:
> On Fri, Aug 11, 2000 at 04:21:09PM +0200, Thomas Wouters wrote:
> >Someone set the patch to 'rejected' and tell the submittor that 'send'
> >doesn't return the number of bytes written ;-P

> What about reviving the idea of raising an exception, then?

static PyObject *
PySocketSock_send(PySocketSockObject *s, PyObject *args)
{
        char *buf;
        int len, n, flags = 0;
        if (!PyArg_ParseTuple(args, "s#|i:send", &buf, &len, &flags))
                return NULL;
        Py_BEGIN_ALLOW_THREADS
        n = send(s->sock_fd, buf, len, flags);
        Py_END_ALLOW_THREADS
        if (n < 0)
                return PySocket_Err();
        return PyInt_FromLong((long)n);
}

(PySocket_Err() creates an error.)
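For comparison, a wrapper that refuses to accept a partial send might
look like this in Python (a sketch of the "loop until everything is out"
alternative, shown with modern byte strings):

```python
import socket

def send_all(sock, data):
    # Keep calling send() until every byte has been handed to the
    # kernel, instead of silently returning a partial count.
    total = 0
    while total < len(data):
        n = sock.send(data[total:])
        if n == 0:
            raise socket.error("connection broken during send")
        total += n
    return total
```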

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Fri Aug 11 17:56:06 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 10:56:06 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: Your message of "Fri, 11 Aug 2000 16:21:09 +0200."
             <20000811162109.I17171@xs4all.nl> 
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF>  
            <20000811162109.I17171@xs4all.nl> 
Message-ID: <200008111556.KAA05068@cj20424-a.reston1.va.home.com>

> On Fri, Aug 11, 2000 at 03:55:01PM +0200, Fredrik Lundh wrote:
> > guido wrote:
> > > I just read the man page for send() (Red Hat linux 6.1) and it doesn't
> > > mention sending fewer than all bytes at all.  In fact, while it says
> > > that the return value is the number of bytes sent, it at least
> > > *suggests* that it will return an error whenever not everything can be
> > > sent -- even in non-blocking mode.
> 
> > > Under what circumstances can send() return a smaller number?
> 
> > never, it seems:
> 
> [snip manpage]
> 
> Indeed. I didn't actually check the story, since Guido was apparently
> convinced of its validity.

I wasn't convinced!  I wrote "is this true?" in my message!!!

> I was just operating under the assumption that
> send() did behave like write(). I won't blindly believe Guido anymore ! :)

I believe they do behave the same: in my mind, write() doesn't write
fewer bytes than you tell it either!  (Except maybe to a tty device
when interrupted by a signal???)

> Someone set the patch to 'rejected' and tell the submittor that 'send'
> doesn't return the number of bytes written ;-P

Done.

Note that send() *does* return the number of bytes written.  It's just
always (supposed to be) the same as the length of the argument string.

Since this is now established, should we change the send() method to
raise an exception when it returns a smaller number?  (The exception
probably should be a subclass of socket.error and should carry the
number of bytes written.)

Could there be a signal interrupt issue here too?  E.g. I send() a
megabyte, which takes a while due to TCP buffer limits; before I'm
done a signal handler interrupts the system call.  Will send() now:

(1) return a EINTR error
(2) continue
(3) return the number of bytes already written

???

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Aug 11 17:58:45 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 17:58:45 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111428.JAA04464@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 11, 2000 09:28:09 AM
Message-ID: <200008111558.RAA16953@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> We could set the maximum to 1000 and assume a "reasonable" stack size,
> but that doesn't make me feel comfortable either.

Nor me, but it's more comfortable than a core dump, and is the only
easy solution, solving most problems & probably breaking some code...
After all, a max of 1024 seems to be a good suggestion.

> 
> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

On second thought, I think this would be a bad idea, even if
we manage to tweak the stack limits on most platforms. We would
lose determinism = lose control -- no good. A depth-first algorithm
may succeed on one machine, and fail on another.

I strongly prefer to know that I'm limited to 1024 recursions ("reasonable"
stack size assumptions included) and change my algorithm if it doesn't fly
with my structure, rather than stumble subsequently on the fact that my
algorithm works only half the time.

Changing this now *may* break such scripts, and there doesn't seem
to be an immediate easy solution. But if I were to choose between
breaking some scripts and preventing core dumps, well...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From moshez at math.huji.ac.il  Fri Aug 11 18:12:21 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 11 Aug 2000 19:12:21 +0300 (IDT)
Subject: [Python-Dev] Cookie.py
Message-ID: <Pine.GSO.4.10.10008111909510.5259-100000@sundial>

This is a continuation of the previous discussion of server-side cookie
support.  There is a liberally licensed (old-Python-license) framework
called Webware, which includes Cookie.py (apparently the same one by
Timothy O'Malley).  How about taking that Cookie.py?

Webware can be found at http://webware.sourceforge.net/
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From gmcm at hypernet.com  Fri Aug 11 18:25:18 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 12:25:18 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>
References: Your message of "Fri, 11 Aug 2000 14:31:43 +0200."             <20000811143143.G17171@xs4all.nl> 
Message-ID: <1246111375-124272508@hypernet.com>

[Guido]
> I just read the man page for send() (Red Hat linux 6.1) and it
> doesn't mention sending fewer than all bytes at all.  In fact,
> while it says that the return value is the number of bytes sent,
> it at least *suggests* that it will return an error whenever not
> everything can be sent -- even in non-blocking mode.

It says (at least on RH 5.2): "If the message is too long to 
pass atomically through the underlying protocol...". Hey guys, 
TCP/IP is a stream protocol! For TCP/IP this is all completely 
misleading.

Yes, it returns the number of bytes sent. For TCP/IP it is *not* 
an error to send less than the argument. It's only an error if the 
other end dies at the time of actual send.

Python has been behaving properly all along. The bug report is 
correct. It's the usage of send in the std lib that is improper 
(though with a nearly infinitesimal chance of breaking, since 
it's almost all single threaded blocking usage of sockets).
 
> Under what circumstances can send() return a smaller number?

Just open a TCP/IP connection and send huge (64K or so) 
buffers. Current Python behavior is no different than C on 
Linux, HPUX and Windows.

Look it up in Stevens if you don't believe me. Or try it.

- Gordon



From akuchlin at mems-exchange.org  Fri Aug 11 18:26:08 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 11 Aug 2000 12:26:08 -0400
Subject: [Python-Dev] Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008111909510.5259-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 11, 2000 at 07:12:21PM +0300
References: <Pine.GSO.4.10.10008111909510.5259-100000@sundial>
Message-ID: <20000811122608.F20646@kronos.cnri.reston.va.us>

On Fri, Aug 11, 2000 at 07:12:21PM +0300, Moshe Zadka wrote:
>This is a continuation of a previous thread on server-side cookie support.
>There is a liberally licensed (old-Python license) framework called
>Webware, which includes Cookie.py, (apparently the same one by Timothy
>O'Malley). How about taking that Cookie.py?

O'Malley got in touch with me and let me know that the license has
been changed to the 1.5.2 license with his departure from BBN.  He
hasn't sent me a URL where the current version can be downloaded,
though.  I don't know if WebWare has the most current version; it
seems not, since O'Malley's was dated 06/21 and WebWare's was checked
in on May 23.

By the way, I'd suggest adding Cookie.py to a new 'web' package, and
taking advantage of the move to break backward compatibility and
remove the automatic usage of pickle (assuming it's still there).

--amk



From nascheme at enme.ucalgary.ca  Fri Aug 11 18:37:01 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 11 Aug 2000 10:37:01 -0600
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111558.RAA16953@python.inrialpes.fr>; from Vladimir Marangozov on Fri, Aug 11, 2000 at 05:58:45PM +0200
References: <200008111428.JAA04464@cj20424-a.reston1.va.home.com> <200008111558.RAA16953@python.inrialpes.fr>
Message-ID: <20000811103701.A25386@keymaster.enme.ucalgary.ca>

On Fri, Aug 11, 2000 at 05:58:45PM +0200, Vladimir Marangozov wrote:
> On a second thought, I think this would be a bad idea, even if
> we manage to tweak the stack limits on most platforms. We would
> lose determinism = lose control -- no good. A depth-first algorithm
> may succeed on one machine, and fail on another.

So what?  We don't limit the amount of memory you can allocate on all
machines just because your program may run out of memory on some
machine.  It seems like the same thing to me.

  Neil



From moshez at math.huji.ac.il  Fri Aug 11 18:40:31 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 11 Aug 2000 19:40:31 +0300 (IDT)
Subject: [Python-Dev] Cookie.py
In-Reply-To: <20000811122608.F20646@kronos.cnri.reston.va.us>
Message-ID: <Pine.GSO.4.10.10008111936060.5259-100000@sundial>

On Fri, 11 Aug 2000, Andrew Kuchling wrote:

> O'Malley got in touch with me and let me know that the license has
> been changed to the 1.5.2 license with his departure from BBN.  He
> hasn't sent me a URL where the current version can be downloaded,
> though.  I don't know if WebWare has the most current version; it
> seems not, since O'Malley's was dated 06/21 and WebWare's was checked
> in on May 23.

Well, as soon as you get a version, let me know: I've started working
on documentation.

> By the way, I'd suggest adding Cookie.py to a new 'web' package, and
> taking advantage of the move to break backward compatibility and
> remove the automatic usage of pickle (assuming it's still there).

Well, depends on what you mean there:

There are now three classes

a) SimpleCookie -- never uses pickle
b) SerializeCookie -- always uses pickle
c) SmartCookie -- uses pickle based on old heuristic.
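
For the curious, the non-pickling class behaves like this (a sketch against
the modern http.cookies module, the descendant of O'Malley's Cookie.py; the
header strings below are only illustrative):

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c.load("session=abc123; theme=dark")   # parse an incoming Cookie: header
assert c["session"].value == "abc123"

c["theme"]["path"] = "/"
# output() renders a Set-Cookie: header; values stay plain strings and are
# never pickled, which is what makes SimpleCookie "simple".
print(c["theme"].output())
```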

About web package: I'm +0. Fred has to think about how to document
things in packages (we never had to until now). Well, who cares <wink>

What is more important is working on documentation (which I'm doing),
and on a regression test (for which the May 23 version is probably good 
enough).

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Fri Aug 11 18:44:07 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 18:44:07 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <200008111556.KAA05068@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 11, 2000 at 10:56:06AM -0500
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF> <20000811162109.I17171@xs4all.nl> <200008111556.KAA05068@cj20424-a.reston1.va.home.com>
Message-ID: <20000811184407.A14470@xs4all.nl>

On Fri, Aug 11, 2000 at 10:56:06AM -0500, Guido van Rossum wrote:

> > Indeed. I didn't actually check the story, since Guido was apparently
> > convinced by its validity.

> I wasn't convinced!  I wrote "is this true?" in my message!!!

I apologize... It's been a busy day for me, I guess I wasn't paying enough
attention. I'll try to keep quiet when that happens, next time :P

> > I was just operating under the assumption that
> > send() did behave like write(). I won't blindly believe Guido anymore ! :)

> I believe they do behave the same: in my mind, write() doesn't write
> fewer bytes than you tell it either!  (Except maybe to a tty device
> when interrupted by a signal???)

Hm, I seem to recall write() could return after less than a full write, but
I might be mistaken. I thought I was confusing send with write, but maybe
I'm confusing both with some other function :-) I'm *sure* there is a
function that behaves that way :P

> Note that send() *does* return the number of bytes written.  It's just
> always (supposed to be) the same as the length of the argument string.

> Since this is now established, should we change the send() method to
> raise an exception when it returns a smaller number?  (The exception
> probably should be a subclass of socket.error and should carry the
> number of bytes written.)

Ahh, now it's starting to get clear to me. I'm not sure if it's worth it...
It would require a different (non-POSIX) socket layer to return on
'incomplete' writes, and that is likely to break a number of other things,
too. (Let's hope it does... a socket layer which has the same API but a
different meaning would be very confusing !)

> Could there be a signal interrupt issue here too?

No, I don't think so.

> E.g. I send() a megabyte, which takes a while due to TCP buffer limits;
> before I'm done a signal handler interrupts the system call.  Will send()
> now:

> (1) return a EINTR error

Yes. From the manpage:

       If  the  message  is  too  long  to pass atomically
       through the underlying protocol,  the  error  EMSGSIZE  is
       returned, and the message is not transmitted.

[..]

ERRORS

       EINTR   A signal occurred.

[..]

Because send() either completely succeeds or completely fails, I didn't see
why you wanted an exception generated :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug 11 18:45:13 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 12:45:13 -0400 (EDT)
Subject: [Python-Dev] Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008111936060.5259-100000@sundial>
References: <20000811122608.F20646@kronos.cnri.reston.va.us>
	<Pine.GSO.4.10.10008111936060.5259-100000@sundial>
Message-ID: <14740.11673.869664.837436@cj42289-a.reston1.va.home.com>

Moshe Zadka writes:
 > About web package: I'm +0. Fred has to think about how to document
 > things in packages (we never had to until now). Well, who cares <wink>

  I'm not aware of any issues with documenting packages; the curses
documentation seems to be coming along nicely, and that's a package.
If you think I've missed something, we can (and should) deal with it
in the Doc-SIG.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From akuchlin at mems-exchange.org  Fri Aug 11 18:48:11 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 11 Aug 2000 12:48:11 -0400
Subject: [Python-Dev] Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008111936060.5259-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 11, 2000 at 07:40:31PM +0300
References: <20000811122608.F20646@kronos.cnri.reston.va.us> <Pine.GSO.4.10.10008111936060.5259-100000@sundial>
Message-ID: <20000811124811.G20646@kronos.cnri.reston.va.us>

On Fri, Aug 11, 2000 at 07:40:31PM +0300, Moshe Zadka wrote:
>There are now three classes
>a) SimpleCookie -- never uses pickle
>b) SerializeCookie -- always uses pickle
>c) SmartCookie -- uses pickle based on old heuristic.

Ah, good; never mind, then.

>About web package: I'm +0. Fred has to think about how to document
>things in packages (we never had to until now). Well, who cares <wink>

Hmm... the curses.ascii module is already documented, so documenting
packages shouldn't be a problem.

--amk



From esr at thyrsus.com  Fri Aug 11 19:03:01 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Fri, 11 Aug 2000 13:03:01 -0400
Subject: [Python-Dev] Cookie.py
In-Reply-To: <14740.11673.869664.837436@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Fri, Aug 11, 2000 at 12:45:13PM -0400
References: <20000811122608.F20646@kronos.cnri.reston.va.us> <Pine.GSO.4.10.10008111936060.5259-100000@sundial> <14740.11673.869664.837436@cj42289-a.reston1.va.home.com>
Message-ID: <20000811130301.A7354@thyrsus.com>

Fred L. Drake, Jr. <fdrake at beopen.com>:
>   I'm not aware of any issues with documenting packages; the curses
> documentation seems to be coming along nicely, and that's a package.
> If you think I've missed something, we can (and should) deal with it
> in the Doc-SIG.

The curses documentation is basically done.  I've fleshed out the
library reference and overhauled the HOWTO.  I shipped the latter to
amk yesterday because I couldn't beat CVS into checking out py-howtos
for me.

The items left on my to-do list are drafting PEP002 and doing something
constructive about the Berkeley DB mess.  I doubt I'll get to these 
things before LinuxWorld.  Anybody else going?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

It would be thought a hard government that should tax its people one tenth 
part.
	-- Benjamin Franklin



From rushing at nightmare.com  Fri Aug 11 18:59:07 2000
From: rushing at nightmare.com (Sam Rushing)
Date: Fri, 11 Aug 2000 09:59:07 -0700 (PDT)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <284209072@toto.iv>
Message-ID: <14740.12507.587044.121462@seattle.nightmare.com>

Guido van Rossum writes:
 > Really?!?!
 > 
 > I just read the man page for send() (Red Hat linux 6.1) and it doesn't
 > mention sending fewer than all bytes at all.  In fact, while it says
 > that the return value is the number of bytes sent, it at least
 > *suggests* that it will return an error whenever not everything can be
 > sent -- even in non-blocking mode.
 > 
 > Under what circumstances can send() return a smaller number?

It's a feature of Linux... it will send() everything.  Other unixen
act in the classic fashion (it bit me on FreeBSD), and send only what
fits right into the buffer that awaits.

I think this could safely be added to the send method in
socketmodule.c.  Linux users wouldn't even notice.  IMHO this is the
kind of feature that people come to expect from programming in a HLL.
Maybe disable the feature if it's a non-blocking socket?

-Sam




From billtut at microsoft.com  Fri Aug 11 19:01:44 2000
From: billtut at microsoft.com (Bill Tutt)
Date: Fri, 11 Aug 2000 10:01:44 -0700
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka
	!)
Message-ID: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>

This is an alternative approach that we should certainly consider. We could
use ANTLR (www.antlr.org) as our parser generator, and have it generate Java
for JPython, and C++ for CPython.  This would be a good chunk of work, and
it's something I really don't have time to pursue. I don't even have time to
pursue the idea about moving keyword recognition into the lexer.

I'm just not sure if you want to bother introducing C++ into the Python
codebase solely to have a single parser for CPython and JPython.

Bill

 -----Original Message-----
From: 	bwarsaw at beopen.com [mailto:bwarsaw at beopen.com] 
Sent:	Thursday, August 10, 2000 8:01 PM
To:	Guido van Rossum
Cc:	Mark Hammond; python-dev at python.org
Subject:	Re: [Python-Dev] Python keywords (was Lockstep iteration -
eureka!)


>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    GvR> Alas, I'm not sure how easy it will be.  The parser generator
    GvR> will probably have to be changed to allow you to indicate not
    GvR> to do a resword lookup at certain points in the grammar.  I
    GvR> don't know where to start. :-(

Yet another reason why it would be nice to (eventually) merge the
parsing technology in CPython and JPython.

i-don't-wanna-work-i-jes-wanna-bang-on-my-drum-all-day-ly y'rs,
-Barry

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
http://www.python.org/mailman/listinfo/python-dev



From gmcm at hypernet.com  Fri Aug 11 19:04:26 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 13:04:26 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: <1246111375-124272508@hypernet.com>
References: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>
Message-ID: <1246109027-124413737@hypernet.com>

[I wrote, about send()]
> Yes, it returns the number of bytes sent. For TCP/IP it is *not*
> an error to send less than the argument. It's only an error if
> the other end dies at the time of actual send.

[and...]
> Just open a TCP/IP connection and send huge (64K or so) 
> buffers. Current Python behavior is no different than C on 
> Linux, HPUX and Windows.

And I just demonstrated it. Strangely enough, sending from Windows 
(where the docs say "send returns the total number of bytes sent, 
which can be less than the number indicated by len") it always 
sent the whole buffer, even when that was 1M on a non-
blocking socket. (I select()'ed the socket first, to make sure it 
could send something).

But from Linux, the largest buffer sent was 54,020 and typical 
was 27,740. No errors.
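
A rough sketch of the experiment described above (the function name and
sizes are illustrative, not from the original test; only the large buffer,
the non-blocking socket, and the select() before sending follow Gordon's
description):

```python
import select
import socket

def partial_send_demo(host, port, size=1 << 20):
    """Send one large buffer and report how much a single send() accepted.

    On some platforms (e.g. Linux) a send() on a TCP socket may accept
    only part of the buffer; on others it may take the whole thing.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    s.setblocking(False)
    # select() first, to make sure the socket can send something.
    select.select([], [s], [])
    try:
        sent = s.send(b"x" * size)
    except BlockingIOError:
        sent = 0
    s.close()
    return sent
```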


- Gordon



From thomas at xs4all.net  Fri Aug 11 19:04:37 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 19:04:37 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <14740.12507.587044.121462@seattle.nightmare.com>; from rushing@nightmare.com on Fri, Aug 11, 2000 at 09:59:07AM -0700
References: <284209072@toto.iv> <14740.12507.587044.121462@seattle.nightmare.com>
Message-ID: <20000811190437.C17176@xs4all.nl>

On Fri, Aug 11, 2000 at 09:59:07AM -0700, Sam Rushing wrote:

> It's a feature of Linux... it will send() everything.  Other unixen
> act in the classic fashion (it bit me on FreeBSD), and send only what
> fits right into the buffer that awaits.

Ahhh, the downsides of working on the Most Perfect OS (writing this while
our Technical Manager, a FreeBSD fan, is looking over my shoulder ;)
Thanx for clearing that up. I was slowly going insane ;-P

> I think this could safely be added to the send method in
> socketmodule.c.  Linux users wouldn't even notice.  IMHO this is the
> kind of feature that people come to expect from programming in a HLL.
> Maybe disable the feature if it's a non-blocking socket?

Hm, I'm not sure if that's the 'right' thing to do, though disabling it for
non-blocking sockets is a nice idea. It shouldn't break anything, but it
doesn't feel too 'right'. The safe option would be to add a function that
resends as long as necessary, and point everyone to that function. But I'm
not sure what the name should be -- send is just so obvious ;-) 

Perhaps you're right, perhaps we should consider this a job for the type of
VHLL that Python is, and provide the opposite function separate instead: a
non-resending send(), for those that really want it. But in the eyes of the
Python programmer, socket.send() would just magically accept and send any
message size you care to give it, so it shouldn't break things. I think ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug 11 19:16:43 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 13:16:43 -0400 (EDT)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <20000811190437.C17176@xs4all.nl>
References: <284209072@toto.iv>
	<14740.12507.587044.121462@seattle.nightmare.com>
	<20000811190437.C17176@xs4all.nl>
Message-ID: <14740.13563.466035.477406@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Perhaps you're right, perhaps we should consider this a job for the type of
 > VHLL that Python is, and provide the opposite function separate instead: a
 > non-resending send(), for those that really want it. But in the eyes of the
 > Python programmer, socket.send() would just magically accept and send any
 > message size you care to give it, so it shouldn't break things. I think ;)

  This version receives my +1.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gmcm at hypernet.com  Fri Aug 11 19:38:01 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 13:38:01 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: <20000811190437.C17176@xs4all.nl>
References: <14740.12507.587044.121462@seattle.nightmare.com>; from rushing@nightmare.com on Fri, Aug 11, 2000 at 09:59:07AM -0700
Message-ID: <1246107013-124534915@hypernet.com>

Thomas Wouters wrote:
> On Fri, Aug 11, 2000 at 09:59:07AM -0700, Sam Rushing wrote:
> 
> > It's a feature of Linux... it will send() everything.  Other
> > unixen act in the classic fashion (it bit me on FreeBSD), and
> > send only what fits right into the buffer that awaits.
...
> > I think this could safely be added to the send method in
> > socketmodule.c.  Linux users wouldn't even notice.  IMHO this
> > is the kind of feature that people come to expect from
> > programming in a HLL. Maybe disable the feature if it's a
> > non-blocking socket?
> 
> Hm, I'm not sure if that's the 'right' thing to do, though
> disabling it for non-blocking sockets is a nice idea. 

It's absolutely vital that it be disabled for non-blocking 
sockets. Otherwise you've just made it into a blocking socket.

With that in place, I would be neutral on the change. I still feel 
that Python is already doing the right thing. The fact that 
everyone misunderstood the man page is not a good reason to 
change Python to match that misreading.

> It
> shouldn't break anything, but it doesn't feel too 'right'. The
> safe option would be to add a function that resends as long as
> necessary, and point everyone to that function. But I'm not sure
> what the name should be -- send is just so obvious ;-) 

I've always thought that was why there was a makefile method.
 


- Gordon



From guido at beopen.com  Sat Aug 12 00:05:32 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 17:05:32 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: Your message of "Fri, 11 Aug 2000 13:04:26 -0400."
             <1246109027-124413737@hypernet.com> 
References: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>  
            <1246109027-124413737@hypernet.com> 
Message-ID: <200008112205.RAA01218@cj20424-a.reston1.va.home.com>

[Gordon]
> [I wrote, about send()]
> > Yes, it returns the number of bytes sent. For TCP/IP it is *not*
> > an error to send less than the argument. It's only an error if
> > the other end dies at the time of actual send.
> 
> [and...]
> > Just open a TCP/IP connection and send huge (64K or so) 
> > buffers. Current Python behavior is no different than C on 
> > Linux, HPUX and Windows.
> 
> And I just demonstrated it. Strangely enough, sending from Windows 
> (where the docs say "send returns the total number of bytes sent, 
> which can be less than the number indicated by len") it always 
> sent the whole buffer, even when that was 1M on a non-
> blocking socket. (I select()'ed the socket first, to make sure it 
> could send something).
> 
> But from Linux, the largest buffer sent was 54,020 and typical 
> was 27,740. No errors.

OK.  So send() can do a partial write, but only on a stream
connection.  And most standard library code doesn't check for that
condition, nor does (probably) much other code that used the standard
library as an example.  Worse, it seems that on some platforms send()
*never* does a partial write (I couldn't reproduce it on Red Hat 6.1
Linux), so even stress testing may not reveal the lurking problem.

Possible solutions:

1. Do nothing.  Pro: least work.  Con: subtle bugs remain.

2. Fix all code that's broken in the standard library, and try to
encourage others to fix their code.  Book authors need to be
encouraged to add a warning.  Pro: most thorough.  Con: hard to fix
every occurrence, especially in 3rd party code.

3. Fix the socket module to raise an exception when fewer bytes are
sent than requested.  Pro: the subtle bug is exposed when it
happens.  Con: breaks code that did the right thing!

4. Fix the socket module to loop back on a partial send to send the
remaining bytes.  Pro: no more short writes.  Con: what if the first
few send() calls succeed and then an error is returned?  Note: code
that checks for partial writes will be redundant!

I'm personally in favor of (4), despite the problem with errors after
the first call.
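
Option 4 amounts to a retry loop like this (a minimal sketch; the name
send_all and the error handling are illustrative, not part of any proposal):

```python
def send_all(sock, data):
    """Keep calling send() until the whole buffer has gone out (option 4).

    Propagates whatever error the underlying send() raises; if that
    happens after some bytes were already sent, the caller cannot tell
    how many -- the problem noted above.
    """
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise OSError("connection broken")
        total += sent
    return total
```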

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sat Aug 12 00:14:23 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 17:14:23 -0500
Subject: [Python-Dev] missing mail
Message-ID: <200008112214.RAA01257@cj20424-a.reston1.va.home.com>

Just a note to you all.  It seems I'm missing a lot of mail to
python-dev.  I noticed because I got a few mails cc'ed to me and never
saw the copy sent via the list (which normally shows up within a
minute).  I looked in the archives and there were more messages that I
hadn't seen at all (e.g. the entire Cookie thread).

I don't know where the problem is (I *am* getting other mail to
guido at python.org as well as to guido at beopen.com) and I have no time to
research this right now.  I'm going to be mostly off line this weekend
and also all of next week.  (I'll be able to read mail occasionally
but I'll be too busy to keep track of everything.)

So if you need me to reply, please cc me directly -- and please be
patient!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From huaiyu_zhu at yahoo.com  Fri Aug 11 23:13:17 2000
From: huaiyu_zhu at yahoo.com (Huaiyu Zhu)
Date: Fri, 11 Aug 2000 14:13:17 -0700 (PDT)
Subject: [Python-Dev] Re: PEP 0211: Linear Algebra Operators
In-Reply-To: <Pine.LNX.4.10.10008110936390.13482-200000@akbar.nevex.com>
Message-ID: <Pine.LNX.4.10.10008111255560.1058-100000@rocket.knowledgetrack.com>

As the PEP posted by Greg is substantially different from the one floating
around in c.l.py, I'd like to post the latter here, which covers several
weeks of discussions by dozens of discussants.  I'd like to encourage Greg
to post his version to python-list to seek comments.

I'd be grateful to hear any comments.


        Python Extension Proposal: Adding new math operators 
                Huaiyu Zhu <hzhu at users.sourceforge.net>
                         2000-08-11, draft 3


Introduction
------------

This PEP describes a proposal to add new math operators to Python, and
summarises discussions in the news group comp.lang.python on this topic.
Issues discussed here include:

1. Background.
2. Description of proposed operators and implementation issues.
3. Analysis of alternatives to new operators.
4. Analysis of alternative forms.
5. Compatibility issues
6. Description of wider extensions and other related ideas.

A substantial portion of this PEP describes ideas that do not go into the
proposed extension.  They are presented because the extension is essentially
syntactic sugar, so its adoption must be weighed against various possible
alternatives.  While many alternatives may be better in some aspects, the
current proposal appears to be overall advantageous.



Background
----------

Python provides five basic math operators, + - * / **.  (Hereafter
generically represented by "op").  They can be overloaded with new semantics
for user-defined classes.  However, for objects composed of homogeneous
elements, such as arrays, vectors and matrices in numerical computation,
there are two essentially distinct flavors of semantics.  The objectwise
operations treat these objects as points in multidimensional spaces.  The
elementwise operations treat them as collections of individual elements.
These two flavors of operations are often intermixed in the same formulas,
thereby requiring syntactical distinction.

Many numerical computation languages provide two sets of math operators.
For example, in Matlab, the ordinary op is used for objectwise operation
while .op is used for elementwise operation.  In R, op stands for
elementwise operation while %op% stands for objectwise operation.

In Python, there are other methods of representation, some of which are
already used by available numerical packages, such as

1. function:   mul(a,b)
2. method:     a.mul(b)
3. casting:    a.E*b 

In several aspects these are not as adequate as infix operators.  More
details will be shown later, but the key points are

1. Readability: Even for moderately complicated formulas, infix operators
   are much cleaner than alternatives.
2. Familiarity: Users are familiar with ordinary math operators.  
3. Implementation: New infix operators will not unduly clutter python
   syntax.  They will greatly ease the implementation of numerical packages.

While it is possible to assign current math operators to one flavor of
semantics, there are simply not enough infix operators to overload for the
other flavor.  It is also impossible to maintain visual symmetry between
these two flavors if one of them does not contain symbols for ordinary math
operators.  



Proposed extension
------------------

1.  New operators ~+ ~- ~* ~/ ~** ~+= ~-= ~*= ~/= ~**= are added to core
    Python.  They parallel the existing operators + - * / ** and the (soon
    to be added) += -= *= /= **= operators.

2.  Operator ~op retains the syntactic properties of operator op,
    including precedence.

3.  Operator ~op retains the semantic properties of operator op on
    built-in number types.  They raise a syntax error on other types.

4.  These operators are overloadable in classes with names that prepend
    "alt" to names of ordinary math operators.  For example, __altadd__ and
    __raltadd__ work for ~+ just as __add__ and __radd__ work for +.

5.  As with standard math operators, the __r*__() methods are invoked when
    the left operand does not provide the appropriate method.
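
The lookup in points 4 and 5 mirrors the existing __add__/__radd__
protocol.  In plain Python it would behave roughly as follows (alt_add and
EVec are invented names standing in for the proposed ~+ operator, which of
course cannot be spelled in today's syntax):

```python
def alt_add(a, b):
    """What ~+ would do under the proposal: try __altadd__, then __raltadd__."""
    meth = getattr(type(a), "__altadd__", None)
    if meth is not None:
        result = meth(a, b)
        if result is not NotImplemented:
            return result
    rmeth = getattr(type(b), "__raltadd__", None)
    if rmeth is not None:
        result = rmeth(b, a)
        if result is not NotImplemented:
            return result
    raise TypeError("unsupported operand types for ~+")

class EVec:
    def __init__(self, data):
        self.data = list(data)
    def __altadd__(self, other):          # the "alternative" flavor of +
        return EVec(x + y for x, y in zip(self.data, other.data))

assert alt_add(EVec([1, 2]), EVec([3, 4])).data == [4, 6]
```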

The symbol ~ is already used in Python as the unary "bitwise not" operator.
Currently it is not allowed in binary operators.  To allow it as part of
binary operators, the tokenizer would treat ~+ as one token.  This means
that the currently valid expression ~+1 would be tokenized as ~+ 1 instead
of ~ + 1.  The compiler would then treat ~+ as a composite of ~ and +.
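
The current meaning of ~+1 is easy to verify:

```python
# Under the current grammar, ~+1 parses as ~(+1): unary plus applied
# first, then bitwise not, so ~+1 == ~1 == -2.
assert ~+1 == -2
```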

The proposed implementation is to patch several files relating to the parser
and compiler to duplicate the functionality of existing math operators as
necessary.  All new semantics are to be implemented in the application that
overloads them, but they are recommended to be conceptually similar to
existing math operators.

It is not specified which version of operators stands for elementwise or
objectwise operations, leaving the decision to applications.

A prototype implementation already exists.



Alternatives to adding new operators
------------------------------------

Some of the leading alternatives, using the multiplication as an example.

1. Use function mul(a,b).

   Advantage:
   -  No need for new operators.
  
   Disadvantage: 
   - Prefix forms are cumbersome for composite formulas.
   - Unfamiliar to the intended users.
   - Too verbose for the intended users.
   - Unable to use natural precedence rules.
 
2. Use method call a.mul(b)

   Advantage:
   - No need for new operators.
   
   Disadvantage:
   - Asymmetric for both operands.
   - Unfamiliar to the intended users.
   - Too verbose for the intended users.
   - Unable to use natural precedence rules.


3. Use "shadow classes".  For matrix class define a shadow array class
   accessible through a method .E, so that for matrices a and b, a.E*b would
   be a matrix object that is elementwise_mul(a,b). 

   Likewise define a shadow matrix class for arrays accessible through a
   method .M so that for arrays a and b, a.M*b would be an array that is
   matrixwise_mul(a,b).

   Advantage:
   - No need for new operators.
   - Benefits of infix operators with correct precedence rules.
   - Clean formulas in applications.
   
   Disadvantage:
   - Hard to maintain in current Python because ordinary numbers cannot have
     user defined class methods.  (a.E*b will fail if a is a pure number.)
   - Difficult to implement, as this will interfere with existing method
     calls, like .T for transpose, etc.
   - Runtime overhead of object creation and method lookup.
   - The shadowing class cannot replace a true class, because it does not
     return its own type.  So there needs to be an M class with a shadow E
     class, and an E class with a shadow M class.
   - Unnatural to mathematicians.

4. Implement matrixwise and elementwise classes with easy casting to the
   other class.  So matrixwise operations for arrays would be like a.M*b.M
   and elementwise operations for matrices would be like a.E*b.E.  For error
   detection a.E*b.M would raise exceptions.

   Advantages:
   - No need for new operators.
   - Similar to infix notation with correct precedence rules.

   Disadvantages:
   - Similar difficulty due to the lack of user methods for pure numbers.
   - Runtime overhead of object creation and method lookup.
   - More cluttered formulas.
   - Switching the flavor of objects to facilitate operators becomes
     persistent.  This introduces long-range context dependencies in
     application code that would be extremely hard to maintain.

5. Use a mini-parser to parse formulas written in an arbitrary extended
   syntax placed in quoted strings.

   Advantage:
   - Pure Python, without new operators.

   Disadvantages:
   - The actual syntax lives inside the quoted string, which does not solve
     the problem itself.
   - Introduces zones of special syntax.
   - Demanding on the mini-parser.

Among these alternatives, the first and second are used in current
applications to some extent, but have been found inadequate.  The third is
the favorite for applications, but it would incur huge implementation
complexity.  The fourth would make application code very context-sensitive
and hard to maintain.  These two alternatives also share significant
implementation difficulties due to the current type/class split.  The fifth
appears to create more problems than it would solve.



Alternative forms of infix operators
------------------------------------

Two major forms and several minor variants of new infix operators were
discussed:

1. Bracketed form

   (op)
   [op]
   {op}
   <op>
   :op:
   ~op~
   %op%

2. Meta character form

   .op
   @op
   ~op
   
   Alternatively the meta character is put after the operator.

3. Less consistent variations of these themes.  These are viewed
   unfavorably.  For completeness some are listed here:
   - Use @/ and /@ for left and right division
   - Use [*] and (*) for outer and inner products

4. Use __call__ to simulate multiplication.
   a(b)  or (a)(b)


Criteria for choosing among the representations include:

   - No syntactical ambiguities with existing operators.  

   - Higher readability in actual formulas.  This makes the bracketed forms
     unfavorable.  See examples below.

   - Visually similar to existing math operators.

   - Syntactically simple, without blocking possible future extensions.


With these criteria the overall winner among the bracket forms appears to
be {op}.  A clear winner in the meta character form is ~op.  Comparing
these, ~op appears to be the favorite among them all.

Some analysis follows:

   - The .op form is ambiguous: 1.+a would be different from 1 .+a.

   - The bracket-type operators are most favorable when standing alone, but
     not in formulas, as they interfere with the visual parsing of
     parentheses for precedence and function arguments.  This is so for
     (op) and [op], and somewhat less so for {op} and <op>.

   - The <op> form has the potential to be confused with < > and =.

   - The @op form is not favored because @ is visually heavy (dense, more
     like a letter): a@+b is more readily read as a@ + b than as a @+ b.

   - For choosing meta-characters: most of the existing ASCII symbols have
     already been used.  The only three unused ones are @ $ ?.



Semantics of new operators
--------------------------

There are convincing arguments for using either set of operators as
objectwise or elementwise.  Some of them are listed here:

1. op for element, ~op for object

   - Consistent with current multiarray interface of Numeric package
   - Consistent with some other languages
   - Perception that elementwise operations are more natural
   - Perception that elementwise operations are used more frequently

2. op for object, ~op for element

   - Consistent with current linear algebra interface of MatPy package
   - Consistent with some other languages
   - Perception that objectwise operations are more natural
   - Perception that objectwise operations are used more frequently
   - Consistent with the current behavior of operators on lists
   - Allow ~ to be a general elementwise meta-character in future extensions.

It is generally agreed that 

   - there is no absolute reason to favor one or the other
   - it is easy to cast from one representation to the other in a sizable
     chunk of code, so the other flavor of operators is always in the
     minority
   - there are other semantic differences that favor the existence of
     array-oriented and matrix-oriented packages, even if their operators
     are unified.
   - whatever decision is taken, code using existing interfaces should
     not be broken for a very long time.

Therefore not much is lost, and much flexibility retained, if the semantic
flavors of these two sets of operators are not dictated by the core
language.  The application packages are responsible for making the most
suitable choice.  This is already the case for NumPy and MatPy which use
opposite semantics.  Adding new operators will not break this.  See also
observation after subsection 2 in the Examples below.

The issue of numerical precision was raised, but if the semantics are left
to the applications, decisions about actual precision should go there too.



Examples
--------

Following are examples of the actual formulas that will appear using various
operators or other representations described above.

1. The matrix inversion formula:

   - Using op for object and ~op for element:
     
     b = a.I - a.I * u / (c.I + v/a*u) * v / a

     b = a.I - a.I * u * (c.I + v*a.I*u).I * v * a.I

   - Using op for element and ~op for object:
   
     b = a.I @- a.I @* u @/ (c.I @+ v@/a@*u) @* v @/ a

     b = a.I ~- a.I ~* u ~/ (c.I ~+ v~/a~*u) ~* v ~/ a

     b = a.I (-) a.I (*) u (/) (c.I (+) v(/)a(*)u) (*) v (/) a

     b = a.I [-] a.I [*] u [/] (c.I [+] v[/]a[*]u) [*] v [/] a

     b = a.I <-> a.I <*> u </> (c.I <+> v</>a<*>u) <*> v </> a

     b = a.I {-} a.I {*} u {/} (c.I {+} v{/}a{*}u) {*} v {/} a

   Observation: For linear algebra, using op for objects is preferable.

   Observation: The ~op type operators look better than (op) type in
   complicated formulas.

   - using named operators

     b = a.I @sub a.I @mul u @div (c.I @add v @div a @mul u) @mul v @div a

     b = a.I ~sub a.I ~mul u ~div (c.I ~add v ~div a ~mul u) ~mul v ~div a

   Observation: Named operators are not suitable for math formulas.


2. Plotting a 3d graph

   - Using op for object and ~op for element:

     z = sin(x~**2 ~+ y~**2);    plot(x,y,z)

   - Using op for element and ~op for object:

     z = sin(x**2 + y**2);   plot(x,y,z)

    Observation: Elementwise operations with broadcasting allow a much more
    efficient implementation than Matlab's.

    Observation: Swapping the semantics of op and ~op (by casting the
    objects) is often advantageous, as the ~op operators would then only
    appear in chunks of code where the other flavor dominates.


3. Using + and - with automatic broadcasting

     a = b - c;  d = a.T*a

   Observation: This would silently produce hard-to-trace bugs if one of b
   or c is a row vector while the other is a column vector.



Miscellaneous issues:
---------------------

1. Need for the ~+ ~- operators.  The objectwise + - are important because
   they provide important sanity checks, as in linear algebra.  The
   elementwise + - are important because they allow broadcasting, which is
   very efficient in applications.

2. Left division (solve).  For matrices, a*x is not necessarily equal to
   x*a.  The solution of a*x==b, denoted x=solve(a,b), is therefore
   different from the solution of x*a==b, denoted x=div(b,a).  There were
   discussions about finding a new symbol for solve.  [Background: Matlab
   uses b/a for div(b,a) and a\b for solve(a,b).]

   It is recognized that Python provides a better solution without requiring
   a new symbol: the inverse method .I can be made to be delayed so that
   a.I*b and b*a.I are equivalent to Matlab's a\b and b/a.  The
   implementation is quite simple and the resulting application code clean.
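The delayed-.I mechanics can be sketched as follows.  This is a toy sketch
with made-up names: Mat wraps a plain number so that solve() is just
division; in a real package solve would be a genuine linear solver:

```python
def solve(a, b):
    # placeholder for a real linear solver: returns x with a*x == b
    return b / a

class DelayedInverse:
    """Lightweight marker returned by .I; no inverse is ever formed."""
    def __init__(self, m):
        self.m = m
    def __mul__(self, other):      # a.I * b  ==  solve(a, b)   (Matlab's a\b)
        return solve(self.m, other)
    def __rmul__(self, other):     # b * a.I  ==  solve from the right (Matlab's b/a)
        return other / self.m

class Mat:
    def __init__(self, v):
        self.v = v
    @property
    def I(self):
        return DelayedInverse(self.v)

a = Mat(4.0)
print(a.I * 8.0)   # 2.0, computed as solve(4.0, 8.0)
print(8.0 * a.I)   # 2.0
```

The point is that a.I on its own costs nothing; the solver only runs when
the marker meets a multiplication.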

3. Power operator.  Python's use of a**b as pow(a,b) has two perceived
   disadvantages:
   - Most mathematicians are more familiar with a^b for this purpose.
   - It results in the long augmented assignment operator ~**=.
   However, this issue is distinct from the main issue here.

4. Additional multiplication operators.  Several forms of multiplication
   are used in (multi-)linear algebra.  Most can be seen as variations of
   multiplication in the linear algebra sense (such as the Kronecker
   product).  But two forms appear to be more fundamental: the outer
   product and the inner product.  However, their specification includes
   indices, which can be either

   - associated with the operator, or
   - associated with the objects.

   The latter (the Einstein notation) is used extensively on paper, and is
   also the easier one to implement.  By implementing a tensor-with-indices
   class, a general form of multiplication would cover both outer and inner
   products, and specialize to linear algebra multiplication as well.  The
   index rule can be defined as a class method, like:

     a = b.i(1,2,-1,-2) * c.i(4,-2,3,-1)   # a_ijkl = b_ijmn c_lnkm

   Therefore one objectwise multiplication is sufficient.

5. Bitwise operators.  Currently Python assigns six operators to bitwise
   operations: and (&), or (|), xor (^), complement (~), left shift (<<)
   and right shift (>>), with their own precedence levels.  This has some
   bearing on the new math operators in several ways:

   - The proposed new math operators use the symbol ~, which is the
     "bitwise not" operator.  This poses no compatibility problem but
     somewhat complicates the implementation.

   - The symbol ^ might be better used for pow than for bitwise xor.  But
     this depends on the future of the bitwise operators.  It does not
     immediately impact the proposed math operators.

   - The symbol | was suggested to be used for matrix solve.  But the new
     solution of using delayed .I is better in several ways.

   - The bitwise operators assign special syntactical and semantical
     structures to operations which could more consistently be regarded as
     elementwise lattice operators (see below).  Most of their usage could
     be replaced by a bitwise module with named functions.  Removing ~ as a
     standalone operator could also allow notations that link them to the
     logical operators (see below).  However, this issue is separate from
     the currently proposed extension.

6. Lattice operators.  It was suggested that similar operators be combined
   with the bitwise operators to represent lattice operations.  For
   example, ~| and ~& could represent "lattice or" and "lattice and".  But
   these can already be achieved by overloading the existing logical or
   bitwise operators.  On the other hand, these operations might be more
   deserving of infix operators than the built-in bitwise operations are
   (see below).

7. An alternative to the special operator names used in definitions,

   def "+"(a, b)      in place of       def __add__(a, b)

   This appears to require greater syntactical change, and would only be
   useful when arbitrary additional operators are allowed.

8. There was a suggestion to provide a copy operator :=, but this can
   already be done with a = b.copy().



Impact on possible future extensions:
-------------------------------------

More general extensions could follow from the current proposal.  Although
they would be distinct proposals, they might have syntactical or semantical
implications for each other.  It is prudent to ensure that the current
extension does not restrict any future possibilities.


1. Named operators. 

The newsgroup discussion made it generally clear that infix operators are a
scarce resource in Python, not only in numerical computation but in other
fields as well.  Several proposals and ideas were put forward that would
allow infix operators to be introduced in ways similar to named functions.

The idea of named infix operators is essentially this: Choose a meta
character, say @, so that for any identifier "opname", the combination
"@opname" would be a binary infix operator, and

a @opname b == opname(a,b)

Other representations mentioned include .name ~name~ :name: (.name) %name%
and similar variations.  The pure bracket based operators cannot be used
this way.
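Although @opname is not available, the flavor of named infix operators can
be approximated in current Python through operator overloading, using a
wrapper class with | as the delimiter.  This is only a sketch of the idea,
and dot is a made-up operator name:

```python
class Infix:
    """Wrap a two-argument function so it can be used as a |op| b."""
    def __init__(self, f):
        self.f = f
    def __ror__(self, left):
        # left | op  ->  partially applied operator
        return Infix(lambda right: self.f(left, right))
    def __or__(self, right):
        # partially applied op | right  ->  apply it
        return self.f(right)

dot = Infix(lambda a, b: sum(x * y for x, y in zip(a, b)))

print([1, 2, 3] |dot| [4, 5, 6])   # 32, i.e. dot([1,2,3], [4,5,6])
```

Like the @opname proposal, all such operators share one precedence level
(that of |), which is part of why a real syntax extension was wanted.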

This requires a change in the parser to recognize @opname and parse it into
the same structure as a function call.  The precedence of all these
operators would have to be fixed at one level, so the implementation would
differ from the additional math operators, which keep the precedence of the
existing math operators.

The currently proposed extension does not limit possible future extensions
of this form in any way.


2. More general symbolic operators.

One additional form of future extension is to use meta character and
operator symbols (symbols that cannot be used in syntactical structures
other than operators).  Suppose @ is the meta character.  Then

      a + b,    a @+ b,    a @@+ b,  a @+- b

would all be operators with a hierarchy of precedence, defined by

   def "+"(a, b)
   def "@+"(a, b)
   def "@@+"(a, b)
   def "@+-"(a, b)

One advantage compared with named operators is greater flexibility for
precedences, based on either the meta character or the ordinary operator
symbols.  This also allows operator composition.  The disadvantage is that
they are more like "line noise".  In any case the current proposal does not
affect this future possibility.

These kinds of future extensions may not be necessary when Unicode becomes
generally available.


3. Object/element dichotomy for other types of objects.

The distinction between objectwise and elementwise operations is meaningful
in other contexts as well, where an object can be conceptually regarded as
a collection of homogeneous elements.  Several examples are listed here:
   
   - List arithmetics
   
      [1, 2] + [3, 4]        # [1, 2, 3, 4]
      [1, 2] ~+ [3, 4]       # [4, 6]
                             
      ['a', 'b'] * 2         # ['a', 'b', 'a', 'b']
      'ab' * 2               # 'abab'
      ['a', 'b'] ~* 2        # ['aa', 'bb']
      [1, 2] ~* 2            # [2, 4]

     It is also consistent with the Cartesian product

      [1,2]*[3,4]            # [(1,3),(1,4),(2,3),(2,4)]
   
   - Tuple generation
   
      [1, 2, 3], [4, 5, 6]   # ([1,2, 3], [4, 5, 6])
      [1, 2, 3]~,[4, 5, 6]   # [(1,4), (2, 5), (3,6)]
   
      This has the same effect as the proposed zip function.
   
   - Bitwise operations (regarding an integer as a collection of bits, and
      removing the dissimilarity between bitwise and logical operators)
   
      5 and 6       # 6
      5 or 6        # 5
                    
      5 ~and 6      # 4
      5 ~or 6       # 7
   
   - Elementwise format operator (with broadcasting)
   
      a = [1,2,3,4,5]
      print ["%5d "] ~% a     # print ("%5s "*len(a)) % tuple(a)
      a = [[1,2],[3,4]]
      print ["%5d "] ~~% a
   
   - Using ~ as generic elementwise meta-character to replace map
   
      ~f(a, b)      # map(f, a, b)
      ~~f(a, b)     # map(lambda *x:map(f, *x), a, b)
   
      More generally,
   
      def ~f(*x): return map(f, *x)
      def ~~f(*x): return map(~f, *x)
      ...

    - Rich comparison

      [1, 2, 3, 4]  ~< [4, 3, 2, 1]  # [1, 1, 0, 0]
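The ~f examples above, as well as the ~+ and ~< list examples, can already
be emulated with a small map-based helper.  This is a sketch; elementwise
is a made-up name, written in modern Python syntax where map returns an
iterator:

```python
import operator

def elementwise(f):
    """Turn f into a function applied element-by-element (the ~f idea)."""
    def wrapped(*seqs):
        return list(map(f, *seqs))
    return wrapped

print(elementwise(operator.add)([1, 2], [3, 4]))   # [4, 6], i.e. ~+
print(elementwise(operator.lt)([1, 2, 3, 4], [4, 3, 2, 1]))
                                                   # [True, True, False, False], i.e. ~<

# Two levels down (the ~~f idea): apply elementwise twice.
ew2 = elementwise(elementwise(operator.mul))
print(ew2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))     # [[5, 12], [21, 32]]
```

The proposal's point stands: the function spelling works, but it loses the
infix readability that ~f would give.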
   
   There are probably many other similar situations.  This general approach
   seems well suited for most of them, in place of several separate
   proposals for each (parallel and cross iteration, list comprehension,
   rich comparison, and some others).

   Of course, the semantics of "elementwise" depends on the application.
   For example, an element of a matrix is two levels down from the
   list-of-lists point of view.  In any case, the current proposal will not
   negatively impact future possibilities of this nature.

Note that this section discusses compatibility of the proposed extension
with possible future extensions.  The desirability or compatibility of
those other extensions themselves is specifically not considered here.




-- 
Huaiyu Zhu                       hzhu at users.sourceforge.net
Matrix for Python Project        http://MatPy.sourceforge.net 





From trentm at ActiveState.com  Fri Aug 11 23:30:31 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 11 Aug 2000 14:30:31 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
Message-ID: <20000811143031.A13790@ActiveState.com>

These files (PCbuild/*.dsw PCbuild/*.dsp) are just normal text files. Why,
then, do we treat them as binary files?

Would it not be preferable to have those files be handled like normal text
files, i.e. check them out on Unix and they use Unix line terminators, check
them out on Windows and they use DOS line terminators?

This way you are using the native line terminator format, and the text
processing tools you use on them are less likely to screw them up. (Anyone
see my motivation?)

Does anybody see any problems treating them as text files? And, if not, who
knows how to get rid of the '-kb' sticky tag on those files?

Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From gmcm at hypernet.com  Fri Aug 11 23:34:54 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 17:34:54 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of d
In-Reply-To: <200008112205.RAA01218@cj20424-a.reston1.va.home.com>
References: Your message of "Fri, 11 Aug 2000 13:04:26 -0400."             <1246109027-124413737@hypernet.com> 
Message-ID: <1246092799-125389828@hypernet.com>

[Guido]
> OK.  So send() can do a partial write, but only on a stream
> connection.  And most standard library code doesn't check for
> that condition, nor does (probably) much other code that used the
> standard library as an example.  Worse, it seems that on some
> platforms send() *never* does a partial write (I couldn't
> reproduce it on Red Hat 6.1 Linux), so even stress testing may
> not reveal the lurking problem.

I'm quite sure you can force it with a non-blocking socket (on 
RH 5.2  64K blocks did it - but that's across a 10baseT 
ethernet connection).
 
> Possible solutions:
> 
> 1. Do nothing.  Pro: least work.  Con: subtle bugs remain.

Yes, but they're application-level bugs, even if they're in the 
std lib. They're not socket-support level bugs.
 
> 2. Fix all code that's broken in the standard library, and try to
> encourage others to fix their code.  Book authors need to be
> encouraged to add a warning.  Pro: most thorough.  Con: hard to
> fix every occurrence, especially in 3rd party code.

As far as I can tell, Linux and Windows can't fail with the std 
lib code (it's all blocking sockets). Sam says BSDI could fail, 
and as I recall HPUX could too.
 
> 3. Fix the socket module to raise an exception when less than the
> number of bytes sent occurs.  Pro: subtle bug exposed when it
> happens.  Con: breaks code that did the right thing!
> 
> 4. Fix the socket module to loop back on a partial send to send
> the remaining bytes.  Pro: no more short writes.  Con: what if
> the first few send() calls succeed and then an error is returned?
>  Note: code that checks for partial writes will be redundant!

If you can exempt non-blocking sockets, either 3 or 4 
(preferably 4) is OK. But if you can't exempt non-blocking 
sockets, I'll scream bloody murder. It would mean you could 
not write high performance socket code in Python (which it 
currently is very good for). For one thing, you'd break Medusa.
 
> I'm personally in favor of (4), despite the problem with errors
> after the first call.

The sockets HOWTO already documents the problem. Too 
bad I didn't write it before that std lib code got written <wink>.

I still prefer leaving it alone and telling people to use makefile if 
they can't deal with it. I'll vote +0 on 4 if and only if it exempts 
nonblocking sockets.
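For concreteness, option 4's loop-on-partial-send amounts to something like
the following sketch (send_all and FakeSock are illustrative names; the
exemption for non-blocking sockets is deliberately left out here):

```python
def send_all(sock, data):
    """Keep calling send() until every byte is written (option 4)."""
    total = 0
    while total < len(data):
        n = sock.send(data[total:])
        if n == 0:
            raise IOError("connection broken")
        total += n
    return total

class FakeSock:
    """Stand-in socket whose send() does short writes of at most 3 bytes."""
    def __init__(self):
        self.buf = b""
    def send(self, data):
        chunk = data[:3]
        self.buf += chunk
        return len(chunk)

s = FakeSock()
print(send_all(s, b"hello world"), s.buf)  # 11 b'hello world'
```

On a non-blocking socket, send() returning less than requested is normal
operation, which is why that case must be exempted from any such loop.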

- Gordon



From nowonder at nowonder.de  Sat Aug 12 01:48:20 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Fri, 11 Aug 2000 23:48:20 +0000
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <399490C4.F234D68A@nowonder.de>

Bill Tutt wrote:
> 
> This is an alternative approach that we should certainly consider. We could
> use ANTLR (www.antlr.org) as our parser generator, and have it generate Java
> for JPython, and C++ for CPython.  This would be a good chunk of work, and
> it's something I really don't have time to pursue. I don't even have time to
> pursue the idea about moving keyword recognition into the lexer.

<disclaimer val="I have only used ANTLR to generate Java code, and not for
 a parser but for a Java source code checker that tries to catch possible
 runtime errors.">

ANTLR is a great tool. Unfortunately - although trying hard to change
it this morning in order to suppress keyword lookup in certain places -
I don't know anything about the interface between Python and its
parser. Is there some documentation on that (or can some divine deity
guide me with a few hints where to look in Parser/*)?

> I'm just not sure if you want to bother introducing C++ into the Python
> codebase solely to only have one parser for CPython and JPython.

Which compilers/platforms would this affect? VC++/Windows
won't be a problem, I guess; gcc mostly comes with g++,
but not always as a default. Probably more problematic.

don't-know-about-VMS-and-stuff-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From guido at beopen.com  Sat Aug 12 00:56:23 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 17:56:23 -0500
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:30:31 MST."
             <20000811143031.A13790@ActiveState.com> 
References: <20000811143031.A13790@ActiveState.com> 
Message-ID: <200008112256.RAA01675@cj20424-a.reston1.va.home.com>

> These files (PCbuild/*.dsw PCbuild/*.dsp) are just normal text files. Why,
> then, do we treat them as binary files.

DevStudio doesn't (or at least 5.x didn't) like it if not all lines
used CRLF terminators.

> Would it not be preferable to have those files be handled like a normal text
> files, i.e. check it out on Unix and it uses Unix line terminators, check it
> out on Windows and it uses DOS line terminators.

I think I made them binary during the period when I was mounting the
Unix source directory on a Windows machine.  I don't do that any more
and I don't know anyone who does, so I think it's okay to change.

> This way you are using the native line terminator format and text processing
> tools you use on them a less likely to screw them up. (Anyone see my
> motivation?).
> 
> Does anybody see any problems treating them as text files? And, if not, who
> knows how to get rid of the '-kb' sticky tag on those files.

cvs admin -kkv file ...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sat Aug 12 01:00:29 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 18:00:29 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of d
In-Reply-To: Your message of "Fri, 11 Aug 2000 17:34:54 -0400."
             <1246092799-125389828@hypernet.com> 
References: Your message of "Fri, 11 Aug 2000 13:04:26 -0400." <1246109027-124413737@hypernet.com>  
            <1246092799-125389828@hypernet.com> 
Message-ID: <200008112300.SAA01726@cj20424-a.reston1.va.home.com>

> > 4. Fix the socket module to loop back on a partial send to send
> > the remaining bytes.  Pro: no more short writes.  Con: what if
> > the first few send() calls succeed and then an error is returned?
> >  Note: code that checks for partial writes will be redundant!
> 
> If you can exempt non-blocking sockets, either 3 or 4 
> (preferably 4) is OK. But if you can't exempt non-blocking 
> sockets, I'll scream bloody murder. It would mean you could 
> not write high performance socket code in Python (which it 
> currently is very good for). For one thing, you'd break Medusa.

Of course.  Don't worry.

> > I'm personally in favor of (4), despite the problem with errors
> > after the first call.
> 
> The sockets HOWTO already documents the problem. Too 
> bad I didn't write it before that std lib code got written <wink>.
> 
> I still prefer leaving it alone and telling people to use makefile if 
> they can't deal with it. I'll vote +0 on 4 if and only if it exempts 
> nonblocking sockets.

Understood.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From fdrake at beopen.com  Sat Aug 12 00:21:18 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 18:21:18 -0400 (EDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <399490C4.F234D68A@nowonder.de>
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
	<399490C4.F234D68A@nowonder.de>
Message-ID: <14740.31838.336790.710005@cj42289-a.reston1.va.home.com>

Peter Schneider-Kamp writes:
 > parser. Is there some documentation on that (or can some divine deity
 > guide me with a few hints where to look in Parser/*)?

  Not that I'm aware of!  Feel free to write up any overviews you
think appropriate, and it can become part of the standard
documentation or be a README in the Parser/ directory.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Sat Aug 12 02:59:22 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 11 Aug 2000 20:59:22 -0400
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <20000811143031.A13790@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFKGPAA.tim_one@email.msn.com>

[Trent Mick]
> These files (PCbuild/*.dsw PCbuild/*.dsp) are just normal text files.
> Why, then, do we treat them as binary files.
>
> Would it not be preferable to have those files be handled like
> a normal text files, i.e. check it out on Unix and it uses Unix
> line terminators, check it out on Windows and it uses DOS line
> terminators.
>
> This way you are using the native line terminator format and text
> processing tools you use on them a less likely to screw them up.
> (Anyone see my motivation?).

Not really.  They're not human-editable!  Leave 'em alone.  Keeping them in
binary mode is a clue to people that they aren't *supposed* to go mucking
with them via text processing tools.

> Does anybody see any problems treating them as text files? And,
> if not, who knows how to get rid of the '-kb' sticky tag on those
> files.

Well, whatever you did didn't work.  I'm dead in the water on Windows now --
VC6 refuses to open the new & improved .dsw and .dsp files.  I *imagine*
it's because they've got Unix line-ends now, but haven't yet checked.  Can
you fix it or back it out?





From skip at mojam.com  Sat Aug 12 03:07:31 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 11 Aug 2000 20:07:31 -0500 (CDT)
Subject: [Python-Dev] list comprehensions
Message-ID: <14740.41811.590487.13187@beluga.mojam.com>

I believe the latest update to the list comprehensions patch by Ping
resolved the last concern the BDFL(*) had.  As the owner of the patch, is it
my responsibility to check it in, or do I need to assign it to Guido for
final dispensation?

Skip

(*) Took me a week or so to learn what BDFL meant.  :-) I tried a lot of
"somewhat inaccurate" expansions before seeing it expanded in a message from
Barry...



From esr at thyrsus.com  Sat Aug 12 04:50:17 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Fri, 11 Aug 2000 22:50:17 -0400
Subject: [Python-Dev] Stop the presses!
Message-ID: <20000811225016.A18449@thyrsus.com>

The bad news: I've found another reproducible core-dump bug in
Python-2.0 under Linux.  Actually I found it in 1.5.2 while making
some changes to CML2, and just verified that the CVS snapshot of
Python 2.0 bombs identically.

The bad news II: it really seems to be in the Python core, not one of
the extensions like Tkinter.  My curses and Tk front ends both
segfault in the same place, the guard of an innocuous-looking if
statement.

The good news: the patch to go from code-that-runs to code-that-bombs
is pretty small and clean.  I suspect anybody who really knows the ceval
internals will be able to use it to nail this bug fairly quickly.

Damn, seems like I found the core dump in Pickle just yesterday.  This
is getting to be a habit I don't enjoy much :-(.

I'm putting together a demonstration package now.  Stay tuned; I'll 
ship it tonight.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"One of the ordinary modes, by which tyrants accomplish their purposes
without resistance, is, by disarming the people, and making it an
offense to keep arms."
        -- Constitutional scholar and Supreme Court Justice Joseph Story, 1840



From ping at lfw.org  Sat Aug 12 04:56:50 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 11 Aug 2000 19:56:50 -0700 (PDT)
Subject: [Python-Dev] Stop the presses!
In-Reply-To: <20000811225016.A18449@thyrsus.com>
Message-ID: <Pine.LNX.4.10.10008111956070.2615-100000@localhost>

On Fri, 11 Aug 2000, Eric S. Raymond wrote:
> The good news: the patch to go from code-that-runs to code-that-bombs
> is pretty small and clean.  I suspect anybody who really knows the ceval
> internals will be able to use it to nail this bug fairly quickly.
[...]
> I'm putting together a demonstration package now.  Stay tuned; I'll 
> ship it tonight.

Oooh, i can't wait.  How exciting!  Post it, post it!  :)


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu




From fdrake at beopen.com  Sat Aug 12 05:30:23 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 23:30:23 -0400 (EDT)
Subject: [Python-Dev] list comprehensions
In-Reply-To: <14740.41811.590487.13187@beluga.mojam.com>
References: <14740.41811.590487.13187@beluga.mojam.com>
Message-ID: <14740.50383.386575.806754@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:

 > I believe the latest update to the list comprehensions patch by
 > Ping resolved the last concert the BDFL(*) had.  As the owner of
 > the patch is it my responsibility to check it in or do I need to
 > assign it to Guido for final dispensation.

  Given the last comment added to the patch, check it in and close the
patch.  Then finish the PEP so we don't have to explain it over and
over and ...  ;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From esr at thyrsus.com  Sat Aug 12 05:56:33 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Fri, 11 Aug 2000 23:56:33 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
Message-ID: <20000811235632.A19358@thyrsus.com>

Here are the directions to reproduce the core dump.

1. Download and unpack CML2 version 0.7.6 from 
   <http://www.tuxedo.org/~esr/kbuild/>.  Change directory into it.

2. Do `cmlcompile.py kernel-rules.cml' to generate a pickled rulebase.

3. Run `make xconfig'.  Ignore the error message about the arch defconfig.

4. Set NETDEVICES on the main menu to 'y'.
5. Select the "Network Device Configuration" menu that appears below.
6. Set PPP to 'y'.
7. Select the "PPP options" menu that appears below it.
8. Set PPP_BSDCOMP to 'y'.

9. Observe and dismiss the pop-up window.  Quit the configurator using the
   "File" menu on the menu bar.

10. Now apply the attached patch.

11. Repeat steps 2-10.  

12. Observe the core dump.  If you look near cmlsystem.py:770, you'll see
    that the patch inserted two print statements that bracket the apparent
    point of the core dump.

13. To verify that this core dump is neither a Tkinter nor an ncurses problem,
    run `make menuconfig'.

14. Repeat steps 2-8.  To set symbols in the curses interface, use the arrow
    keys to select each one and type "y".  To select a menu, use the arrow
    keys and type a space or Enter when the selection bar is over the entry.

15. Observe the core dump at the same spot.

This bug bites both a stock 1.5.2 and today's CVS snapshot of 2.0.

--- cml.py	2000/08/12 03:21:40	1.97
+++ cml.py	2000/08/12 03:25:45
@@ -111,6 +111,21 @@
         res = res + self.dump()
         return res[:-1] + "}"
 
+class Requirement:
+    "A requirement, together with a message to be shown if it's violated."
+    def __init__(self, wff, message=None):
+        self.predicate = wff
+        self.message = message
+
+    def str(self):
+        return display_expression(self.predicate)
+
+    def __repr__(self):
+        if self.message:
+            return self.message
+        else:
+            return str(self)
+
 # This describes an entire configuration.
 
 class CMLRulebase:
--- cmlcompile.py	2000/08/10 16:22:39	1.131
+++ cmlcompile.py	2000/08/12 03:24:31
@@ -12,7 +12,7 @@
 
 _keywords = ('symbols', 'menus', 'start', 'unless', 'suppress',
 	    'dependent', 'menu', 'choices', 'derive', 'default',
-	    'range', 'require', 'prohibit', 'private', 'debug',
+	    'range', 'require', 'prohibit', 'explanation', 'private', 'debug',
 	    'helpfile', 'prefix', 'banner', 'icon', 'condition',
 	    'trits', 'on', 'warndepend')
 
@@ -432,7 +432,14 @@
             expr = parse_expr(input)
             if leader.type == "prohibit":
                 expr = ('==', expr, cml.n.value)
-	    requirements.append(expr)	    
+            msg = None
+            #next = input.lex_token()
+            #if next.type != 'explanation':
+            #    input.push_token(next)
+            #    continue
+            #else:
+            #    msg = input.demand("word")
+	    requirements.append(cml.Requirement(expr, msg))	    
 	    bool_tests.append((expr, input.infile, input.lineno))
 	elif leader.type == "default":
 	    symbol = input.demand("word")
@@ -746,7 +753,7 @@
             entry.visibility = resolve(entry.visibility)
 	if entry.default:
 	    entry.default = resolve(entry.default)
-    requirements = map(resolve, requirements)
+    requirements = map(lambda x: cml.Requirement(resolve(x.predicate), x.message), requirements)
     if bad_symbols:
 	sys.stderr.write("cmlcompile: %d symbols could not be resolved:\n"%(len(bad_symbols),))
 	sys.stderr.write(`bad_symbols.keys()` + "\n")
@@ -868,7 +875,7 @@
     # rule file are not consistent, it's not likely the user will make
     # a consistent one.
     for wff in requirements:
-	if not cml.evaluate(wff, debug):
+	if not cml.evaluate(wff.predicate, debug):
 	    print "cmlcompile: constraint violation:", wff
 	    errors = 1
 
--- cmlsystem.py	2000/07/25 04:24:53	1.98
+++ cmlsystem.py	2000/08/12 03:29:21
@@ -28,6 +28,7 @@
     "INCAUTOGEN":"/*\n * Automatically generated, don't edit\n */\n",
     "INCDERIVED":"/*\n * Derived symbols\n */\n",
     "ISNOTSET":"# %s is not set\n",
+    "NOTRITS":"Trit values are not currently allowed.",
     "RADIOINVIS":"    Query of choices menu %s elided, button pressed",
     "READING":"Reading configuration from %s",
     "REDUNDANT":"    Redundant assignment forced by %s", 
@@ -100,10 +101,10 @@
         "Assign constraints to their associated symbols."
         for entry in self.dictionary.values():
             entry.constraints = []
-        for wff in self.constraints:
-            for symbol in cml.flatten_expr(wff):
-                if not wff in symbol.constraints:
-                    symbol.constraints.append(wff)
+        for requirement in self.constraints:
+            for symbol in cml.flatten_expr(requirement.predicate):
+                if not requirement.predicate in symbol.constraints:
+                    symbol.constraints.append(requirement)
         if self.debug:
             cc = dc = tc = 0
             for symbol in self.dictionary.values():
@@ -436,8 +437,8 @@
         if symbol.constraints:
             self.set_symbol(symbol, value)
             for constraint in symbol.constraints:
-                if not cml.evaluate(constraint, self.debug):
-                    self.debug_emit(1, self.lang["CONSTRAINT"] % (value, symbol.name, constraint))
+                if not cml.evaluate(constraint.predicate, self.debug):
+                    self.debug_emit(1, self.lang["CONSTRAINT"] % (value, symbol.name, str(constraint)))
                     self.rollback()
                     return 0
             self.rollback()
@@ -544,7 +545,7 @@
         # be unfrozen.  Simplify constraints to remove newly frozen variables.
         # Then rerun optimize_constraint_access.
         if freeze:
-            self.constraints = map(lambda wff, self=self: self.simplify(wff), self.constraints)
+            self.constraints = map(lambda requirement, self=self: cml.Requirement(self.simplify(requirement.predicate), requirement.message), self.constraints)
             self.optimize_constraint_access()
             for entry in self.dictionary.values():
                 if self.bindcheck(entry, self.newbindings) and entry.menu and entry.menu.type=="choices":
@@ -559,7 +560,7 @@
         violations = []
         # First, check the explicit constraints.
         for constraint in self.constraints:
-            if not cml.evaluate(constraint, self.debug):
+            if not cml.evaluate(constraint.predicate, self.debug):
                 violations.append(constraint);
                 self.debug_emit(1, self.lang["FAILREQ"] % (constraint,))
         # If trits are disabled, any variable having a trit value is wrong.
@@ -570,7 +571,7 @@
                     mvalued = ('and', ('!=', entry,cml.m), mvalued)
             if mvalued != cml.y:
                mvalued = self.simplify(mvalued)
-               violations.append(('implies', ('==', self.trit_tie, cml.n), mvalued))
+               violations.append(cml.Requirement(('implies', ('==', self.trit_tie, cml.n), mvalued), self.lang["NOTRITS"]))
         return violations
 
     def set_symbol(self, symbol, value, source=None):
@@ -631,10 +632,10 @@
         dups = {}
         relevant = []
         for csym in touched:
-            for wff in csym.constraints:
-                if not dups.has_key(wff):
-                    relevant.append(wff)
-                    dups[wff] = 1
+            for requirement in csym.constraints:
+                if not dups.has_key(requirement.predicate):
+                    relevant.append(requirement.predicate)
+                    dups[requirement.predicate] = 1
         # Now loop through the constraints, simplifying out assigned
         # variables and trying to freeze more variables each time.
         # The outer loop guarantees that as long as the constraints
@@ -765,7 +766,9 @@
                     else:
                         self.set_symbol(left, cml.n.value, source)
                         return 1
+                print "Just before the core-dump point"
                 if right_mutable and left == cml.n.value:
+                    print "Just after the core-dump point"
                     if rightnonnull == cml.n.value:
                         self.debug_emit(1, self.lang["REDUNDANT"] % (wff,))
                         return 0

End of diffs,

-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders, give
orders, cooperate, act alone, solve equations, analyze a new problem,
pitch manure, program a computer, cook a tasty meal, fight efficiently,
die gallantly. Specialization is for insects.
	-- Robert A. Heinlein, "Time Enough for Love"



From nhodgson at bigpond.net.au  Sat Aug 12 06:16:54 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Sat, 12 Aug 2000 14:16:54 +1000
Subject: [Python-Dev] Winreg update
References: <3993FEC7.4E38B4F1@prescod.net>
Message-ID: <045901c00414$27a67010$8119fea9@neil>

> .... It also seems that there are a lot of people (let's
> call them "back seat coders") who have vague ideas of what they want but
> don't want to spend a bunch of time in a long discussion about registry
> arcana. Therefore I am endeavouring to make it as easy and fast to
> contribute to the discussion as possible.

   I think a lot of the registry-using people are unwilling to spend too
much energy on this because, while it looks useless, it's not really going to
be a problem so long as the low-level module is available.

> If you're one of the people who has asked for winreg in the core then
> you should respond. It isn't (IMO) sufficient to put in a hacky API to
> make your life easier. You need to give something to get something. You
> want windows registry support in the core -- fine, let's do it properly.

   Hacky API only please.

   The registry is just not important enough to have this much attention or
work.

> All you need to do is read this email and comment on whether you agree
> with the overall principle and then give your opinion on fifteen
> possibly controversial issues. The "overall principle" is to steal
> shamelessly from Microsoft's new C#/VB/OLE/Active-X/CRL API instead of
> innovating for Python. That allows us to avoid starting the debate from
> scratch. It also eliminates the feature that Mark complained about
> (which was a Python-specific innovation).

   The Microsoft.Win32.Registry* API appears to be a hacky legacy API to me.
It's there for compatibility during the transition to the
System.Configuration API. Read the blurb for ConfigManager to understand the
features of System.Configuration. It's all based on XML files. What a
surprise.

   Neil




From ping at lfw.org  Sat Aug 12 06:52:54 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 11 Aug 2000 21:52:54 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <20000811235632.A19358@thyrsus.com>
Message-ID: <Pine.LNX.4.10.10008112149280.2615-100000@localhost>

On Fri, 11 Aug 2000, Eric S. Raymond wrote:
> Here are the directions to reproduce the core dump.

I have successfully reproduced the core dump.


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu




From ping at lfw.org  Sat Aug 12 06:57:02 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 11 Aug 2000 21:57:02 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008112149280.2615-100000@localhost>
Message-ID: <Pine.LNX.4.10.10008112156180.2615-100000@localhost>

On Fri, 11 Aug 2000, Ka-Ping Yee wrote:
> I have successfully reproduced the core dump.

I'm investigating.  Top of the stack looks like:

#0  0x40061e39 in __pthread_lock (lock=0x0, self=0x40067f20) at spinlock.c:41
#1  0x4005f8aa in __pthread_mutex_lock (mutex=0xbfe0277c) at mutex.c:92
#2  0x400618cb in __flockfile (stream=0xbfe02794) at lockfile.c:32
#3  0x400d2955 in _IO_vfprintf (s=0xbfe02794, 
    format=0x80d1500 "'%.50s' instance has no attribute '%.400s'", 
    ap=0xbfe02a54) at vfprintf.c:1041
#4  0x400e00b3 in _IO_vsprintf (string=0xbfe02850 "?/??", 
    format=0x80d1500 "'%.50s' instance has no attribute '%.400s'", 
    args=0xbfe02a54) at iovsprintf.c:47
#5  0x80602c5 in PyErr_Format (exception=0x819783c, 
    format=0x80d1500 "'%.50s' instance has no attribute '%.400s'")
    at errors.c:377
#6  0x806eac4 in instance_getattr1 (inst=0x84ecdd4, name=0x81960a8)
    at classobject.c:594
#7  0x806eb97 in instance_getattr (inst=0x84ecdd4, name=0x81960a8)
    at classobject.c:639
#8  0x807b445 in PyObject_GetAttrString (v=0x84ecdd4, name=0x80d306b "__str__")
    at object.c:595
#9  0x807adf8 in PyObject_Str (v=0x84ecdd4) at object.c:291
#10 0x8097d1e in builtin_str (self=0x0, args=0x85adc3c) at bltinmodule.c:2034
#11 0x805a490 in call_builtin (func=0x81917e0, arg=0x85adc3c, kw=0x0)
    at ceval.c:2369


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu




From tim_one at email.msn.com  Sat Aug 12 08:29:42 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 02:29:42 -0400
Subject: [Python-Dev] list comprehensions
In-Reply-To: <14740.41811.590487.13187@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFPGPAA.tim_one@email.msn.com>

[Skip Montanaro]
> I believe the latest update to the list comprehensions patch by Ping
> resolved the last concern the BDFL(*) had.  As the owner of the
> patch, is it my responsibility to check it in, or do I need to assign
> it to Guido for final dispensation?

As the owner of the listcomp PEP, I both admonish you to wait until the PEP
is complete, and secretly encourage you to check it in anyway (unlike most
PEPs, this one is pre-approved no matter what I write <0.5 wink> -- better
to get the code out there now!  if anything changes due to the PEP, should
be easy to twiddle).

acting-responsibly-despite-appearances-ly y'rs  - tim





From tim_one at email.msn.com  Sat Aug 12 09:32:17 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 03:32:17 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <20000811235632.A19358@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>

[Eric S. Raymond, with a lot of code that dies in a lot of pain]

Eric, as I'm running on a Windows laptop right now, there's not much I can
do to try to run this code.  However, something struck me in your patch "by
eyeball", and here's a self-contained program that crashes under Windows:

# This is esr's new class.
class Requirement:
    "A requirement, together with a message to be shown if it's violated."
    def __init__(self, wff, message=None):
        self.predicate = wff
        self.message = message

    def str(self):
        return display_expression(self.predicate)

    def __repr__(self):
        if self.message:
            return self.message
        else:
            return str(self)

# This is my driver.
r = Requirement("trust me, I'm a wff!")
print r


Could that be related to your problem?  I think you really wanted to name
"str" as "__str__" in this class (or if not, comment in extreme detail why
you want to confuse the hell out of the reader <wink>).  As is, my

    print r

attempts to look up r.__str__, which isn't found, so Python falls back
to using r.__repr__.  That *is* found, but r.message is None, so
Requirement.__repr__ executes

    return str(self)

And then we go thru the whole "no __str__" -> "try __repr__" -> "message is
None" -> "return str(self)" business again, and end up with unbounded
recursion.  The top of the stacktrace Ping posted *does* show that the code
is failing to find a "__str__" attr, so that's consistent with the scenario
described here.

If this is the problem, note that ways to detect such kinds of unbounded
recursion have been discussed here within the last week.  You're a clever
enough fellow that I have to suspect you concocted this test case as a way
to support the more extreme of those proposals without just saying "+1"
<wink>.





From ping at lfw.org  Sat Aug 12 10:09:53 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Sat, 12 Aug 2000 01:09:53 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008112156180.2615-100000@localhost>
Message-ID: <Pine.LNX.4.10.10008120107140.2615-100000@localhost>

On Fri, 11 Aug 2000, Ka-Ping Yee wrote:
> On Fri, 11 Aug 2000, Ka-Ping Yee wrote:
> > I have successfully reproduced the core dump.
> 
> I'm investigating.  Top of the stack looks like:

This chunk of stack repeats lots and lots of times.
The problem is due to infinite recursion in your __repr__ routine:

    class Requirement:
        "A requirement, together with a message to be shown if it's violated."
        def __init__(self, wff, message=None):
            self.predicate = wff
            self.message = message

        def str(self):
            return display_expression(self.predicate)

        def __repr__(self):
            if self.message:
                return self.message
            else:
                return str(self)

Notice that Requirement.__repr__ calls str(self), which triggers
Requirement.__repr__ again because there is no __str__ method.

If i change "def str(self)" to "def __str__(self)", the problem goes
away and everything works properly.

With a reasonable stack depth limit in place, this would produce
a run-time error rather than a segmentation fault.
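For reference, here is the corrected class with the rename applied.  Note
that display_expression() lives in cml.py; a trivial stand-in is used here
only so the sketch runs on its own:

```python
class Requirement:
    "A requirement, together with a message to be shown if it's violated."
    def __init__(self, wff, message=None):
        self.predicate = wff
        self.message = message

    def __str__(self):   # was "def str(self)" -- never found by str()
        return display_expression(self.predicate)

    def __repr__(self):
        if self.message:
            return self.message
        else:
            return str(self)

# Stand-in for the real display_expression() helper in cml.py.
def display_expression(predicate):
    return repr(predicate)

# Tim's driver now terminates: __repr__ -> str(self) -> __str__.
print(repr(Requirement("trust me, I'm a wff!")))
```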


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu




From ping at lfw.org  Sat Aug 12 10:22:40 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Sat, 12 Aug 2000 01:22:40 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>
Message-ID: <Pine.LNX.4.10.10008120117500.2615-100000@localhost>

On Sat, 12 Aug 2000, Tim Peters wrote:
> Could that be related to your problem?  I think you really wanted to name
> "str" as "__str__" in this class

Oops.  I guess i should have just read the code before going
through the whole download procedure.

Uh, yeah.  What he said.  :)  That wise man with the moustache over there.


One thing i ran into as a result of trying to run it under the
debugger, though: turning on cursesmodule was slightly nontrivial.
There's no cursesmodule.c; it's _cursesmodule.c instead; but
Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
wasn't sufficient; i had to edit and insert the underscores by hand
to get curses to work.


-- ?!ng




From effbot at telia.com  Sat Aug 12 11:12:19 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 11:12:19 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of data sent.
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF>              <20000811162109.I17171@xs4all.nl>  <200008111556.KAA05068@cj20424-a.reston1.va.home.com>
Message-ID: <00d301c0043d$7eb0b540$f2a6b5d4@hagrid>

guido wrote:
> > Indeed. I didn't actually check the story, since Guido was apparently
> > convinced by its validity.
> 
> I wasn't convinced!  I wrote "is this true?" in my message!!!
> 
> > I was just operating under the assumption that
> > send() did behave like write(). I won't blindly believe Guido anymore ! :)
> 
> I believe they do behave the same: in my mind, write() doesn't write
> fewer bytes than you tell it either!  (Except maybe to a tty device
> when interrupted by a signal???)

SUSv2 again:

    If a write() requests that more bytes be written than there
    is room for (for example, the ulimit or the physical end of a
    medium), only as many bytes as there is room for will be
    written. For example, suppose there is space for 20 bytes
    more in a file before reaching a limit. A write of 512 bytes
    will return 20. The next write of a non-zero number of bytes
    will give a failure return (except as noted below)  and the
    implementation will generate a SIGXFSZ signal for the thread. 

    If write() is interrupted by a signal before it writes any data,
    it will return -1 with errno set to [EINTR]. 

    If write() is interrupted by a signal after it successfully writes
    some data, it will return the number of bytes written. 

sockets are an exception:

    If fildes refers to a socket, write() is equivalent to send() with
    no flags set.

fwiw, if "send" may send less than the full buffer in blocking
mode on some platforms (despite what the specification implies),
it's quite interesting that *nobody* has ever noticed before...
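For the record, the defensive loop the bug report asks for is short; this
is a sketch of the idea being discussed (compare the socket object's own
sendall() method), not the actual patch:

```python
import socket

def send_all(sock, data):
    """Call send() repeatedly until every byte has been written.
    send() may write fewer bytes than requested, so loop over the
    remainder; a zero return means the connection is gone."""
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise socket.error("connection broken")
        total = total + sent
    return total
```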

</F>




From effbot at telia.com  Sat Aug 12 11:13:45 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 11:13:45 +0200
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
References: <20000811143031.A13790@ActiveState.com>  <200008112256.RAA01675@cj20424-a.reston1.va.home.com>
Message-ID: <00d601c0043d$a2e66c20$f2a6b5d4@hagrid>

guido wrote:
> I think I made them binary during the period when I was mounting the
> Unix source directory on a Windows machine.  I don't do that any more
> and I don't know anyone who does

we do.

trent wrote:
> > Does anybody see any problems treating them as text files?

developer studio 5.0 does:

    "This makefile was not generated by Developer Studio"

    "Continuing will create a new Developer Studio project to
    wrap this makefile. You will be prompted to save after the
    new project has been created".

    "Do you want to continue"

    (click yes)

    "The options file (.opt) for this workspace specified a project
    configuration "... - Win32 Alpha Release" that no longer exists.
    The configuration will be set to "... - Win32 Debug"

    (click OK)

    (click build)

    "MAKE : fatal error U1052: file '....mak' not found"

</F>




From thomas at xs4all.net  Sat Aug 12 11:21:19 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 11:21:19 +0200
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008120117500.2615-100000@localhost>; from ping@lfw.org on Sat, Aug 12, 2000 at 01:22:40AM -0700
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost>
Message-ID: <20000812112119.C14470@xs4all.nl>

On Sat, Aug 12, 2000 at 01:22:40AM -0700, Ka-Ping Yee wrote:

> One thing i ran into as a result of trying to run it under the
> debugger, though: turning on cursesmodule was slightly nontrivial.
> There's no cursesmodule.c; it's _cursesmodule.c instead; but
> Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
> wasn't sufficient; i had to edit and insert the underscores by hand
> to get curses to work.

You should update your Setup file, then ;) Compare it with Setup.in and see
what else changed since the last time you configured Python.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From martin at loewis.home.cs.tu-berlin.de  Sat Aug 12 11:29:25 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 12 Aug 2000 11:29:25 +0200
Subject: [Python-Dev] Processing CVS diffs
Message-ID: <200008120929.LAA01434@loewis.home.cs.tu-berlin.de>

While looking at the comments for Patch #100654, I noticed a complaint
about the patch being a CVS diff, which is not easily processed by
patch.

There is a simple solution to that: process the patch with the script
below. It will change the patch in-place, and it works well for me
even though it is written in the Evil Language :-)

Martin

#! /usr/bin/perl -wi
# Propagate the full pathname from the Index: line in CVS output into
# the diff itself so that patch will use it.
#  Thrown together by Jason Merrill <jason at cygnus.com>

while (<>)
{
  if (/^Index: (.*)/) 
    {
      $full = $1;
      print;
      for (1..7)
	{
	  $_ = <>;
	  s/ [^\t]+\t/ $full\t/;
	  print;
	}
    }
  else
    {
      print;
    }
}
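An equivalent sketch in Python, for those allergic to the Evil Language.
It performs the same transformation: copy the path from each Index: line
into the following diff headers, mirroring the Perl's fixed seven-line
window (a lambda replacement avoids backslash surprises in the path):

```python
import re

def propagate_index_paths(lines):
    """Copy the full path from each 'Index:' line into the diff
    headers that follow, so that patch will use it."""
    out = []
    lines = iter(lines)
    for line in lines:
        out.append(line)
        m = re.match(r'Index: (.*)', line)
        if m:
            full = m.group(1)
            # Rewrite the next seven lines, as the Perl script does.
            for _ in range(7):
                nxt = next(lines, None)
                if nxt is None:
                    break
                out.append(re.sub(r' [^\t]+\t',
                                  lambda mo: ' ' + full + '\t',
                                  nxt, count=1))
    return out
```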





From mal at lemburg.com  Sat Aug 12 11:48:25 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 12 Aug 2000 11:48:25 +0200
Subject: [Python-Dev] Python Announcements ???
Message-ID: <39951D69.45D01703@lemburg.com>

Could someone at BeOpen please check what happened to the
python-announce mailing list?!

Messages to that list don't seem to show up anywhere and I've
been getting strange reports from the mail manager software in
the past when I've tried to post there.

Also, what happened to the idea of hooking that list onto
the c.l.p.a newsgroup? I don't remember the details of
how this is done (it had something to do with adding an
Approved header), but this would be very helpful.

The Python community currently has no proper way of
announcing new projects, software or gigs. A post to
c.l.p, which has grown into a >4K posts/month list, does
not have the same momentum that pushing it through c.l.p.a
had in the past.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From just at letterror.com  Sat Aug 12 13:51:31 2000
From: just at letterror.com (Just van Rossum)
Date: Sat, 12 Aug 2000 12:51:31 +0100
Subject: [Python-Dev] Preventing recursion core dumps
Message-ID: <l03102803b5bae5eb2fe1@[193.78.237.125]>

(Sorry for the late reply, that's what you get when you don't Cc me...)

Vladimir Marangozov wrote:
> [Just]
> > Gordon, how's that Stackless PEP coming along?
> > Sorry, I couldn't resist ;-)
>
> Ah, in this case, we'll get a memory error after filling the whole disk
> with frames <wink>

No matter how much we wink to each other, that was a cheap shot; especially
since it isn't true: Stackless has a MAX_RECURSION_DEPTH value. Someone who
has studied Stackless "in detail" (your words ;-) should know that.

Admittedly, that value is set way too high in the last stackless release
(123456 ;-), but that doesn't change the principle that Stackless could
solve the problem discussed in this thread in a reliable and portable
manner.

Of course there'd be work to do:
- MAX_RECURSION_DEPTH should be changeable at runtime
- __str__ (and a bunch of others) isn't yet stackless
- ...

But the hardest task seems to be to get rid of the hostility and prejudices
against Stackless :-(

Just





From esr at thyrsus.com  Sat Aug 12 13:22:55 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:22:55 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 03:32:17AM -0400
References: <20000811235632.A19358@thyrsus.com> <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>
Message-ID: <20000812072255.C20109@thyrsus.com>

Tim Peters <tim_one at email.msn.com>:
> If this is the problem, note that ways to detect such kinds of unbounded
> recursion have been discussed here within the last week.  You're a clever
> enough fellow that I have to suspect you concocted this test case as a way
> to support the more extreme of those proposals without just saying "+1"
> <wink>.

I may be that clever, but I ain't that devious.  I'll try the suggested
fix.  Very likely you're right, though the location of the core dump
is peculiar if this is the case.  It's inside bound_from_constraint(),
whereas in your scenario I'd expect it to be in the Requirement method code.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The day will come when the mystical generation of Jesus by the Supreme
Being as his father, in the womb of a virgin, will be classed with the
fable of the generation of Minerva in the brain of Jupiter.
	-- Thomas Jefferson, 1823



From esr at thyrsus.com  Sat Aug 12 13:34:19 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:34:19 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008120117500.2615-100000@localhost>; from ping@lfw.org on Sat, Aug 12, 2000 at 01:22:40AM -0700
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost>
Message-ID: <20000812073419.D20109@thyrsus.com>

Ka-Ping Yee <ping at lfw.org>:
> One thing i ran into as a result of trying to run it under the
> debugger, though: turning on cursesmodule was slightly nontrivial.
> There's no cursesmodule.c; it's _cursesmodule.c instead; but
> Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
> wasn't sufficient; i had to edit and insert the underscores by hand
> to get curses to work.

Your Setup is out of date.

But this reminds me.  There's way too much hand-hacking in the Setup
mechanism.  It wouldn't be hard to enhance the Setup format to support
#if/#endif so that config.c generation could take advantage of
configure tests.  That way, Setup could have constructs in it like
this:

#if defined(CURSES)
#if defined(linux)
_curses _cursesmodule.c -lncurses
#else
_curses _cursesmodule.c -lcurses -ltermcap
#endif
#endif

I'm willing to do and test this.
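A toy filter showing the kind of conditional processing this would need.
This is only a sketch of the proposal (directive syntax as in the example
above, conditions tested against a set of names), not whatever the real
config.c generation machinery would do:

```python
import re

def preprocess_setup(lines, defines):
    """Keep or drop Setup lines according to '#if defined(NAME)',
    '#else' and '#endif' directives.  Nesting is handled with a
    stack of booleans: a line is emitted only while every open
    #if on the stack is satisfied."""
    out = []
    stack = []          # one bool per open #if: is this branch taken?
    for line in lines:
        m = re.match(r'#if defined\((\w+)\)', line)
        if m:
            stack.append(m.group(1) in defines)
        elif line.strip() == '#else':
            stack[-1] = not stack[-1]
        elif line.strip() == '#endif':
            stack.pop()
        elif all(stack):
            out.append(line)
    return out
```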
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The right of the citizens to keep and bear arms has justly been considered as
the palladium of the liberties of a republic; since it offers a strong moral
check against usurpation and arbitrary power of rulers; and will generally,
even if these are successful in the first instance, enable the people to resist
and triumph over them.
        -- Supreme Court Justice Joseph Story of the John Marshall Court



From esr at snark.thyrsus.com  Sat Aug 12 13:44:54 2000
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:44:54 -0400
Subject: [Python-Dev] Core dump is dead, long live the core dump
Message-ID: <200008121144.HAA20230@snark.thyrsus.com>

Tim's diagnosis of fatal recursion was apparently correct; apologies,
all.  This still leaves the question of why the core dump happened so
far from the actual scene of the crime.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

In every country and in every age, the priest has been hostile to
liberty. He is always in alliance with the despot, abetting his abuses
in return for protection to his own.
	-- Thomas Jefferson, 1814



From mal at lemburg.com  Sat Aug 12 13:36:14 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 12 Aug 2000 13:36:14 +0200
Subject: [Python-Dev] Directions for reproducing the coredump
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost> <20000812073419.D20109@thyrsus.com>
Message-ID: <399536AE.309D456C@lemburg.com>

"Eric S. Raymond" wrote:
> 
> Ka-Ping Yee <ping at lfw.org>:
> > One thing i ran into as a result of trying to run it under the
> > debugger, though: turning on cursesmodule was slightly nontrivial.
> > There's no cursesmodule.c; it's _cursesmodule.c instead; but
> > Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
> > wasn't sufficient; i had to edit and insert the underscores by hand
> > to get curses to work.
> 
> Your Setup is out of date.
> 
> But this reminds me.  There's way too much hand-hacking in the Setup
> mechanism.  It wouldn't be hard to enhance the Setup format to support
> #if/#endif so that config.c generation could take advantage of
> configure tests.  That way, Setup could have constructs in it like
> this:
> 
> #if defined(CURSES)
> #if defined(linux)
> _curses _cursesmodule.c -lncurses
> #else
> _curses _cursesmodule.c -lcurses -ltermcap
> #endif
> #endif
> 
> I'm willing to do and test this.

This would be a *cool* thing to have :-) 

Definitely +1 from me if it's done in a portable way.

(Not sure how you would get this to run without the C preprocessor
though -- and Python's Makefile doesn't provide any information
on how to call it in a platform-independent way. It's probably
cpp on most platforms, but you never know...)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From esr at thyrsus.com  Sat Aug 12 13:50:57 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:50:57 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <399536AE.309D456C@lemburg.com>; from mal@lemburg.com on Sat, Aug 12, 2000 at 01:36:14PM +0200
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost> <20000812073419.D20109@thyrsus.com> <399536AE.309D456C@lemburg.com>
Message-ID: <20000812075056.A20245@thyrsus.com>

M.-A. Lemburg <mal at lemburg.com>:
> > But this reminds me.  There's way too much hand-hacking in the Setup
> > mechanism.  It wouldn't be hard to enhance the Setup format to support
> > #if/#endif so that config.c generation could take advantage of
> > configure tests.  That way, Setup could have constructs in it like
> > this:
> > 
> > #if defined(CURSES)
> > #if defined(linux)
> > _curses _cursesmodule.c -lncurses
> > #else
> > _curses _cursesmodule.c -lcurses -ltermcap
> > #endif
> > #endif
> > 
> > I'm willing to do and test this.
> 
> This would be a *cool* thing to have :-) 
> 
> Definitely +1 from me if it's done in a portable way.
> 
> (Not sure how you would get this to run without the C preprocessor
> though -- and Python's Makefile doesn't provide any information
> on how to call it in a platform independent way. It's probably
> cpp on most platforms, but you never know...)

Ah.  The Makefile may not provide this information -- but I believe 
configure can be made to!
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Ideology, politics and journalism, which luxuriate in failure, are
impotent in the face of hope and joy.
	-- P. J. O'Rourke



From thomas at xs4all.net  Sat Aug 12 13:53:46 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 13:53:46 +0200
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <20000812073419.D20109@thyrsus.com>; from esr@thyrsus.com on Sat, Aug 12, 2000 at 07:34:19AM -0400
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost> <20000812073419.D20109@thyrsus.com>
Message-ID: <20000812135346.D14470@xs4all.nl>

On Sat, Aug 12, 2000 at 07:34:19AM -0400, Eric S. Raymond wrote:

> But this reminds me.  There's way too much hand-hacking in the Setup
> mechanism.  It wouldn't be hard to enhance the Setup format to support
> #if/#endif so that config.c generation could take advantage of
> configure tests.  That way, Setup could have constructs in it like
> this:

> #if defined(CURSES)
> #if defined(linux)
> _curses _cursesmodule.c -lncurses
> #else
> _curses _cursesmodule.c -lcurses -ltermcap
> #endif
> #endif

Why go through that trouble ? There already is a 'Setup.config' file, which
is used to pass Setup info for the thread and gc modules. It can easily be
extended to include information on all other locatable modules, leaving
'Setup' or 'Setup.local' for people who have their modules in strange
places. What would be a cool idea as well would be a configuration tool. Not
as complex as the linux kernel config tool, but something to help people
select the modules they want. Though it might not be necessary if configure
finds out what modules can be safely built.

I'm willing to write some autoconf tests to locate modules as well, if this
is deemed a good idea.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Sat Aug 12 13:54:31 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 13:54:31 +0200
Subject: [Python-Dev] Core dump is dead, long live the core dump
In-Reply-To: <200008121144.HAA20230@snark.thyrsus.com>; from esr@snark.thyrsus.com on Sat, Aug 12, 2000 at 07:44:54AM -0400
References: <200008121144.HAA20230@snark.thyrsus.com>
Message-ID: <20000812135431.E14470@xs4all.nl>

On Sat, Aug 12, 2000 at 07:44:54AM -0400, Eric S. Raymond wrote:

> Tim's diagnosis of fatal recursion was apparently correct; apologies,
> all.  This still leaves the question of why the core dump happened so
> far from the actual scene of the crime.

Blame it on your stack :-) It could have been that the appropriate error was
generated, and that the stack overflowed *again* during the processing of
that error :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Sat Aug 12 15:16:47 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Sat, 12 Aug 2000 09:16:47 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: <00d301c0043d$7eb0b540$f2a6b5d4@hagrid>
Message-ID: <1246036275-128789882@hypernet.com>

Fredrik wrote:

> fwiw, if "send" may send less than the full buffer in blocking
> mode on some platforms (despite what the specification implies),
> it's quite interesting that *nobody* has ever noticed before...

I noticed, but I expected it, so had no reason to comment. The 
Linux man pages are the only specification of send that I've 
seen that don't make a big deal out of it. And clearly I'm not the 
only one, otherwise there would never have been a bug report 
(he didn't experience it, he just noticed sends without checks).

- Gordon



From guido at beopen.com  Sat Aug 12 16:48:11 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 12 Aug 2000 09:48:11 -0500
Subject: [Python-Dev] Re: PEP 0211: Linear Algebra Operators
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:13:17 MST."
             <Pine.LNX.4.10.10008111255560.1058-100000@rocket.knowledgetrack.com> 
References: <Pine.LNX.4.10.10008111255560.1058-100000@rocket.knowledgetrack.com> 
Message-ID: <200008121448.JAA03545@cj20424-a.reston1.va.home.com>

> As the PEP posted by Greg is substantially different from the one floating
> around in c.l.py, I'd like to post the latter here, which covers several
> weeks of discussions by dozens of discussants.  I'd like to encourage Greg
> to post his version to python-list to seek comments.

A procedural suggestion: let's have *two* PEPs, one for Huaiyu's
proposal, one for Greg's.  Each PEP should in its introduction briefly
mention the other as an alternative.  I don't generally recommend that
alternative proposals develop separate PEPs, but in this case the
potential impact on Python is so large that I think it's the only way
to proceed that doesn't give one group an unfair advantage over the
other.

I haven't had the time to read either proposal yet, so I can't comment
on their (in)compatibility, but I would surmise that at most one can
be accepted -- with the emphasis on *at most* (possibly neither is
ready for prime time), and with the understanding that each proposal
may be able to borrow ideas or code from the other anyway.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Sat Aug 12 16:21:50 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 16:21:50 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <20000811103701.A25386@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Aug 11, 2000 10:37:01 AM
Message-ID: <200008121421.QAA20095@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Aug 11, 2000 at 05:58:45PM +0200, Vladimir Marangozov wrote:
> > On a second thought, I think this would be a bad idea, even if
> > we manage to tweak the stack limits on most platforms. We would
> > lose determinism = lose control -- no good. A depth-first algorithm
> > may succeed on one machine, and fail on another.
> 
> So what?

Well, the point is that people like deterministic behavior and tend to
really dislike unpredictable systems, especially when the lack of
determinism is due to platform heterogeneity.

> We don't limit the amount of memory you can allocate on all
> machines just because your program may run out of memory on some
> machine.

We don't because we can't do it portably. But if we could, this would
have been a very useful setting -- there has been demand for Python on
embedded systems where memory size is a constraint. And note that after
the malloc cleanup, we *can* do this with a specialized Python malloc
(control how much memory is allocated from Python).

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From effbot at telia.com  Sat Aug 12 16:29:57 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 16:29:57 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
References: <1246036275-128789882@hypernet.com>
Message-ID: <001301c00469$cb380fe0$f2a6b5d4@hagrid>

gordon wrote:
> Fredrik wrote:
> 
> > fwiw, if "send" may send less than the full buffer in blocking
> > mode on some platforms (despite what the specification implies),
> > it's quite interesting that *nobody* has ever noticed before...
> 
> I noticed, but I expected it, so had no reason to comment. The 
> Linux man pages are the only specification of send that I've 
> seen that don't make a big deal out it. And clearly I'm not the 
> only one, otherwise there would never have been a bug report 
> (he didn't experience it, he just noticed sends without checks).

I meant "I wonder why my script fails" rather than "that piece
of code looks funky".

:::

fwiw, I still haven't found a single reference (SUSv2 spec, man-
pages, Stevens, the original BSD papers) that says that a blocking
socket may do anything but sending all the data, or fail.

if that's true, I'm not sure we really need to "fix" anything here...
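(the defensive "fix" under discussion would amount to a sendall-style
retry loop. a minimal sketch, written against a generic send callable
so the idea is visible without a real socket:)

```python
def send_all(send, data):
    """Keep calling send() until every byte is written.

    `send` behaves like socket.send(): it takes a buffer and returns
    the number of bytes actually transmitted, which may be less than
    the full buffer on some platforms.
    """
    total = 0
    while total < len(data):
        n = send(data[total:])
        if n <= 0:
            raise IOError("connection broken")
        total += n
    return total
```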

</F>




From Vladimir.Marangozov at inrialpes.fr  Sat Aug 12 16:46:40 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 16:46:40 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <l03102803b5bae5eb2fe1@[193.78.237.125]> from "Just van Rossum" at Aug 12, 2000 12:51:31 PM
Message-ID: <200008121446.QAA20112@python.inrialpes.fr>

Just van Rossum wrote:
> 
> (Sorry for the late reply, that's what you get when you don't Cc me...)
> 
> Vladimir Marangozov wrote:
> > [Just]
> > > Gordon, how's that Stackless PEP coming along?
> > > Sorry, I couldn't resist ;-)
> >
> > Ah, in this case, we'll get a memory error after filling the whole disk
> > with frames <wink>
> 
> No matter how much we wink to each other, that was a cheap shot;

I can't say that yours was more expensive <wink>.

> especially since it isn't true: Stackless has a MAX_RECURSION_DEPTH value.
> Someone who has studied Stackless "in detail" (your words ;-) should know
> that.

As I said - it has been years ago. Where's that PEP draft?
Please stop dreaming about hostility <wink>. I am all for Stackless, but
the implementation wasn't mature enough at the time when I looked at it.
Now I hear it has evolved and does not allow graph cycles. Okay, good --
tell me more in a PEP and submit a patch.

> 
> Admittedly, that value is set way too high in the last stackless release
> (123456 ;-), but that doesn't change the principle that Stackless could
> solve the problem discussed in this thread in a reliable and portable
> manner.

Indeed, if it didn't reduce the stack dependency in a portable way, it
couldn't have carried the label "Stackless" for years. BTW, I'm more
interested in the stackless aspect than the call/cc aspect of the code.

> 
> Of course there'd be work to do:
> - MAX_RECURSION_DEPTH should be changeable at runtime
> - __str__ (and a bunch of others) isn't yet stackless
> - ...

Tell me more in the PEP.

> 
> But the hardest task seems to be to get rid of the hostility and prejudices
> against Stackless :-(

Dream on <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From skip at mojam.com  Sat Aug 12 19:56:23 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 12 Aug 2000 12:56:23 -0500 (CDT)
Subject: [Python-Dev] Include/graminit.h, Python/graminit.c obsolete?
Message-ID: <14741.36807.101870.221890@beluga.mojam.com>

With Thomas's patch to the top-level Makefile that makes Grammar a more
first-class directory, are the generated graminit.h and graminit.c files
needed any longer?

Skip



From guido at beopen.com  Sat Aug 12 21:12:23 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 12 Aug 2000 14:12:23 -0500
Subject: [Python-Dev] Include/graminit.h, Python/graminit.c obsolete?
In-Reply-To: Your message of "Sat, 12 Aug 2000 12:56:23 EST."
             <14741.36807.101870.221890@beluga.mojam.com> 
References: <14741.36807.101870.221890@beluga.mojam.com> 
Message-ID: <200008121912.OAA00807@cj20424-a.reston1.va.home.com>

> With Thomas's patch to the top-level Makefile that makes Grammar a more
> first-class directory, are the generated graminit.h and graminit.c files
> needed any longer?

I still like to keep them around.  Most people don't hack the grammar.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From trentm at ActiveState.com  Sat Aug 12 20:39:00 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 11:39:00 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <00d601c0043d$a2e66c20$f2a6b5d4@hagrid>; from effbot@telia.com on Sat, Aug 12, 2000 at 11:13:45AM +0200
References: <20000811143031.A13790@ActiveState.com> <200008112256.RAA01675@cj20424-a.reston1.va.home.com> <00d601c0043d$a2e66c20$f2a6b5d4@hagrid>
Message-ID: <20000812113900.D3528@ActiveState.com>

On Sat, Aug 12, 2000 at 11:13:45AM +0200, Fredrik Lundh wrote:
> guido wrote:
> > I think I made them binary during the period when I was mounting the
> > Unix source directory on a Windows machine.  I don't do that any more
> > and I don't know anyone who does
> 
> we do.
> 
> trent wrote:
> > > Does anybody see any problems treating them as text files?
> 
> developer studio 5.0 does:
> 
>     "This makefile was not generated by Developer Studio"
> 
>     "Continuing will create a new Developer Studio project to
>     wrap this makefile. You will be prompted to save after the
>     new project has been created".
> 
>     "Do you want to continue"
> 
>     (click yes)
> 
>     "The options file (.opt) for this workspace specified a project
>     configuration "... - Win32 Alpha Release" that no longer exists.
>     The configuration will be set to "... - Win32 Debug"
> 
>     (click OK)
> 
>     (click build)
> 
>     "MAKE : fatal error U1052: file '....mak' not found"
> 
> </F>

I admit that I have not tried a clean checkout with DevStudio 5 (I will
try at home later today). However, I *do* think that the problem here is that
you grabbed the tree in the short interim before this patch:

http://www.python.org/pipermail/python-checkins/2000-August/007072.html


I hope, I hope.
If it is still broken for MSVC 5 when I try it in a little bit, I will back out.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Sat Aug 12 20:47:19 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 11:47:19 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEFKGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 11, 2000 at 08:59:22PM -0400
References: <20000811143031.A13790@ActiveState.com> <LNBBLJKPBEHFEDALKOLCGEFKGPAA.tim_one@email.msn.com>
Message-ID: <20000812114719.E3528@ActiveState.com>

On Fri, Aug 11, 2000 at 08:59:22PM -0400, Tim Peters wrote:
> Not really.  They're not human-editable!  Leave 'em alone.  Keeping them in
> binary mode is a clue to people that they aren't *supposed* to go mucking
> with them via text processing tools.

I think that putting them in binary mode is a misleading clue that people
should not muck with them. They *are* text files. Editable or not, they are
not binary. I shouldn't go mucking with 'configure' either, because it is a
generated file, but we shouldn't call it binary.

Yes, I agree, people should not muck with .dsp files. I am not suggesting
that we do. The "text-processing" I was referring to are my attempts to keep
a local repository of Python in our local SCM tool (Perforce) in sync with
Python-CVS. When I suck in Python-CVS on Linux and then shove it into Perforce:
 - the .dsp's land on my linux box with DOS terminators
 - I check everything into Perforce
 - I check Python out of Perforce on a Windows box and the .dsp's are all
   terminated with \r\n\n. This is because the .dsp's were not marked as binary
   in Perforce because I logically didn't think that they *should* be marked
   as binary.
Having them marked as binary is just misleading I think.
 
Anyway, as Guido said, this is not worth arguing over too much and it should
have been fixed for you about an hour after I broke it (sorry).

If it is still broken for you then I will back out.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From nascheme at enme.ucalgary.ca  Sat Aug 12 20:58:20 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 12 Aug 2000 12:58:20 -0600
Subject: [Python-Dev] Lib/symbol.py needs update after listcomp
Message-ID: <20000812125820.A567@keymaster.enme.ucalgary.ca>

Someone needs to run:

    ./python Lib/symbol.py

and check in the changes.

  Neil



From akuchlin at mems-exchange.org  Sat Aug 12 21:09:44 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Sat, 12 Aug 2000 15:09:44 -0400
Subject: [Python-Dev] Lib/symbol.py needs update after listcomp
In-Reply-To: <20000812125820.A567@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Sat, Aug 12, 2000 at 12:58:20PM -0600
References: <20000812125820.A567@keymaster.enme.ucalgary.ca>
Message-ID: <20000812150944.A9653@kronos.cnri.reston.va.us>

On Sat, Aug 12, 2000 at 12:58:20PM -0600, Neil Schemenauer wrote:
>Someone needs to run:
>    ./python Lib/symbol.py
>and check in the changes.

Done.  

--amk



From tim_one at email.msn.com  Sat Aug 12 21:10:30 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 15:10:30 -0400
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <20000812113900.D3528@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEGNGPAA.tim_one@email.msn.com>

Note that an update isn't enough to get you going again on Windows, and
neither is (the moral equivalent of) "rm *" in PCbuild followed by an
update.  But "rm -rf PCbuild" followed by an update was enough for me (I'm
working via phone modem -- a fresh full checkout is too time-consuming for
me).





From Vladimir.Marangozov at inrialpes.fr  Sat Aug 12 21:16:17 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 21:16:17 +0200 (CEST)
Subject: [Python-Dev] minimal stackless
Message-ID: <200008121916.VAA20873@python.inrialpes.fr>

I'd like to clarify my position about the mythical Stackless issue.

I would be okay to evaluate a minimal stackless implementation of the
current VM, and eventually consider it for inclusion if it doesn't slow
down the interpreter (and if it does, I don't know yet how much would be
tolerable).

However, I would be willing to do this only if such implementation is
distilled from the call/cc stuff completely.

That is, a minimal stackless implementation which gives us an equivalent
VM as we have it today with the C stack. This is what I'd like to see
first in the stackless PEP too. No mixtures with continuations & co.

The call/cc issue is "application domain" for me -- it relies on top of
the minimal stackless and would come only as an exported interface to the
control flow management of the VM. Therefore, it must be completely
optional (both in terms of lazy decision on whether it should be included
someday).

So, if such distilled, minimal stackless implementation hits the
SourceForge shelves by the next week, I, at least, will give it a try
and will report impressions. By that time, it would also be nice to see a
clear summary of the frame management ideas in the 1st draft of the PEP.

If the proponents of Stackless are ready for the challenge, give it a go
(this seems to be a required first step in the right direction anyway).

I can't offer any immediate help though, given the list of Python-related
tasks I'd like to finish (as always, done in my spare minutes) and I'll
be almost, if not completely, unavailable the last week of August.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From trentm at ActiveState.com  Sat Aug 12 21:22:58 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 12:22:58 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEGNGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 03:10:30PM -0400
References: <20000812113900.D3528@ActiveState.com> <LNBBLJKPBEHFEDALKOLCAEGNGPAA.tim_one@email.msn.com>
Message-ID: <20000812122258.A4684@ActiveState.com>

On Sat, Aug 12, 2000 at 03:10:30PM -0400, Tim Peters wrote:
> Note that an update isn't enough to get you going again on Windows, and
> neither is (the moral equivalent of) "rm *" in PCbuild followed by an
> update.  But "rm -rf PCbuild" followed by an update was enough for me (I'm
> working via phone modem -- a fresh full checkout is too time-consuming for
> me).

Oh right. The '-kb' is sticky to your checked-out version. I forgot
about that.

Thanks, Tim.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From esr at thyrsus.com  Sat Aug 12 21:37:42 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 15:37:42 -0400
Subject: [Python-Dev] minimal stackless
In-Reply-To: <200008121916.VAA20873@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sat, Aug 12, 2000 at 09:16:17PM +0200
References: <200008121916.VAA20873@python.inrialpes.fr>
Message-ID: <20000812153742.A25529@thyrsus.com>

Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr>:
> That is, a minimal stackless implementation which gives us an equivalent
> VM as we have it today with the C stack. This is what I'd like to see
> first in the stackless PEP too. No mixtures with continuations & co.
> 
> The call/cc issue is "application domain" for me -- it relies on top of
> the minimal stackless and would come only as an exported interface to the
> control flow management of the VM. Therefore, it must be completely
> optional (both in terms of lazy decision on whether it should be included
> someday).

I'm certainly among the call/cc fans, and I guess I'm weakly in the
"Stackless proponent" camp, and I agree.  These issues should be
separated.  If minimal stackless mods to ceval can solve (for example) the
stack overflow problem I just got bitten by, we ought to integrate
them for 2.0 and then give any new features a separate and thorough debate.

I too will be happy to test a minimal-stackless patch.  Come on, Christian,
the ball's in your court.  This is your best chance to get stackless
accepted.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

When only cops have guns, it's called a "police state".
        -- Claire Wolfe, "101 Things To Do Until The Revolution" 



From effbot at telia.com  Sat Aug 12 21:40:15 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 21:40:15 +0200
Subject: [Python-Dev] Include/graminit.h, Python/graminit.c obsolete?
References: <14741.36807.101870.221890@beluga.mojam.com>
Message-ID: <002b01c00495$32df3120$f2a6b5d4@hagrid>

skip wrote:

> With Thomas's patch to the top-level Makefile that makes Grammar a more
> first-class directory, are the generated graminit.h and graminit.c files
> needed any longer?

yes please -- thomas' patch only generates those files on
unix boxes.  as long as we support other platforms too, the
files should be in the repository, and new versions should be
checked in whenever the grammar is changed.

</F>




From tim_one at email.msn.com  Sat Aug 12 21:39:03 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 15:39:03 -0400
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <20000812114719.E3528@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEGOGPAA.tim_one@email.msn.com>

[Trent Mick]
> I think that putting them in binary mode is a misleading clue that
> people should not muck with them. The *are* text files.

But you don't know that.  They're internal Microsoft files in an
undocumented, proprietary format.  You'll find nothing in MS docs
guaranteeing they're text files, but will find the direst warnings against
attempting to edit them.  MS routinely changes *scads* of things about
DevStudio-internal files across releases.

For all the rest, you created your own problems by insisting on telling
Perforce they're text files, despite that they're clearly marked binary
under CVS.

I'm unstuck now, but Fredrik will likely still have new problems
cross-mounting file systems between Windows and Linux (see his msg).  Since
nothing here *was* broken (except for your private and self-created problems
under Perforce), "fixing" it was simply a bad idea.  We're on a very tight
schedule, and the CVS tree isn't a playground.





From thomas at xs4all.net  Sat Aug 12 21:45:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 21:45:24 +0200
Subject: [Python-Dev] 're' crashes ?
Message-ID: <20000812214523.H14470@xs4all.nl>

I'm not trying to sound like Eric (though I don't mind if I do ;) but my
Python crashes. Or rather, test_re fails with a coredump, since this
afternoon or so. I'm fairly certain it was working fine yesterday, and it's
an almost-vanilla CVS tree (I was about to check-in the fixes to
Tools/compiler, and tried to use the compiler on the std lib and the test
suite, when I noticed the coredump.)

The coredump says this:

#0  eval_code2 (co=0x824ba50, globals=0x82239b4, locals=0x0, args=0x827e18c, 
    argcount=2, kws=0x827e194, kwcount=0, defs=0x82211c0, defcount=1, 
    owner=0x0) at ceval.c:1474
1474                                    Py_DECREF(w);

Which is part of the FOR_LOOP opcode:

1461                    case FOR_LOOP:
1462                            /* for v in s: ...
1463                               On entry: stack contains s, i.
1464                               On exit: stack contains s, i+1, s[i];
1465                               but if loop exhausted:
1466                                    s, i are popped, and we jump */
1467                            w = POP(); /* Loop index */
1468                            v = POP(); /* Sequence object */
1469                            u = loop_subscript(v, w);
1470                            if (u != NULL) {
1471                                    PUSH(v);
1472                                    x = PyInt_FromLong(PyInt_AsLong(w)+1);
1473                                    PUSH(x);
1474                                    Py_DECREF(w);
1475                                    PUSH(u);
1476                                    if (x != NULL) continue;
1477                            }
1478                            else {
1479                                    Py_DECREF(v);
1480                                    Py_DECREF(w);
1481                                    /* A NULL can mean "s exhausted"
1482                                       but also an error: */
1483                                    if (PyErr_Occurred())
1484                                            why = WHY_EXCEPTION;

I *think* this isn't caused by this code, but rather by a refcounting bug
somewhere. 'w' should be an int, and it's used on line 1472, and doesn't
cause an error there (unless passing a NULL pointer to PyInt_AsLong() isn't
an error ?) But it's NULL at line 1474. Is there an easy way to track an
error like this ? Otherwise I'll play around a bit using breakpoints and
such in gdb.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Sat Aug 12 22:03:20 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 16:03:20 -0400
Subject: [Python-Dev] Feature freeze!
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHAGPAA.tim_one@email.msn.com>

The 2.0 release manager (Jeremy) is on vacation.  In his absence, here's a
reminder from the 2.0 release schedule:

    Aug. 14: All 2.0 PEPs finished / feature freeze

See the rest at:

    http://python.sourceforge.net/peps/pep-0200.html

Note that that's Monday!  Any new "new feature" patches submitted after
Sunday will be mindlessly assigned Postponed status.  New "new feature"
patches submitted after this instant but before Monday will almost certainly
be assigned Postponed status too -- just not *entirely* mindlessly <wink>.
"Sunday" and "Monday" are defined by wherever Guido happens to be.  "This
instant" is defined by me, and probably refers to some time in the past from
your POV; it's negotiable.





From akuchlin at mems-exchange.org  Sat Aug 12 22:06:28 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Sat, 12 Aug 2000 16:06:28 -0400
Subject: [Python-Dev] Location of compiler code
Message-ID: <E13NhY4-00087X-00@kronos.cnri.reston.va.us>

I noticed that Jeremy checked in his compiler code; however, it lives
in Tools/compiler/compiler.  Any reason it isn't in Lib/compiler?

--amk



From tim_one at email.msn.com  Sat Aug 12 22:11:50 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 16:11:50 -0400
Subject: [Python-Dev] Location of compiler code
In-Reply-To: <E13NhY4-00087X-00@kronos.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHBGPAA.tim_one@email.msn.com>

[Andrew Kuchling]
> I noticed that Jeremy checked in his compiler code; however, it lives
> in Tools/compiler/compiler.  Any reason it isn't in Lib/compiler?

Suggest waiting for Jeremy to return from vacation (22 Aug).





From Vladimir.Marangozov at inrialpes.fr  Sat Aug 12 23:08:44 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 23:08:44 +0200 (CEST)
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEHAGPAA.tim_one@email.msn.com> from "Tim Peters" at Aug 12, 2000 04:03:20 PM
Message-ID: <200008122108.XAA21412@python.inrialpes.fr>

Tim Peters wrote:
> 
> The 2.0 release manager (Jeremy) is on vacation.  In his absence, here's a
> reminder from the 2.0 release schedule:
> 
>     Aug. 14: All 2.0 PEPs finished / feature freeze
> 
> See the rest at:
> 
>     http://python.sourceforge.net/peps/pep-0200.html
> 
> Note that that's Monday!  Any new "new feature" patches submitted after
> Sunday will be mindlessly assigned Postponed status.  New "new feature"
> patches submitted after this instant but before Monday will almost certainly
> be assigned Postponed status too -- just not *entirely* mindlessly <wink>.
> "Sunday" and "Monday" are defined by wherever Guido happens to be.  "This
> instant" is defined by me, and probably refers to some time in the past from
> your POV; it's negotiable.

This reminder comes JIT!

Then please make the above dates/instants coincide with the status of
the open patches, and take a stance on them: assign them to people, postpone
them, whatever.

I deliberately postponed my object malloc patch.

PS: this is also JIT as per the stackless discussion -- I mentioned
"consider for inclusion" which was interpreted as "inclusion for 2.0"
<frown>. God knows that I tried to be very careful when writing my
position statement... OTOH, there's still a valid deadline for 2.0!

PPS: is the pep-0200.html referenced above up to date? For instance,
I see it mentions SET_LINENO pointing to old references, while a newer
postponed patch is at SourceForge.

A "last modified <date>" stamp would be nice.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From trentm at ActiveState.com  Sat Aug 12 23:51:55 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 14:51:55 -0700
Subject: [Python-Dev] can this overflow (list insertion)?
Message-ID: <20000812145155.A7629@ActiveState.com>

from Objects/listobject.c:

static int
ins1(PyListObject *self, int where, PyObject *v)
{
    int i;
    PyObject **items;
    if (v == NULL) {
        PyErr_BadInternalCall();
        return -1;
    }
    items = self->ob_item;
    NRESIZE(items, PyObject *, self->ob_size+1);
    if (items == NULL) {
        PyErr_NoMemory();
        return -1;
    }
    if (where < 0)
        where = 0;
    if (where > self->ob_size)
        where = self->ob_size;
    for (i = self->ob_size; --i >= where; )
        items[i+1] = items[i];
    Py_INCREF(v);
    items[where] = v;
    self->ob_item = items;
    self->ob_size++;         <-------------- can this overflow?
    return 0;
}


In the case of sizeof(int) < sizeof(void*), can this overflow? I have a small
patch to test self->ob_size against INT_MAX and I was going to submit it, but
I am not so sure that overflow isn't checked by some other mechanism for
list insert. Is it, or was this relying on sizeof(ob_size) == sizeof(void*),
hence a list being able to hold as many items as there is addressable memory?

scared-to-patch-ly yours,
Trent


proposed patch:

*** python/dist/src/Objects/listobject.c Fri Aug 11 16:25:08 2000
--- Python/dist/src/Objects/listobject.c Fri Aug 11 16:25:36 2000
***************
*** 149,155 ****
        Py_INCREF(v);
        items[where] = v;
        self->ob_item = items;
!       self->ob_size++;
        return 0;
  }

--- 149,159 ----
        Py_INCREF(v);
        items[where] = v;
        self->ob_item = items;
!       if (self->ob_size++ == INT_MAX) {
!               PyErr_SetString(PyExc_OverflowError,
!                       "cannot add more objects to list");
!               return -1;
!       }
        return 0;
  }




-- 
Trent Mick
TrentM at ActiveState.com



From thomas at xs4all.net  Sat Aug 12 23:52:47 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 23:52:47 +0200
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <200008122108.XAA21412@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sat, Aug 12, 2000 at 11:08:44PM +0200
References: <LNBBLJKPBEHFEDALKOLCCEHAGPAA.tim_one@email.msn.com> <200008122108.XAA21412@python.inrialpes.fr>
Message-ID: <20000812235247.I14470@xs4all.nl>

On Sat, Aug 12, 2000 at 11:08:44PM +0200, Vladimir Marangozov wrote:

> PPS: is the pep-0200.html referenced above up to date? For instance,
> I see it mentions SET_LINENO pointing to old references, while a newer
> postponed patch is at SourceForge.

I asked similar questions about PEP 200, in particular on which new features
were considered for 2.0 and what their status is (PEP 200 doesn't mention
augmented assignment, which as far as I know has been on Guido's "2.0" list
since 2.0 and 1.6 became different releases.) I apparently caught Jeremy
just before he left for his holiday; he directed me towards Guido regarding
those questions, and Guido has apparently been too busy (or missed that
email as well as some python-dev email.)

All my PEPs are in, though, unless I should write a PEP on 'import as',
which I really think should go in 2.0. I'd be surprised if 'import as' needs
a PEP, since the worst vote on 'import as' was Eric's '+0', and there seems
little concern wrt. syntax or implementation. It's more of a fix for
overlooked syntax than it is a new feature<0.6 wink>.

I just assumed the PyLabs team (or at least 4/5th of it) were too busy with
getting 1.6 done and finished to be concerned with non-pressing 2.0 issues,
and didn't want to press them on these issues until 1.6 is truly finished.
Pity 1.6-beta-cycle and 2.0-feature-freeze overlap :P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nascheme at enme.ucalgary.ca  Sun Aug 13 00:03:57 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 12 Aug 2000 18:03:57 -0400
Subject: [Python-Dev] parsers and compilers for 2.0
Message-ID: <20000812180357.A18816@acs.ucalgary.ca>

With all the recent proposed and accepted language changes, we
should be careful to keep everything up to date.  The parser
module, Jeremy's compiler, and I suspect JPython have been left
behind by the recent changes.  In the past we have been blessed
by a very stable core language.  Times change. :)

I'm trying to keep Jeremy's compiler up to date.  Modifying the
parser module to understand list comprehensions seems to be non-trivial,
however.  Does anyone else have the time and expertise to
make these changes?  The compiler/transformer.py module will also
have to be modified to understand the new parse tree nodes.  That
part should be somewhat easier.

On a related note, I think the SyntaxError messages generated by
the compile() function and the parser module could be improved.
This is annoying:

    >>> compile("$x", "myfile.py", "eval")
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "<string>", line 1
        $x
        ^
    SyntaxError: invalid syntax

Is there any reason why the error message does not say
"myfile.py" instead of "<string>"?  If not I can probably put
together a patch to fix it.
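(For illustration, here is a small sketch of the behavior being asked for;
it assumes an interpreter where compile() propagates its filename argument
onto the SyntaxError, which is exactly the fix being proposed:)

```python
# The third argument to compile() is meant to be the filename that shows
# up in the SyntaxError traceback (here "myfile.py", not "<string>").
try:
    compile("$x", "myfile.py", "eval")
except SyntaxError as err:
    # On interpreters that propagate the filename, it is recorded on the
    # exception object itself:
    print(err.filename)
```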

As far as I can tell, the parser ParserError exceptions are
almost useless.  At least a line number could be given.  I'm not
sure how much work that is to fix though.

  Neil



From nascheme at enme.ucalgary.ca  Sun Aug 13 00:06:07 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 12 Aug 2000 18:06:07 -0400
Subject: [Python-Dev] compiler package in Lib?
Message-ID: <20000812180607.A18938@acs.ucalgary.ca>

Shouldn't the compiler package go in Lib instead of Tools?  The
AST used by the compiler should be very useful to things like
lint checkers, optimizers, and "refactoring" tools.  

  Neil



From Vladimir.Marangozov at inrialpes.fr  Sun Aug 13 00:24:39 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sun, 13 Aug 2000 00:24:39 +0200 (CEST)
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <20000812145155.A7629@ActiveState.com> from "Trent Mick" at Aug 12, 2000 02:51:55 PM
Message-ID: <200008122224.AAA21816@python.inrialpes.fr>

Trent Mick wrote:
>
> [listobject.c/ins1()]
> ...
>     self->ob_item = items;
>     self->ob_size++;         <-------------- can this overflow?
>     return 0;
> }
> 
> 
> In the case of sizeof(int) < sizeof(void*), can this overflow? I have a small
> patch to test self->ob_size against INT_MAX and I was going to submit it, but
> I am not so sure that overflow is not checked by some other mechanism for
> list insert.

+0.

It could overflow, but if it does, that is a bad sign about using a list
for such a huge amount of data.

And this is the second time in a week that I see an attempt to introduce
a bogus counter due to post-increments embedded in an if statement!

> Is it or was this relying on sizeof(ob_size) == sizeof(void*),
> hence a list being able to hold as many items as there is addressable memory?
> 
> scared-to-patch-ly yours,
> Trent

And you're right <wink>

> 
> 
> proposed patch:
> 
> *** python/dist/src/Objects/listobject.c Fri Aug 11 16:25:08 2000
> --- Python/dist/src/Objects/listobject.c Fri Aug 11 16:25:36 2000
> ***************
> *** 149,155 ****
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       self->ob_size++;
>         return 0;
>   }
> 
> --- 149,159 ----
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       if (self->ob_size++ == INT_MAX) {
> !               PyErr_SetString(PyExc_OverflowError,
> !                       "cannot add more objects to list");
> !               return -1;
> !       }
>         return 0;
>   }
> 
> 
> 
> 
> -- 
> Trent Mick
> TrentM at ActiveState.com
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From esr at thyrsus.com  Sun Aug 13 00:31:32 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 18:31:32 -0400
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000812180357.A18816@acs.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Sat, Aug 12, 2000 at 06:03:57PM -0400
References: <20000812180357.A18816@acs.ucalgary.ca>
Message-ID: <20000812183131.A26660@thyrsus.com>

Neil Schemenauer <nascheme at enme.ucalgary.ca>:
> I'm trying to keep Jeremy's compiler up to date.  Modifying the
> parser module to understand list comprehensions seems to be non-
> trivial, however.

Last I checked, list comprehensions hadn't been accepted.  I think
there's at least one more debate waiting there...
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

If a thousand men were not to pay their tax-bills this year, that would
... [be] the definition of a peaceable revolution, if any such is possible.
	-- Henry David Thoreau



From trentm at ActiveState.com  Sun Aug 13 00:33:12 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 15:33:12 -0700
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <200008122224.AAA21816@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sun, Aug 13, 2000 at 12:24:39AM +0200
References: <20000812145155.A7629@ActiveState.com> <200008122224.AAA21816@python.inrialpes.fr>
Message-ID: <20000812153312.B7629@ActiveState.com>

On Sun, Aug 13, 2000 at 12:24:39AM +0200, Vladimir Marangozov wrote:
> +0.
> 
> It could overflow but if it does, this is a bad sign about using a list
> for such huge amount of data.

Point taken.

> 
> And this is the second time in a week that I see an attempt to introduce
> a bogus counter due to post-increments embedded in an if statement!
>

If I read you correctly then I think that you are mistaking my intention. Do
you mean that I am doing the comparison *before* the increment takes place
here:

> > !       if (self->ob_size++ == INT_MAX) {
> > !               PyErr_SetString(PyExc_OverflowError,
> > !                       "cannot add more objects to list");
> > !               return -1;
> > !       }

That is my intention. You can increment up to INT_MAX but not over.....

... heh heh actually my code *is* wrong. But for a slightly different reason.
I trash the value of self->ob_size on overflow. You are right, I made a
mistake trying to be cute with autoincrement in an 'if' statement. I should
do the check and *then* increment if okay.

Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From thomas at xs4all.net  Sun Aug 13 00:34:43 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 13 Aug 2000 00:34:43 +0200
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000812183131.A26660@thyrsus.com>; from esr@thyrsus.com on Sat, Aug 12, 2000 at 06:31:32PM -0400
References: <20000812180357.A18816@acs.ucalgary.ca> <20000812183131.A26660@thyrsus.com>
Message-ID: <20000813003443.J14470@xs4all.nl>

On Sat, Aug 12, 2000 at 06:31:32PM -0400, Eric S. Raymond wrote:
> Neil Schemenauer <nascheme at enme.ucalgary.ca>:
> > I'm trying to keep Jeremy's compiler up to date.  Modifying the
> > parser module to understand list comprehensions seems to be non-
> > trivial, however.

> Last I checked, list comprehensions hadn't been accepted.  I think
> there's at least one more debate waiting there...

Check again, they're already checked in. The implementation may change
later, but the syntax has been decided (by Guido):

[(x, y) for y in something for x in somewhere if y in x]

The parentheses around the leftmost expression are mandatory. It's currently
implemented something like this:

L = []
__x__ = [].append
for y in something:
	for x in somewhere:
		if y in x:
			__x__((x, y))
del __x__

(where the 'x' in '__x__' is actually a number, chosen so that it *probably*
won't conflict with any other local variables or other (nested) list
comprehensions, and the result of the expression is L, which isn't actually
stored under a name during evaluation.)

See the patches list archive and the SF patch info about the patch (#100654)
for more information on how and why.
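To make the equivalence concrete, here is a small self-contained check
(the sample data is made up purely for illustration) that the comprehension
and the hand-expanded loop above agree:

```python
something = [1, 2, 3]
somewhere = [[1, 2], [3]]

# The list comprehension under discussion:
comp = [(x, y) for y in something for x in somewhere if y in x]

# The (roughly) equivalent expansion, as described above:
L = []
append = L.append          # stand-in for the hidden __x__ binding
for y in something:
    for x in somewhere:
        if y in x:
            append((x, y))

assert comp == L           # both give [([1, 2], 1), ([1, 2], 2), ([3], 3)]
```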

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Sun Aug 13 01:01:54 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 19:01:54 -0400
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000813003443.J14470@xs4all.nl>; from thomas@xs4all.net on Sun, Aug 13, 2000 at 12:34:43AM +0200
References: <20000812180357.A18816@acs.ucalgary.ca> <20000812183131.A26660@thyrsus.com> <20000813003443.J14470@xs4all.nl>
Message-ID: <20000812190154.B26719@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> > Last I checked, list comprehensions hadn't been accepted.  I think
> > there's at least one more debate waiting there...
> 
> Check again, they're already checked in. The implementation may change
> later, but the syntax has been decided (by Guido):
> 
> [(x, y) for y in something for x in somewhere if y in x]

Damn.  That's unfortunate.  With all due respect to the BDFL, I've come
to believe that having special syntax for this (rather than constructor
functions a la zip()) is a bad mistake.  I predict it's going to come
back to bite us hard in the future.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

I cannot undertake to lay my finger on that article of the
Constitution which grant[s] a right to Congress of expending, on
objects of benevolence, the money of their constituents.
	-- James Madison, 1794



From bckfnn at worldonline.dk  Sun Aug 13 01:29:14 2000
From: bckfnn at worldonline.dk (Finn Bock)
Date: Sat, 12 Aug 2000 23:29:14 GMT
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000812180357.A18816@acs.ucalgary.ca>
References: <20000812180357.A18816@acs.ucalgary.ca>
Message-ID: <3995dd8b.34665776@smtp.worldonline.dk>

[Neil Schemenauer]

>With all the recent proposed and accepted language changes, we
>should be a careful to keep everything up to date.  The parser
>module, Jeremy's compiler, and I suspect JPython have been left
>behind by the recent changes. 

WRT JPython, the list comprehensions have not yet been added. Then
again, the feature was only recently checked in.

You raise a good point however. There are many compilers/parsers that
have to be updated before we can claim that a feature is fully
implemented. 


[Thomas Wouters]

>[(x, y) for y in something for x in somewhere if y in x]
>
>The parentheses around the leftmost expression are mandatory. It's currently
>implemented something like this:
>
>L = []
>__x__ = [].append
>for y in something:
>	for x in somewhere:
>		if y in x:
>			__x__((x, y))
>del __x__

Thank you for the fine example. At least now I think that I know what the
feature is about.

regards,
finn



From tim_one at email.msn.com  Sun Aug 13 01:37:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 19:37:14 -0400
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <20000812145155.A7629@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEHIGPAA.tim_one@email.msn.com>

[Trent Mick]
> from Objects/listobject.c:
>
> static int
> ins1(PyListObject *self, int where, PyObject *v)
> {
>     ...
>     self->ob_size++;         <-------------- can this overflow?
>     return 0;
> }

> ...
> Is it or was this relying on sizeof(ob_size) == sizeof(void*),
> hence a list being able to hold as many items as there is
> addressable memory?

I think it's more relying on the product of two other assumptions:  (a)
sizeof(int) >= 4, and (b) nobody is going to make a list with 2 billion
elements in Python.  But you're right, sooner or later that's going to bite
us.

> proposed patch:
>
> *** python/dist/src/Objects/listobject.c Fri Aug 11 16:25:08 2000
> --- Python/dist/src/Objects/listobject.c Fri Aug 11 16:25:36 2000
> ***************
> *** 149,155 ****
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       self->ob_size++;
>         return 0;
>   }
>
> --- 149,159 ----
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       if (self->ob_size++ == INT_MAX) {
> !               PyErr_SetString(PyExc_OverflowError,
> !                       "cannot add more objects to list");
> !               return -1;
> !       }
>         return 0;
>   }

+1 on catching it, -1 on this technique.  You noted later that this will
make trash out of ob_size if it triggers, but the list has already grown and
been shifted by this time too, so it's left in an insane state (to the user,
the last element will appear to vanish).

Suggest checking at the *start* of the routine instead:

       if (self->ob_size == INT_MAX) {
              PyErr_SetString(PyExc_OverflowError,
                      "cannot add more objects to list");
              return -1;
      }

Then the list isn't harmed.
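The same principle can be sketched in Python (MAX_SIZE and bounded_insert
are made-up names standing in for INT_MAX and ins1(); this is only an
analogue of the point, not the actual C fix): check the limit *before*
mutating, so a failed insert leaves the list untouched.

```python
MAX_SIZE = 4  # illustrative stand-in for INT_MAX

def bounded_insert(lst, where, v):
    """Insert v at index 'where', refusing to grow past MAX_SIZE."""
    # Check before any mutation, so failure leaves lst in a sane state.
    if len(lst) == MAX_SIZE:
        raise OverflowError("cannot add more objects to list")
    where = max(0, min(where, len(lst)))  # clamp the index, as ins1() does
    lst.insert(where, v)

items = [1, 2, 3, 4]
try:
    bounded_insert(items, 0, 99)
except OverflowError:
    pass
assert items == [1, 2, 3, 4]  # list unharmed by the failed insert
```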





From tim_one at email.msn.com  Sun Aug 13 01:57:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 19:57:29 -0400
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <200008122108.XAA21412@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEHJGPAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> This reminder comes JIT!
>
> Then please make coincide the above dates/instants with the status of
> the open patches and take a stance on them: assign them to people,
> postpone, whatever.
>
> I deliberately postponed my object malloc patch.

I don't know why.  It's been there quite a while, and had non-trivial
support for inclusion in 2.0.  A chance to consider the backlog of patches
as a stable whole is why the two weeks between "feature freeze" and 2.0b1
exists!

> PS: this is also JIT as per the stackless discussion -- I mentioned
> "consider for inclusion" which was interpreted as "inclusion for 2.0"
> <frown>. God knows that I tried to be very careful when writing my
> position statement... OTOH, there's still a valid deadline for 2.0!

I doubt any variant of Stackless has a real shot for 2.0 at this point,
although if a patch shows up before Sunday ends I won't Postpone it without
reason (like, say, Guido tells me to).

> PPS: is the pep-0200.html referenced above up to date? For instance,
> I see it mentions SET_LINENO pointing to old references, while a newer
> postponed patch is at SourceForge.
>
> A "last modified <date>" stamp would be nice.

I agree, but yaaaawn <wink>.  CVS says it was last modified before Jeremy
went on vacation.  It's not up to date.  The designated release manager in
Jeremy's absence apparently didn't touch it.  I can't gripe about that,
though, because he's my boss <wink>.  He sent me email today saying "tag,
now you're it!" (Guido will be gone all next week).  My plate is already
full, though, and I won't get around to updating it today.

Yes, this is no way to run a release, and so I don't have any confidence
that the release dates in pep200 will be met.  Still, I was arguing for
feature freeze two months ago, and so as long as "I'm it" I'm not inclined
to slip the schedule on *that* part.  I bet it will be at least 3 weeks
before 2.0b1 hits the streets, though.

in-any-case-feature-freeze-is-on-the-critical-path-so-the-sooner-
    the-better-ly y'rs  - tim





From tim_one at email.msn.com  Sun Aug 13 02:11:30 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 20:11:30 -0400
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <20000812235247.I14470@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEHKGPAA.tim_one@email.msn.com>

[Thomas Wouters]
> I asked similar questions about PEP 200, in particular on which
> new features were considered for 2.0 and what their status is
> (PEP 200 doesn't mention augmented assignment, which as far as I
> know has been on Guido's "2.0" list since 2.0 and 1.6 became
> different releases.)

Yes, augmented assignment is golden for 2.0.

> I apparently caught Jeremy just before he left for his holiday,
> and directed me towards Guido regarding those questions, and
> Guido has apparently been too busy (or he missed that email as
> well as some python-dev email.)

Indeed, we're never going to let Guido be Release Manager again <wink>.

> All my PEPs are in, though, unless I should write a PEP on 'import as',
> which I really think should go in 2.0. I'd be suprised if 'import
> as' needs a PEP, since the worst vote on 'import as' was Eric's '+0',
> and there seems little concern wrt. syntax or implementation. It's
> more of a fix for overlooked syntax than it is a new feature<0.6 wink>.

Why does everyone flee from the idea of writing a PEP?  Think of it as a
chance to let future generations know what a cool idea you had.  I agree
this change is too simple and too widely loved to *need* a PEP, but if you
write one anyway you can add it to your resume under your list of
peer-reviewed publications <wink>.

> I just assumed the PyLabs team (or at least 4/5th of it) were too
> busy with getting 1.6 done and finished to be concerned with non-
> pressing 2.0 issues, and didn't want to press them on these issues
> until 1.6 is truely finished.

Actually, Fred Drake has done almost everything in the 1.6 branch by
himself, while Guido has done all the installer and web-page work for that.
The rest of our time has been eaten away by largely invisible cruft, from
haggling over the license to haggling over where to put python.org next.
Lots of haggling!  You guys get to do the *fun* parts (btw, it's occurred to
me often that I did more work on Python proper when I had a speech
recognition job!).

> Pity 1.6-beta-cycle and 2.0-feature-freeze overlap :P

Ya, except it's too late to stop 1.6 now <wink>.

but-not-too-late-to-stop-2.0-ly y'rs  - tim





From MarkH at ActiveState.com  Sun Aug 13 03:02:36 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Sun, 13 Aug 2000 11:02:36 +1000
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000812190154.B26719@thyrsus.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com>

ESR, responding to 

[(x, y) for y in something for x in somewhere if y in x]

for list comprehension syntax:

> Damn.  That's unfortunate.  With all due respect to the BDFL, I've come
> to believe that having special syntax for this (rather than constructor
> functions a la zip()) is a bad mistake.  I predict it's going to come
> back to bite us hard in the future.

FWIW, these are my thoughts exactly (for this particular issue, anyway).

Wont-bother-voting-cos-nobody-is-counting ly,

Mark.




From trentm at ActiveState.com  Sun Aug 13 03:25:18 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 18:25:18 -0700
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <3995dd8b.34665776@smtp.worldonline.dk>; from bckfnn@worldonline.dk on Sat, Aug 12, 2000 at 11:29:14PM +0000
References: <20000812180357.A18816@acs.ucalgary.ca> <3995dd8b.34665776@smtp.worldonline.dk>
Message-ID: <20000812182518.B10528@ActiveState.com>

On Sat, Aug 12, 2000 at 11:29:14PM +0000, Finn Bock wrote:
> [Thomas Wouters]
> 
> >[(x, y) for y in something for x in somewhere if y in x]
> >
> >The parentheses around the leftmost expression are mandatory. It's currently
> >implemented something like this:
> >
> >L = []
> >__x__ = [].append
> >for y in something:
> >	for x in somewhere:
> >		if y in x:
> >			__x__((x, y))
> >del __x__
> 
> Thank you for the fine example. At least now I think that I know what the
> feature is about.
> 

Maybe that example should get in the docs for list comprehensions.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Sun Aug 13 03:30:02 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 18:30:02 -0700
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEHKGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 08:11:30PM -0400
References: <20000812235247.I14470@xs4all.nl> <LNBBLJKPBEHFEDALKOLCAEHKGPAA.tim_one@email.msn.com>
Message-ID: <20000812183002.C10528@ActiveState.com>

On Sat, Aug 12, 2000 at 08:11:30PM -0400, Tim Peters wrote:
> You guys get to do the *fun* parts 

Go give Win64 a whirl for a while. <grumble>

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Sun Aug 13 03:33:43 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 21:33:43 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com>

[ESR, responding to

  [(x, y) for y in something for x in somewhere if y in x]

 for list comprehension syntax:
]
> Damn.  That's unfortunate.  With all due respect to the BDFL, I've come
> to believe that having special syntax for this (rather than constructor
> functions a la zip()) is a bad mistake.  I predict it's going to come
> back to bite us hard in the future.

[Mark Hammond]
> FWIW, these are my thoughts exactly (for this particular issue,
> anyway).
>
> Wont-bother-voting-cos-nobody-is-counting ly,

Counting, no; listening, yes; persuaded, no.  List comprehensions are one of
the best-loved features of Haskell (really!), and Greg/Skip/Ping's patch
implements as exact a parallel to Haskell's syntax and semantics as is
possible in Python.  Predictions of doom thus need to make a plausible case
for why a rousing success in Haskell is going to be a disaster in Python.
The only basis I can see for such a claim (and I have to make one up myself
because nobody else has <wink>) is that Haskell is lazy, while Python is
eager.  I can't get from there to "disaster", though, or even "plausible
regret".

Beyond that, Guido dislikes the way Lisp spells most things, so it's this or
nothing.  I'm certain I'll use it, and with joy.  Do an update and try it.

C:\Code\python\dist\src\PCbuild>python
Python 2.0b1 (#0, Aug 12 2000, 14:57:27) [MSC 32 bit (Intel)] on win32
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> [x**2 for x in range(10)]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> [x**2 for x in range(10) if x & 1]
[1, 9, 25, 49, 81]
>>> [x**2 if 3]
[81]
>>>

Now even as a fan, I'll admit that last line sucks <wink>.

bug-in-the-grammar-ly y'rs  - tim





From thomas at xs4all.net  Sun Aug 13 09:53:57 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 13 Aug 2000 09:53:57 +0200
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 09:33:43PM -0400
References: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com> <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com>
Message-ID: <20000813095357.K14470@xs4all.nl>

On Sat, Aug 12, 2000 at 09:33:43PM -0400, Tim Peters wrote:

[ ESR and Mark griping about list comprehensions syntax, which I can relate
to, so I'll bother to try and explain what bothers *me* wrt list
comprehensions. Needn't be the same as what bothers them, though ]

> List comprehensions are one of the best-loved features of Haskell
> (really!), and Greg/Skip/Ping's patch implements as an exact a parallel to
> Haskell's syntax and semantics as is possible in Python.

I don't see "it's cool in language X" as a particular good reason to include
a feature... We don't add special syntax for regular expressions, support
for continuations or direct access to hardware because of that, do we ?

> Predictions of doom thus need to make a plausible case for why a rousing
> success in Haskell is going to be a disaster in Python. The only basis I
> can see for such a claim (and I have to make one up myself because nobody
> else has <wink>) is that Haskell is lazy, while Python is eager.  I can't
> get from there to "disaster", though, or even "plausible regret".

My main beef with the syntax is that it is, in my eyes, unpythonic. It has
an alien, forced feel to it, much more so than the 'evil' map/filter/reduce.
It doesn't 'fit' into Python the way most of the other features do; it's
simply syntactic sugar for a specific kind of for-loop. It doesn't add any
extra functionality, and for that large a syntactic change, I guess that
scares me.

Those doubts were why I was glad you were going to write the PEP. I was
looking forward to you explaining why I had those doubts and giving sound
arguments against them :-)

> Beyond that, Guido dislikes the way Lisp spells most things, so it's this or
> nothing.  I'm certain I'll use it, and with joy.  Do an update and try it.

Oh, I've tried it. It's not included in the 'heavily patched Python 2.0b1' I
have running on a couple of machines to impress my colleagues (which
includes the obmalloc patch, augmented assignment, range literals, import
as, indexing-for, and extended-slicing-on-lists) but that's mostly
because I was expecting, like ESR, a huge debate on its syntax. Let's say
that most of my doubts arose after playing with it for a while. I fear people
will start using it in odd constructs, and in odd ways, expecting other
aspects of for-loops to be included in list comprehensions (break, else,
continue, etc.) And then there's the way it's hard to parse because of the
lack of punctuation in it:

[((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in S] for b in
[b for b in B if mean(b)] for b,c in C for a,d in D for e in [Egg(a, b, c,
d, e) for e in E]]

I hope anyone writing something like that (notice the shadowing of some of
the outer variables in the inner loops) will either add some newlines and
indentation by themselves, or will be hunted down and shot (or at least
winged) by the PSU.

I'm not arguing to remove list comprehensions. I think they are cool
features that can replace map/filter, I just don't think they're that much
better than the use of map/filter.
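For comparison, the map/filter spelling of the earlier squares example
(the list() wrapping is added so the sketch also runs on implementations
where map and filter return iterators rather than lists):

```python
data = range(10)

# List comprehension version:
comp = [x**2 for x in data if x & 1]

# The map/filter spelling of the same thing:
mf = list(map(lambda x: x**2, filter(lambda x: x & 1, data)))

assert comp == mf  # squares of the odd numbers below 10
```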

Write-that-PEP-Tim-it-will-look-good-on-your-resume-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Sun Aug 13 10:13:40 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sun, 13 Aug 2000 04:13:40 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000813095357.K14470@xs4all.nl>; from thomas@xs4all.net on Sun, Aug 13, 2000 at 09:53:57AM +0200
References: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com> <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com> <20000813095357.K14470@xs4all.nl>
Message-ID: <20000813041340.B27949@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> My main beef with the syntax is that it is, in my eyes, unpythonic. It has
> an alien, forced feel to it, much more so than the 'evil' map/filter/reduce.
> It doesn't 'fit' into Python the way most of the other features do; it's
> simply syntactic sugar for a specific kind of for-loop. It doesn't add any
> extra functionality, and for that large a syntactic change, I guess that
> scares me.

I agree 100% with all of this.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"This country, with its institutions, belongs to the people who
inhabit it. Whenever they shall grow weary of the existing government,
they can exercise their constitutional right of amending it or their
revolutionary right to dismember it or overthrow it."
	-- Abraham Lincoln, 4 April 1861



From moshez at math.huji.ac.il  Sun Aug 13 10:15:15 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 13 Aug 2000 11:15:15 +0300 (IDT)
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEGOGPAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008131114190.20886-100000@sundial>

On Sat, 12 Aug 2000, Tim Peters wrote:

> [Trent Mick]
> > I think that putting them in binary mode is a misleading clue that
> > people should not muck with them. They *are* text files.
> 
> But you don't know that.  They're internal Microsoft files in an
> undocumented, proprietary format.  You'll find nothing in MS docs
> guaranteeing they're text files, but will find the direst warnings against
> attempting to edit them.  MS routinely changes *scads* of things about
> DevStudio-internal files across releases.

Hey, I parsed those beasts, and edited them by hand. 

of-course-my-co-workers-hated-me-for-that-ly y'rs, Z.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From Vladimir.Marangozov at inrialpes.fr  Sun Aug 13 11:16:50 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sun, 13 Aug 2000 11:16:50 +0200 (CEST)
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEHIGPAA.tim_one@email.msn.com> from "Tim Peters" at Aug 12, 2000 07:37:14 PM
Message-ID: <200008130916.LAA29139@python.inrialpes.fr>

Tim Peters wrote:
> 
> I think it's more relying on the product of two other assumptions:  (a)
> sizeof(int) >= 4, and (b) nobody is going to make a list with 2 billion
> elements in Python.  But you're right, sooner or later that's going to bite
> us.

+1 on your patch, but frankly, if we reach a situation to be bitten
by this overflow, chances are that we've already dumped core or will
be very soon -- billions of objects = soon to be overflowing
ob_refcnt integer counters. Py_None looks like a fine candidate for this.

Now I'm sure you're going to suggest again making the ob_refcnt a long,
as you did before <wink>.


> Suggest checking at the *start* of the routine instead:
> 
>        if (self->ob_size == INT_MAX) {
>               PyErr_SetString(PyExc_OverflowError,
>                       "cannot add more objects to list");
>               return -1;
>       }
> 
> Then the list isn't harmed.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Sun Aug 13 11:32:25 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sun, 13 Aug 2000 11:32:25 +0200 (CEST)
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEHJGPAA.tim_one@email.msn.com> from "Tim Peters" at Aug 12, 2000 07:57:29 PM
Message-ID: <200008130932.LAA29181@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Vladimir Marangozov]
> > This reminder comes JIT!
> >
> > Then please reconcile the dates/instants above with the status of
> > the open patches and take a stance on them: assign them to people,
> > postpone, whatever.
> >
> > I deliberately postponed my object malloc patch.
> 
> I don't know why.  It's been there quite a while, and had non-trivial
> support for inclusion in 2.0.  A chance to consider the backlog of patches
> as a stable whole is why the two weeks between "feature freeze" and 2.0b1
> exists!

Because the log message says I'm late with the stat interface, which
shows what the situation is with and without the patch. If I want to
finish that part, I'll need to block off my Sunday afternoon. Given that
it's 11am now, I have an hour to decide what to do about it -- resurrect
it or leave it postponed.

> I doubt any variant of Stackless has a real shot for 2.0 at this point,
> although if a patch shows up before Sunday ends I won't Postpone it without
> reason (like, say, Guido tells me to).

I'm doubtful too, but if there's a clean & solid minimal implementation
which removes the stack dependency -- I'll have a look.

> 
> > PPS: is the pep-0200.html referenced above up to date? For instance,
> > I see it mentions SET_LINENO pointing to old references, while a newer
> > postponed patch is at SourceForge.
> >
> > A "last modified <date>" stamp would be nice.
> 
> I agree, but yaaaawn <wink>.  CVS says it was last modified before Jeremy
> went on vacation.  It's not up to date.  The designated release manager in
> Jeremy's absence apparently didn't touch it.  I can't gripe about that,
> though, because he's my boss <wink>.  He sent me email today saying "tag,
> now you're it!" (Guido will be gone all next week).  My plate is already
> full, though, and I won't get around to updating it today.

Okay - just wanted to make this point clear, since your reminder reads
"see the details there".

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From trentm at ActiveState.com  Sun Aug 13 20:04:49 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 13 Aug 2000 11:04:49 -0700
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <200008130916.LAA29139@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sun, Aug 13, 2000 at 11:16:50AM +0200
References: <LNBBLJKPBEHFEDALKOLCGEHIGPAA.tim_one@email.msn.com> <200008130916.LAA29139@python.inrialpes.fr>
Message-ID: <20000813110449.A23269@ActiveState.com>

On Sun, Aug 13, 2000 at 11:16:50AM +0200, Vladimir Marangozov wrote:
> Tim Peters wrote:
> > 
> > I think it's more relying on the product of two other assumptions:  (a)
> > sizeof(int) >= 4, and (b) nobody is going to make a list with 2 billion
> > elements in Python.  But you're right, sooner or later that's going to bite
> > us.
> 
> +1 on your patch, but frankly, if we reach a situation to be bitten

I'll check it in later today.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Mon Aug 14 01:08:43 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 13 Aug 2000 16:08:43 -0700
Subject: [Python-Dev] you may have some PCbuild hiccups
Message-ID: <20000813160843.A27104@ActiveState.com>

Hello all,

Recently I spearheaded a number of screw ups in the PCbuild/ directory.
PCbuild/*.dsp and *.dsw went from binary to text to binary again. These
settings are sticky CVS attributes on files in your checked-out Python tree.

If you care about the PCbuild/ content (i.e. you build Python on Windows)
then you may need to completely delete the PCbuild directory and
re-get it from CVS. You can tell if you *need* to by doing a 'cvs status
*.dsw *.dsp'. If any of those files *don't* have the "Sticky Option: -kb",
they should. If they all do and MSDEV loads the project files okay, then you
are fine.

NOTE: You have to delete the *whole* PCbuild\ directory, not just its
contents. The PCbuild\CVS control directory is part of what you have to
re-get.


Sorry,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Mon Aug 14 02:08:45 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 13 Aug 2000 20:08:45 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000813095357.K14470@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com>

[Tim]
>> List comprehensions are one of the best-loved features of Haskell
>> (really!), and Greg/Skip/Ping's patch implements as an exact a
>> parallel to Haskell's syntax and semantics as is possible in Python.

[Thomas Wouters]
> I don't see "it's cool in language X" as a particular good reason
> to include a feature... We don't add special syntax for regular
> expressions, support for continuations or direct access to hardware
> because of that, do we ?

As Guido (overly!) modestly says, the only language idea he ever invented
was an "else" clause on loops.  He decided listcomps "were Pythonic" before
knowing anything about Haskell (or SETL, from which Haskell took the idea).
Given that he *already* liked them, the value in looking at Haskell is for
its actual experience with them.  It would be pretty stupid *not* to look at
experience with other languages that already have it!  And that's whether
you're pro or con.

So it's not "cool in language X" that drives it at all, it's "cool in
language X" that serves to confirm or refute the prior judgment that "it's
Pythonic, period".  When, e.g., Eric predicts it will bite us hard someday,
I can point to Haskell and legitimately ask "why here and not there?".

There was once a great push for adding some notion of "protected" class
members to Python.  Guido was initially opposed, but tempted to waffle
because proponents kept pointing to C++.  Luckily, Stroustrup had something
to say about this in his "Design and Evolution of C++" book, including that
he thought adding "protected" was a mistake, driven by relentless "good
arguments" that opposed his own initial intuition.  So in that case,
*really* looking at C++ may have saved Guido from making the same mistake.

As another example, few arguments are made more frequently than that Python
should add embedded assignments in conditionals.  *Lots* of other languages
have that -- but they mostly serve to tell us it's a bug factory in
practice!  The languages that avoid the bugs point to ways to get the effect
safely (although none yet Pythonically enough for Guido).

So this is a fact:  language design is very little about wholesale
invention, and that's especially true of Python.  It's a mystically
difficult blending of borrowed ideas, and it's rare as readable Perl <wink>
that an idea will get borrowed if it didn't even work well in its native
home.  listcomps work great where they came from, and that plus "hey, Guido
likes 'em!" makes it 99% a done deal.

> My main beef with the syntax is that it is, in my eyes, unpythonic.
> It has an alien, forced feel to it, much more so than the 'evil'
> map/filter/reduce.  It doesn't 'fit' into Python the way most of
> the other features do;

Guido feels exactly the opposite:  the business about "alien, forced feel,
not fitting" is exactly what he's said about map/filter/reduce/lambda on
many occasions.  listcomps strike him (me too, for that matter) as much more
Pythonic than those.

> it's simply syntactic sugar for a specific kind of for-loop. It
> doesn't add any extra functionality,

All overwhelmingly true of augmented assignments, and range literals, and
three-argument getattr, and list.pop, etc etc etc too.  Python has lots of
syntactic sugar -- making life pleasant is not a bad thing.

> and for that large a syntactic change, I guess that scares me.

The only syntactic change is to add a new form of list constructor.  It's
isolated and self-contained, and so "small" in that sense.

> Those doubts were why I was glad you were going to write the PEP. I
> was looking forward to you explaining why I had those doubts and
> giving sound arguments against them :-)

There is no convincing argument to made either way on whether "it's
Pythonic", which I think is your primary worry.  People *never* reach
consensus on whether a given feature X "is Pythonic".  That's why it's
always Guido's job.  You've been here long enough to see that -1 and +1 are
about evenly balanced, except on (in recent memory) "import x as y" -- which
I conviently neglected to mention had been dismissed as unPythonic by Guido
just a couple weeks ago <wink -- but he didn't really mean it then,
according to me>.

> ...
> but that's mostly because I was expecting, like ESR, a huge debate
> on its syntax.

Won't happen, because it already did.  This was debated to death long ago,
and more than once, and Guido likes what he likes now.  Greg Wilson made the
only new point on listcomps I've seen since two weeks after they were first
proposed by Greg Ewing (i.e., that the ";" notation *really* sucked).

> Let's say that most of my doubts arose after playing with it for
> a while. I fear people will start using it in odd constructs, and
> in odd ways,

Unlike, say, filter, map and reduce <wink>?

> expecting other aspects of for-loops to be included
> in list comprehensions (break, else, continue, etc.)

Those ideas were rejected long ago too (and that Haskell and SETL also
rejected them independently shows that, whether we can explain it or not,
they're simply bad ideas).

> And then there's the way it's hard to parse because of the
> lack of punctuation in it:
>
> [((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in
> S] for b in [b for b in B if mean(b)] for b,c in C for a,d in D
> for e in [Egg(a, b, c, d, e) for e in E]]

That isn't a serious argument, to my eyes.  Write that as a Lisp one-liner
and see what it looks like then -- nuts is nuts, and a "scare example" could
just as easily be concocted out of insane nesting of subscripts and/or
function calls and/or parenthesized arithmetic.  Idiotic nesting is a
possibility for any construct that nests!  BTW, you're missing the
possibility to nest listcomps in "the expression" position too, a la

>>> [[1 for i in range(n)] for n in range(10)]
[[],
 [1],
 [1, 1],
 [1, 1, 1],
 [1, 1, 1, 1],
 [1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1, 1, 1, 1]]
>>>

I know you missed that possibility above because, despite your claim that
it's hard to parse, it's dead easy to spot where your listcomps begin:  "["
is easy for the eye to find.

> I hope anyone writing something like that (notice the shadowing of
> some of the outer vrbls in the inner loops)

You can find the same in nested lambdas littering map/reduce/etc today.

> will either add some newlines and indentation by themselves, or
> will be hunted down and shot (or at least winged) by the PSU.

Nope!  We just shun them.  Natural selection will rid the Earth of them
without violence <wink>.

> I'm not arguing to remove list comprehensions. I think they are cool
> features that can replace map/filter, I just don't think they're that
> much better than the use of map/filter.

Haskell programmers have map/filter too, and Haskell programmers routinely
favor using listcomps.  This says something about what people who have both
prefer.  I predict that once you're used to them, you'll find them much more
expressive:  "[" tells you immediately you're getting a list, then the next
thing you see is what the list is built out of, and then there's a bunch of
lower-level detail.  It's great.
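For instance (a made-up snippet, not from the patch), the same list both ways:

```python
# The listcomp reads left to right: result shape first, then the loop,
# then the lower-level filtering detail.
squares_lc = [x * x for x in range(10) if x % 2 == 0]

# The map/filter spelling buries the same information inside lambdas.
squares_mf = list(map(lambda x: x * x,
                      filter(lambda x: x % 2 == 0, range(10))))

assert squares_lc == squares_mf == [0, 4, 16, 36, 64]
```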

> Write-that-PEP-Tim-it-will-look-good-on-your-resume-ly y'rs,

except-i'm-too-old-to-need-a-resume-anymore<wink>-ly y'rs  - tim





From tim_one at email.msn.com  Mon Aug 14 03:31:20 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 13 Aug 2000 21:31:20 -0400
Subject: [Python-Dev] you may have some PCbuild hiccups
In-Reply-To: <20000813160843.A27104@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEJHGPAA.tim_one@email.msn.com>

[Trent Mick]
> Recently I spearheaded a number of screw ups in the PCbuild/
> directory.

Let's say the intent was laudable but the execution a bit off the mark
<wink>.

[binary -> text -> binary again]
> ...
> NOTE: You have to delete the *whole* PCbuild\ directory, not just
> its contents. The PCbuild\CVS control directory is part of what you
> have to re-get.

Actually, I don't think you have to bother this time -- just do a regular
update.  The files *were* marked as text this time around, but there is no
"sticky bit" saying so in the local config, so a plain update replaces them
now.

OK, I made most of that up.  But a plain update did work fine for me ...





From nowonder at nowonder.de  Mon Aug 14 06:27:07 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Mon, 14 Aug 2000 04:27:07 +0000
Subject: [*].items() (was: Re: [Python-Dev] Lockstep iteration - eureka!)
References: Your message of "Wed, 09 Aug 2000 02:37:07 MST."
			 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <l03102802b5b71c40f9fc@[193.78.237.121]> <3993FD49.C7E71108@prescod.net>
Message-ID: <3997751B.9BB1D9FA@nowonder.de>

Paul Prescod wrote:
> 
> Just van Rossum wrote:
> >
> >        for <index> indexing <element> in <seq>:
> 
> Let me throw out another idea. What if sequences just had .items()
> methods?
> 
> j=range(0,10)
> 
> for index, element in j.items():

I like the idea and so I've uploaded a patch for this to SF:
https://sourceforge.net/patch/?func=detailpatch&patch_id=101178&group_id=5470

For ease of reading:
This patch adds a .items() method to the list object.
.items() returns a list of (index, value) tuples. E.g.:

  for index, value in ["a", "b", "c"].items(): 
      print index, ":", value 

will print: 

  0: a 
  1: b 
  2: c 

I think this is an easy way to achieve looping over
index AND elements in parallel. Semantically the
following two loops should be equivalent: 

for index, value in zip(range(len(mylist)), mylist):

for index, value in mylist.items():
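The proposed method can be emulated today with a small helper (the name and modern spelling are mine, not part of the patch):

```python
def items(seq):
    # Pair each index with its element, mirroring the proposed list.items().
    return list(zip(range(len(seq)), seq))

for index, value in items(["a", "b", "c"]):
    print(index, ":", value)
```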

In contrast to patch #110138 I would call this: 
"Adding syntactic sugar without adding syntax (or sugar<wink>):"

this-doesn't-deserve-new-syntax-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From greg at cosc.canterbury.ac.nz  Mon Aug 14 06:01:35 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 16:01:35 +1200 (NZST)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug
 #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <20000811184407.A14470@xs4all.nl>
Message-ID: <200008140401.QAA14955@s454.cosc.canterbury.ac.nz>

> ERRORS
>
>       EINTR   A signal occurred.

Different unices seem to have manpages which differ considerably
in these areas. The Solaris manpage says:

     EINTR     The operation was interrupted  by  delivery  of  a
               signal  before  any  data  could be buffered to be
               sent.

which suggests that you won't get EINTR if some data *has* been
sent before the signal arrives. It seems to me the only thing that
could possibly happen in this case is to return with fewer bytes
than requested, whether the socket is non-blocking or not.

So it seems that, in the presence of signals, neither write()
nor send() can be relied upon to either completely succeed
or completely fail. 

Perhaps the reason this hasn't caused anyone a problem is that the
combination of blocking sockets and signals that you want to handle
and then carry on after are fairly rare.
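A defensive sketch of the coping strategy (modern Python spelling, illustrative only): retry on EINTR, and treat a short send as "call send() again with the rest".

```python
import errno
import socket

def send_all(sock, data):
    """Keep calling send() until every byte is out: a partial send or an
    EINTR interruption just means the remainder must be sent again."""
    while data:
        try:
            n = sock.send(data)
        except OSError as e:
            if e.errno == errno.EINTR:
                continue        # interrupted before any data was buffered
            raise
        data = data[n:]         # drop the bytes that did go out

a, b = socket.socketpair()
send_all(a, b"hello")
print(b.recv(5))                # b'hello'
```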

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Mon Aug 14 05:51:45 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 15:51:45 +1200 (NZST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <20000811103701.A25386@keymaster.enme.ucalgary.ca>
Message-ID: <200008140351.PAA14951@s454.cosc.canterbury.ac.nz>

> We don't limit the amount of memory you can allocate on all
> machines just because your program may run out of memory on some
> machine.

Legend has it that Steve Jobs tried to do something like that
with the original 128K Macintosh. He was against making the
machine expandable in any way, so that any program which ran
on one Mac would run on all Macs.

Didn't stay that way for very long...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Mon Aug 14 06:17:30 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 16:17:30 +1200 (NZST)
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers
 for 2.0)
In-Reply-To: <20000813095357.K14470@xs4all.nl>
Message-ID: <200008140417.QAA14959@s454.cosc.canterbury.ac.nz>

Two reasons why list comprehensions fit better in Python
than the equivalent map/filter/lambda constructs:

1) Scoping. The expressions in the LC have direct access to the
   enclosing scope, which is not true of lambdas in Python.

2) Efficiency. An LC with if-clauses which weed out many potential
   list elements can be much more efficient than the equivalent
   filter operation, which must build the whole list first and
   then remove unwanted items.
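The scoping point is easy to demonstrate (snippet mine, not Greg's): before nested scopes, a lambda had to smuggle enclosing-scope names in via default arguments, while the listcomp sees them directly.

```python
threshold = 5
data = range(10)

# 1) The listcomp's 'if' reads 'threshold' from the enclosing scope.
big_lc = [x for x in data if x > threshold]

# The lambda needs the classic default-argument trick to capture it.
big_fl = list(filter(lambda x, t=threshold: x > t, data))

assert big_lc == big_fl == [6, 7, 8, 9]
```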

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Mon Aug 14 06:24:43 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 16:24:43 +1200 (NZST)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug
 #111620] lots of use of send() without verifying amount of d
In-Reply-To: <001301c00469$cb380fe0$f2a6b5d4@hagrid>
Message-ID: <200008140424.QAA14962@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <effbot at telia.com>:

> fwiw, I still haven't found a single reference (SUSv2 spec, man-
> pages, Stevens, the original BSD papers) that says that a blocking
> socket may do anything but sending all the data, or fail.

The Solaris manpage sort of seems to indirectly suggest that
it might conceivably be possible:

     EMSGSIZE  The socket requires that message  be  sent  atomi-
               cally, and the message was too long.

Which suggests that some types of socket may not require the
message to be sent atomically. (TCP/IP, for example.)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From thomas at xs4all.net  Mon Aug 14 07:38:55 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 14 Aug 2000 07:38:55 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/api api.tex,1.76,1.77
In-Reply-To: <200008140250.TAA31549@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Sun, Aug 13, 2000 at 07:50:23PM -0700
References: <200008140250.TAA31549@slayer.i.sourceforge.net>
Message-ID: <20000814073854.O14470@xs4all.nl>

On Sun, Aug 13, 2000 at 07:50:23PM -0700, Fred L. Drake wrote:

> In the section on the "Very High Level Layer", address concerns brought up
> by Edward K. Ream <edream at users.sourceforge.net> about FILE* values and
> incompatible C libraries in dynamically linked extensions.  It is not clear
> (to me) how realistic the issue is, but it is better documented than not.

> + Note also that several of these functions take \ctype{FILE*}
> + parameters.  On particular issue which needs to be handled carefully
> + is that the \ctype{FILE} structure for different C libraries can be
> + different and incompatible.  Under Windows (at least), it is possible
> + for dynamically linked extensions to actually use different libraries,
> + so care should be taken that \ctype{FILE*} parameters are only passed
> + to these functions if it is certain that they were created by the same
> + library that the Python runtime is using.

I saw a Jitterbug 'suggestion' bugthread, where Guido ended up liking the
idea of wrapping fopen() and fclose() in the Python library, so that you got
the right FILE structures when linking with another libc/compiler. Whatever
happened to that idea ? Or does it just await implementation ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Mon Aug 14 07:57:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 14 Aug 2000 07:57:13 +0200
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sun, Aug 13, 2000 at 08:08:45PM -0400
References: <20000813095357.K14470@xs4all.nl> <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com>
Message-ID: <20000814075713.P14470@xs4all.nl>

Well, Tim, thanx for that mini-PEP, if I can call your recap of years of
discussion that ;-) It did clear up my mind, though I have a few comments to
make. This is the last I have to say about it, though; I didn't intend to
drag you into a new long discussion ;)

On Sun, Aug 13, 2000 at 08:08:45PM -0400, Tim Peters wrote:

> Guido feels exactly the opposite:  the business about "alien, forced feel,
> not fitting" is exactly what he's said about map/filter/reduce/lambda on
> many occasions. 

Note that I didn't mention lambda, and did so purposely ;) Yes, listcomps
are much better than lambda. And I'll grant the special case of 'None' as
the function is unpythonic, in map/filter/reduce. Other than that, they're
just functions, which I hope aren't too unpythonic<wink>

> > [((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in
> > S] for b in [b for b in B if mean(b)] for b,c in C for a,d in D
> > for e in [Egg(a, b, c, d, e) for e in E]]

> That isn't a serious argument, to my eyes.

Well, it's at the core of my doubts :) 'for' and 'if' start out of thin air.
I don't think any other Python statement or expression can be repeated and
glued together without any kind of separator, except string literals (which
I can see the point of, but which scared me a little nonetheless.)

I don't know enough lisp to write this expression in that, but I assume you
could still match the parentheses to find out how they are grouped.

> I know you missed that possibility above because, despite your claim of
> being hard to parse, it's dead easy to spot where your listcomps begin:  "["
> is easy for the eye to find.

That's the start of a listcomp, but not of a specific listcomp-for or -if.

> > I hope anyone writing something like that (notice the shadowing of
> > some of the outer vrbls in the inner loops)

> You can find the same in nested lambdas littering map/reduce/etc today.

Yes, and wasn't the point to remove those ? <wink>

Like I said, I'm not arguing against listcomprehensions, I'm just saying I'm
sorry we didn't get yet another debate on syntax ;) Having said that, I'll
step back and let Eric's predicted doom fall over Python; hopefully we are
wrong and you all are right :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jack at oratrix.nl  Mon Aug 14 11:44:39 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 14 Aug 2000 11:44:39 +0200
Subject: [Python-Dev] Preventing recursion core dumps 
In-Reply-To: Message by Guido van Rossum <guido@beopen.com> ,
	     Fri, 11 Aug 2000 09:28:09 -0500 , <200008111428.JAA04464@cj20424-a.reston1.va.home.com> 
Message-ID: <20000814094440.0BC7F303181@snelboot.oratrix.nl>

Isn't the solution to this problem to just implement PyOS_CheckStack() for 
unix?

I assume you can implement it fairly cheaply by having the first call compute 
a stack warning address, and having subsequent calls simply check that the 
stack hasn't grown past that limit yet.

It might also be needed to add a few more PyOS_CheckStack() calls here and 
there, but I think most of them are in place.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From mal at lemburg.com  Mon Aug 14 13:27:27 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 14 Aug 2000 13:27:27 +0200
Subject: [Python-Dev] Doc-strings for class attributes ?!
Message-ID: <3997D79F.CC4A5A0E@lemburg.com>

I've been doing a lot of auto-doc style documentation lately
and have wondered how to include documentation for class attributes
in a nice and usable way.

Right now, we already have doc-strings for modules, classes,
functions and methods. Yet there is no way to assign doc-strings
to arbitrary class attributes.

I figured that it would be nice to have the doc-strings for
attributes use the same notation as for the other objects, e.g.

class C:
    " My class C "

    a = 1
    " This is the attribute a of class C, used for ..."

    b = 0
    " Setting b to 1 causes..."

The idea is to create an implicit second attribute, with a special
name, for every documented attribute, e.g. for
attribute b:

    __doc__b__ = " Setting b to 1 causes..."

That way doc-strings would be able to use class inheritance
just like the attributes themselves. The extra attributes can
be created by the compiler. In -OO mode, these attributes would
not be created.

What do you think about this idea ?
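Spelled out by hand (the exact mangled name below is illustrative; the compiler would generate it), the inheritance behaviour would look like:

```python
class C:
    "My class C"
    a = 1
    __doc_a__ = "This is the attribute a of class C, used for ..."

class D(C):
    # The companion doc-string attribute inherits along with 'a' itself.
    pass

def attr_doc(klass, name):
    # Look up the doc-string for a class attribute, honouring inheritance.
    return getattr(klass, "__doc_%s__" % name, None)

print(attr_doc(D, "a"))  # inherited from C
```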

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bwarsaw at beopen.com  Mon Aug 14 16:13:21 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 10:13:21 -0400 (EDT)
Subject: [Python-Dev] 2nd thought: fully qualified host names
References: <3992DF9E.BF5A080C@nowonder.de>
	<200008101614.LAA28785@cj20424-a.reston1.va.home.com>
	<20000810174026.D17171@xs4all.nl>
	<39933AD8.B8EF5D59@nowonder.de>
	<20000811005013.F17171@xs4all.nl>
Message-ID: <14743.65153.264194.444209@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> Fine, the patch addresses that. When the hostname passed to
    TW> smtplib is "" (which is the default), it should be turned into
    TW> a FQDN. I agree. However, if someone passed in a name, we do
    TW> not know if they even *want* the name turned into a FQDN. In
    TW> the face of ambiguity, refuse the temptation to guess.

Just to weigh in after the fact, I agree with Thomas.  All this stuff
is primarily there to generate something sane for the default empty
string argument.  If the library client passes in their own name,
smtplib.py should use that as given.
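The policy is a two-liner (a sketch of the behaviour being argued for, not smtplib's actual code; the function name is made up):

```python
import socket

def choose_local_hostname(name=""):
    # Only guess when the caller gave nothing: a supplied name is
    # passed through untouched, the default "" gets a best-effort FQDN.
    if name:
        return name
    return socket.getfqdn()

assert choose_local_hostname("mail.example.com") == "mail.example.com"
```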

-Barry



From fdrake at beopen.com  Mon Aug 14 16:46:17 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 14 Aug 2000 10:46:17 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test test_ntpath.py,1.2,1.3
In-Reply-To: <200008140621.XAA12890@slayer.i.sourceforge.net>
References: <200008140621.XAA12890@slayer.i.sourceforge.net>
Message-ID: <14744.1593.850598.411098@cj42289-a.reston1.va.home.com>

Mark Hammond writes:
 > Test for fix to bug #110673: os.abspath() now always returns
 > os.getcwd() on Windows, if an empty path is specified.  It
 > previously did not if an empty path was delegated to
 > win32api.GetFullPathName().
...
 > + tester('ntpath.abspath("")', os.getcwd())

  This doesn't work.  The test should pass on non-Windows platforms as
well; on Linux I get this:

cj42289-a(.../python/linux-beowolf); ./python ../Lib/test/test_ntpath.py
error!
evaluated: ntpath.abspath("")
should be: /home/fdrake/projects/python/linux-beowolf
 returned: \home\fdrake\projects\python\linux-beowolf\

1 errors.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From skip at mojam.com  Mon Aug 14 16:56:39 2000
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 14 Aug 2000 09:56:39 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0001.txt,1.4,1.5
In-Reply-To: <200008141448.HAA18067@slayer.i.sourceforge.net>
References: <200008141448.HAA18067@slayer.i.sourceforge.net>
Message-ID: <14744.2215.11395.695253@beluga.mojam.com>

    Barry> There are now three basic types of PEPs: informational, standards
    Barry> track, and technical.

Looking more like RFCs all the time... ;-)

Skip



From jim at interet.com  Mon Aug 14 17:25:59 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Mon, 14 Aug 2000 11:25:59 -0400
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <39980F87.85641FD2@interet.com>

Bill Tutt wrote:
> 
> This is an alternative approach that we should certainly consider. We could
> use ANTLR (www.antlr.org) as our parser generator, and have it generate Java

What about using Bison/Yacc?  I have been playing with a
lint tool for Python, and have been using Yacc for it.

JimA



From trentm at ActiveState.com  Mon Aug 14 17:41:28 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Mon, 14 Aug 2000 08:41:28 -0700
Subject: Python lint tool (was: Re: [Python-Dev] Python keywords (was Lockstep iteration - eureka!))
In-Reply-To: <39980F87.85641FD2@interet.com>; from jim@interet.com on Mon, Aug 14, 2000 at 11:25:59AM -0400
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com> <39980F87.85641FD2@interet.com>
Message-ID: <20000814084128.A7537@ActiveState.com>

On Mon, Aug 14, 2000 at 11:25:59AM -0400, James C. Ahlstrom wrote:
> What about using Bison/Yacc?  I have been playing with a
> lint tool for Python, and have been using it.
> 
Oh yeah? What does the linter check? I would be interested in seeing that.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From bwarsaw at beopen.com  Mon Aug 14 17:46:50 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 11:46:50 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
References: <3992DF9E.BF5A080C@nowonder.de>
	<200008101614.LAA28785@cj20424-a.reston1.va.home.com>
	<20000810174026.D17171@xs4all.nl>
	<3993D570.7578FE71@nowonder.de>
Message-ID: <14744.5229.470633.973850@anthem.concentric.net>

>>>>> "PS" == Peter Schneider-Kamp <nowonder at nowonder.de> writes:

    PS> After sleeping over it, I noticed that at least
    PS> BaseHTTPServer and ftplib also use a similar
    PS> algorithm to get a fully qualified domain name.

    PS> Together with smtplib there are four occurrences
    PS> of the algorithm (2 in BaseHTTPServer). I think
    PS> it would be good not to have four, but one
    PS> implementation.

    PS> First I thought it could be socket.get_fqdn(),
    PS> but it seems a bit troublesome to write it in C.

    PS> Should this go somewhere? If yes, where should
    PS> it go?

    PS> I'll happily prepare a patch as soon as I know
    PS> where to put it.

I wonder if we should move socket to _socket and write a Python
wrapper which would basically import * from _socket and add
make_fqdn().

-Barry



From thomas at xs4all.net  Mon Aug 14 17:48:37 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 14 Aug 2000 17:48:37 +0200
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <14744.5229.470633.973850@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 14, 2000 at 11:46:50AM -0400
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl> <3993D570.7578FE71@nowonder.de> <14744.5229.470633.973850@anthem.concentric.net>
Message-ID: <20000814174837.S14470@xs4all.nl>

On Mon, Aug 14, 2000 at 11:46:50AM -0400, Barry A. Warsaw wrote:

> >>>>> "PS" == Peter Schneider-Kamp <nowonder at nowonder.de> writes:

>     PS> After sleeping over it, I noticed that at least
>     PS> BaseHTTPServer and ftplib also use a similar
>     PS> algorithm to get a fully qualified domain name.
> 
>     PS> Together with smtplib there are four occurrences
>     PS> of the algorithm (2 in BaseHTTPServer). I think
>     PS> it would be good not to have four, but one
>     PS> implementation.
> 
>     PS> First I thought it could be socket.get_fqdn(),
>     PS> but it seems a bit troublesome to write it in C.
> 
>     PS> Should this go somewhere? If yes, where should
>     PS> it go?
> 
>     PS> I'll happily prepare a patch as soon as I know
>     PS> where to put it.
> 
> I wonder if we should move socket to _socket and write a Python
> wrapper which would basically import * from _socket and add
> make_fqdn().

+1 on that idea, especially since BeOS and Windows (I think ?) already have
that construction. If we are going to place this make_fqdn() function
anywhere, it should be the socket module or a 'dns' module. (And I mean a
real DNS module, not the low-level wrapper around raw DNS packets that Guido
wrote ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From bwarsaw at beopen.com  Mon Aug 14 17:56:15 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 11:56:15 -0400 (EDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
	<l03102802b5b71c40f9fc@[193.78.237.121]>
	<3993FD49.C7E71108@prescod.net>
Message-ID: <14744.5791.895030.893545@anthem.concentric.net>

>>>>> "PP" == Paul Prescod <paul at prescod.net> writes:

    PP> Let me throw out another idea. What if sequences just had
    PP> .items() methods?

Funny, I remember talking with Guido about this on a lunch trip
several years ago.  Tim will probably chime in that /he/ proposed it
in the Python 0.9.3 time frame.  :)

-Barry



From fdrake at beopen.com  Mon Aug 14 17:59:53 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 14 Aug 2000 11:59:53 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <14744.5229.470633.973850@anthem.concentric.net>
References: <3992DF9E.BF5A080C@nowonder.de>
	<200008101614.LAA28785@cj20424-a.reston1.va.home.com>
	<20000810174026.D17171@xs4all.nl>
	<3993D570.7578FE71@nowonder.de>
	<14744.5229.470633.973850@anthem.concentric.net>
Message-ID: <14744.6009.66009.888078@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > I wonder if we should move socket to _socket and write a Python
 > wrapper which would basically import * from _socket and add
 > make_fqdn().

  I think we could either do this or use PyRun_String() from
initsocket().


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Mon Aug 14 18:09:11 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 12:09:11 -0400 (EDT)
Subject: [Python-Dev] Cookie.py
References: <20000811122608.F20646@kronos.cnri.reston.va.us>
	<Pine.GSO.4.10.10008111936060.5259-100000@sundial>
Message-ID: <14744.6567.225562.458943@anthem.concentric.net>

>>>>> "MZ" == Moshe Zadka <moshez at math.huji.ac.il> writes:

    | a) SimpleCookie -- never uses pickle
    | b) SerializeCookie -- always uses pickle
    | c) SmartCookie -- uses pickle based on the old heuristic.

Very cool.  The autopicklification really bugged me too (literally) in
Mailman.

-Barry



From bwarsaw at beopen.com  Mon Aug 14 18:12:45 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 12:12:45 -0400 (EDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka
	!)
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <14744.6781.535265.161119@anthem.concentric.net>

>>>>> "BT" == Bill Tutt <billtut at microsoft.com> writes:

    BT> This is an alternative approach that we should certainly
    BT> consider. We could use ANTLR (www.antlr.org) as our parser
    BT> generator, and have it generate Java for JPython, and C++ for
    BT> CPython.  This would be a good chunk of work, and it's
    BT> something I really don't have time to pursue. I don't even
    BT> have time to pursue the idea about moving keyword recognition
    BT> into the lexer.

    BT> I'm just not sure if you want to bother introducing C++ into
    BT> the Python codebase solely to only have one parser for CPython
    BT> and JPython.

We've talked about exactly those issues internally a while back, but
never came to a conclusion (IIRC) about the C++ issue for CPython.

-Barry



From jim at interet.com  Mon Aug 14 18:29:08 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Mon, 14 Aug 2000 12:29:08 -0400
Subject: Python lint tool (was: Re: [Python-Dev] Python keywords (was 
 Lockstep iteration - eureka!))
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com> <39980F87.85641FD2@interet.com> <20000814084128.A7537@ActiveState.com>
Message-ID: <39981E54.D50BD0B4@interet.com>

Trent Mick wrote:
> 
> On Mon, Aug 14, 2000 at 11:25:59AM -0400, James C. Ahlstrom wrote:
> > What about using Bison/Yacc?  I have been playing with a
> > lint tool for Python, and have been using it.
> >
> Oh yeah? What does the linter check? I would be interested in seeing that.

Actually I have better luck parsing Python than linting it.  My
initial naive approach, using C-language wisdom such as checking
the line numbers where variables are set and used, failed.  I now
feel that a Python lint tool must either use complete data-flow
analysis (hard) or must actually interpret the code as Python does
(also hard).
All I can really do so far is get and check function signatures.
I can supply more details if you want, but remember it doesn't
work yet, and I may not have time to complete it.  I learned a
lot though.

To parse Python I first use Parser/tokenizer.c to return tokens,
then a Yacc grammar file.  This parses all of Lib/*.py in less
than two seconds on a modest machine.  The tokens returned by
tokenizer.c must be massaged a bit to be suitable for Yacc, but
nothing major.

All the Yacc actions are calls to Python methods, so the real
work is written in Python.  Yacc just represents the grammar.

The problem I have with the current grammar is the large number
of confusing shifts required.  The grammar can't specify operator
precedence, so it uses shift/reduce conflicts instead.  Yacc
eliminates this problem.
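The tokenizer-plus-grammar split Jim describes can be mimicked entirely
in Python with the stdlib tokenize module, which closely mirrors
Parser/tokenizer.c.  A rough sketch of the front end:

```python
import io
import tokenize

def token_stream(source):
    # Yield (token-name, text) pairs -- roughly what a Yacc grammar
    # would consume after the "massaging" mentioned above.
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        yield tokenize.tok_name[tok.type], tok.string

tokens = list(token_stream("def f(x): return x\n"))
print(tokens[:3])   # [('NAME', 'def'), ('NAME', 'f'), ('OP', '(')]
```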

JimA



From tim_one at email.msn.com  Mon Aug 14 18:42:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 14 Aug 2000 12:42:14 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <14744.5791.895030.893545@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>

[Paul Prescod]
> Let me throw out another idea. What if sequences just had
> .items() methods?

[Barry A. Warsaw]
> Funny, I remember talking with Guido about this on a lunch trip
> several years ago.  Tim will probably chime in that /he/ proposed it
> in the Python 0.9.3 time frame.  :)

Not me, although *someone* proposed it at least that early, perhaps at 0.9.1
already.  IIRC, that was the very first time Guido used the term
"hypergeneralization" in a cluck-cluck kind of public way.  That is,
sequences and mappings are different concepts in Python, and intentionally
so.  Don't know how he feels now.

But if you add seq.items(), you had better add seq.keys() too, and
seq.values() as a synonym for seq[:].  I guess the perceived advantage of
adding seq.items() is that it supplies yet another incredibly slow and
convoluted way to get at the for-loop index?  "Ah, that's the ticket!  Let's
allocate gazillabytes of storage and compute all the indexes into a massive
data structure up front, and then we can use the loop index that's already
sitting there for free anyway to index into that and get back a redundant
copy of itself!" <wink>.
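Spelled out, the hypergeneralization Tim is poking at would materialize
exactly this (a sketch -- seq.items() never existed for sequences):

```python
seq = ["a", "b", "c"]

# What a hypothetical seq.items() would allocate up front:
items = [(i, seq[i]) for i in range(len(seq))]
print(items)          # [(0, 'a'), (1, 'b'), (2, 'c')]

# versus the loop index that is "already sitting there for free":
for i in range(len(seq)):
    element = seq[i]  # no extra data structure required
```

Python 2.3 eventually settled the question with the lazy enumerate()
built-in, which yields the pairs one at a time instead of building them
all in advance.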

not-a-good-sign-when-common-sense-is-offended-ly y'rs  - tim





From bwarsaw at beopen.com  Mon Aug 14 18:48:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 12:48:59 -0400 (EDT)
Subject: [Python-Dev] Python Announcements ???
References: <39951D69.45D01703@lemburg.com>
Message-ID: <14744.8955.35531.757406@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> Could someone at BeOpen please check what happened to the
    M> python-announce mailing list ?!

This is on my task list, but I was on vacation last week and have been
swamped with various other things.  My plan is to feed the
announcements to a Mailman list, where approval can happen using the
same moderator interface.  But I need to make a few changes to Mailman
to support this.

-Barry



From mal at lemburg.com  Mon Aug 14 18:54:05 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 14 Aug 2000 18:54:05 +0200
Subject: [Python-Dev] Python Announcements ???
References: <39951D69.45D01703@lemburg.com> <14744.8955.35531.757406@anthem.concentric.net>
Message-ID: <3998242D.A61010FB@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     M> Could someone at BeOpen please check what happened to the
>     M> python-announce mailing list ?!
> 
> This is on my task list, but I was on vacation last week and have been
> swamped with various other things.  My plan is to feed the
> announcements to a Mailman list, where approval can happen using the
> same moderator interface.  But I need to make a few changes to Mailman
> to support this.

Great :-)

BTW, doesn't SourceForge have some News channel for Python
as well (I have seen these for other projects) ? Would be
cool to channel the announcements there as well.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From ping at lfw.org  Mon Aug 14 20:58:11 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Mon, 14 Aug 2000 13:58:11 -0500 (CDT)
Subject: Python lint tool (was: Re: [Python-Dev] Python keywords (was 
 Lockstep iteration - eureka!))
In-Reply-To: <39981E54.D50BD0B4@interet.com>
Message-ID: <Pine.LNX.4.10.10008141345220.3988-100000@server1.lfw.org>

Trent Mick wrote:
> Oh yeah? What does the linter check? I would be interested in seeing that.

James C. Ahlstrom wrote:
> Actually I have better luck parsing Python than linting it.  [...]
> All I can really do so far is get and check function signatures.

Python is hard to lint-check because types and objects are so
dynamic.  Last time i remember visiting this issue, Tim Peters
came up with a lint program that was based on warning you if
you used a particular spelling of an identifier only once (thus
likely to indicate a typing mistake).

I enhanced this a bit to follow imports and the result is at

    http://www.lfw.org/python/

(look for "pylint").

The rule is pretty simplistic, but i've tweaked it a bit and it
has actually worked pretty well for me.
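The spelled-only-once rule is small enough to sketch with stdlib pieces.
This is an approximation of the idea, not pylint's actual code:

```python
import builtins
import collections
import io
import keyword
import tokenize

def once_only_names(source):
    # Count every identifier; a name spelled exactly once is a likely
    # typo for another name.  Keywords and builtins are ignored.
    counts = collections.Counter(
        tok.string
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type == tokenize.NAME
        and not keyword.iskeyword(tok.string)
        and not hasattr(builtins, tok.string)
    )
    return sorted(name for name, n in counts.items() if n == 1)

code = "total = 0\nfor x in range(10):\n    totle = total + x\n"
print(once_only_names(code))   # ['totle'] -- the misspelling stands out
```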

Anyway, feel free to give it a whirl.



-- ?!ng




From bwarsaw at beopen.com  Mon Aug 14 21:12:04 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 15:12:04 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
References: <3992DF9E.BF5A080C@nowonder.de>
	<200008101614.LAA28785@cj20424-a.reston1.va.home.com>
	<20000810174026.D17171@xs4all.nl>
	<3993D570.7578FE71@nowonder.de>
	<14744.5229.470633.973850@anthem.concentric.net>
	<14744.6009.66009.888078@cj42289-a.reston1.va.home.com>
Message-ID: <14744.17540.586064.729048@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake at beopen.com> writes:

    |   I think we could either do this or use PyRun_String() from
    | initsocket().

Ug.  -1 on using PyRun_String().  Doing the socket->_socket shuffle is
better for the long term.

-Barry



From nowonder at nowonder.de  Mon Aug 14 23:12:03 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Mon, 14 Aug 2000 21:12:03 +0000
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>
Message-ID: <399860A3.4E9A340E@nowonder.de>

Tim Peters wrote:
> 
> But if you add seq.items(), you had better add seq.keys() too, and
> seq.values() as a synonym for seq[:].  I guess the perceived advantage of
> adding seq.items() is that it supplies yet another incredibly slow and
> convoluted way to get at the for-loop index?  "Ah, that's the ticket!  Let's
> allocate gazillabytes of storage and compute all the indexes into a massive
> data structure up front, and then we can use the loop index that's already
> sitting there for free anyway to index into that and get back a redundant
> copy of itself!" <wink>.

That's a -1, right? <0.1 wink>

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From fdrake at beopen.com  Mon Aug 14 21:13:29 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 14 Aug 2000 15:13:29 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <14744.17540.586064.729048@anthem.concentric.net>
References: <3992DF9E.BF5A080C@nowonder.de>
	<200008101614.LAA28785@cj20424-a.reston1.va.home.com>
	<20000810174026.D17171@xs4all.nl>
	<3993D570.7578FE71@nowonder.de>
	<14744.5229.470633.973850@anthem.concentric.net>
	<14744.6009.66009.888078@cj42289-a.reston1.va.home.com>
	<14744.17540.586064.729048@anthem.concentric.net>
Message-ID: <14744.17625.935969.667720@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > Ug.  -1 on using PyRun_String().  Doing the socket->_socket shuffle is
 > better for the long term.

  I'm inclined to agree, simply because it allows at least a slight
simplification in socketmodule.c since the conditional naming of the
module init function can be removed.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Mon Aug 14 21:24:10 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 15:24:10 -0400 (EDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <14744.5791.895030.893545@anthem.concentric.net>
	<LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>
Message-ID: <14744.18266.840173.466719@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> But if you add seq.items(), you had better add seq.keys() too,
    TP> and seq.values() as a synonym for seq[:].  I guess the
    TP> perceived advantage of adding seq.items() is that it supplies
    TP> yet another incredibly slow and convoluted way to get at the
    TP> for-loop index?  "Ah, that's the ticket!  Let's allocate
    TP> gazillabytes of storage and compute all the indexes into a
    TP> massive data structure up front, and then we can use the loop
    TP> index that's already sitting there for free anyway to index
    TP> into that and get back a redundant copy of itself!" <wink>.

Or create a generator.  <oops, slap>

-Barry



From bwarsaw at beopen.com  Mon Aug 14 21:25:07 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 15:25:07 -0400 (EDT)
Subject: [Python-Dev] Python Announcements ???
References: <39951D69.45D01703@lemburg.com>
	<14744.8955.35531.757406@anthem.concentric.net>
	<3998242D.A61010FB@lemburg.com>
Message-ID: <14744.18323.499501.115700@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> BTW, doesn't SourceForge have some News channel for Python
    M> as well (I have seen these for other projects) ? Would be
    M> cool to channel the announcements there as well.

Yes, but it's a bit clunky.

-Barry



From esr at thyrsus.com  Tue Aug 15 00:57:18 2000
From: esr at thyrsus.com (esr at thyrsus.com)
Date: Mon, 14 Aug 2000 18:57:18 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <200008140417.QAA14959@s454.cosc.canterbury.ac.nz>
References: <20000813095357.K14470@xs4all.nl> <200008140417.QAA14959@s454.cosc.canterbury.ac.nz>
Message-ID: <20000814185718.A2509@thyrsus.com>

Greg Ewing <greg at cosc.canterbury.ac.nz>:
> Two reasons why list comprehensions fit better in Python
> than the equivalent map/filter/lambda constructs:
> 
> 1) Scoping. The expressions in the LC have direct access to the
>    enclosing scope, which is not true of lambdas in Python.

This is a bug in lambdas, not a feature of syntax.
 
> 2) Efficiency. An LC with if-clauses which weed out many potential
>    list elements can be much more efficient than the equivalent
>    filter operation, which must build the whole list first and
>    then remove unwanted items.

A better argument.  To refute it, I'd need to open a big can of worms
labeled "lazy evaluation".
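Greg's scoping point is easy to demonstrate; before nested scopes the
standard workaround for the lambda "bug" Eric mentions was the
default-argument trick:

```python
n = 3

# 1) Scoping: the LC reads 'n' from the enclosing scope directly.
lc = [x * n for x in range(5) if x % 2]

# A lambda of that era could not, hence the n=n default argument:
odd     = filter(lambda x, n=n: x % 2, range(5))
old_way = list(map(lambda x, n=n: x * n, odd))

print(lc, old_way)   # [3, 9] [3, 9]
```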
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Freedom, morality, and the human dignity of the individual consists
precisely in this; that he does good not because he is forced to do
so, but because he freely conceives it, wants it, and loves it.
	-- Mikhail Bakunin 



From esr at thyrsus.com  Tue Aug 15 00:59:08 2000
From: esr at thyrsus.com (esr at thyrsus.com)
Date: Mon, 14 Aug 2000 18:59:08 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000814075713.P14470@xs4all.nl>
References: <20000813095357.K14470@xs4all.nl> <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com> <20000814075713.P14470@xs4all.nl>
Message-ID: <20000814185908.B2509@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> Like I said, I'm not arguing against listcomprehensions, I'm just saying I'm
> sorry we didn't get yet another debate on syntax ;) Having said that, I'll
> step back and let Eric's predicted doom fall over Python; hopefully we are
> wrong and you all are right :-)

Now, now.  I'm not predicting the doom of Python as a whole, just that 
listcomp syntax will turn out to have been a bad, limiting mistake.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

It is proper to take alarm at the first experiment on our
liberties. We hold this prudent jealousy to be the first duty of
citizens and one of the noblest characteristics of the late
Revolution. The freemen of America did not wait till usurped power had
strengthened itself by exercise and entangled the question in
precedents. They saw all the consequences in the principle, and they
avoided the consequences by denying the principle. We revere this
lesson too much ... to forget it
	-- James Madison.



From MarkH at ActiveState.com  Tue Aug 15 02:46:56 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 15 Aug 2000 10:46:56 +1000
Subject: [Python-Dev] WindowsError repr
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEMDDEAA.MarkH@ActiveState.com>

I have just checked in a fix for: [ Bug #110670 ] Win32 os.listdir raises
confusing errors
http://sourceforge.net/bugs/?group_id=5470&func=detailbug&bug_id=110670

In a nutshell:
>>> os.listdir('/cow')
...
OSError: [Errno 3] No such process: '/cow'
>>>

The solution here was to use the new WindowsError object that was defined
back in February
(http://www.python.org/pipermail/python-dev/2000-February/008803.html)  As
this is a sub-class of OSError, nothing will break.

However, the _look_ of the error does change.  After my fix, it now looks
like:

>>> os.listdir('/cow')
...
WindowsError: [Errno 3] The system cannot find the path specified: '/cow'
>>>

AGAIN - I stress - catching "OSError" or "os.error" _will_ continue to
work, as WindowsError derives from OSError.  It just worries me that people
will start explicitly catching "WindowsError", regardless of whatever
documentation we might write on the subject.
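The inheritance guarantee can be shown without Windows at all.  The
class below is a stand-in, since the real WindowsError only existed on
Windows builds:

```python
class WindowsError(OSError):          # stand-in for the real subclass
    pass

try:
    raise WindowsError(3, "The system cannot find the path specified")
except OSError as exc:                # catching the base class suffices
    caught = exc

print(type(caught).__name__, caught)
```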

Does anyone see this as a problem?  Should a WindowsError masquerade as
"OSError", or maybe just look a little more like it - eg, "OSError
(windows)" ??

Thoughts,

Mark.




From tim_one at email.msn.com  Tue Aug 15 03:01:55 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 14 Aug 2000 21:01:55 -0400
Subject: [Python-Dev] WindowsError repr
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEMDDEAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEMIGPAA.tim_one@email.msn.com>

[Mark Hammond]
> ...
> However, the _look_ of the error does change.  After my fix, it now looks
> like:
>
> >>> os.listdir('/cow')
> ...
> WindowsError: [Errno 3] The system cannot find the path specified: '/cow'
> >>>

Thank you!

> AGAIN - I stress - catching "OSError" or "os.error" _will_ continue to
> work, as WindowsError derives from OSError.  It just worries me
> that people will start explicitly catching "WindowsError", regardless
> of whatever documentation we might write on the subject.
>
> Does anyone see this as a problem?  Should a WindowsError masquerade as
> "OSError", or maybe just look a little more like it - eg, "OSError
> (windows)" ??

I can assure you that nobody running on a Unix(tm) derivative is going to
catch WindowsError as such on purpose, so the question is how stupid are
Windows users?  I say leave it alone and let them tell us <wink>.





From greg at cosc.canterbury.ac.nz  Tue Aug 15 03:08:00 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 15 Aug 2000 13:08:00 +1200 (NZST)
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers
 for 2.0)
In-Reply-To: <20000814075713.P14470@xs4all.nl>
Message-ID: <200008150108.NAA15067@s454.cosc.canterbury.ac.nz>

> > [((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in
> > S] for b in [b for b in B if mean(b)] for b,c in C for a,d in D
> > for e in [Egg(a, b, c, d, e) for e in E]]

Note that shadowing of the local variables like that in
an LC is NOT allowed, because, like the variables in a
normal for loop, they're all at the same scope level.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Tue Aug 15 06:43:44 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 00:43:44 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <399860A3.4E9A340E@nowonder.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCOENAGPAA.tim_one@email.msn.com>

[Tim]
> But if you add seq.items(), you had better add seq.keys() too, and
> seq.values() as a synonym for seq[:].  I guess the perceived
> advantage of adding seq.items() is that it supplies yet another
> incredibly slow and convoluted way to get at the for-loop index?
> "Ah, that's the ticket!  Let's allocate gazillabytes of storage and
> compute all the indexes into a massive data structure up front, and
> then we can use the loop index that's already sitting there for
> free anyway to index into that and get back a redundant copy of
> itself!" <wink>.

[Peter Schneider-Kamp]
> That's a -1, right? <0.1 wink>

-0 if you also add .keys() and .values() (if you're going to
hypergeneralize, don't go partway nuts -- then it's both more general than
it should be yet still not as general as people will expect).

-1 if it's *just* seq.items().

+1 on an "indexing" clause (the BDFL liked that enough to implement it a few
years ago, but it didn't go in then because he found some random putz who
had used "indexing" as a vrbl name; but it doesn't need to be a keyword,
even that lame (ask Just <wink>) objection goes away).

sqrt(-1) on Barry's generator tease, because that's an imaginary proposal at
this stage of the game.





From effbot at telia.com  Tue Aug 15 07:33:03 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 07:33:03 +0200
Subject: [Python-Dev] WindowsError repr
References: <ECEPKNMJLHAPFFJHDOJBEEMDDEAA.MarkH@ActiveState.com>
Message-ID: <003d01c0067a$4aa6dc40$f2a6b5d4@hagrid>

mark wrote:
> AGAIN - I stress - catching "OSError" or "os.error" _will_ continue to
> work, as WindowsError derives from OSError.  It just worries me that people
> will start explicitly catching "WindowsError", regardless of whatever
> documentation we might write on the subject.
> 
> Does anyone see this as a problem?

I've seen bigger problems -- but I think it's a problem.

any reason you cannot just use a plain OSError?  is the extra
"this is not a generic OSError" information bit actually used by
anyone?

</F>




From effbot at telia.com  Tue Aug 15 08:14:42 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 08:14:42 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.7,1.8
References: <200008150558.WAA26703@slayer.i.sourceforge.net>
Message-ID: <006d01c00680$1c4469c0$f2a6b5d4@hagrid>

tim wrote:
> !     test_popen2       Win32      X X    26-Jul-2000
>           [believe this was fixed by /F]
> !         [still fails 15-Aug-2000 for me, on Win98 - tim
> !          test test_popen2 crashed -- exceptions.WindowsError :
> !          [Errno 2] The system cannot find the file specified
> !         ]

do you have w9xpopen.exe in place?

(iirc, mark just added the build files)

</F>




From tim_one at email.msn.com  Tue Aug 15 08:30:40 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 02:30:40 -0400
Subject: [Python-Dev] [PEP 200] Help!
Message-ID: <LNBBLJKPBEHFEDALKOLCEENEGPAA.tim_one@email.msn.com>

I took a stab at updating PEP200 (the 2.0 release plan), but if you know
more about any of it that should be recorded or changed, please just do so!
There's no reason to funnel updates thru me.  Jeremy may feel differently
when he gets back, but in the meantime this is just more time-consuming
stuff I hadn't planned on needing to do.

Windows geeks:  what's going on with test_winreg2 and test_popen2?  Those
tests have been failing forever (at least on Win98 for me), and the grace
period has more than expired.  Fredrik, if you're still waiting for me to do
something with popen2 (rings a vague bell), please remind me -- I've
forgotten what it was!

thrashingly y'rs  - tim





From tim_one at email.msn.com  Tue Aug 15 08:43:06 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 02:43:06 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.7,1.8
In-Reply-To: <006d01c00680$1c4469c0$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIENFGPAA.tim_one@email.msn.com>

> tim wrote:
> > !     test_popen2       Win32      X X    26-Jul-2000
> >           [believe this was fix by /F]
> > !         [still fails 15-Aug-2000 for me, on Win98 - tim
> > !          test test_popen2 crashed -- exceptions.WindowsError :
> > !          [Errno 2] The system cannot find the file specified
> > !         ]

[/F]
> do you have w9xpopen.exe in place?
>
> (iirc, mark just added the build files)

Ah, thanks!  This is coming back to me now ... kinda ... will pursue.





From tim_one at email.msn.com  Tue Aug 15 09:07:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 03:07:49 -0400
Subject: [Python-Dev] test_popen2 on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIENFGPAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENGGPAA.tim_one@email.msn.com>

[/F]
> do you have w9xpopen.exe in place?
>
> (iirc, mark just added the build files)

Heh -- yes, and I wasn't building them.

Now test_popen2 fails for a different reason:

def _test():
    teststr = "abc\n"
    print "testing popen2..."
    r, w = popen2('cat')
    ...

Ain't no "cat" on Win98!  The test is specific to Unix derivatives.  Other
than that, popen2 is working for me now.

Mumble.





From MarkH at ActiveState.com  Tue Aug 15 10:08:33 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 15 Aug 2000 18:08:33 +1000
Subject: [Python-Dev] test_popen2 on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKENGGPAA.tim_one@email.msn.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCENBDEAA.MarkH@ActiveState.com>

> Ain't no "cat" on Win98!  The test is specific to Unix
> derivatives.  Other than that, popen2 is working for me

heh - I noticed that yesterday, then lumped it in the too hard basket.

What I really wanted was for test_popen2 to use python itself for the
sub-process.  This way, commands like 'python -c "import sys;sys.exit(2)"'
could test the handle close semantics, for example.  I gave up when I
realized I would probably need to create temp files with the mini-programs.
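For what it's worth, the idea sketches easily if the child process is Python itself; here is a rough rendering using the modern subprocess module rather than the popen2 module under discussion (hypothetical test code, not what test_popen2 actually does):

```python
import subprocess
import sys

# Use Python itself as the child process, so the test does not
# depend on Unix tools like "cat" being installed.
cat = [sys.executable, "-c",
       "import sys; sys.stdout.write(sys.stdin.read())"]
out = subprocess.run(cat, input="abc\n", capture_output=True,
                     text=True).stdout
assert out == "abc\n"

# Exit-status checking, as in the 'sys.exit(2)' example above:
rc = subprocess.run([sys.executable, "-c",
                     "import sys; sys.exit(2)"]).returncode
assert rc == 2
```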

I was quite confident that if I attempted this, I would surely break the
test suite on a few platforms.  I wasn't brave enough to risk those
testicles of wrath at this stage in the game <wink>

Mark.





From thomas at xs4all.net  Tue Aug 15 13:15:42 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 13:15:42 +0200
Subject: [Python-Dev] New PEP for import-as
Message-ID: <20000815131542.B14470@xs4all.nl>


I wrote a quick PEP describing the 'import as' proposal I posted a patch for
last week. Mostly since I was bored in the train to work (too many kids running
around to play Diablo II or any other game -- I hate it when those brats go
'oh cool' and keep hanging around looking over my shoulder ;-) but also a
bit because Tim keeps insisting it should be easy to write a PEP. Perhaps
lowering the standard by providing a few *small* PEPs helps with that ;)
Just's 'indexing-for' PEP would be a good one, too, in that case.

Anyway, the proto-PEP is attached. It's in draft status as far as I'm
concerned, but the PEP isn't really necessary if the feature is accepted by
acclamation.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
-------------- next part --------------
PEP: 2XX
Title: Import As
Version: $Revision: 1.0 $
Owner: thomas at xs4all.net (Thomas Wouters)
Python-Version: 2.0
Status: Draft


Introduction

    This PEP describes the `import as' proposal for Python 2.0. This
    PEP tracks the status and ownership of this feature. It contains a
    description of the feature and outlines changes necessary to
    support the feature. The CVS revision history of this file
    contains the definitive historical record.


Rationale

    This PEP proposes a small extension of current Python syntax
    regarding the `import' and `from <module> import' statements.
    These statements load in a module, and either bind that module to
    a local name, or bind objects from that module to local names.
    However, it is sometimes desirable to bind those objects to a
    different name, for instance to avoid name clashes. Currently, a
    round-about way has to be used to achieve this:

    import os
    real_os = os
    del os
    
    And similarly for the `from ... import' statement:
    
    from os import fdopen, exit, stat
    os_fdopen = fdopen
    os_stat = stat
    del fdopen, stat
    
    The proposed syntax change would add an optional `as' clause to
    both these statements, as follows:

    import os as real_os
    from os import fdopen as os_fdopen, exit, stat as os_stat
    
    The `as' name is not intended to be a keyword, and some trickery
    has to be used to convince the CPython parser it isn't one. For
    more advanced parsers/tokenizers, however, this should not be a
    problem.
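    As a quick sketch of the equivalence (runnable under any Python
    that implements this proposal; the `real_os'/`os_stat' names
    follow the Rationale above):

```python
# The proposed form binds directly, without the del dance:
import os as real_os
from os import fdopen as os_fdopen, stat as os_stat

# The bindings refer to the same objects the round-about
# version would have produced.
assert real_os is __import__("os")
assert os_fdopen is real_os.fdopen
assert os_stat is real_os.stat
```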


Implementation details

    A proposed implementation of this new clause can be found in the
    SourceForge patch manager [1]. The patch uses a NAME field in the
    grammar rather than a bare string, to avoid the keyword issue. It
    also introduces a new bytecode, IMPORT_FROM_AS, which loads an
    object from a module and pushes it onto the stack, so it can be
    stored by a normal STORE_NAME opcode.
    
    The special case of `from module import *' remains a special case,
    in that it cannot accommodate an `as' clause. Also, the current
    implementation does not use IMPORT_FROM_AS for the old form of
    from-import, even though it would make sense to do so. The reason
    for this is that the current IMPORT_FROM bytecode loads objects
    directly from a module into the local namespace, in one bytecode
    operation, and also handles the special case of `*'. As a result
    of moving to the IMPORT_FROM_AS bytecode, two things would happen:
    
    - Two bytecode operations would have to be performed, per symbol,
      rather than one.
      
    - The names imported through `from-import' would be susceptible to
      the `global' keyword, which they currently are not. This means
      that `from-import' outside of the `*' special case behaves more
      like the normal `import' statement, which already follows the
      `global' keyword. It also means, however, that the `*' special
      case is even more special, compared to the ordinary form of
      `from-import'.

    However, for consistency and for simplicity of implementation, it
    is probably best to split off the special case entirely, making a
    separate bytecode `IMPORT_ALL' that handles the special case of
    `*', and handle all other forms of `from-import' the way the
    proposed `IMPORT_FROM_AS' bytecode does.

    This dilemma does not apply to the normal `import' statement,
    because this is already split into two opcodes, a `LOAD_MODULE' and a
    `STORE_NAME' opcode. Supporting the `import as' syntax is a slight
    change to the compiler only.


Copyright

    This document has been placed in the Public Domain.


References

    [1]
http://sourceforge.net/patch/?func=detailpatch&patch_id=101135&group_id=5470



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

From nowonder at nowonder.de  Tue Aug 15 17:32:50 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Tue, 15 Aug 2000 15:32:50 +0000
Subject: [Python-Dev] IDLE development - Call for participation
Message-ID: <399962A2.D53A048F@nowonder.de>

To (hopefully) speed up the development of IDLE a temporary
fork has been created as a separate project at SourceForge:

  http://idlefork.sourceforge.net
  http://sourceforge.net/projects/idlefork

The CVS version represents the enhanced IDLE version
used by David Scherer in his VPython. Besides other
improvements this version executes threads in a
separate process.

The Spanish Inquisition invites everybody interested in
IDLE (and not keen to participate in any witch trials)
to contribute to the project.

Any kind of contribution (discussion of new features,
bug reports, patches) will be appreciated.

If we can get the new IDLE version stable and Python's
benevolent dictator for life blesses our lines of code,
the improved IDLE may go back into Python's source
tree proper.

at-least-it'll-be-part-of-Py3K-<wink>-ly y'rs
Peter

P.S.: You do not have to be a member of the Flying Circus.
P.P.S.: There is no Spanish inquisition <0.5 wink>!
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Tue Aug 15 17:56:46 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 17:56:46 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python pythonrun.c,2.105,2.106
In-Reply-To: <200008151549.IAA25722@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Tue, Aug 15, 2000 at 08:49:06AM -0700
References: <200008151549.IAA25722@slayer.i.sourceforge.net>
Message-ID: <20000815175646.A376@xs4all.nl>

On Tue, Aug 15, 2000 at 08:49:06AM -0700, Fred L. Drake wrote:

> + #include "osdefs.h"			/* SEP */

This comment is kind of cryptic... I know of only one SEP, and that's in "a
SEP field", a construct we use quite often at work ;-) Does this comment
mean the same ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Tue Aug 15 18:09:34 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 12:09:34 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python pythonrun.c,2.105,2.106
In-Reply-To: <20000815175646.A376@xs4all.nl>
References: <200008151549.IAA25722@slayer.i.sourceforge.net>
	<20000815175646.A376@xs4all.nl>
Message-ID: <14745.27454.815489.456310@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > On Tue, Aug 15, 2000 at 08:49:06AM -0700, Fred L. Drake wrote:
 > 
 > > + #include "osdefs.h"			/* SEP */
 > 
 > This comment is kind of cryptic... I know of only one SEP, and that's in "a
 > SEP field", a construct we use quite often at work ;-) Does this comment
 > mean the same ?

  Very cryptic indeed!  It meant I was including osdefs.h to get the
SEP #define from there, but then I didn't need it in the final version
of the code, so the #include can be removed.
  I'll remove those pronto!  Thanks for pointing out my sloppiness!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From trentm at ActiveState.com  Tue Aug 15 19:47:23 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Tue, 15 Aug 2000 10:47:23 -0700
Subject: [Python-Dev] segfault in sre on 64-bit plats
Message-ID: <20000815104723.A27306@ActiveState.com>

Fredrik,

The sre module currently segfaults on one of the tests suite tests on both
Win64 and 64-bit linux:

    [trentm at nickel src]$ ./python -c "import sre; sre.match('(x)*', 50000*'x')" > srefail.out
    Segmentation fault (core dumped)

I know that I can't expect you to debug this completely, as you don't have
the hardware, but I was hoping you might be able to shed some light on the
subject for me.

This test on Win32 and Linux32 hits the recursion limit check of 10000 in
SRE_MATCH(). However, on Linux64 the segfault occurs at a recursion depth of
7500. I don't want to just willy-nilly drop the recursion limit down to make
the problem go away.

Do you have any idea why the segfault may be occurring on 64-bit platforms?

Mark (Favas), have you been having any problems with sre on your 64-bit plats?


In the example above I turned VERBOSE on in _sre.c. Would the trace help you?
Here is the last of it (the whole thing is 2MB so I am not sending it all):

    copy 0:1 to 15026 (2)
    |0x600000000020b90c|0x6000000000200d72|ENTER 7517
    |0x600000000020b90e|0x6000000000200d72|MARK 0
    |0x600000000020b912|0x6000000000200d72|LITERAL 120
    |0x600000000020b916|0x6000000000200d73|MARK 1
    |0x600000000020b91a|0x6000000000200d73|MAX_UNTIL 7515
    copy 0:1 to 15028 (2)
    |0x600000000020b90c|0x6000000000200d73|ENTER 7518
    |0x600000000020b90e|0x6000000000200d73|MARK 0
    |0x600000000020b912|0x6000000000200d73|LITERAL 120
    |0x600000000020b916|0x6000000000200d74|MARK 1
    |0x600000000020b91a|0x6000000000200d74|MAX_UNTIL 7516
    copy 0:1 to 15030 (2)
    |0x600000000020b90c|0x6000000000200d74|ENTER 7519
    |0x600000000020b90e|0x6000000000200d74|MARK 0
    |0x600000000020b912|0x6000000000200d74|LITERAL 120
    |0x600000000020b916|0x6000000000200d75|MARK 1
    |0x600000000020b91a|0x6000000000200d75|MAX_UNTIL 7517
    copy 0:1 to 15032 (2)
    |0x600000000020b90c|0x6000000000200d75|ENTER 7520
    |0x600000000020b90e|0x6000000000200d75|MARK 0
    |0x600000000020b912|0x6000000000200d75|LITERAL 120
    |0x600000000020b916|0x6000000000200d76|MARK 1
    |0x600000000020b91a|0x6000000000200d76|MAX_UNTIL 7518
    copy 0:1 to 15034 (2)
    |0x600000000020b90c|0x6000000000200d76|ENTER 7521
    |0x600000000020b90e|0x6000000000200d76|MARK 0
    |0x600000000020b912|0x6000000000200d76|LITERAL 120
    |0x600000000020b916|0x6000000000200d77|MARK 1
    |0x600000000020b91a|0x6000000000200d77|MAX_UNTIL 7519
    copy 0:1 to 15036 (2)
    |0x600000000020b90c|0x600



Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From thomas at xs4all.net  Tue Aug 15 20:24:14 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 20:24:14 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008151746.KAA06454@bush.i.sourceforge.net>; from noreply@sourceforge.net on Tue, Aug 15, 2000 at 10:46:39AM -0700
References: <200008151746.KAA06454@bush.i.sourceforge.net>
Message-ID: <20000815202414.B376@xs4all.nl>

On Tue, Aug 15, 2000 at 10:46:39AM -0700, noreply at sourceforge.net wrote:

[ About my slight fix to ref5.tex, on list comprehensions syntax ]

> Comment by tim_one:

> Reassigned to Fred, because it's a simple doc change.  Fred, accept this
> <wink> and check it in.  Note that the grammar has a bug, though, so this
> will need to be changed again (and so will the implementation).  That is,
> [x if 6] should not be a legal expression but the grammar allows it today.

A comment by someone (?!ng ?) who forgot to login, at the original
list-comprehensions patch suggests that Skip forgot to include the
documentation patch to listcomps he provided. Ping, Skip, can you sort this
out and check in the rest of that documentation (which supposedly includes a
tutorial section as well) ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Tue Aug 15 20:27:38 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 20:27:38 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/ref ref5.tex,1.32,1.33
In-Reply-To: <200008151754.KAA19233@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Tue, Aug 15, 2000 at 10:54:51AM -0700
References: <200008151754.KAA19233@slayer.i.sourceforge.net>
Message-ID: <20000815202737.C376@xs4all.nl>

On Tue, Aug 15, 2000 at 10:54:51AM -0700, Fred L. Drake wrote:

> Index: ref5.tex
> diff -C2 -r1.32 -r1.33
> *** ref5.tex	2000/08/12 18:09:50	1.32
> --- ref5.tex	2000/08/15 17:54:49	1.33
> ***************
> *** 153,157 ****
>   
>   \begin{verbatim}
> ! list_display:   "[" [expression_list [list_iter]] "]"
>   list_iter:   list_for | list_if
>   list_for:    "for" expression_list "in" testlist [list_iter]
> --- 153,158 ----
>   
>   \begin{verbatim}
> ! list_display:   "[" [listmaker] "]"
> ! listmaker:   expression_list ( list_iter | ( "," expression)* [","] )

Uhm, this is wrong, and I don't think it was what I submitted either
(though, if I did, I apologize :) The first element of listmaker is an
expression, not an expression_list. I'll change that, unless Ping and Skip
wake up and fix it in a much better way instead.
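For the record, the distinction matters because allowing a full
expression_list there would make the comprehension form ambiguous. A small
check, run against a current Python (whose accepted grammar matches the fix
described here):

```python
# The first element of a comprehension is a single expression,
# so a tuple needs parentheses:
pairs = [(x, x * 2) for x in range(3)]
assert pairs == [(0, 0), (1, 2), (2, 4)]

# Without parentheses the form is rejected as ambiguous:
try:
    compile("[x, x * 2 for x in range(3)]", "<test>", "eval")
    ambiguous_allowed = True
except SyntaxError:
    ambiguous_allowed = False
assert not ambiguous_allowed
```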

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Tue Aug 15 20:32:07 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 14:32:07 -0400 (EDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <20000815202414.B376@xs4all.nl>
References: <200008151746.KAA06454@bush.i.sourceforge.net>
	<20000815202414.B376@xs4all.nl>
Message-ID: <14745.36007.423378.87635@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > A comment by someone (?!ng ?) who forgot to login, at the original
 > list-comprehensions patch suggests that Skip forgot to include the
 > documentation patch to listcomps he provided. Ping, Skip, can you sort this
 > out and check in the rest of that documentation (which supposedly includes a
 > tutorial section as well) ?

  I've not been tracking the list comprehensions discussion, but there
is a (minimal) entry in the tutorial.  It could use some fleshing out.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Tue Aug 15 20:34:43 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 14:34:43 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/ref ref5.tex,1.32,1.33
In-Reply-To: <20000815202737.C376@xs4all.nl>
References: <200008151754.KAA19233@slayer.i.sourceforge.net>
	<20000815202737.C376@xs4all.nl>
Message-ID: <14745.36163.362268.388275@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Uhm, this is wrong, and I don't think it was what I submitted either
 > (though, if I did, I apologize :) The first element of listmaker is an
 > expression, not an expression_list. I'll change that, unless Ping and Skip
 > wake up and fix it in a much better way instead.

  You're right; that's what I get for applying it manually (trying to
avoid all the machinery of saving/patching from SF...).
  Fixed in a sec!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Aug 15 21:11:11 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 21:11:11 +0200
Subject: [Python-Dev] PyOS_CheckStack for windows
References: <20000815104723.A27306@ActiveState.com>
Message-ID: <005401c006ec$a95a74a0$f2a6b5d4@hagrid>

trent wrote:
> This test on Win32 and Linux32 hits the recursion limit check of 10000 in
> SRE_MATCH(). However, on Linux64 the segfault occurs at a recursion depth of
> 7500. I don't want to just willy-nilly drop the recursion limit down to make
> the problem go away.

SRE is overflowing the stack, of course :-(

:::

I spent a little time surfing around on the MSDN site, and came
up with the following little PyOS_CheckStack implementation for
Visual C (and compatibles):

#include <malloc.h>

int __inline
PyOS_CheckStack()
{
    __try {
        /* alloca throws a stack overflow exception if there's less
           than 2k left on the stack */
        alloca(2000);
        return 0;
    } __except (1) {
        /* just ignore the error */
    }
    return 1;
}

a quick look at the resulting assembler code indicates that this
should be pretty efficient (some exception-related stuff, and a
call to an internal stack probe function), but I haven't added it
to the interpreter (and I don't have time to dig deeper into this
before the weekend).

maybe someone else has a little time to spare?

it shouldn't be that hard to figure out 1) where to put this, 2) what
ifdef's to use around it, and 3) what "2000" should be changed to...

(and don't forget to set USE_STACKCHECK)

</F>




From effbot at telia.com  Tue Aug 15 21:17:49 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 21:17:49 +0200
Subject: [Python-Dev] PyOS_CheckStack for windows
References: <20000815104723.A27306@ActiveState.com> <005401c006ec$a95a74a0$f2a6b5d4@hagrid>
Message-ID: <008601c006ed$8100c120$f2a6b5d4@hagrid>

I wrote:
>     } __except (1) {

should probably be:

    } __except (EXCEPTION_EXECUTE_HANDLER) {

</F>




From effbot at telia.com  Tue Aug 15 21:19:32 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 21:19:32 +0200
Subject: [Python-Dev] PyOS_CheckStack for windows
Message-ID: <009501c006ed$be40afa0$f2a6b5d4@hagrid>

I wrote:
>     } __except (EXCEPTION_EXECUTE_HANDLER) {

which is defined in "excpt.h"...
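Folding both corrections back into the original sketch gives something like
this (still untested on my end, and the 2000-byte margin is a guess, as
noted before):

```c
#include <malloc.h>
#include <excpt.h>

/* Return 1 if the stack is too close to overflowing, 0 otherwise.
   Visual C (and compatibles) only: relies on structured exception
   handling and the MSVC alloca stack probe. */
int __inline
PyOS_CheckStack(void)
{
    __try {
        /* alloca raises a stack overflow exception if there's
           less than ~2k left on the stack */
        alloca(2000);
        return 0;
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        /* overflow: fall through and report failure */
    }
    return 1;
}
```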

</F>




From tim_one at email.msn.com  Tue Aug 15 21:19:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 15:19:23 -0400
Subject: [Python-Dev] Call for reviewer!
Message-ID: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>

There are 5 open & related patches to getopt.py:  101106 thru 101110
inclusive.  Who wants to review these?  Fair warning in advance that Guido
usually hates adding stuff to getopt, and the patch comment

    I examined the entire 1.6b1 tarball for incompatibilities,
    and found only 2 in 90+ modules using getopt.py.

probably means it's dead on arrival (2+% is infinitely more than 0% <0.1
wink>).

On that basis alone, my natural inclination is to reject them for lack of
backward compatibility.  So let's get some votes and see whether there's
sufficient support to overcome that.





From trentm at ActiveState.com  Tue Aug 15 21:53:46 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Tue, 15 Aug 2000 12:53:46 -0700
Subject: [Python-Dev] Call for reviewer!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Tue, Aug 15, 2000 at 03:19:23PM -0400
References: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>
Message-ID: <20000815125346.I30086@ActiveState.com>

On Tue, Aug 15, 2000 at 03:19:23PM -0400, Tim Peters wrote:
> There are 5 open & related patches to getopt.py:  101106 thru 101110
> inclusive.  Who wants to review these?  Fair warning in advance that Guido
> usually hates adding stuff to getopt, and the patch comment
> 
>     I examined the entire 1.6b1 tarball for incompatibilities,
>     and found only 2 in 90+ modules using getopt.py.
> 
> probably means it's dead on arrival (2+% is infinitely more than 0% <0.1
> wink>).
> 
> On that basis alone, my natural inclination is to reject them for lack of
> backward compatibility.  So let's get some votes and see whether there's
> sufficient support to overcome that.
> 

-0 (too timid to use -1)

getopt is a nice simple, quick, useful module. Rather than extending it I
would rather see a separate getopt-like module for those who need some more
heavy-duty option processing. One that supports Windows '/' switch markers.
One where each option is maybe a class instance with methods that do the
processing and record state for that option, and with attributes for help
strings, the number of arguments accepted, and argument validation
methods. One that supports abstraction of options to capabilities (e.g. two
compiler interfaces, same capability, different option to specify it, shared
option processing). One that supports different algorithms for parsing the
command line (some current apps like to run through and grab *all* the
options, some like to stop option processing at the first non-option).

Call it 'supergetopt' and whoever needs it can 'import supergetopt as getopt'.

Keep getopt the way it is. Mind you, I haven't looked at the proposed patches
so my opinion might be unfair.
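A toy sketch of the class-per-option design described above (the Option
class and its method names are purely illustrative, not a real proposal):

```python
# Hypothetical class-based option processing: each option records
# its own state and knows how to validate its arguments.
class Option:
    def __init__(self, name, nargs=0, help=""):
        self.name = name      # e.g. "verbose"
        self.nargs = nargs    # number of arguments the option takes
        self.help = help      # help string for usage summaries
        self.value = None

    def validate(self, args):
        # Subclasses could override this to check argument values.
        return len(args) == self.nargs

    def process(self, args):
        if not self.validate(args):
            raise ValueError("bad arguments for --" + self.name)
        self.value = args[0] if self.nargs else True

opts = {"verbose": Option("verbose"),
        "output": Option("output", nargs=1, help="output file")}
opts["verbose"].process([])
opts["output"].process(["out.txt"])
assert opts["verbose"].value is True
assert opts["output"].value == "out.txt"
```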

Trent


-- 
Trent Mick
TrentM at ActiveState.com



From akuchlin at mems-exchange.org  Tue Aug 15 22:01:56 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 15 Aug 2000 16:01:56 -0400
Subject: [Python-Dev] Call for reviewer!
In-Reply-To: <20000815125346.I30086@ActiveState.com>; from trentm@ActiveState.com on Tue, Aug 15, 2000 at 12:53:46PM -0700
References: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com> <20000815125346.I30086@ActiveState.com>
Message-ID: <20000815160156.D16506@kronos.cnri.reston.va.us>

On Tue, Aug 15, 2000 at 12:53:46PM -0700, Trent Mick wrote:
>Call it 'supergetopt' and whoever cam 'import supergetopt as getopt'.

Note that there's Lib/distutils/fancy_getopt.py.  The docstring reads:

Wrapper around the standard getopt module that provides the following
additional features:
  * short and long options are tied together
  * options have help strings, so fancy_getopt could potentially
    create a complete usage summary
  * options set attributes of a passed-in object

--amk



From bwarsaw at beopen.com  Tue Aug 15 22:30:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 15 Aug 2000 16:30:59 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
Message-ID: <14745.43139.834290.323136@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <twouters at users.sourceforge.net> writes:

    TW> Apply SF patch #101151, by Peter S-K, which fixes smtplib's
    TW> passing of the 'helo' and 'ehlo' message, and exports the
    TW> 'make_fqdn' function. This function should be moved to
    TW> socket.py, if that module ever gets a Python wrapper.

Should I work on this for 2.0?  Specifically 1) moving socketmodule to
_socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
socket.py instead of smtplib.

It makes no sense for make_fqdn to live in smtplib.

I'd be willing to do this.

-Barry



From gstein at lyra.org  Tue Aug 15 22:42:02 2000
From: gstein at lyra.org (Greg Stein)
Date: Tue, 15 Aug 2000 13:42:02 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <14745.43139.834290.323136@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 15, 2000 at 04:30:59PM -0400
References: <200008151930.MAA10234@slayer.i.sourceforge.net> <14745.43139.834290.323136@anthem.concentric.net>
Message-ID: <20000815134202.K19525@lyra.org>

On Tue, Aug 15, 2000 at 04:30:59PM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TW" == Thomas Wouters <twouters at users.sourceforge.net> writes:
> 
>     TW> Apply SF patch #101151, by Peter S-K, which fixes smtplib's
>     TW> passing of the 'helo' and 'ehlo' message, and exports the
>     TW> 'make_fqdn' function. This function should be moved to
>     TW> socket.py, if that module ever gets a Python wrapper.
> 
> Should I work on this for 2.0?  Specifically 1) moving socketmodule to
> _socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
> socket.py instead of smtplib.
> 
> It makes no sense for make_fqdn to live in smtplib.
> 
> I'd be willing to do this.

Note that Windows already has a socket.py module (under plat-win or
somesuch). You will want to integrate that with any new socket.py that you
implement.

Also note that Windows does some funny stuff in socketmodule.c to export
itself as _socket. (the *.dsp files already build it as _socket.dll)
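The layout being discussed follows a common pattern: the C extension lives
under a leading-underscore name, and a pure-Python module re-exports it and
adds the portable helpers. A toy sketch of the pattern (module names here
are stand-ins, not the real socketmodule):

```python
import types

# Stand-in for the C extension built as _socket:
_impl = types.ModuleType("_toysocket")
_impl.AF_INET = 2

# Stand-in for the pure-Python socket.py wrapper:
wrapper = types.ModuleType("toysocket")
for name, value in vars(_impl).items():
    if not name.startswith("_"):    # emulate "from _toysocket import *"
        setattr(wrapper, name, value)

def make_fqdn(name=""):             # portable helper added in Python
    return name or "localhost"
wrapper.make_fqdn = make_fqdn

# The wrapper exposes both the C-level names and the Python additions.
assert wrapper.AF_INET == 2
assert wrapper.make_fqdn() == "localhost"
```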


+1

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From bwarsaw at beopen.com  Tue Aug 15 22:46:15 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 15 Aug 2000 16:46:15 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
	<14745.43139.834290.323136@anthem.concentric.net>
	<20000815134202.K19525@lyra.org>
Message-ID: <14745.44055.15573.283903@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> Note that Windows already has a socket.py module (under
    GS> plat-win or somesuch). You will want to integrate that with
    GS> any new socket.py that you implement.

    GS> Also note that Windows does some funny stuff in socketmodule.c
    GS> to export itself as _socket. (the *.dsp files already build it
    GS> as _socket.dll)

    GS> +1

Should we have separate plat-*/socket.py files or does it make more
sense to try to integrate them into one shared socket.py?  From quick
glance it certainly looks like there's Windows specific stuff in
plat-win/socket.py (big surprise, huh?)

-Barry



From nowonder at nowonder.de  Wed Aug 16 00:47:24 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Tue, 15 Aug 2000 22:47:24 +0000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib 
 libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net> <14745.43139.834290.323136@anthem.concentric.net>
Message-ID: <3999C87C.24A0DF82@nowonder.de>

"Barry A. Warsaw" wrote:
> 
> Should I work on this for 2.0?  Specifically 1) moving socketmodule to
> _socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
> socket.py instead of smtplib.

+1 on you doing that. I'd volunteer, but I am afraid ...

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Tue Aug 15 23:04:11 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 23:04:11 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <14745.44055.15573.283903@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 15, 2000 at 04:46:15PM -0400
References: <200008151930.MAA10234@slayer.i.sourceforge.net> <14745.43139.834290.323136@anthem.concentric.net> <20000815134202.K19525@lyra.org> <14745.44055.15573.283903@anthem.concentric.net>
Message-ID: <20000815230411.D376@xs4all.nl>

On Tue, Aug 15, 2000 at 04:46:15PM -0400, Barry A. Warsaw wrote:

>     GS> Note that Windows already has a socket.py module (under
>     GS> plat-win or somesuch). You will want to integrate that with
>     GS> any new socket.py that you implement.

BeOS also has its own socket.py wrapper, to provide some functions BeOS
itself is missing (dup, makefile, fromfd, ...). I'm not sure if that's still
necessary, though; perhaps BeOS implemented those functions in a later
version?

> Should we have separate plat-*/socket.py files or does it make more
> sense to try to integrate them into one shared socket.py?  From quick
> glance it certainly looks like there's Windows specific stuff in
> plat-win/socket.py (big surprise, huh?)

And don't forget the BeOS stuff ;P This is the biggest reason I didn't do it
myself: it takes some effort and a lot of grokking to fix this up properly,
without spreading socket.py out in every plat-dir. Perhaps it needs to be
split up like the os module ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Wed Aug 16 00:25:33 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 15 Aug 2000 18:25:33 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <20000815230411.D376@xs4all.nl>
References: <14745.44055.15573.283903@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 15, 2000 at 04:46:15PM -0400
Message-ID: <1245744161-146360088@hypernet.com>

Thomas Wouters wrote:
> On Tue, Aug 15, 2000 at 04:46:15PM -0400, Barry A. Warsaw wrote:
> 
> >     GS> Note that Windows already has a socket.py module (under
> >     GS> plat-win or somesuch). You will want to integrate that
> >     with GS> any new socket.py that you implement.
> 
> BeOS also has its own socket.py wrapper, to provide some
> functions BeOS itself is missing (dup, makefile, fromfd, ...) I'm
> not sure if that's still necessary, though, perhaps BeOS decided
> to implement those functions in a later version ?

Sounds very close to what Windows left out. As for *nixen, 
there are some differences between BSD and SysV sockets, 
but they're really, really arcane.
 


- Gordon



From fdrake at beopen.com  Wed Aug 16 01:06:15 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 19:06:15 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <14745.43139.834290.323136@anthem.concentric.net>
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
	<14745.43139.834290.323136@anthem.concentric.net>
Message-ID: <14745.52455.487734.450253@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > Should I work on this for 2.0?  Specifically 1) moving socketmodule to
 > _socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
 > socket.py instead of smtplib.
 > 
 > It makes no sense for make_fqdn to live in smtplib.

  I've started, but am momentarily interrupted.  Watch for it late
tonight.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Wed Aug 16 01:19:42 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 15 Aug 2000 19:19:42 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
	<14745.43139.834290.323136@anthem.concentric.net>
	<14745.52455.487734.450253@cj42289-a.reston1.va.home.com>
Message-ID: <14745.53262.605601.806635@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake at beopen.com> writes:

    Fred>   I've started, but am momentarily interrupted.  Watch for it
    Fred> late tonight.  ;)

Okay fine.  I'll hold off on socket module then, and will take a look
at whatever you come up with.

-Barry



From gward at mems-exchange.org  Wed Aug 16 01:57:51 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Tue, 15 Aug 2000 19:57:51 -0400
Subject: [Python-Dev] Winreg update
In-Reply-To: <3993FEC7.4E38B4F1@prescod.net>; from paul@prescod.net on Fri, Aug 11, 2000 at 08:25:27AM -0500
References: <3993FEC7.4E38B4F1@prescod.net>
Message-ID: <20000815195751.A16100@ludwig.cnri.reston.va.us>

On 11 August 2000, Paul Prescod said:
> This is really easy so I want
> some real feedback this time. Distutils people, this means you! Mark! I
> would love to hear Bill Tutt, Greg Stein and anyone else who claims some
> knowledge of Windows!

All I know is that the Distutils only use the registry for one thing:
finding the MSVC binaries (in distutils/msvccompiler.py).  The registry
access is coded in such a way that we can use either the
win32api/win32con modules ("old way") or _winreg ("new way", but still
the low-level interface).

I'm all in favour of high-level interfaces, and I'm also in favour of
speaking the local tongue -- when in Windows, follow the Windows API (at
least for features that are totally Windows-specific, like the
registry).  But I know nothing about all this stuff, and as far as I
know the registry access in distutils/msvccompiler.py works just fine as
is.

        Greg



From tim_one at email.msn.com  Wed Aug 16 02:28:10 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 20:28:10 -0400
Subject: [Python-Dev] Release Satii
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAAHAAA.tim_one@email.msn.com>

1.6:  Is done, but being held back (by us -- two can play at this game
<wink>) pending resolution of license issues.  Since 2.0 will be a
derivative work of 1.6, the license that goes out with 1.6 affects us
forever after.  Can't say more about that because I don't know more; and
Guido is out of town this week.

2.0:  Full steam ahead!  Just finished going thru every patch on
SourceForge.  What's Open at this instant is *it* for new 2.0 features.
More accurately, they're the only new features that will still be
*considered* for 2.0 (not everything in Open now will necessarily be
accepted).  The only new patches that won't be instantly Postponed from now
until 2.0 final ships are bugfixes.  Some oddities:

+ 8 patches remain unassigned.  7 of those are part of a single getopt
crusade (well, two getopt crusades, since as always happens when people go
extending getopt, they can't agree about what to do), and if nobody speaks
in their favor they'll probably get gently rejected.  The eighth is a CGI
patch from Ping that looks benign to me but is incomplete (missing doc
changes).

+ /F's Py_ErrFormat patch got moved back from Rejected to Open so we can
find all the putative 2.0 patches in one SF view (i.e., Open).

I've said before that I have no faith in the 2.0 release schedule.  Here's
your chance to make a fool of me -- and in public too <wink>!

nothing-would-make-me-happier-ly y'rs  - tim





From greg at cosc.canterbury.ac.nz  Wed Aug 16 02:57:18 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 16 Aug 2000 12:57:18 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <20000815202414.B376@xs4all.nl>
Message-ID: <200008160057.MAA15191@s454.cosc.canterbury.ac.nz>

Thomas Wouters <thomas at xs4all.net>:

> Comment by tim_one:

> [x if 6] should not be a legal expression but the grammar allows it today.

Why shouldn't it be legal?

The meaning is quite clear (either a one-element list or an empty
list). It's something of a degenerate case, but I don't think
degenerate cases should be excluded simply because they're
degenerate.

Excluding it will make both the implementation and documentation
more complicated, with no benefit that I can see.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Wed Aug 16 03:26:36 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 21:26:36 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160057.MAA15191@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEADHAAA.tim_one@email.msn.com>

[Tim]
> [x if 6] should not be a legal expression but the grammar
> allows it today.

[Greg Ewing]
> Why shouldn't it be legal?

Because Guido hates it.  It's almost certainly an error on the part of the
user; really the same reason that zip() without arguments raises an
exception.

> ...
> Excluding it will make both the implementation and documentation
> more complicated,

Of course, but marginally so.  "The first clause must be an iterator"; end
of doc changes.

> with no benefit that I can see.

Catching likely errors is a benefit for the user.  I realize that Haskell
does allow it -- although that would be a surprise to most Haskell users
<wink>.





From dgoodger at bigfoot.com  Wed Aug 16 04:36:02 2000
From: dgoodger at bigfoot.com (David Goodger)
Date: Tue, 15 Aug 2000 22:36:02 -0400
Subject: [Python-Dev] Re: Call for reviewer!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>
Message-ID: <B5BF7652.7B39%dgoodger@bigfoot.com>

I thought the "backwards compatibility" issue might be a sticking point ;>
And I *can* see why.

So, if I were to rework the patch to remove the incompatibility, would it
fly or still be shot down? Here's the change, in a nutshell:

Added a function getoptdict(), which returns the same data as getopt(), but
instead of a list of [(option, optarg)], it returns a dictionary of
{option:optarg}, enabling random/direct access.

getoptdict() turns this:

    if __name__ == '__main__':
        import getopt
        opts, args = getopt.getopt(sys.argv[1:], 'a:b')
        if len(args) <> 2:
            raise getopt.error, 'Exactly two arguments required.'
        options = {'a': [], 'b': 0}  # default option values
        for opt, optarg in opts:
            if opt == '-a':
                options['a'].append(optarg)
            elif opt == '-b':
                options['b'] = 1
        main(args, options)

into this:

    if __name__ == '__main__':
        import getopt
        opts, args = getopt.getoptdict(sys.argv[1:], 'a:b',
                                       repeatedopts=APPEND)
        if len(args) <> 2:
            raise getopt.error, 'Exactly two arguments required.'
        options = {'a': opts.get('-a', [])}
        options['b'] = opts.has_key('-b')
        main(args, options)

(Notice how the defaults get subsumed into the option processing, which goes
from 6 lines to 2 for this short example. A much higher-level interface,
IMHO.)
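
(For the curious, the getoptdict() described above could be sketched as a
thin wrapper over the existing getopt.getopt().  This is only an
illustration of the proposal, not the patch itself; getoptdict, the
repeatedopts keyword, and the APPEND constant are the patch's invention,
not stdlib API:

```python
import getopt

APPEND = 'append'   # proposed policy constant for repeated options

def getoptdict(args, shortopts, longopts=(), repeatedopts=None):
    # Same parsing as getopt.getopt(), but a dict result for
    # random/direct access instead of a list of (option, optarg) pairs.
    opts, rest = getopt.getopt(args, shortopts, longopts)
    optdict = {}
    for opt, optarg in opts:
        if repeatedopts == APPEND:
            # collect every occurrence of an option into a list
            optdict.setdefault(opt, []).append(optarg)
        else:
            # last occurrence wins
            optdict[opt] = optarg
    return optdict, rest
```

With that sketch, the second example above works as advertised:
opts.get('-a', []) yields the list of -a arguments, and membership of
'-b' in the dict records whether -b appeared.)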

BUT WAIT, THERE'S MORE! As part of the deal, you get a free test_getopt.py
regression test module! Act now; vote +1! (Actually, you'll get that no
matter what you vote. I'll remove the getoptdict-specific stuff and resubmit
it if this patch is rejected.)

The incompatibility was introduced because the current getopt() returns an
empty string as the optarg (second element of the tuple) for an argumentless
option. I changed it to return None. Otherwise, it's impossible to
differentiate between an argumentless option '-a' and an empty string
argument '-a ""'. But I could rework it to remove the incompatibility.

Again: If the patch were to become 100% backwards-compatible, with just the
addition of getoptdict(), would it still be rejected, or does it have a
chance?

Eagerly awaiting your judgement...

-- 
David Goodger    dgoodger at bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)




From amk at s205.tnt6.ann.va.dialup.rcn.com  Wed Aug 16 05:13:08 2000
From: amk at s205.tnt6.ann.va.dialup.rcn.com (A.M. Kuchling)
Date: Tue, 15 Aug 2000 23:13:08 -0400
Subject: [Python-Dev] Fate of Include/my*.h
Message-ID: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>

The now-redundant Include/my*.h files in Include should either be
deleted, or at least replaced with empty files containing only a "This
file is obsolete" comment.  I don't think they were ever part of the
public API (Python.h always included them), so deleting them shouldn't
break anything.   

--amk



From greg at cosc.canterbury.ac.nz  Wed Aug 16 05:04:33 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 16 Aug 2000 15:04:33 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEADHAAA.tim_one@email.msn.com>
Message-ID: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>

Tim Peters:

> Because Guido hates it.  It's almost certainly an error on the part
> of the user

Guido doesn't like it, therefore it must be an error. Great
piece of logic there.

> Catching likely errors is a benefit for the user.

What evidence is there that this particular "likely error" is
going to be prevalent enough to justify outlawing a potentially
useful construct? Where are the hoardes of Haskell user falling
into this trap and clamouring for it to be disallowed?

> really the same reason that zip() without arguments raises an
> exception.

No, I don't think it's the same reason. It's not clear what
zip() without arguments should return. There's no such difficulty
in this case.

For the most part, Python is free of artificial restrictions, and I
like it that way. Imposing a restriction of this sort seems
un-Pythonic.

This is the second gratuitous change that's been made to my
LC syntax without any apparent debate. While I acknowledge the
right of the BDFL to do this, I'm starting to feel a bit
left out...

> I realize that Haskell does allow it -- although that would be a
> surprise to most Haskell users

Which suggests that they don't trip over this feature very
often, otherwise they'd soon find out about it!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From gstein at lyra.org  Wed Aug 16 05:20:13 2000
From: gstein at lyra.org (Greg Stein)
Date: Tue, 15 Aug 2000 20:20:13 -0700
Subject: [Python-Dev] Fate of Include/my*.h
In-Reply-To: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>; from amk@s205.tnt6.ann.va.dialup.rcn.com on Tue, Aug 15, 2000 at 11:13:08PM -0400
References: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>
Message-ID: <20000815202013.H17689@lyra.org>

On Tue, Aug 15, 2000 at 11:13:08PM -0400, A.M. Kuchling wrote:
> The now-redundant Include/my*.h files in Include should either be
> deleted, or at least replaced with empty files containing only a "This
> file is obsolete" comment.  I don't think they were ever part of the
> public API (Python.h always included them), so deleting them shouldn't
> break anything.   

+1 on deleting them.

-- 
Greg Stein, http://www.lyra.org/



From tim_one at email.msn.com  Wed Aug 16 05:23:44 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 23:23:44 -0400
Subject: [Python-Dev] Nasty new bug in test_longexp
Message-ID: <LNBBLJKPBEHFEDALKOLCEEAGHAAA.tim_one@email.msn.com>

Fred, I vaguely recall you touched something here recently, so you're top o'
the list.  Smells like an uninitialized variable.

1 of 4:  test_longexp fails in release build:

C:\PCbuild>python ..\lib\test\regrtest.py test_longexp
test_longexp
test test_longexp failed -- Writing: '\012', expected: ' '
1 test failed: test_longexp

2 of 4:  but passes in verbose mode, despite that the output doesn't appear
to match what's expected (missing " (line 1)"):

C:\PCbuild>python ..\lib\test\regrtest.py -v test_longexp
test_longexp
test_longexp
Caught SyntaxError for long expression: expression too long
1 test OK.

3 of 4:  but passes in debug build:

C:\PCbuild>python_d ..\lib\test\regrtest.py test_longexp
Adding parser accelerators ...
Done.
test_longexp
1 test OK.
[3962 refs]

4 of 4: and verbose debug output does appear to match what's expected:

C:\PCbuild>python_d ..\lib\test\regrtest.py -v test_longexp

Adding parser accelerators ...
Done.
test_longexp
test_longexp
Caught SyntaxError for long expression: expression too long (line 1)
1 test OK.
[3956 refs]

C:\PCbuild>





From tim_one at email.msn.com  Wed Aug 16 05:24:44 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 23:24:44 -0400
Subject: [Python-Dev] Fate of Include/my*.h
In-Reply-To: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAHHAAA.tim_one@email.msn.com>

[A.M. Kuchling]
> The now-redundant Include/my*.h files in Include should either be
> deleted, or at least replaced with empty files containing only a "This
> file is obsolete" comment.  I don't think they were ever part of the
> public API (Python.h always included them), so deleting them shouldn't
> break anything.   

+1





From tim_one at email.msn.com  Wed Aug 16 06:13:00 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 00:13:00 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>

[Tim]
>> Because Guido hates it.  It's almost certainly an error on the part
>> of the user

[Greg Ewing]
> Guido doesn't like it, therefore it must be an error. Great
> piece of logic there.

Perhaps I should have used a colon:  Guido hates it *because* it's almost
certainly an error.  I expect the meaning was plain enough without that,
though.

>> Catching likely errors is a benefit for the user.

> What evidence is there that this particular "likely error" is

Nobody said it was likely.  Scare quotes don't work unless you quote
something that was actually said <wink>.  Likeliness has nothing to do with
whether Python calls something an error anyway, here or anywhere else.

> going to be prevalent enough to justify outlawing a potentially
> useful construct?

Making a list that's either empty or a singleton is useful?  Fine, here you
go:

   (boolean and [x] or [])

We don't need listcomps for that.  listcomps are a concrete implementation
of mathematical set-builder notation, and without iterators to supply a
universe of elements to build *on*, it may make *accidental* sense thanks to
this particular implementation -- but about as much *logical* sense as
map(None, seq1, seq2, ...) makes now.  SETL is the original computer
language home for comprehensions (both set and list), and got this part
right (IMO; Guido just hates it for his own inscrutable reasons <wink>).
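
(To spell out the and/or trick: it yields exactly the empty-or-singleton
list the disputed [x if y] form would, and it is safe even for a falsy x,
because [x] is always a non-empty, hence true, list:

```python
def empty_or_singleton(condition, x):
    # the and/or spelling of the disputed [x if condition]:
    # [x] if condition is true, [] otherwise
    return condition and [x] or []
```

so there is no expressiveness lost by keeping "if"-only listcomps out of
the grammar.)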

> Where are the hordes of Haskell users falling into this trap and
> clamouring for it to be disallowed?

I'd look over on comp.lang.haskell -- provided anyone is still hanging out
there.

>> really the same reason that zip() without arguments raises an
>> exception.

> No, I don't think it's the same reason. It's not clear what
> zip() without arguments should return. There's no such difficulty
> in this case.

A zip with no arguments has no universe to zip over; a listcomp without
iterators has no universe to build on.  I personally don't want syntax
that's both a floor wax and a dessert topping.  The *intent* here is to
supply a flexible and highly expressive way to build lists out of other
sequences; no other sequences, use something else.

> For the most part, Python is free of artificial restrictions, and I
> like it that way. Imposing a restriction of this sort seems
> un-Pythonic.
>
> This is the second gratuitous change that's been made to my
> LC syntax without any apparent debate.

The syntax hasn't been changed yet -- this *is* the debate.  I won't say any
more about it, let's hear what others think.

As to being upset over changes to your syntax, I offered you ownership of
the PEP the instant it was dumped on me (26-Jul), but didn't hear back.
Perhaps you didn't get the email.  BTW, what was the other gratuitous
change?  Requiring parens around tuple targets?  That was discussed here
too, but the debate was brief as consensus seemed clearly to favor requiring
them.  That, plus Guido suggested it at a PythonLabs mtg, and agreement was
unanimous on that point.  Or are you talking about some other change (I
can't recall any other one)?

> While I acknowledge the right of the BDFL to do this, I'm starting
> to feel a bit left out...

Well, Jeez, Greg -- Skip took over the patch, Ping made changes to it after,
I got stuck with the PEP and the Python-Dev rah-rah stuff, and you just sit
back and snipe.  That's fine, you're entitled, but if you choose not to do
the work anymore, you took yourself out of the loop.

>> I realize that Haskell does allow it -- although that would be a
>> surprise to most Haskell users

> Which suggests that they don't trip over this feature very
> often, otherwise they'd soon find out about it!

While also suggesting it's useless to allow it.





From paul at prescod.net  Wed Aug 16 06:30:06 2000
From: paul at prescod.net (Paul Prescod)
Date: Wed, 16 Aug 2000 00:30:06 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>
Message-ID: <399A18CE.6CFFCAB9@prescod.net>

Tim Peters wrote:
> 
> ...
> 
> But if you add seq.items(), you had better add seq.keys() too, and
> seq.values() as a synonym for seq[:].  I guess the perceived advantage of
> adding seq.items() is that it supplies yet another incredibly slow and
> convoluted way to get at the for-loop index?  "Ah, that's the ticket!  Let's
> allocate gazillabytes of storage and compute all the indexes into a massive
> data structure up front, and then we can use the loop index that's already
> sitting there for free anyway to index into that and get back a redundant
> copy of itself!" <wink>.
> 
> not-a-good-sign-when-common-sense-is-offended-ly y'rs  - tim

.items(), .keys(), .values() and range() all offended my common sense
when I started using Python in the first place. I got over it. 

I really don't see this "indexing" issue to be common enough either for
special syntax OR to worry a lot about efficiency. Nobody is forcing
anyone to use .items(). If you want a more efficient way to do it, it's
available (just not as syntactically beautiful -- same as range/xrange).

That isn't the case for dictionary .items(), .keys() and .values().

Also, if .keys() returns a range object then theoretically the
interpreter could recognize that it is looping over a range and optimize
it at runtime. That's an alternate approach to optimizing range literals
through new byte-codes. I don't have time to think about what that would
entail right now.... :(

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From fdrake at beopen.com  Wed Aug 16 06:51:34 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 00:51:34 -0400 (EDT)
Subject: [Python-Dev] socket module changes
Message-ID: <14746.7638.870650.747281@cj42289-a.reston1.va.home.com>

  This is a brief description of what I plan to check in to make the
changes we've discussed regarding the socket module.  I'll make the
checkins tomorrow morning, allowing you all night to scream if you
think that'll help.  ;)
  Windows and BeOS both use a wrapper module, but these are
essentially identical; the Windows variant has evolved a bit more, but
that evolution is useful for BeOS as well, aside from the errorTab
table (which gives messages for Windows-specific error numbers).  I
will be moving the sharable portions to a new module, _dupless_socket,
which the new socket module will import on Windows and BeOS.  (That
name indicates why they use a wrapper in the first place!)  The
errorTab definition will be moved to the new socket module and will
only be defined on Windows.  The existing wrappers, plat-beos/socket.py
and plat-win/socket.py, will be removed.
  socketmodule.c will only build as _socket, allowing much
simplification of the conditional compilation at the top of the
initialization function.
  The socket module will include the make_fqdn() implementation,
adjusted to make local references to the socket module facilities it
requires and to use string methods instead of using the string
module.  It is documented.
  The docstring in _socket will be moved to socket.py.
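  The plan amounts to the now-standard pattern of a C extension under a
private name plus a thin pure-Python wrapper.  A minimal sketch of what
socket.py gains (the make_fqdn body here is illustrative; it ultimately
shipped as socket.getfqdn()):

```python
import _socket
from _socket import *   # re-export the C core: socket, error, AF_INET, ...

def make_fqdn(name=''):
    # Sketch of the helper being moved out of smtplib: fall back to the
    # local host name when no (or a degenerate) name is given.  The real
    # version also consults gethostbyaddr() to fully qualify the name.
    name = name.strip()
    if not name or name == '0.0.0.0':
        name = gethostname()
    return name
```

The wrapper keeps the public module pure Python, so platform quirks
(Windows error tables, BeOS's missing dup/makefile/fromfd) live in one
place instead of in per-platform copies.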
  If the screaming doesn't wake me, I'll check this in in the
morning.  The regression test isn't complaining!  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Wed Aug 16 07:12:21 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 01:12:21 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <399A18CE.6CFFCAB9@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com>

[Paul Prescod]
> ...
> I really don't see this "indexing" issue to be common enough

A simple grep (well, findstr under Windows) finds over 300 instances of

    for ... in range(len(...

in the .py files on my laptop.  I don't recall exactly what the percentages
were when I looked over a very large base of Python code several years ago,
but I believe it was about 1 in 7 for loops.
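
For concreteness, the idiom being grepped for, next to the seq.items()
spelling under discussion -- items() here is a stand-in function, since
lists have no such method:

```python
def items(seq):
    # stand-in for the proposed seq.items(): (index, element) pairs
    return [(i, seq[i]) for i in range(len(seq))]

seq = ['a', 'b', 'c']

# the common range(len(...)) idiom:
pairs1 = []
for i in range(len(seq)):
    pairs1.append((i, seq[i]))

# the proposed spelling:
pairs2 = []
for i, x in items(seq):
    pairs2.append((i, x))
```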

> for special syntax OR to worry alot about efficiency.

1 in 7 is plenty.  range(len(seq)) is a puzzler to newbies, too -- it's
*such* an indirect way of saying what they say directly in other languages.

> Nobody is forcing anyone to use .items().

Agreed, but since seq.items() doesn't exist now <wink>.

> If you want a more efficient way to do it, it's available (just not as
> syntactically beautiful -- same as range/xrange).

Which way would that be?  I don't know of one, "efficient" either in the
sense of runtime speed or of directness of expression.  xrange is at least a
storage-efficient way, and isn't it grand that we index the xrange object
with the very integer we're (usually) trying to get it to return <wink>?
The "loop index" isn't an accident of the way Python happens to implement
"for" today, it's the very basis of Python's thing.__getitem__(i)/IndexError
iteration protocol.  Exposing it is natural, because *it* is natural.

> ...
> Also, if .keys() returns a range object then theoretically the
> interpreter could recognize that it is looping over a range and optimize
> it at runtime.

Sorry, but seq.keys() just makes me squirm.  It's a little step down the
Lispish path of making everything look the same.  I don't want to see
float.write() either <wink>.

although-that-would-surely-be-more-orthogonal-ly y'rs  - tim





From thomas at xs4all.net  Wed Aug 16 07:34:29 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 16 Aug 2000 07:34:29 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 16, 2000 at 12:13:00AM -0400
References: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz> <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
Message-ID: <20000816073429.E376@xs4all.nl>

On Wed, Aug 16, 2000 at 12:13:00AM -0400, Tim Peters wrote:

> > This is the second gratuitous change that's been made to my
> > LC syntax without any apparent debate.
> 
> The syntax hasn't been changed yet -- this *is* the debate.  I won't say any
> more about it, let's hear what others think.

It'd be nice to hear *what* the exact syntax issue is. At first I thought
you meant forcing parentheses around all forms of iterator expressions, but
apparently you mean requiring at least a single 'for' statement in a
listcomp ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Wed Aug 16 07:36:24 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 01:36:24 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAMHAAA.tim_one@email.msn.com>

Clarification:

[Tim]
>>> Catching likely errors is a benefit for the user.

[Greg Ewing]
>> What evidence is there that this particular "likely error" is ..

[Tim]
> Nobody said it was likely. ...

Ha!  I did!  But not in Greg's sense.  It was originally in the sense of "if
we see it, it's almost certainly an error on the part of the user", not that
"it's likely we'll see this".  This is in the same sense that Python
considers

    x = float(i,,)
or
    x = for i [1,2,3]

to be likely errors -- you don't see 'em often, but they're most likely
errors on the part of the user when you do.

back-to-the-more-mundane-confusions-ly y'rs  - tim





From greg at cosc.canterbury.ac.nz  Wed Aug 16 08:02:23 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 16 Aug 2000 18:02:23 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
Message-ID: <200008160602.SAA15239@s454.cosc.canterbury.ac.nz>

> Guido hates it *because* it's almost certainly an error.

Yes, I know what you meant. I was just trying to point out
that, as far as I can see, it's only Guido's *opinion* that
it's almost certainly an error.

Let n1 be the number of times that [x if y] appears in some
program and the programmer actually meant to write something
else. Let n2 be the number of times [x if y] appears and
the programmer really meant it.

Now, I agree that n1+n2 will probably be a very small number.
But from that alone it doesn't follow that a given instance
of [x if y] is probably an error. That is only true if
n1 is much greater than n2, and in the absence of any
experience, there's no reason to believe that.

> A zip with no arguments has no universe to zip over; a listcomp without
> iterators has no universe to build on... The *intent* here is to
> supply a flexible and highly expressive way to build lists out of other
> sequences; no other sequences, use something else.

That's a reasonable argument. It might even convince me if
I think about it some more. I'll think about it some more.

> if you choose not to do the work anymore, you took yourself out of the
> loop.

You're absolutely right. I'll shut up now.

(By the way, I think your mail must have gone astray, Tim --
I don't recall ever being offered ownership of a PEP, whatever
that might entail.)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Wed Aug 16 08:18:30 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 02:18:30 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <20000816073429.E376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEAOHAAA.tim_one@email.msn.com>

[Thomas Wouters]
> It'd be nice to hear *what* the exact syntax issue is. At first I thought
> you meant forcing parentheses around all forms of iterator
> expressions,

No, that's an old one, and was requiring parens around a target expression
iff it's a tuple.  So

    [x, y for x in s for y in t]  # BAD
    [(x, y) for x in s for y in t]  # good
    [(((x, y))) for x in s for y in t]  # good, though silly
    [x+y for x in s for y in t] # good
    [(x+y) for x in s for y in t] # good
    [x , for x in s] # BAD
    [(x ,) for x in s] # good

That much is already implemented in the patch currently on SourceForge.

> but apparently you mean requiring at least a single 'for' statement
> in a listcomp ?

No too <wink>, but closer:  it's that the leftmost ("first") clause must be
a "for".  So, yes, at least one for, but also that an "if" can't precede
*all* the "for"s:

   [x for x in s if x & 1] # good
   [x if x & 1 for x in s] # BAD
   [x for x in s]  # good
   [x if y & 1] # BAD

Since the leftmost clause can't refer to any bindings established "to its
right", an "if" as the leftmost clause can't act to filter the elements
generated by the iterators, and so Guido (me too) feels it's almost
certainly an error on the user's part if Python sees an "if" in the leftmost
position.  The current patch allows all of these, though.

In (mathematical) set-builder notation, you certainly see things like

    odds = {x | mod(x, 2) = 1}

That is, with "just a condition".  But in such cases the universe over which
x ranges is implicit from context (else the expression is not
well-defined!), and can always be made explicit; e.g., perhaps the above is
in a text where it's assumed everything is a natural number, and then it can
be made explicit via

    odds = {x in Natural | mod(x, 2) = 1}

In the concrete implementation afforded by listcomps, there is no notion of
an implicit universal set, so (as in SETL too, where this all came from
originally) explicit naming of the universe is required.

The way listcomps are implemented *can* make

   [x if y]

"mean something" (namely one of [x] or [], depending on y's value), but that
has nothing to do with its set-builder heritage.  Looks to me like the user
is just confused!  To Guido too.  Hence the desire not to allow this form at
all.







From ping at lfw.org  Wed Aug 16 08:23:57 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 15 Aug 2000 23:23:57 -0700 (PDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in
 the Ref manual docs on listcomprehensions
In-Reply-To: <200008160057.MAA15191@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008152212170.416-100000@skuld.lfw.org>

On Wed, 16 Aug 2000, Greg Ewing wrote:
> > [x if 6] should not be a legal expression but the grammar allows it today.
> 
> Why shouldn't it be legal?
[...]
> Excluding it will make both the implementation and documentation
> more complicated, with no benefit that I can see.

I don't have a strong opinion on this either way, but i can state
pretty confidently that the change would be tiny and simple: just
replace "list_iter" in the listmaker production with "list_for",
and you are done.
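
In Grammar-file terms, the change would look roughly like this (a sketch
of the relevant productions, not a verbatim diff of Grammar/Grammar):

```
# before: any list_iter, so an 'if' may open the comprehension
listmaker: test ( list_iter | (',' test)* [','] )

# after: the first clause must be a 'for'
listmaker: test ( list_for | (',' test)* [','] )

list_iter: list_for | list_if
list_for:  'for' exprlist 'in' testlist_safe [list_iter]
list_if:   'if' test [list_iter]
```

Later clauses still go through list_iter, so 'if's remain legal anywhere
after the first 'for'.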


-- ?!ng

"I'm not trying not to answer the question; i'm just not answering it."
    -- Lenore Snell





From tim_one at email.msn.com  Wed Aug 16 08:59:06 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 02:59:06 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160602.SAA15239@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEAPHAAA.tim_one@email.msn.com>

[Tim]
>> Guido hates it *because* it's almost certainly an error.

[Greg Ewing]
> Yes, I know what you meant. I was just trying to point out
> that, as far as I can see, it's only Guido's *opinion* that
> it's almost certainly an error.

Well, it's mine too, but I always yield to him on stuff like that anyway;
and I guess I *have* to now, because he's my boss <wink>.

> Let n1 be the number of times that [x if y] appears in some
> program and the programmer actually meant to write something
> else. Let n2 be the number of times [x if y] appears and
> the programmer really meant it.
>
> Now, I agree that n1+n2 will probably be a very small number.
> But from that alone it doesn't follow that a given instance
> of [x if y] is probably an error. That is only true if
> n1 is much greater than n2, and in the absence of any
> experience, there's no reason to believe that.

I argued that one all I'm going to -- I think there is.

>> ... The *intent* here is to supply a flexible and highly expressive
> way to build lists out of other sequences; no other sequences, use
> something else.

> That's a reasonable argument. It might even convince me if
> I think about it some more. I'll think about it some more.

Please do, because ...

>> if you choose not to do the work anymore, you took yourself out
>> of the loop.

> You're absolutely right. I'll shut up now.

Please don't!  This patch is not without opposition, and while consensus is
rarely reached on Python-Dev, I think that's partly because "the BDFL ploy"
is overused to avoid the pain of principled compromise.  If this ends in a
stalemate among the strongest proponents, it may not be such a good idea
after all.

> (By the way, I think your mail must have gone astray, Tim --
> I don't recall ever being offered ownership of a PEP, whatever
> that might entail.)

All explained at

    http://python.sourceforge.net/peps/

Although in this particular case, I haven't done anything with the PEP
except argue in favor of what I haven't yet written!  Somebody else filled
in the skeletal text that's there now.  If you still want it, it's yours;
I'll attach the email in question.

ok-that's-16-hours-of-python-today-in-just-a-few-more-i'll-
    have-to-take-a-pee<wink>-ly y'rs  - tim


-----Original Message-----

From: Tim Peters [mailto:tim_one at email.msn.com]
Sent: Wednesday, July 26, 2000 1:25 AM
To: Greg Ewing <greg at cosc.canterbury.ac.nz>
Subject: RE: [Python-Dev] PEP202


Greg, nice to see you on Python-Dev!  I became the PEP202 shepherd because
nobody else volunteered, and I want to see the patch get into 2.0.  That's
all there was to it, though:  if you'd like to be its shepherd, happy to
yield to you.  You've done the most to make this happen!  Hmm -- but maybe
that also means you don't *want* to do more.  That's OK too.





From bwarsaw at beopen.com  Wed Aug 16 15:21:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 09:21:59 -0400 (EDT)
Subject: [Python-Dev] Re: Call for reviewer!
References: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>
	<B5BF7652.7B39%dgoodger@bigfoot.com>
Message-ID: <14746.38263.433927.239480@anthem.concentric.net>

I used to think getopt needed a lot of changes, but I'm not so sure
anymore.  getopt's current API works fine for me and I use it in all
my scripts.  However,

>>>>> "DG" == David Goodger <dgoodger at bigfoot.com> writes:

    DG> The incompatibility was introduced because the current
    DG> getopt() returns an empty string as the optarg (second element
    DG> of the tuple) for an argumentless option. I changed it to
    DG> return None. Otherwise, it's impossible to differentiate
    DG> between an argumentless option '-a' and an empty string
    DG> argument '-a ""'. But I could rework it to remove the
    DG> incompatibility.

I don't think that's necessary.  In my own use, if I /know/ -a doesn't
have an argument (because I didn't specify as "a:"), then I never
check the optarg.  And it's bad form for a flag to take an optional
argument; it either does or it doesn't and you know that in advance.
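Barry's usage can be seen with the module as it stands (a minimal sketch):

```python
import getopt

# '-a' takes no argument, '-b' does (shortopts 'ab:').  The current
# getopt returns the empty string as the optarg of an argumentless
# option, and Barry's style simply never inspects that slot.
opts, args = getopt.getopt(['-a', '-b', 'val'], 'ab:')
assert opts == [('-a', ''), ('-b', 'val')]
assert args == []
```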

-Barry



From bwarsaw at beopen.com  Wed Aug 16 15:23:45 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 09:23:45 -0400 (EDT)
Subject: [Python-Dev] Fate of Include/my*.h
References: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>
Message-ID: <14746.38369.116212.875999@anthem.concentric.net>

>>>>> "AMK" == A M Kuchling <amk at s205.tnt6.ann.va.dialup.rcn.com> writes:

    AMK> The now-redundant Include/my*.h files in Include should
    AMK> either be deleted

+1

-Barry



From fdrake at beopen.com  Wed Aug 16 16:26:29 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 10:26:29 -0400 (EDT)
Subject: [Python-Dev] socket module changes
Message-ID: <14746.42133.355087.417895@cj42289-a.reston1.va.home.com>

  The changes to the socket module are now complete.  Note two changes
to yesterday's plan:
  - there is no _dupless_socket; I just merged that into socket.py
  - make_fqdn() got renamed to getfqdn() for consistency with the rest
of the module.
  I also remembered to update smtplib.  ;)
  I'll be away from email during the day; Windows & BeOS users, please
test this!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Wed Aug 16 16:46:26 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 10:46:26 -0400 (EDT)
Subject: [Python-Dev] socket module changes
References: <14746.42133.355087.417895@cj42289-a.reston1.va.home.com>
Message-ID: <14746.43330.134066.238781@anthem.concentric.net>

    >> - there is no _dupless_socket; I just merged that into socket.py -

Thanks, that's the one thing I was going to complain about. :)

-Barry



From bwarsaw at beopen.com  Wed Aug 16 17:11:57 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 11:11:57 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
Message-ID: <14746.44861.78992.343012@anthem.concentric.net>

After channeling and encouragement by Tim Peters, I've updated PEP
214, the extended print statement.  Text is included below, but is
also available at

    http://python.sourceforge.net/peps/pep-0214.html

SourceForge patch #100970 contains the patch to apply against the
current CVS tree if you want to play with it:

    http://sourceforge.net/patch/download.php?id=100970

-Barry

-------------------- snip snip --------------------
PEP: 214
Title: Extended Print Statement
Version: $Revision: 1.3 $
Author: bwarsaw at beopen.com (Barry A. Warsaw)
Python-Version: 2.0
Status: Draft
Created: 24-Jul-2000
Post-History: 16-Aug-2000


Introduction

    This PEP describes a syntax to extend the standard `print'
    statement so that it can be used to print to any file-like object,
    instead of the default sys.stdout.  This PEP tracks the status and
    ownership of this feature.  It contains a description of the
    feature and outlines changes necessary to support the feature.
    This PEP summarizes discussions held in mailing list forums, and
    provides URLs for further information, where appropriate.  The CVS
    revision history of this file contains the definitive historical
    record.


Proposal

    This proposal introduces a syntax extension to the print
    statement, which allows the programmer to optionally specify the
    output file target.  An example usage is as follows:

        print >> mylogfile, 'this message goes to my log file'

    Formally, the syntax of the extended print statement is
    
        print_stmt: ... | '>>' test [ (',' test)+ [','] ]

    where the ellipsis indicates the original print_stmt syntax
    unchanged.  In the extended form, the expression just after >>
    must yield an object with a write() method (i.e. a file-like
    object).  Thus these two statements are equivalent:

        print 'hello world'
        print >> sys.stdout, 'hello world'

    As are these two statements:

        print
        print >> sys.stdout

    These two statements are syntax errors:

        print ,
        print >> sys.stdout,
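    For readers following along today: the effect of the proposed >>
    target corresponds to the `file' keyword argument of the later
    print() function (a sketch for comparison, not part of the PEP):

```python
import io
import sys

log = io.StringIO()                        # any object with a write() method
print('this message goes to my log file', file=log)
assert log.getvalue() == 'this message goes to my log file\n'
print('hello world', file=sys.stdout)      # equivalent to a bare print
```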


Justification

    `print' is a Python keyword and introduces the print statement as
    described in section 6.6 of the language reference manual[1].
    The print statement has a number of features:

    - it auto-converts the items to strings
    - it inserts spaces between items automatically
    - it appends a newline unless the statement ends in a comma

    The formatting that the print statement performs is limited; for
    more control over the output, a combination of sys.stdout.write(),
    and string interpolation can be used.

    The print statement by definition outputs to sys.stdout.  More
    specifically, sys.stdout must be a file-like object with a write()
    method, but it can be rebound to redirect output to files other
    than specifically standard output.  A typical idiom is

        sys.stdout = mylogfile
        try:
            print 'this message goes to my log file'
        finally:
            sys.stdout = sys.__stdout__

    The problem with this approach is that the binding is global, and
    so affects every statement inside the try: clause.  For example,
    if we added a call to a function that actually did want to print
    to stdout, this output too would get redirected to the logfile.

    This approach is also very inconvenient for interleaving prints to
    various output streams, and complicates coding in the face of
    legitimate try/except or try/finally clauses.
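    The pitfall can be demonstrated concretely (a sketch using the
    print() function form so it runs as-is today):

```python
import io
import sys

def helper():
    # Innocently writes to whatever sys.stdout is bound to.
    print('helper output meant for stdout')

log = io.StringIO()
old = sys.stdout
sys.stdout = log
try:
    print('this message goes to my log file')
    helper()                               # oops: redirected to the log too
finally:
    sys.stdout = old

assert 'helper output meant for stdout' in log.getvalue()
```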


Reference Implementation

    A reference implementation, in the form of a patch against the
    Python 2.0 source tree, is available on SourceForge's patch
    manager[2].  This approach adds two new opcodes, PRINT_ITEM_TO and
    PRINT_NEWLINE_TO, which simply pop the file-like object off the
    top of the stack and use it instead of sys.stdout as the output
    stream.


Alternative Approaches

    An alternative to this syntax change has been proposed (originally
    by Moshe Zadka) which requires no syntax changes to Python.  A
    writeln() function could be provided (possibly as a builtin) that
    would act much like extended print, with a few additional
    features.

        def writeln(*args, **kws):
            import sys
            file = sys.stdout
            sep = ' '
            end = '\n'
            if kws.has_key('file'):
                file = kws['file']
                del kws['file']
            if kws.has_key('nl'):
                if not kws['nl']:
                    end = ' '
                del kws['nl']
            if kws.has_key('sep'):
                sep = kws['sep']
                del kws['sep']
            if kws:
                raise TypeError('unexpected keywords')
            file.write(sep.join(map(str, args)) + end)

    writeln() takes three optional keyword arguments.  In the
    context of this proposal, the relevant argument is `file' which
    can be set to a file-like object with a write() method.  Thus

        print >> mylogfile, 'this goes to my log file'

    would be written as

        writeln('this goes to my log file', file=mylogfile)

    writeln() has the additional functionality that the keyword
    argument `nl' is a flag specifying whether to append a newline or
    not, and an argument `sep' which specifies the separator to output
    in between each item.
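    A modern keyword-only spelling of the same idea (a sketch: the
    names `file', `sep' and `nl' come from the draft, but this
    signature is an assumption, not part of the PEP):

```python
import sys

def writeln(*args, file=None, sep=' ', nl=True):
    # Join the stringified arguments with `sep', then append a newline
    # (or a space, matching the draft's nl=0 behavior).
    if file is None:
        file = sys.stdout
    end = '\n' if nl else ' '
    file.write(sep.join(map(str, args)) + end)

import io
buf = io.StringIO()
writeln('this goes to my log file', file=buf)
assert buf.getvalue() == 'this goes to my log file\n'
```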


References

    [1] http://www.python.org/doc/current/ref/print.html
    [2] http://sourceforge.net/patch/download.php?id=100970



From gvwilson at nevex.com  Wed Aug 16 17:49:06 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Wed, 16 Aug 2000 11:49:06 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
Message-ID: <Pine.LNX.4.10.10008161146170.25725-100000@akbar.nevex.com>

> Barry Warsaw wrote:
> [extended print PEP]

+1 --- it'll come in handy when teaching newbies on Windows and Unix
simultaneously.

Greg




From skip at mojam.com  Wed Aug 16 18:33:30 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 11:33:30 -0500 (CDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <20000815202414.B376@xs4all.nl>
References: <200008151746.KAA06454@bush.i.sourceforge.net>
	<20000815202414.B376@xs4all.nl>
Message-ID: <14746.49754.697401.684106@beluga.mojam.com>

    Thomas> A comment by someone (?!ng ?) who forgot to login, at the
    Thomas> original list-comprehensions patch suggests that Skip forgot to
    Thomas> include the documentation patch to listcomps he provided. Ping,
    Thomas> Skip, can you sort this out and check in the rest of that
    Thomas> documentation (which supposedly includes a tutorial section as
    Thomas> well) ?

Ping & I have already taken care of this off-list.  His examples should be
checked in shortly, if not already.

Skip



From skip at mojam.com  Wed Aug 16 18:43:44 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 11:43:44 -0500 (CDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
References: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>
	<LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
Message-ID: <14746.50368.982239.813435@beluga.mojam.com>

    Tim> Well, Jeez, Greg -- Skip took over the patch, Ping made changes to
    Tim> it after, I got stuck with the PEP and the Python-Dev rah-rah
    Tim> stuff, and you just sit back and snipe.  That's fine, you're
    Tim> entitled, but if you choose not to do the work anymore, you took
    Tim> yourself out of the loop.

Tim,

I think that's a bit unfair to Greg.  Ages ago Greg offered up a prototype
implementation of list comprehensions based upon a small amount of
discussion on c.l.py.  I took over the patch earlier because I wanted to see
it added to Python (originally 1.7, which is now 2.0).  I knew it would
languish or die if someone on python-dev didn't shepherd it.  I was just
trying to get the thing out there for discussion, and I knew that Greg wasn't on
python-dev to do it himself, which is where most of the discussion about
list comprehensions has taken place.  When I've remembered to, I've tried to
at least CC him on threads I've started so he could participate.  My
apologies to Greg for not being more consistent in that regard.  I don't
think we can fault him for not having been privy to all the discussion.

Skip




From gward at mems-exchange.org  Wed Aug 16 19:34:02 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Wed, 16 Aug 2000 13:34:02 -0400
Subject: [Python-Dev] Python 1.6 & Distutils 0.9.1: success
Message-ID: <20000816133401.F16672@ludwig.cnri.reston.va.us>

[oops, screwed up the cc: python-dev when I sent this to Fred.  let's
 try again, shall we?]

Hi Fred --

I went ahead and tried out the current cnri-16-start branch on Solaris
2.6.  (I figured you guys are all using Linux by now, so you might want
to hear back how it works on Solaris.)

In short: no problem!  It built, tested, and installed just fine.

Oops, just noticed that my configure.in fix from late May didn't make
the cut:

  revision 1.124
  date: 2000/05/26 12:22:54;  author: gward;  state: Exp;  lines: +6 -2
  When building on Solaris and the compiler is GCC, use '$(CC) -G' to
  create shared extensions rather than 'ld -G'.  This ensures that shared
  extensions link against libgcc.a, in case there are any functions in the
  GCC runtime not already in the Python core.

Oh well.  This means that Robin Dunn's bsddb extension won't work with
Python 1.6 under Solaris.

So then I tried Distutils 0.9.1 with the new build: again, it worked
just fine.  I was able to build and install the Distutils proper, and
then NumPy.  And I made a NumPy source dist.  Looks like it works just
fine, although this is hardly a rigorous test (sigh).

I'd say go ahead and release Distutils 0.9.1 with Python 1.6...

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From thomas at xs4all.net  Wed Aug 16 22:55:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 16 Aug 2000 22:55:53 +0200
Subject: [Python-Dev] Pending patches for 2.0
Message-ID: <20000816225552.H376@xs4all.nl>

I have a small problem with the number of pending patches that I wrote, and
haven't fully finished yet: I'm going to be away on vacation from about
September 1st until about October 1st or so ;P I'll try to finish them as
much as possible before then (they mostly need just documentation anyway)
but if Guido decides to go for a different approach for one or more of them
(like allowing floats and/or longs in range literals) someone will have to
take them over to finish them in time for 2.0.

I'm not sure when I'll be leaving my internet connection behind, where we'll
be going or when I'll be back, but I won't be able to do too much rewriting
in the next two weeks either -- work is killing me. (Which is one of the
reasons I'm going to try to be as far away from work as possible, on
September 2nd ;) However, if a couple of patches are rejected/postponed and
others don't require substantial changes, and if those decisions are made
before, say, August 30th, I think I can move them into the CVS tree before
leaving and just shove the responsibility for them on the entire dev team ;)

This isn't a push to get them accepted! Just a warning that if they aren't
accepted before then, someone will have to take over the breastfeeding ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Wed Aug 16 23:07:35 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 17:07:35 -0400 (EDT)
Subject: [Python-Dev] Pending patches for 2.0
In-Reply-To: <20000816225552.H376@xs4all.nl>
References: <20000816225552.H376@xs4all.nl>
Message-ID: <14747.663.260950.537440@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > much as possible before then (they mostly need just documentation anyway)
                                                  ^^^^^^^^^^^^^^^^^^

  Don't underestimate that requirement!

 > This isn't a push to get them accepted ! Just a warning that if they aren't
 > accepted before then, someone will have to take over the breastfeeding ;)

  I don't think I want to know too much about your development tools!
;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Wed Aug 16 23:24:19 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 17:24:19 -0400 (EDT)
Subject: [Python-Dev] Python 1.6 & Distutils 0.9.1: success
In-Reply-To: <20000816133401.F16672@ludwig.cnri.reston.va.us>
References: <20000816133401.F16672@ludwig.cnri.reston.va.us>
Message-ID: <14747.1667.252426.489530@cj42289-a.reston1.va.home.com>

Greg Ward writes:
 > I went ahead and tried out the current cnri-16-start branch on Solaris
 > 2.6.  (I figured you guys are all using Linux by now, so you might want
 > to hear back how it works on Solaris.)

  Great!  I've just updated 1.6 to include the Distutils-0_9_1 tagged
version of the distutils package and the documentation.  I'm
rebuilding our release candidates now.

 > In short: no problem!  It built, tested, and installed just fine.

  Great!  Thanks!

 > Oops, just noticed that my configure.in fix from late May didn't make
 > the cut:
...
 > Oh well.  This means that Robin Dunn's bsddb extension won't work with
 > Python 1.6 under Solaris.

  That's unfortunate.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Thu Aug 17 00:22:05 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 00:22:05 +0200
Subject: [Python-Dev] Pending patches for 2.0
In-Reply-To: <14747.663.260950.537440@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Wed, Aug 16, 2000 at 05:07:35PM -0400
References: <20000816225552.H376@xs4all.nl> <14747.663.260950.537440@cj42289-a.reston1.va.home.com>
Message-ID: <20000817002205.I376@xs4all.nl>

On Wed, Aug 16, 2000 at 05:07:35PM -0400, Fred L. Drake, Jr. wrote:

> Thomas Wouters writes:
>  > much as possible before then (they mostly need just documentation anyway)
>                                                   ^^^^^^^^^^^^^^^^^^
>   Don't underestimate that requirement!

I'm not, especially since the things that need documentation (if they are in
principle acceptable to Guido) are range literals (tutorials and existing
code examples), 'import as' (ref, tut), augmented assignment (ref, tut, lib,
api, ext, existing examples), the getslice->getitem change (tut, lib, all
other references to getslice/extended slices and existing example code) and
possibly the 'indexing for' patch (ref, tut, a large selection of existing
example code.)

Oh, and I forgot, some patches would benefit from more library changes, too,
like augmented assignment and getslice-to-getitem. That can always be done
after the patches are in, by other people (if they can't, the patch
shouldn't go in in the first place!)

I guess I'll be doing one large, intimate pass over all documentation, do
everything at once, and later split it up. I also think I'm going to post
them separately, to allow for easier proofreading. I also think I'm in need
of some sleep, and will think about this more tomorrow, after I get
LaTeX2HTML working on my laptop, so I can at least review my own changes ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From trentm at ActiveState.com  Thu Aug 17 01:55:42 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 16 Aug 2000 16:55:42 -0700
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
Message-ID: <20000816165542.D29260@ActiveState.com>

Hello autoconf-masters,

I am currently trying to port Python to Monterey (64-bit AIX) and I need to
add a couple of Monterey specific options to CFLAGS and LDFLAGS (or to
whatever appropriate variables for all 'cc' and 'ld' invocations) but it is
not obvious *at all* how to do that in configure.in. Can anybody help me on
that?
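The usual pattern in configure.in is to extend the variables inside a case on the platform; something like this (the Monterey flags shown are placeholders, not the real options):

```
case $ac_sys_system in
Monterey*)
        CFLAGS="$CFLAGS <monterey-cc-flags>"
        LDFLAGS="$LDFLAGS <monterey-ld-flags>"
        ;;
esac
```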

Another issue that I am having. This is how the python executable is linked
on Linux with gcc:

gcc  -Xlinker -export-dynamic python.o ../libpython2.0.a -lpthread -ldl  -lutil -lm  -o python
          
It, of course, works fine, but shouldn't the proper (read "portable")
invocation to include the python2.0 library be

gcc  -Xlinker -export-dynamic python.o -L.. -lpython2.0 -lpthread -ldl  -lutil -lm  -o python

That invocation form (i.e. with the '-L.. -lpython2.0') works on Linux, and
is *required* on Monterey. Does this problem not show up with other Unix
compilers? My hunch is that simply listing library (*.a) arguments on the gcc
command line is a GNU gcc/ld shortcut to the more portable usage of -L and
-l. Any opinions? I would either like to change the form to the latter or
I'll have to special-case the invocation for Monterey. Any opinions on which
is worse?


Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Thu Aug 17 02:24:25 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 16 Aug 2000 17:24:25 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
Message-ID: <20000816172425.A32338@ActiveState.com>

I am porting Python to Monterey (64-bit AIX) and have a small (hopefully)
question about POSIX threads. I have Monterey building and passing the
threads test suite using Python/thread_pthread.h with just one issue:


-------------- snipped from current thread_pthread.h ---------------
long
PyThread_get_thread_ident(void)
{
    volatile pthread_t threadid;
    if (!initialized)
        PyThread_init_thread();
    /* Jump through some hoops for Alpha OSF/1 */
    threadid = pthread_self();
    return (long) *(long *) &threadid;
}
-------------------------------------------------------------------

Does the POSIX threads spec specify a C type or minimum size for
pthread_t? Or can someone point me to the appropriate resource to look
this up. On Linux (mine at least):
  /usr/include/bits/pthreadtypes.h:120:typedef unsigned long int pthread_t;

On Monterey:
  typedef unsigned int pthread_t;
 
That is fine, they are both 32 bits; however, Monterey is an LP64 platform
(sizeof(long)==8, sizeof(int)==4), which brings up the question:

WHAT IS UP WITH THAT return STATEMENT?
  return (long) *(long *) &threadid;

My *guess* is that this is an attempt to just cast 'threadid' (a pthread_t)
to a long and go through hoops to avoid compiler warnings. I don't know what
else it could be. Is that what the "Alpha OSF/1" comment is about? Anybody
have an Alpha OSF/1 hanging around? The problem is that when
sizeof(pthread_t) != sizeof(long) this line is just broken.

Could this be changed to
  return threadid;
safely?
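One way to see why the cast is broken, and what a safe version would do, sketched in Python with ctypes (c_uint stands in for a 4-byte pthread_t on an LP64 box; the sizes are illustrative assumptions):

```python
import ctypes

threadid = ctypes.c_uint(0x1234)   # stand-in for a 4-byte pthread_t
ident = ctypes.c_long(0)           # 8 bytes on an LP64 platform

# *(long *)&threadid reads sizeof(long) bytes from a 4-byte object,
# which is undefined behavior.  The safe equivalent copies only
# min(sizeof(pthread_t), sizeof(long)) bytes into a zeroed long:
n = min(ctypes.sizeof(threadid), ctypes.sizeof(ident))
ctypes.memmove(ctypes.byref(ident), ctypes.byref(threadid), n)
assert ident.value != 0            # exact value depends on byte order
```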


Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From greg at cosc.canterbury.ac.nz  Thu Aug 17 02:33:40 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 12:33:40 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEAPHAAA.tim_one@email.msn.com>
Message-ID: <200008170033.MAA15351@s454.cosc.canterbury.ac.nz>

> If this ends in a stalemate among the strongest proponents, it may not
> be such a good idea after all.

Don't worry, I really don't have any strong objection to
either of these changes. They're only cosmetic, after all.
It's still a good idea.

Just one comment: even if the first clause *is* a 'for',
there's no guarantee that the rest of the clauses have
to have anything to do with what it produces. E.g.

   [x for y in [1] if z]

The [x if y] case is only one of an infinite number of
possible abuses. Do you still think it's worth taking
special steps to catch that particular one?
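Greg's point, runnable: the leading 'for' clause can be pure scaffolding that the rest of the expression ignores.

```python
x, z = 42, True
assert [x for y in [1] if z] == [42]   # 'y' is bound but never used
z = False
assert [x for y in [1] if z] == []     # the 'if' depends only on outer 'z'
```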

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 17 03:17:53 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 13:17:53 +1200 (NZST)
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <Pine.LNX.4.10.10008161146170.25725-100000@akbar.nevex.com>
Message-ID: <200008170117.NAA15360@s454.cosc.canterbury.ac.nz>

Looks reasonably good. Not entirely sure I like the look
of >> though -- a bit too reminiscent of C++.

How about

   print to myfile, x, y, z

with 'to' as a non-reserved keyword. Or even

   print to myfile: x, y, z

but that might be a bit too radical!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From m.favas at per.dem.csiro.au  Thu Aug 17 03:17:42 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 17 Aug 2000 09:17:42 +0800
Subject: [Python-Dev] [Fwd: segfault in sre on 64-bit plats]
Message-ID: <399B3D36.6921271@per.dem.csiro.au>

 
-------------- next part --------------
An embedded message was scrubbed...
From: Mark Favas <m.favas at per.dem.csiro.au>
Subject: Re: segfault in sre on 64-bit plats
Date: Thu, 17 Aug 2000 09:15:22 +0800
Size: 2049
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000817/4238d330/attachment.eml>

From greg at cosc.canterbury.ac.nz  Thu Aug 17 03:26:59 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 13:26:59 +1200 (NZST)
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000816165542.D29260@ActiveState.com>
Message-ID: <200008170126.NAA15363@s454.cosc.canterbury.ac.nz>

> My hunch is that simply listing library (*.a) arguments on the gcc
> command line is a GNU gcc/ld shortcut to the more portable usage of -L
> and -l.

I've never encountered a Unix that wouldn't let you explicitly
give .a files to cc or ld. It's certainly not a GNU invention.

Sounds like Monterey is the odd one out here. ("Broken" is
another word that comes to mind.)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+




From Vladimir.Marangozov at inrialpes.fr  Thu Aug 17 03:41:48 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 03:41:48 +0200 (CEST)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000816172425.A32338@ActiveState.com> from "Trent Mick" at Aug 16, 2000 05:24:25 PM
Message-ID: <200008170141.DAA17229@python.inrialpes.fr>

Trent Mick wrote:
> 
> I am porting Python to Monterey (64-bit AIX) and have a small (hopefully)
> question about POSIX threads. I have Monterey building and passing the
> threads test suite using Python/thread_pthread.h with just one issue:
> 
> -------------- snipped from current thread_pthread.h ---------------
> long
> PyThread_get_thread_ident(void)
> {
>     volatile pthread_t threadid;
>     if (!initialized)
>         PyThread_init_thread();
>     /* Jump through some hoops for Alpha OSF/1 */
>     threadid = pthread_self();
>     return (long) *(long *) &threadid;
> }
> -------------------------------------------------------------------
> 
> ...
> 
> WHAT IS UP WITH THAT return STATEMENT?
>   return (long) *(long *) &threadid;

I don't know and I had the same question at the time when there was some
obscure bug on my AIX combo at this location. I remember that I had played
with the debugger and the only workaround at the time which solved the
mystery was to add the 'volatile' qualifier. So if you're asking yourself
what that 'volatile' is for, you have one question less...

> 
> My *guess* is that this is an attempt to just cast 'threadid' (a pthread_t)
> to a long and go through hoops to avoid compiler warnings. I dont' know what
> else it could be. Is that what the "Alpha OSF/1" comment is about? Anybody
> have an Alpha OSF/1 hanging around. The problem is that when
> sizeof(pthread_t) != sizeof(long) this line is just broken.
> 
> Could this be changed to
>   return threadid;
> safely?

I have the same question. If Guido can't answer this straight, we need
to dig the CVS logs.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Thu Aug 17 03:43:33 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 03:43:33 +0200 (CEST)
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000816165542.D29260@ActiveState.com> from "Trent Mick" at Aug 16, 2000 04:55:42 PM
Message-ID: <200008170143.DAA17238@python.inrialpes.fr>

Trent Mick wrote:
> 
> Hello autoconf-masters,
> 
> I am currently trying to port Python to Monterey (64-bit AIX) and I need to
> add a couple of Monterey specific options to CFLAGS and LDFLAGS (or to
> whatever appropriate variables for all 'cc' and 'ld' invocations) but it is
> not obvious *at all* how to do that in configure.in. Can anybody helpme on
> that?

How can we help? What do you want to do, exactly?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From fdrake at beopen.com  Thu Aug 17 03:40:32 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 21:40:32 -0400 (EDT)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <200008170141.DAA17229@python.inrialpes.fr>
References: <20000816172425.A32338@ActiveState.com>
	<200008170141.DAA17229@python.inrialpes.fr>
Message-ID: <14747.17040.968927.914435@cj42289-a.reston1.va.home.com>

Vladimir Marangozov writes:
 > I have the same question. If Guido can't answer this straight, we need
 > to dig the CVS logs.

  Guido is out of town right now, and doesn't have his usual email
tools with him, so he may not respond this week.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From Vladimir.Marangozov at inrialpes.fr  Thu Aug 17 04:12:18 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 04:12:18 +0200 (CEST)
Subject: [Python-Dev] shm + Win32 + docs (was: Adding library modules to the core)
In-Reply-To: <20000808114655.C29686@thyrsus.com> from "Eric S. Raymond" at Aug 08, 2000 11:46:55 AM
Message-ID: <200008170212.EAA17523@python.inrialpes.fr>

Eric S. Raymond wrote:
> 
> Vladimir, I suggest that the most useful thing you could do to advance
> the process at this point would be to document shm in core-library style.

Eric, I'm presently suffering from a chronic lack of time (as you probably
are too), so if you could write the docs for me and take all associated
credit for them, please do so (it shouldn't be that hard, after all -- the
web page and the comments are self-explanatory :-). I'm willing to "unblock"
you on this, but I can hardly make the time for it -- it's low-priority on
my dynamic task schedule. :(

I'd also love to assemble the win32 bits on the matter (what's in win32event
for the semaphore interface + my Windows book) to add shm and sem for
Windows and rewrite the interface, but I have no idea on when this could
happen.

I will disappear from the face of the World sometime soon, and it's
unclear when I'll be able to reappear (nor how soon I'll disappear, btw).
So, be aware of that. I hope to be back again before 2.1, so if we can
wrap up a Unix + win32 shm, that would be much appreciated!

> 
> At the moment, core Python has nothing (with the weak and nonportable 
> exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> shm would address a real gap in the language.

Indeed. This is currently being discussed on the french Python list,
where Richard Gruet (rgruet at ina.fr) posted the following code for
inter-process locks: glock.py

I don't have the time to look at it in detail; I'm just relaying it here
for food and meditation :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252

------------------------------[ glock.py ]----------------------------
#!/usr/bin/env python
#----------------------------------------------------------------------------
# glock.py: 				Global mutex
#
# Prerequisites:
#    - Python 1.5.2 or newer (www.python.org)
#    - On windows: win32 extensions installed
#			(http://www.python.org/windows/win32all/win32all.exe)
#    - OS: Unix, Windows.
#
# History:
#	-22 Jan 2000 (R.Gruet): creation
#
# Limitations:
# TODO:
#----------------------------------------------------------------------------
'''	This module defines the class GlobalLock that implements a global
	(inter-process) mutex that works on Windows and Unix, using
	file-locking on Unix (I also tried this approach on Windows but ran
	into some tricky problems, so I ended up using a Win32 mutex).
	See class GlobalLock for more details.
'''
__version__ = 1,0,2
__author__ = ('Richard Gruet', 'rgruet at ina.fr')

# Imports:
import sys, string, os

# System-dependent imports for locking implementation:
_windows = (sys.platform == 'win32')

if _windows:
	try:
		import win32event, win32api, pywintypes
	except ImportError:
		sys.stderr.write('The win32 extensions need to be installed!\n')
else:	# assume Unix
	try:
		import fcntl
	except ImportError:
		sys.stderr.write("On what kind of OS am I ? (Mac?) I should be on "
						 "Unix but can't import fcntl.\n")
		raise
	import threading

# Exceptions :
# ----------
class GlobalLockError(Exception):
	''' Error raised by the glock module.
	'''
	pass

class NotOwner(GlobalLockError):
	''' Attempt to release somebody else's lock.
	'''
	pass


# Constants
# ---------:
true=-1
false=0

#----------------------------------------------------------------------------
class GlobalLock:
#----------------------------------------------------------------------------
	''' A global mutex.

		*Specification:
		 -------------
		 The lock must act as a global mutex, i.e. block between different
		 candidate processes, but ALSO between different candidate
		 threads of the same process.
		 It must NOT block in case of a recursive lock request issued by
		 the SAME thread.
		 Extraneous unlocks should ideally be harmless.

		*Implementation:
		 --------------
		 In Python there is no portable global lock AFAIK.
		 There is only a LOCAL/ in-process Lock mechanism
		 (threading.RLock), so we have to implement our own solution.

		Unix: use fcntl.flock(). Recursive calls OK. Different processes OK.
			  But different threads of the same process don't block, so we
			  have to use an extra threading.RLock to fix that point.
		Win: We use a Win32 mutex from the Python Win32 extensions. We can't
			 use the std module msvcrt.locking(), because the global lock is
			 OK, but it also blocks for 2 calls from the same thread!
	'''
	def __init__(self, fpath, lockInitially=false):
		'''	Creates (or opens) a global lock.

			@param fpath Path of the file used as lock target. This is also
						 the global id of the lock. The file will be created
						 if nonexistent.
			@param lockInitially if true locks initially.
		'''
		if _windows:
			self.name = string.replace(fpath, '\\', '_')
			self.mutex = win32event.CreateMutex(None, lockInitially, self.name)
		else: # Unix
			self.name = fpath
			self.flock = open(fpath, 'w')
			self.fdlock = self.flock.fileno()
			self.threadLock = threading.RLock()
		if lockInitially:
			self.acquire()

	def __del__(self):
		#print '__del__ called' ##
		try: self.release()
		except: pass
		if _windows:
			win32api.CloseHandle(self.mutex)
		else:
			try: self.flock.close()
			except: pass

	def __repr__(self):
		return '<Global lock @ %s>' % self.name

	def acquire(self):
		''' Locks. Suspends caller until done.

			On windows an IOError is raised after ~10 sec if the lock
			can't be acquired.
			@exception GlobalLockError if lock can't be acquired (timeout)
		'''
		if _windows:
			r = win32event.WaitForSingleObject(self.mutex, win32event.INFINITE)
			if r == win32event.WAIT_FAILED:
				raise GlobalLockError("Can't acquire mutex.")
		else:
			# Acquire 1st the global (inter-process) lock:
			try:
				fcntl.flock(self.fdlock, fcntl.LOCK_EX)	# blocking
			except IOError:	#(errno 13: perm. denied,
							#		36: Resource deadlock avoided)
				raise GlobalLockError('Cannot acquire lock on "file" %s\n' %
										self.name)
			#print 'got file lock.' ##
			# Then acquire the local (inter-thread) lock:
			self.threadLock.acquire()
			#print 'got thread lock.' ##

	def release(self):
		''' Unlocks. (caller must own the lock!)

			@return The lock count.
			@exception IOError if file lock can't be released
			@exception NotOwner Attempt to release somebody else's lock.
		'''
		if _windows:
			try:
				win32event.ReleaseMutex(self.mutex)
			except pywintypes.error, e:
				errCode, fctName, errMsg =  e.args
				if errCode == 288:
					raise NotOwner("Attempt to release somebody else's lock")
				else:
					raise GlobalLockError('%s: err#%d: %s' % (fctName, errCode,
															  errMsg))
		else:
			# Acquire 1st the local (inter-thread) lock:
			try:
				self.threadLock.release()
			except AssertionError:
				raise NotOwner("Attempt to release somebody else's lock")

			# Then release the global (inter-process) lock:
			try:
				fcntl.flock(self.fdlock, fcntl.LOCK_UN)
			except IOError:	# (errno 13: permission denied)
				raise GlobalLockError('Unlock of file "%s" failed\n' %
															self.name)

#----------------------------------------------------------------------------
#		M A I N
#----------------------------------------------------------------------------
def main():
	# unfortunately can't test inter-process lock here!
	lockName = 'myFirstLock'
	l = GlobalLock(lockName)
	if not _windows:
		assert os.path.exists(lockName)
	l.acquire()
	l.acquire()	# reentrant lock, must not block
	l.release()
	l.release()
	if _windows:
		try: l.release()
		except NotOwner: pass
		else: raise Exception('should have raised a NotOwner exception')

	# Check that different threads of the same process do block:
	import threading, time
	thread = threading.Thread(target=threadMain, args=(l,))
	print 'main: locking...',
	l.acquire()
	print ' done.'
	thread.start()
	time.sleep(3)
	print '\nmain: unlocking...',
	l.release()
	print ' done.'
	time.sleep(0.1)
	del l	# to close file
	print 'tests OK.'

def threadMain(lock):
	print 'thread started(%s).' % lock
	print 'thread: locking (should stay blocked for ~ 3 sec)...',
	lock.acquire()
	print 'thread: locking done.'
	print 'thread: unlocking...',
	lock.release()
	print ' done.'
	print 'thread ended.'

if __name__ == "__main__":
	main()



From bwarsaw at beopen.com  Thu Aug 17 05:17:23 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 23:17:23 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
References: <Pine.LNX.4.10.10008161146170.25725-100000@akbar.nevex.com>
	<200008170117.NAA15360@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.22851.266303.28877@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    GE> Looks reasonably good. Not entirely sure I like the look
    GE> of >> though -- a bit too reminiscent of C++.

    GE> How about

    GE>    print to myfile, x, y, z

Not bad at all.  Seems quite Pythonic to me.

    GE> with 'to' as a non-reserved keyword. Or even

    GE>    print to myfile: x, y, z

    GE> but that might be a bit too radical!

Definitely so.

-Barry



From bwarsaw at beopen.com  Thu Aug 17 05:19:25 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 23:19:25 -0400 (EDT)
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
References: <20000816165542.D29260@ActiveState.com>
	<200008170126.NAA15363@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.22973.502494.739270@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    >> My hunch is that simply listing library (*.a) arguments on the
    >> gcc command line is a GNU gcc/ld shortcut to the more portable
    >> usage of -L and -l.

    GE> I've never encountered a Unix that wouldn't let you explicity
    GE> give .a files to cc or ld. It's certainly not a GNU invention.

That certainly jibes with my experience.  All the other non-gcc C
compilers I've used (admittedly only on *nix) have always accepted
explicit .a files on the command line.

-Barry



From MarkH at ActiveState.com  Thu Aug 17 05:32:25 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 17 Aug 2000 13:32:25 +1000
Subject: [Python-Dev] os.path.commonprefix breakage
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>

Hi,
	I believe that Skip recently made a patch to os.path.commonprefix to only
return the portion of the common prefix that corresponds to a directory.

I have just discovered some code breakage from this change.  On 1.5.2, the
behaviour was:

>>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
'../foo/'

While since the change we have:
'../foo'

Note that the trailing slash has been dropped.

The code this broke did something similar to:

prefix = os.path.commonprefix(files)
for file in files:
  tail_portion = file[len(prefix):]

In 1.6, the "tail_portion" result looks like an absolute path ("/bar" and
"/spam", respectively).  The intent was obviously to get relative path
names back ("bar" and "spam").

The code that broke is not mine, so you can safely be horrified at how
broken it is :-)  The point, however, is that code like this does exist out
there.

I'm obviously going to change the code that broke, and don't have time to
look into the posixpath.py code - but is this level of possible breakage
acceptable?

Thanks,

Mark.





From tim_one at email.msn.com  Thu Aug 17 05:34:12 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 23:34:12 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000816172425.A32338@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>

[Trent Mick]
> I am porting Python to Monterey (64-bit AIX) and have a small
> (hopefully) question about POSIX threads.

POSIX threads. "small question".  HAHAHAHAHAHA.  Thanks, that felt good
<wink>.

> I have Monterey building and passing the threads test suite using
> Python/thread_pthread.h with just one issue:
>
>
> -------------- snipped from current thread_pthread.h ---------------
> long
> PyThread_get_thread_ident(void)
> {
>     volatile pthread_t threadid;
>     if (!initialized)
>         PyThread_init_thread();
>     /* Jump through some hoops for Alpha OSF/1 */
>     threadid = pthread_self();
>     return (long) *(long *) &threadid;
> }
> -------------------------------------------------------------------
>
> Does the POSIX threads spec specify a C type or minimum size for
> pthread_t?

Which POSIX threads spec?  There are so very many (it went thru many
incompatible changes).  But, to answer your question, I don't know but doubt
it.  In practice, some implementations return pointers into kernel space,
others pointers into user space, others small integer indices into kernel-
or user-space arrays of structs.  So I think it's *safe* to assume it will
always fit in an integral type large enough to hold a pointer, but not
guaranteed.  Plain "long" certainly isn't safe in theory.

> Or can someone point me to the appropriate resource to look
> this up. On Linux (mine at least):
>   /usr/include/bits/pthreadtypes.h:120:typedef unsigned long int
> pthread_t;

And this is a 32- or 64-bit Linux?

> On Monterey:
>   typedef unsigned int pthread_t;
>
> That is fine, they are both 32 bits; however, Monterey is an LP64 platform
> (sizeof(long)==8, sizeof(int)==4), which brings up the question:
>
> WHAT IS UP WITH THAT return STATEMENT?
>   return (long) *(long *) &threadid;

Heh heh.  Thanks for the excuse!  I contributed the pthreads implementation
originally, and that eyesore sure as hell wasn't in it when I passed it on.
That's easy for me to be sure of, because that entire function was added by
somebody after me <wink>.  I've been meaning to track down where that crap
line came from for *years*, but never had a good reason before.

So, here's the scoop:

+ The function was added in revision 2.3, more than 6 years ago.  At that
time, the return had a direct cast to long.

+ The "Alpha OSF/1" horror was the sole change made to get revision 2.5.

Back in those days, the "patches list" was Guido's mailbox, and *all* CVS
commits were done by him.  So he checked in everything everyone could
convince him they needed, and sometimes without knowing exactly why.  So I
strongly doubt he'll even remember this change, and am certain it's not his
code.

> My *guess* is that this is an attempt to just cast 'threadid' (a
> pthread_t) to a long and go through hoops to avoid compiler warnings. I
> don't know what else it could be.

Me neither.

> Is that what the "Alpha OSF/1" comment is about?

That comment was introduced by the commit that added the convoluted casting,
so yes, that's what the comment is talking about.

> Anybody have an Alpha OSF/1 hanging around. The problem is that when
> sizeof(pthread_t) != sizeof(long) this line is just broken.
>
> Could this be changed to
>   return threadid;
> safely?

Well, that would return it to exactly the state it was in at revision 2.3,
except with the cast to long left implicit.  Apparently that "didn't work"!

Something else is broken here, too, and has been forever:  the thread docs
claim that thread.get_ident() returns "a nonzero integer".  But across all
the thread implementations, there's nothing that guarantees that!  It's a
goof, based on the first thread implementation in which it just happened to
be true for that platform.

So thread.get_ident() is plain braindead:  if Python wants to return a
unique non-zero long across platforms, the current code doesn't guarantee
any of that.

So one of two things can be done:

1. Bite the bullet and do it correctly.  For example, maintain a static
   dict mapping the native pthread_self() return value to Python ints,
   and return the latter as Python's thread.get_ident() value.  Much
   better would be to implement a cross-platform thread-local storage
   abstraction, and use that to hold a Python-int ident value.

2. Continue in the tradition already established <wink>, and #ifdef the
   snot out of it for Monterey.

In favor of #2, the code is already so hosed that making it hosier won't be
a significant relative increase in its inherent hosiness.
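Tim's option 1 is only a few lines of Python; here is a rough, hypothetical sketch (none of these names exist in the tree, and the modern `threading.get_ident()` merely stands in for whatever opaque value the native `pthread_self()` returns):

```python
# Hypothetical sketch of option 1: map the platform's opaque thread id
# onto small, stable, nonzero Python ints, so get_ident() can make the
# same guarantee on every platform regardless of what pthread_t is.
import threading

_ident_map = {}
_ident_lock = threading.Lock()
_next_ident = 1

def get_ident():
    """Return a nonzero int unique to the calling thread."""
    global _next_ident
    native = threading.get_ident()   # stand-in for the raw pthread_self()
    with _ident_lock:
        ident = _ident_map.get(native)
        if ident is None:
            # First time we've seen this thread: hand out the next int.
            ident = _ident_map[native] = _next_ident
            _next_ident += 1
    return ident
```

The table leaks an entry per thread unless it is cleaned up at thread exit, which is exactly why a real thread-local storage abstraction would be the better home for this.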

spoken-like-a-true-hoser-ly y'rs  - tim





From tim_one at email.msn.com  Thu Aug 17 05:47:04 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 23:47:04 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.22851.266303.28877@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDGHAAA.tim_one@email.msn.com>

[Greg Ewing]
> Looks reasonably good. Not entirely sure I like the look
> of >> though -- a bit too reminiscent of C++.
>
> How about
>
>    print to myfile, x, y, z

[Barry Warsaw]
> Not bad at all.  Seems quite Pythonic to me.

Me too!  +1 on changing ">>" to "to" here.  Then we can introduce

   x = print from myfile, 3

as a synonym for

   x = myfile.read(3)

too <wink>.

People should know that Guido doesn't seem to like the idea of letting print
specify the output target at all.  "Why not?"  "Because people say print is
pretty useless anyway, for example, when they want to write to something
other than stdout."  "But that's the whole point of this change!  To make
print more useful!"  "Well, but then ...".  After years of channeling, you
get a feel for when to change the subject and bring it up again later as if
it were brand new <wink>.

half-of-channeling-is-devious-persuasion-ly y'rs  - tim





From skip at mojam.com  Thu Aug 17 06:04:54 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 23:04:54 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
Message-ID: <14747.25702.435148.549678@beluga.mojam.com>

>>>>> "Mark" == Mark Hammond <MarkH at ActiveState.com> writes:

    Mark> I believe that Skip recently made a patch to os.path.commonprefix
    Mark> to only return the portion of the common prefix that corresponds
    Mark> to a directory.

    Mark> I have just dicovered some code breakage from this change.  On
    Mark> 1.5.2, the behaviour was:

    >>>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
    Mark> '../foo/'

    Mark> While since the change we have:
    Mark> '../foo'

I'm sure it can be argued that the slash should be there.  The previous
behavior was clearly broken, however, because it was advancing
character-by-character instead of directory-by-directory.  Consequently,
calling 

    os.path.commonprefix(["/home/swen", "/home/swenson"])

would yield the most likely invalid path "/home/sw" as the common prefix.
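A directory-by-directory version is only a few lines; this is a hypothetical sketch (the name `commonprefix_dirs` and the hard-coded '/' separator are illustrative, not anything in the library):

```python
def commonprefix_dirs(paths):
    # Compare whole path components instead of single characters, so a
    # pair like "/usr/lib" and "/usr/local" shares "/usr", never "/usr/l".
    split = [p.split('/') for p in paths]
    common = []
    for parts in zip(*split):
        if any(part != parts[0] for part in parts[1:]):
            break
        common.append(parts[0])
    return '/'.join(common)
```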

It would be easy enough to append the appropriate path separator to the
result before returning.  I have no problem with that.  Others with more
knowledge of path semantics should chime in.  Also, should the behavior be
consistent across platforms or should it do what is correct for each
platform on which it's implemented (dospath, ntpath, macpath)?

Skip




From tim_one at email.msn.com  Thu Aug 17 06:05:12 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 00:05:12 -0400
Subject: [Python-Dev] os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEDIHAAA.tim_one@email.msn.com>

I agree this is Bad Damage, and should be fixed before 2.0b1 goes out.  Can
you enter a bug report?

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Mark Hammond
> Sent: Wednesday, August 16, 2000 11:32 PM
> To: python-dev at python.org
> Subject: [Python-Dev] os.path.commonprefix breakage
>
>
> Hi,
> 	I believe that Skip recently made a patch to
> os.path.commonprefix to only
> return the portion of the common prefix that corresponds to a directory.
>
> I have just discovered some code breakage from this change.  On 1.5.2, the
> behaviour was:
>
> >>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
> '../foo/'
>
> While since the change we have:
> '../foo'
>
> Note that the trailing slash has been dropped.
>
> The code this broke did something similar to:
>
> prefix = os.path.commonprefix(files)
> for file in files:
>   tail_portion = file[len(prefix):]
>
> In 1.6, the "tail_portion" result looks like an absolute path ("/bar" and
> "/spam", respectively).  The intent was obviously to get relative path
> names back ("bar" and "spam").
>
> The code that broke is not mine, so you can safely be horrified at how
> broken it is :-)  The point, however, is that code like this does
> exist out
> there.
>
> I'm obviously going to change the code that broke, and don't have time to
> look into the posixpath.py code - but is this level of possible breakage
> acceptable?
>
> Thanks,
>
> Mark.





From greg at cosc.canterbury.ac.nz  Thu Aug 17 06:11:51 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:11:51 +1200 (NZST)
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEDGHAAA.tim_one@email.msn.com>
Message-ID: <200008170411.QAA15381@s454.cosc.canterbury.ac.nz>

tim_one:

> +1 on changing ">>" to "to" here.

Your +1 might be a bit hasty. I've just realised that
a non-reserved word in that position would be ambiguous,
as can be seen by considering

   print to(myfile), x, y, z

> Then we can introduce
>
>   x = print from myfile, 3

Actually, for the sake of symmetry, I was going to suggest

    input from myfile, x, y, z

except that the word 'input' is already taken. Bummer.

But wait a moment, we could have

    from myfile input x, y, z

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From fdrake at beopen.com  Thu Aug 17 06:11:44 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 00:11:44 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
	<14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <14747.26112.609255.338170@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:
 > I'm sure it can be argued that the slash should be there.  The previous
 > behavior was clearly broken, however, because it was advancing
 > character-by-character instead of directory-by-directory.  Consequently,
 > calling 
 > 
 >     os.path.commonprefix(["/home/swen", "/home/swenson"])
 > 
 > would yield the most likely invalid path "/home/sw" as the common prefix.

  You have a typo in there... ;)

 > It would be easy enough to append the appropriate path separator to the the
 > result before returning.  I have no problem with that.  Others with more
 > knowledge of path semantics should chime in.  Also, should the behavior be

  I'd guess that the path separator should only be appended if it's
part of the passed-in strings; that would make it a legitimate part of
the prefix.  If it isn't present for all of them, it shouldn't be part
of the result:

>>> os.path.commonprefix(["foo", "foo/bar"])
'foo'


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From skip at mojam.com  Thu Aug 17 06:23:37 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 23:23:37 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
	<14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <14747.26825.977663.599413@beluga.mojam.com>

    Skip> os.path.commonprefix(["/home/swen", "/home/swenson"])

    Skip> would yield the most likely invalid path "/home/sw" as the common
    Skip> prefix.

Ack!  I meant to use this example:

    os.path.commonprefix(["/home/swen", "/home/swanson"])

which would yield "/home/sw"...

S



From m.favas at per.dem.csiro.au  Thu Aug 17 06:27:20 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 17 Aug 2000 12:27:20 +0800
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
Message-ID: <399B69A8.4A94337C@per.dem.csiro.au>

[Trent]
-------------- snipped from current thread_pthread.h ---------------
long
PyThread_get_thread_ident(void)
{
    volatile pthread_t threadid;
    if (!initialized)
        PyThread_init_thread();
    /* Jump through some hoops for Alpha OSF/1 */
    threadid = pthread_self();
    return (long) *(long *) &threadid;
}
-------------------------------------------------------------------
WHAT IS UP WITH THAT return STATEMENT?
  return (long) *(long *) &threadid;

My *guess* is that this is an attempt to just cast 'threadid' (a pthread_t)
to a long, jumping through hoops to avoid compiler warnings. I don't know
what else it could be. Is that what the "Alpha OSF/1" comment is about?
Anybody have an Alpha OSF/1 hanging around? The problem is that when
sizeof(pthread_t) != sizeof(long) this line is just broken.

Could this be changed to
  return threadid;
safely?

This is a DEC-threads thing... (and I'm not a DEC-threads savant). 
Making the suggested change gives the compiler warning:

cc -O -Olimit 1500 -I./../Include -I.. -DHAVE_CONFIG_H -c thread.c -o thread.o
cc: Warning: thread_pthread.h, line 182: In this statement, "threadid" of type
"pointer to struct __pthread_t", is being converted to "long". (cvtdiftypes)
        return threadid;
---------------^

The threads test still passes with this change.



From greg at cosc.canterbury.ac.nz  Thu Aug 17 06:28:19 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:28:19 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>

Skip:

> Also, should the behavior be
> consistent across platforms or should it do what is correct for each
> platform on which it's implemented (dospath, ntpath, macpath)?

Obviously it should do what's correct for each platform,
although more than one thing can be correct for a
given platform -- e.g. Unix doesn't care whether there's a
trailing slash on a pathname.

In the Unix case it's probably less surprising if the trailing
slash is removed, because it's redundant.

The "broken" code referred to in the original message highlights
another problem, however: there is no platform-independent way
provided to remove a prefix from a pathname, given the prefix
as returned by one of the other platform-independent path
munging functions.

So maybe there should be an os.path.removeprefix(prefix, path)
function.

While we're on the subject, another thing that's missing is
a platform-independent way of dealing with the notion of
"up one directory".

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+




From greg at cosc.canterbury.ac.nz  Thu Aug 17 06:34:01 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:34:01 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <200008170434.QAA15389@s454.cosc.canterbury.ac.nz>

Skip:

> The previous behavior was clearly broken, however, because it was
> advancing character-by-character instead of directory-by-directory.

I've just looked at the 1.5.2 docs and realised that this is
what it *says* it does! So it's right according to the docs,
although it's obviously useless as a pathname manipulating
function.

The question now is, do we change both the specification and the
behaviour, which could break existing code, or leave it be and
add a new function which does the right thing?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From skip at mojam.com  Thu Aug 17 06:41:59 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 23:41:59 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.26112.609255.338170@cj42289-a.reston1.va.home.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
	<14747.25702.435148.549678@beluga.mojam.com>
	<14747.26112.609255.338170@cj42289-a.reston1.va.home.com>
Message-ID: <14747.27927.170223.873328@beluga.mojam.com>

    Fred> I'd guess that the path separator should only be appended if it's
    Fred> part of the passed-in strings; that would make it a legitimate
    Fred> part of the prefix.  If it isn't present for all of them, it
    Fred> shouldn't be part of the result:

    >>> os.path.commonprefix(["foo", "foo/bar"])
    'foo'

Hmmm... I think you're looking at it character-by-character again.  I see
three possibilities:

    * it's invalid to have a path with a trailing separator

    * it's okay to have a path with a trailing separator

    * it's required to have a path with a trailing separator

In the first and third cases, you have no choice.  In the second you have to
decide which would be best.

On Unix my preference would be to not include the trailing "/" for aesthetic
reasons.  The shell's pwd command, the os.getcwd function and the
os.path.normpath function all return directories without the trailing slash.
Also, while Python may not have this problem (and os.path.join seems to
normalize things), some external tools will collapse doubled "/" characters
into a single separator, while others (most notably Emacs) will treat the
second slash as "erase the prefix and start from /".

In fact, the more I think of it, the more I think that Mark's reliance on
the trailing slash is a bug waiting to happen (in fact, it just happened
;-).  There's certainly nothing wrong (on Unix anyway) with paths that don't
contain a trailing slash, so if you're going to join paths together, you
ought to be using os.path.join.  To whack off prefixes, perhaps we need
something more general than os.path.split, so instead of

    prefix = os.path.commonprefix(files)
    for file in files:
       tail_portion = file[len(prefix):]

Mark would have used

    prefix = os.path.commonprefix(files)
    for file in files:
       tail_portion = os.path.splitprefix(prefix, file)[1]

The assumption being that

    os.path.splitprefix("/home", "/home/beluga/skip")

would return

    ["/home", "beluga/skip"]

Alternatively, how about os.path.suffixes?  It would work similarly to
os.path.commonprefix, but instead of returning the prefix of a group of
files, it would return the list of suffixes resulting from the removal
of the common prefix:

    >>> files = ["/home/swen", "/home/swanson", "/home/jules"]
    >>> prefix = os.path.commonprefix(files)
    >>> print prefix
    "/home"
    >>> suffixes = os.path.suffixes(prefix, files)
    >>> print suffixes
    ["swen", "swanson", "jules"]
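
A minimal sketch of how these two hypothetical helpers could look.  Neither
splitprefix nor suffixes ever became part of os.path; the names and semantics
below simply follow the examples above, a tuple is returned in the style of
os.path.split, and POSIX "/" separators are assumed:

```python
def splitprefix(prefix, path):
    # Hypothetical helper: split path into (prefix, remainder),
    # dropping the separator that follows the prefix.  POSIX "/" only.
    if path == prefix or path.startswith(prefix + "/"):
        return (prefix, path[len(prefix):].lstrip("/"))
    return ("", path)

def suffixes(prefix, paths):
    # Hypothetical complement to os.path.commonprefix: the tails that
    # remain after removing the common prefix from each path.
    return [splitprefix(prefix, p)[1] for p in paths]
```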

Skip




From fdrake at beopen.com  Thu Aug 17 06:49:24 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 00:49:24 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170434.QAA15389@s454.cosc.canterbury.ac.nz>
References: <14747.25702.435148.549678@beluga.mojam.com>
	<200008170434.QAA15389@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.28372.771170.783868@cj42289-a.reston1.va.home.com>

Greg Ewing writes:
 > I've just looked at the 1.5.2 docs and realised that this is
 > what it *says* it does! So it's right according to the docs,
 > although it's obviously useless as a pathname manipulating
 > function.

  I think we should now fix the docs; Skip's right about the desired
functionality.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From greg at cosc.canterbury.ac.nz  Thu Aug 17 06:53:05 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:53:05 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.27927.170223.873328@beluga.mojam.com>
Message-ID: <200008170453.QAA15394@s454.cosc.canterbury.ac.nz>

Skip:

> Alternatively, how about os.path.suffixes?  It would work similar to
> os.path.commonprefix, but instead of returning the prefix of a group of
> files, return a list of the suffixes resulting in the application of the
> common prefix:

To avoid duplication of effort, how about a single function
that does both:

   >>> files = ["/home/swen", "/home/swanson", "/home/jules"]
   >>> os.path.factorize(files)
   ("/home", ["swen", "swanson", "jules"])

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From nowonder at nowonder.de  Thu Aug 17 09:13:08 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 17 Aug 2000 07:13:08 +0000
Subject: [Python-Dev] timeout support for socket.py? (was: [ANN] TCP socket timeout module -->
 timeoutsocket.py)
References: <300720002054234840%timo@alum.mit.edu>
Message-ID: <399B9084.C068DCE3@nowonder.de>

As the socketmodule is now exported as _socket and a separate socket.py
file wrapping _socket is now available in the standard library,
wouldn't it be possible to include timeout capabilities like in
  http://www.timo-tasi.org/python/timeoutsocket.py ?

If the default behaviour would be "no timeout", I would think this
would not break any code. But it would give an easy (and documentable)
solution to people who e.g. have their
  urllib.urlopen("http://spam.org").read()
hang on them. (Actually the approach should work for all streaming
socket connections, as far as I understand it.)

Are there any efficiency concerns? If so, would it be possible to
include a second socket class timeoutsocket in socket.py, so that
this could be used instead of the normal socket class? [In this case
a different default timeout than "None" could be chosen.]
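
For what it's worth, the shim idea described below can be sketched without
touching the socket module at all: wrap a connected socket and use select()
to bound how long a blocking call may wait.  This is only an illustration of
the technique -- the class here is made up, not timeoutsocket.py's real API,
and today's stdlib solves the problem directly with socket.settimeout() /
socket.setdefaulttimeout():

```python
import select
import socket

class TimeoutSocket:
    """Sketch of a timeout wrapper around a connected socket.

    Hypothetical illustration only: select() bounds how long recv()
    and send() may block, raising socket.timeout on expiry.
    """
    def __init__(self, sock, timeout=20.0):
        self._sock = sock
        self._timeout = timeout

    def recv(self, bufsize):
        ready, _, _ = select.select([self._sock], [], [], self._timeout)
        if not ready:
            raise socket.timeout("recv timed out")
        return self._sock.recv(bufsize)

    def send(self, data):
        _, ready, _ = select.select([], [self._sock], [], self._timeout)
        if not ready:
            raise socket.timeout("send timed out")
        return self._sock.send(data)
```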

Peter

P.S.: For your convenience a quote of the announcement on c.l.py,
      for module documentation (== endlessly long doc string) look in
        http://www.timo-tasi.org/python/timeoutsocket.py

Timothy O'Malley wrote:
> 
> Numerous times I have seen people request a solution for TCP socket
> timeouts in conjunction with urllib.  Recently, I found myself in the
> same boat.  I wrote a server that would connect to skytel.com and send
> automated pages.  Periodically, the send thread in the server would
> hang for a long(!) time.  Yup -- I was bit by a two hour timeout built
> into tcp sockets.
> 
> Thus, I wrote timeoutsocket.py
> 
> With timeoutsocket.py, you can force *all* TCP sockets to have a
> timeout.  And, this is all accomplished without interfering with the
> standard python library!
> 
> Here's how to put a 20 second timeout on all TCP sockets for urllib:
> 
>    import timeoutsocket
>    import urllib
>    timeoutsocket.setDefaultSocketTimeout(20)
> 
> Just like that, any TCP connection made by urllib will have a 20 second
> timeout.  If a connect(), read(), or write() blocks for more than 20
> seconds, then a socket.Timeout error will be raised.
> 
> Want to see how to use this in ftplib?
> 
>    import ftplib
>    import timeoutsocket
>    timeoutsocket.setDefaultSocketTimeout(20)
> 
> Wasn't that easy!
> The timeoutsocket.py module acts as a shim on top of the standard
> socket module.  Whenever a TCP socket is requested, an instance of
> TimeoutSocket is returned.  This wrapper class adds timeout support to
> the standard TCP socket.
> 
> Where can you get this marvel of modern engineering?
> 
>    http://www.timo-tasi.org/python/timeoutsocket.py
> 
> And it will very soon be found on the Vaults of Parnassus.
> 
> Good Luck!
> 
> --
> --
> happy daze
>   -tim O
> --
> http://www.python.org/mailman/listinfo/python-list

-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From moshez at math.huji.ac.il  Thu Aug 17 08:16:29 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 17 Aug 2000 09:16:29 +0300 (IDT)
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.22851.266303.28877@anthem.concentric.net>
Message-ID: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>

On Wed, 16 Aug 2000, Barry A. Warsaw wrote:

> 
> >>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:
> 
>     GE> Looks reasonably good. Not entirely sure I like the look
>     GE> of >> though -- a bit too reminiscent of C++.
> 
>     GE> How about
> 
>     GE>    print to myfile, x, y, z
> 
> Not bad at all.  Seems quite Pythonic to me.

Ummmmm......

print to myfile  (print a newline on myfile)
print to, myfile (print to+" "+myfile to stdout)

Perl has similar syntax, and I always found it horrible.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Thu Aug 17 08:30:23 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 08:30:23 +0200
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 17, 2000 at 09:16:29AM +0300
References: <14747.22851.266303.28877@anthem.concentric.net> <Pine.GSO.4.10.10008170915050.24783-100000@sundial>
Message-ID: <20000817083023.J376@xs4all.nl>

On Thu, Aug 17, 2000 at 09:16:29AM +0300, Moshe Zadka wrote:
> On Wed, 16 Aug 2000, Barry A. Warsaw wrote:

> > >>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

> >     GE> How about
> >     GE>    print to myfile, x, y, z

> > Not bad at all.  Seems quite Pythonic to me.

> print to myfile  (print a newline on myfile)
> print to, myfile (print to+" "+myfile to stdout)

> Perl has similar syntax, and I always found it horrible.

Agreed. It might be technically unambiguous, but I think it's too hard for a
*human* to parse this correctly. The '>>' version might seem more C++ish and
less pythonic, but it also stands out a lot more. The 'print from' statement
could easily (and more consistently, IMHO ;) be written as 'print <<' (not
that I like the 'print from' idea, though.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Thu Aug 17 08:41:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 02:41:29 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.28372.771170.783868@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>

[Greg Ewing]
> I've just looked at the 1.5.2 docs and realised that this is
> what it *says* it does! So it's right according to the docs,
> although it's obviously useless as a pathname manipulating
> function.

[Fred Drake]
>   I think we should now fix the docs; Skip's right about the desired
> functionality.

Oddly enough, I don't:  commonprefix worked exactly as documented for at
least 6 years and 5 months (which is when CVS shows Guido checking in
ntpath.py with the character-based functionality), and got out of synch with
the docs about 5 weeks ago when Skip changed to this other algorithm.  Since
the docs *did* match the code, there's no reason to believe the original
author was confused, and no reason to believe users aren't relying on it
(they've had over 6 years to gripe <wink>).

I think it's wrong to change what released code or docs do or say in
non-trivial ways when they weren't ever in conflict.  We have no idea who
may be relying on the old behavior!  Bitch all you like about MarkH's test
case, it used to work, it doesn't now, and that sucks for the user.

I appreciate that some other behavior may be more useful more often, but if
you can ever agree on what that is across platforms, it should be spelled
via a new function name ("commonpathprefix" comes to mind), or optional flag
(defaulting to "old behavior") on commonprefix (yuck!).  BTW, the presence
or absence of a trailing path separator makes a *big* difference to many
cmds on Windows, and you can't tell me nobody isn't currently doing e.g.

    commonprefix(["blah.o", "blah", "blah.cpp"])

on Unix either.
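
Historical footnote in code form: the character-based behaviour defended
above is in fact what survived, and the path-aware variant later arrived
under a new name, much as suggested here -- os.path.commonpath was added in
Python 3.4 (posixpath is used below so the result is the same on any
platform):

```python
import posixpath

# Character-based: the longest common *string* prefix, paths or not.
assert posixpath.commonprefix(["blah.o", "blah", "blah.cpp"]) == "blah"
assert posixpath.commonprefix(["/home/swen", "/home/swanson"]) == "/home/sw"

# Path-aware: the longest common *path* prefix (Python 3.4+).
assert posixpath.commonpath(["/home/swen", "/home/swanson"]) == "/home"
```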





From thomas at xs4all.net  Thu Aug 17 08:55:41 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 08:55:41 +0200
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000816165542.D29260@ActiveState.com>; from trentm@ActiveState.com on Wed, Aug 16, 2000 at 04:55:42PM -0700
References: <20000816165542.D29260@ActiveState.com>
Message-ID: <20000817085541.K376@xs4all.nl>

On Wed, Aug 16, 2000 at 04:55:42PM -0700, Trent Mick wrote:

> I am currently trying to port Python to Monterey (64-bit AIX) and I need
> to add a couple of Monterey specific options to CFLAGS and LDFLAGS (or to
> whatever appropriate variables for all 'cc' and 'ld' invocations) but it
> is not obvious *at all* how to do that in configure.in. Can anybody help me
> on that?

You'll have to write a shell 'case' for AIX Monterey, checking to make sure
it is monterey, and setting LDFLAGS accordingly. If you look around in
configure.in, you'll see a few other 'special cases', all to tune the
way the compiler is called. Depending on what you need to do to detect
monterey, you could fit it in one of those. Just search for 'Linux' or
'bsdos' to find a couple of those cases.

> Another issue that I am having. This is how the python executable is linked
> on Linux with gcc:

> gcc  -Xlinker -export-dynamic python.o ../libpython2.0.a -lpthread -ldl  -lutil -lm  -o python

> It, of course, works fine, but shouldn't the proper (read "portable")
> invocation to include the python2.0 library be

> gcc  -Xlinker -export-dynamic python.o -L.. -lpython2.0 -lpthread -ldl  -lutil -lm  -o python

> That invocation form (i.e. with the '-L.. -lpython2.0') works on Linux, and
> is *required* on Monterey. Does this problem not show up with other Unix
> compilers?  My hunch is that simply listing library (*.a) arguments on the gcc
> command line is a GNU gcc/ld shortcut to the more portable usage of -L and
> -l.  Any opinions?  I would either like to change the form to the latter or
> I'll have to special-case the invocation for Monterey.  Any opinions on which
> is worse?

Well, as far as I know, '-L.. -lpython2.0' does something *different* than
just '../libpython2.0.a' ! When supplying the static library on the command
line, the library is always statically linked. When using -L/-l, it is
usually dynamically linked, unless a dynamic library doesn't exist. We
currently don't have a libpython2.0.so, but a patch to add it is on Barry's
plate. Also, I'm not entirely sure about the search order in such a case:
gcc's docs seem to suggest that the systemwide library directories are
searched before the -L directories. I'm not sure on that, though.

Also, listing the library on the command line is not a gcc shortcut, but
other people already said that :) I'd be surprised if AIX removed it (but not
especially so; my girlfriend works with AIX machines a lot, and she already
showed me some surprising things ;) but perhaps there is another workaround?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gstein at lyra.org  Thu Aug 17 09:01:22 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 00:01:22 -0700
Subject: [Python-Dev] os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Thu, Aug 17, 2000 at 01:32:25PM +1000
References: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>
Message-ID: <20000817000122.L17689@lyra.org>

>>> os.path.split('/foo/bar/')
('/foo/bar', '')
>>> 

Jamming a trailing slash on the end is a bit wonky. I'm with Skip on saying
that the slash should probably *not* be appended. It gives funny behavior
with the split. Users should use .join() to combine the result with
something else.
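
To make the funny behavior concrete (plain posixpath, so it acts the same on
any platform):

```python
import posixpath

# A trailing slash makes split() return an empty tail...
assert posixpath.split("/foo/bar/") == ("/foo/bar", "")

# ...while without it the last component comes back as expected.
assert posixpath.split("/foo/bar") == ("/foo", "bar")

# join() is the right way to re-attach components: it inserts
# exactly one separator.
assert posixpath.join("/foo", "bar") == "/foo/bar"
```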

The removal of a prefix is an interesting issue. No opinions there.

Cheers,
-g

On Thu, Aug 17, 2000 at 01:32:25PM +1000, Mark Hammond wrote:
> Hi,
> 	I believe that Skip recently made a patch to os.path.commonprefix to only
> return the portion of the common prefix that corresponds to a directory.
> 
> I have just discovered some code breakage from this change.  On 1.5.2, the
> behaviour was:
> 
> >>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
> '../foo/'
> 
> While since the change we have:
> '../foo'
> 
> Note that the trailing slash has been dropped.
> 
> The code this broke did similar to:
> 
> prefix = os.path.commonprefix(files)
> for file in files:
>   tail_portion = file[len(prefix):]
> 
> In 1.6, the "tail_portion" result looks like an absolute path "/bar" and
> "/spam", respectively.  The intent was obviously to get relative path names
> back ("bar" and "spam").
> 
> The code that broke is not mine, so you can safely be horrified at how
> broken it is :-)  The point, however, is that code like this does exist out
> there.
> 
> I'm obviously going to change the code that broke, and don't have time to
> look into the posixpath.py code - but is this level of possible breakage
> acceptable?
> 
> Thanks,
> 
> Mark.
> 
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/



From thomas at xs4all.net  Thu Aug 17 09:09:42 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 09:09:42 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>; from greg@cosc.canterbury.ac.nz on Thu, Aug 17, 2000 at 04:28:19PM +1200
References: <14747.25702.435148.549678@beluga.mojam.com> <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>
Message-ID: <20000817090942.L376@xs4all.nl>

On Thu, Aug 17, 2000 at 04:28:19PM +1200, Greg Ewing wrote:

> given platform -- e.g Unix doesn't care whether there's a
> trailing slash on a pathname.

Bzzzt. This is unfortunately not true. Observe:

daemon2:~/python > mkdir perl
daemon2:~/python > rm perl/
rm: perl/: is a directory
daemon2:~/python > rmdir perl/
rmdir: perl/: Is a directory
daemon2:~/python > rm -rf perl/
rm: perl/: Is a directory
daemon2:~/python > su
# rmdir perl/
rmdir: perl/: Is a directory
# rm -rf perl/
rm: perl/: Is a directory
# ^D
daemon2:~/python > rmdir perl
daemon2:~/python >

Note that the trailing slash is added by all tab-completing shells that I
know. And the problem *really* is that trailing slash, I shit you not.
Needless to say, every one of us ran into this at one time or another, and
spent an hour figuring out *why* the rmdir wouldn't remove a directory.

Consequently, I'm all for removing trailing slashes, but not enough to break
existing code. I wonder how much breakage there really is, though.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Thu Aug 17 09:49:33 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 03:49:33 -0400
Subject: [Python-Dev] Pending patches for 2.0
In-Reply-To: <20000816225552.H376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEBHAAA.tim_one@email.msn.com>

[Thomas Wouters, needs a well-earned vacation!]
> ...
> and if those decisions are made before, say, August 30th, I think
> I can move them into the CVS tree before leaving and just shove
> the responsibility for them on the entire dev team ;)
>
> This isn't a push to get them accepted ! Just a warning that if
> they aren't accepted before then, someone will have to take over
> the breastfeeding ;)

Guido will be back from his travels next week, and PythonLabs will have an
intense 2.0 release meeting on Tuesday or Wednesday (depending also on
exactly when Jeremy gets back).  I expect all go/nogo decisions will be made
then.  Part of deciding on a patch that isn't fully complete is deciding
whether others can take up the slack in time.  That's just normal release
business as usual -- nothing to worry about.  Well, not for *you*, anyway.

BTW, there's a trick few have learned:  get the doc patches in *first*, and
then we look like idiots if we release without code to implement it.  And
since this trick has hardly ever been tried, I bet you can sneak it by Fred
Drake for at least a year before anyone at PythonLabs catches on to it <0.9
wink>.

my-mouth-is-sealed-ly y'rs  - tim





From tim_one at email.msn.com  Thu Aug 17 09:29:05 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 03:29:05 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>

[Moshe Zadka]
> Ummmmm......
>
> print to myfile  (print a newline on myfile)
> print to, myfile (print to+" "+myfile to stdout)

Like I (and Greg too) clearly said all along, -1 on changing ">>" to "to"!





From mal at lemburg.com  Thu Aug 17 09:31:55 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 17 Aug 2000 09:31:55 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
		<14747.25702.435148.549678@beluga.mojam.com>
		<14747.26112.609255.338170@cj42289-a.reston1.va.home.com> <14747.27927.170223.873328@beluga.mojam.com>
Message-ID: <399B94EB.E95260EE@lemburg.com>

Skip Montanaro wrote:
> 
>     Fred> I'd guess that the path separator should only be appended if it's
>     Fred> part of the passed-in strings; that would make it a legitimate
>     Fred> part of the prefix.  If it isn't present for all of them, it
>     Fred> shouldn't be part of the result:
> 
>     >>> os.path.commonprefix(["foo", "foo/bar"])
>     'foo'
> 
> Hmmm... I think you're looking at it character-by-character again.  I see
> three possibilities:
> 
>     * it's invalid to have a path with a trailing separator
> 
>     * it's okay to have a path with a trailing separator
> 
>     * it's required to have a path with a trailing separator
> 
> In the first and third cases, you have no choice.  In the second you have to
> decide which would be best.
> 
> On Unix my preference would be to not include the trailing "/" for aesthetic
> reasons.

Wait, Skip :-) By dropping the trailing slash from the path
you are removing important information from the path information.

This information can only be regained by performing an .isdir()
check, and then only if the directory exists somewhere. If it
doesn't, you are losing valid information here.

Another aspect: 
Since posixpath is also used by URL handling code,
I would suspect that this results in some nasty problems too.
You'd have to actually ask the web server to give you back the
information you already had.

Note that most web-servers send back a redirect in case a
directory is queried without ending slash in the URL. They
do this for exactly the reason stated above: to add the
.isdir() information to the path itself.

Conclusion:
Please don't remove the slash -- at least not in posixpath.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Thu Aug 17 11:54:16 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 11:54:16 +0200
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 17, 2000 at 03:29:05AM -0400
References: <Pine.GSO.4.10.10008170915050.24783-100000@sundial> <LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>
Message-ID: <20000817115416.M376@xs4all.nl>

On Thu, Aug 17, 2000 at 03:29:05AM -0400, Tim Peters wrote:
> [Moshe Zadka]
> > Ummmmm......
> >
> > print to myfile  (print a newline on myfile)
> > print to, myfile (print to+" "+myfile to stdout)
> 
> Like I (and Greg too) clearly said all along, -1 on changing ">>" to "to"!

Really ? Hmmmm...

[Greg Ewing]
> Looks reasonably good. Not entirely sure I like the look
> of >> though -- a bit too reminiscent of C++.
>
> How about
>
>    print to myfile, x, y, z

[Barry Warsaw]
> Not bad at all.  Seems quite Pythonic to me.

[Tim Peters]
> Me too!  +1 on changing ">>" to "to" here.  Then we can introduce
[print from etc]

I guess I missed the sarcasm ;-P

Gullib-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Thu Aug 17 13:58:26 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Thu, 17 Aug 2000 07:58:26 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>
References: <14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <1245608987-154490918@hypernet.com>

Greg Ewing wrote:
[snip]
> While we're on the subject, another thing that's missing is
> a platform-independent way of dealing with the notion of
> "up one directory".

os.chdir(os.pardir)

- Gordon



From paul at prescod.net  Thu Aug 17 14:56:23 2000
From: paul at prescod.net (Paul Prescod)
Date: Thu, 17 Aug 2000 08:56:23 -0400
Subject: [Python-Dev] Winreg update
References: <3993FEC7.4E38B4F1@prescod.net> <20000815195751.A16100@ludwig.cnri.reston.va.us>
Message-ID: <399BE0F7.F00765DA@prescod.net>

Greg Ward wrote:
> 
> ...
> I'm all in favour of high-level interfaces, and I'm also in favour of
> speaking the local tongue -- when in Windows, follow the Windows API (at
> least for features that are totally Windows-specific, like the
> registry).

At this point, the question is not whether to follow the Microsoft API
or not (any more). It is whether to follow the early 1990s Microsoft API
for C programmers or the new Microsoft API for Visual Basic, C#, Eiffel
and Javascript programmers.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html





From paul at prescod.net  Thu Aug 17 14:57:08 2000
From: paul at prescod.net (Paul Prescod)
Date: Thu, 17 Aug 2000 08:57:08 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com>
Message-ID: <399BE124.9920B0B6@prescod.net>

Tim Peters wrote:
> 
> ...
> > If you want a more efficient way to do it, it's available (just not as
> > syntactically beautiful -- same as range/xrange).
> 
> Which way would that be?  I don't know of one, "efficient" either in the
> sense of runtime speed or of directness of expression.  

One of the reasons for adding range literals was for efficiency.

So

for x in [:len(seq)]:
  ...

should be efficient.
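
(As a hedged historical aside: the range-literal spelling above never made
it into the language; the lockstep/index problem was eventually addressed by
built-ins instead -- zip() in 2.0 and enumerate() in 2.3 -- so the efficient
spelling today looks like this:)

```python
seq = ["a", "b", "c"]

# Index-and-element lockstep without materializing a list of indices.
pairs = [(i, x) for i, x in enumerate(seq)]
assert pairs == [(0, "a"), (1, "b"), (2, "c")]

# Two-sequence lockstep.
assert list(zip(seq, [1, 2, 3])) == [("a", 1), ("b", 2), ("c", 3)]
```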

> The "loop index" isn't an accident of the way Python happens to implement
> "for" today, it's the very basis of Python's thing.__getitem__(i)/IndexError
> iteration protocol.  Exposing it is natural, because *it* is natural.

I don't think of iterators as indexing in terms of numbers. Otherwise I
could do this:

>>> a={0:"zero",1:"one",2:"two",3:"three"}
>>> for i in a:
...     print i
...

So from a Python user's point of view, for-looping has nothing to do
with integers. From a Python class/module creator's point of view it
does have to do with integers. I would be neither surprised nor
disappointed if that changed one day.

> Sorry, but seq.keys() just makes me squirm.  It's a little step down the
> Lispish path of making everything look the same.  I don't want to see
> float.write() either <wink>.

You'll have to explain your squeamishness better if you expect us to
channel you in the future. Why do I use the same syntax for indexing
sequences and dictionaries and for deleting sequence and dictionary
items? Is the rule: "syntax can work across types but method names
should never be shared"?

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html





From paul at prescod.net  Thu Aug 17 14:58:00 2000
From: paul at prescod.net (Paul Prescod)
Date: Thu, 17 Aug 2000 08:58:00 -0400
Subject: [Python-Dev] Winreg update
References: <3993FEC7.4E38B4F1@prescod.net> <045901c00414$27a67010$8119fea9@neil>
Message-ID: <399BE158.C2216D34@prescod.net>

Neil Hodgson wrote:
> 
> ...
> 
>    The registry is just not important enough to have this much attention or
> work.

I remain unreconstructed. My logic is as follows:

 * The registry is important enough to be in the standard library ...
unlike, let's say, functions to operate the Remote Access Service.

 * The registry is important enough that the interface to it is
documented (partially)

 * Therefore, the registry is important enough to have a decent API with
complete documentation.

You know the old adage: "anything worth doing..."

If the registry is just supposed to expose one or two functions for
distutils then it could expose one or two functions for distutils, be
called _distreg and be undocumented and formally unsupported.

>    The Microsoft.Win32.Registry* API appears to be a hacky legacy API to me.
> It's there for compatibility during the transition to the
> System.Configuration API. Read the blurb for ConfigManager to understand the
> features of System.Configuration. It's all based on XML files. What a
> surprise.

Nobody on Windows is going to migrate to XML configuration files this
year or next year. The change-over is going to be too difficult.
Predicting Microsoft configuration ideology in 2002 is highly risky. If
we need to do the registry today then we can do the registry right
today.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html





From skip at mojam.com  Thu Aug 17 14:50:28 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 17 Aug 2000 07:50:28 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>
References: <14747.28372.771170.783868@cj42289-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>
Message-ID: <14747.57236.264324.165612@beluga.mojam.com>

    Tim> Oddly enough, I don't: commonprefix worked exactly as documented
    Tim> for at least 6 years and 5 months (which is when CVS shows Guido
    Tim> checking in ntpath.py with the character-based functionality), and
    Tim> got out of synch with the docs about 5 weeks ago when Skip changed
    Tim> to this other algorithm.  Since the docs *did* match the code,
    Tim> there's no reason to believe the original author was confused, and
    Tim> no reason to believe users aren't relying on it (they've had over 6
    Tim> years to gripe <wink>).

I didn't realize that a bug going unnoticed for a long time was any
reason not to fix it.  Guido was also involved in the repair of the bug, and
had no objections to the fix I eventually arrived at.  Also, when I
announced my original patch the subject of the message was

    patch for os.path.commonprefix (changes semantics - pls review)

In the body of the message I said

    Since my patch changes the semantics of the function, I submitted a
    patch via SF that implements what I believe to be the correct behavior
    instead of just checking in the change, so people could comment on it.

I don't think I could have done much more to alert people to the change than
I did.  I didn't expect the patch to go into 1.6.  (Did it?  It shouldn't
have.)  I see nothing wrong with correcting the semantics of a function that
is broken when we increment the major version number of the code.

    Tim> I appreciate that some other behavior may be more useful more
    Tim> often, but if you can ever agree on what that is across platforms,
    Tim> it should be spelled via a new function name ("commonpathprefix"
    Tim> comes to mind), or optional flag (defaulting to "old behavior") on
    Tim> commonprefix (yuck!).  BTW, the presence or absence of a trailing
    Tim> path separator makes a *big* difference to many cmds on Windows,
    Tim> and you can't tell me nobody isn't currently doing e.g.

    Tim>     commonprefix(["blah.o", "blah", "blah.cpp"])

    Tim> on Unix either.

Fine.  Let's preserve the broken implementation and not break any broken
usage.  Switch it back then.

Taking a look at the copious documentation for posixpath.commonprefix:

    Return the longest string that is a prefix of all strings in
    list.  If list is empty, return the empty string ''.

I see no mention of anything in this short bit of documentation taken
completely out of context that suggests that posixpath.commonprefix has
anything to do with paths, so maybe we should move it to some other module
that has no directory path implications.  That way nobody can make the
mistake of trying to assume it operates on paths.  Perhaps string?  Oh,
that's deprecated.  Maybe we should undeprecate it or make commonprefix a
string method.  Maybe I'll just reopen the patch and assign it to Barry
since he's the string methods guru.

On a more realistic note, perhaps I should submit a patch that corrects the
documentation.

Skip



From skip at mojam.com  Thu Aug 17 14:19:46 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 17 Aug 2000 07:19:46 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170453.QAA15394@s454.cosc.canterbury.ac.nz>
References: <14747.27927.170223.873328@beluga.mojam.com>
	<200008170453.QAA15394@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.55394.783997.167234@beluga.mojam.com>

    Greg> To avoid duplication of effort, how about a single function that
    Greg> does both:

    >>> files = ["/home/swen", "/home/swanson", "/home/jules"]
    >>> os.path.factorize(files)
    ("/home", ["swen", "swanson", "jules"])

Since we already have os.path.commonprefix and it's not going away, it
seemed to me that just adding a complementary function to return the
suffixes made sense.  Also, there's nothing in the name factorize that
suggests that it would split the paths at the common prefix.

It could easily be written in terms of the two:

    def factorize(files):
        pfx = os.path.commonprefix(files)
        suffixes = os.path.suffixes(pfx, files)
        return (pfx, suffixes)
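os.path.suffixes does not exist -- it's the helper being proposed here -- but
a minimal sketch of what it might look like:

```python
def suffixes(pfx, files):
    # Hypothetical helper (the name is Skip's proposal, not an existing
    # os.path function): strip the common prefix from each path.
    return [f[len(pfx):] for f in files]

files = ["/home/swen", "/home/swanson", "/home/jules"]
print(suffixes("/home/", files))   # -> ['swen', 'swanson', 'jules']
```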

Skip




From bwarsaw at beopen.com  Thu Aug 17 16:35:03 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 10:35:03 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
References: <14747.22851.266303.28877@anthem.concentric.net>
	<Pine.GSO.4.10.10008170915050.24783-100000@sundial>
	<20000817083023.J376@xs4all.nl>
Message-ID: <14747.63511.725610.771162@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> Agreed. It might be technically unambiguous, but I think it's
    TW> too hard for a *human* to parse this correctly. The '>>'
    TW> version might seem more C++ish and less pythonic, but it also
    TW> stands out a lot more. The 'print from' statement could easily
    TW> (and more consistently, IMHO ;) be written as 'print <<' (not
    TW> that I like the 'print from' idea, though.)

I also played around with trying to get the grammar and parser to
recognize 'print to' and variants, and it seemed difficult and
complicated.  So I'm back to -0 on 'print to' and +1 on 'print >>'.

-Barry



From bwarsaw at beopen.com  Thu Aug 17 16:43:02 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 10:43:02 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
References: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>
	<LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>
	<20000817115416.M376@xs4all.nl>
Message-ID: <14747.63990.296049.566791@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> Really ? Hmmmm...

    TW> [Tim Peters]
    >> Me too!  +1 on changing ">>" to "to" here.  Then we can
    >> introduce

    TW> I guessed I missed the sarcasm ;-P

No, Tim just forgot to twist the blue knob while he was pressing the
shiny pedal on Guido's time machine.  I've made the same mistake
myself before -- the VRTM can be as inscrutable as the BDFL himself at
times.  Sadly, changing those opinions now would cause an irreparable
time paradox, the outcome of which would force Python to be called
Bacon and require you to type `albatross' instead of colons to start
every block.

good-thing-tim-had-the-nose-plugs-in-or-Python-would-only-work-on-
19-bit-architectures-ly y'rs,
-Barry



From Vladimir.Marangozov at inrialpes.fr  Thu Aug 17 17:09:44 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 17:09:44 +0200 (CEST)
Subject: [Python-Dev] PyErr_NoMemory
Message-ID: <200008171509.RAA20891@python.inrialpes.fr>

The current PyErr_NoMemory() function reads:

PyObject *
PyErr_NoMemory(void)
{
        /* raise the pre-allocated instance if it still exists */
        if (PyExc_MemoryErrorInst)
                PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
        else
                /* this will probably fail since there's no memory and hee,
                   hee, we have to instantiate this class
                */
                PyErr_SetNone(PyExc_MemoryError);

        return NULL;
}

thus overriding any previous exception unconditionally. This is a
problem when the current exception already *is* PyExc_MemoryError,
notably when we have a chain (cascade) of memory errors: the original
memory error, and possibly its error message, is lost.

I suggest making this code look like:

PyObject *
PyErr_NoMemory(void)
{
	if (PyErr_ExceptionMatches(PyExc_MemoryError))
		/* already current */
		return NULL;

        /* raise the pre-allocated instance if it still exists */
        if (PyExc_MemoryErrorInst)
                PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
...


If nobody sees a problem with this, I'm very tempted to check it in.
Any objections?
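The intent of the guard, sketched at Python level (a hypothetical helper for
illustration only -- the real fix is the C code above): only install a new
MemoryError when one isn't already current, so the original message survives
a cascade.

```python
def no_memory(current):
    # 'current' plays the role of the currently-set exception: keep it
    # if it is already a MemoryError (preserving its message), otherwise
    # raise a fresh one -- mirroring the proposed C guard.
    if isinstance(current, MemoryError):
        return current
    return MemoryError("out of memory")

first = MemoryError("while allocating frame")
print(no_memory(first) is first)   # True: the original error is kept
print(no_memory(None).args)        # ('out of memory',)
```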

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gmcm at hypernet.com  Thu Aug 17 17:22:27 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Thu, 17 Aug 2000 11:22:27 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.63990.296049.566791@anthem.concentric.net>
Message-ID: <1245596748-155226852@hypernet.com>

> No, Tim just forgot to twist the blue knob while he was pressing
> the shiny pedal on Guido's time machine.  I've made the same
> mistake myself before -- the VRTM can be as inscrutable as the
> BDFL himself at times.  Sadly, changing those opinions now would
> cause an irreparable time paradox, the outcome of which would
> force Python to be called Bacon and require you to type
> `albatross' instead of colons to start every block.

That accounts for the strange python.ba (mtime 1/1/70) I 
stumbled across this morning:

#!/usr/bin/env bacon
# released to the public domain at least one Tim Peters
import sys, os, string, tempfile
txt = string.replace(open(sys.argv[1]).read(), ':', ' albatross')
fnm = tempfile.mktemp() + '.ba'
open(fnm, 'w').write(txt)
os.system('bacon %s %s' % (fnm, string.join(sys.argv[2:])))



- Gordon



From nowonder at nowonder.de  Thu Aug 17 21:30:13 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 17 Aug 2000 19:30:13 +0000
Subject: [Python-Dev] PyErr_NoMemory
References: <200008171509.RAA20891@python.inrialpes.fr>
Message-ID: <399C3D45.95ED79D8@nowonder.de>

Vladimir Marangozov wrote:
> 
> If nobody sees a problem with this, I'm very tempted to check it in.
> Any objections?

This change makes sense to me. I can't see any harm in checking
it in.

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From tim_one at email.msn.com  Thu Aug 17 19:58:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 13:58:25 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.63990.296049.566791@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFDHAAA.tim_one@email.msn.com>

> >>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:
>
>     TW> Really ? Hmmmm...
>
>     TW> [Tim Peters]
>     >> Me too!  +1 on changing ">>" to "to" here.  Then we can
>     >> introduce
>     TW> I guessed I missed the sarcasm ;-P

[Barry A. Warsaw]
> No, Tim just forgot to twist the blue knob while he was pressing the
> shiny pedal on Guido's time machine.  I've made the same mistake
> myself before -- the VRTM can be as inscrutable as the BDFL himself at
> times.  Sadly, changing those opinions now would cause an irreparable
> time paradox, the outcome of which would force Python to be called
> Bacon and require you to type `albatross' instead of colons to start
> every block.
>
> good-thing-tim-had-the-nose-plugs-in-or-Python-would-only-work-on-
> 19-bit-architectures-ly y'rs,

I have no idea what this is about.  I see an old msg from Barry voting "-1"
on changing ">>" to "to", but don't believe any such suggestion was ever
made.  And I'm sure that had such a suggestion ever been made, it would have
been voted down at once by everyone.

OTOH, there is *some* evidence that an amateur went mucking with the time
machine! No 19-bit architectures, but somewhere in a reality distortion
field around Vancouver, it appears that AIX actually survived long enough to
see the 64-bit world, and that some yahoo vendor decided to make a version
of C where sizeof(void*) > sizeof(long).  There's no way either of those
could have happened naturally.
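For the record, the pointer-vs-long relationship is easy to check from Python
itself (a quick sketch using the struct module):

```python
import struct

ptr_size = struct.calcsize("P")    # sizeof(void *)
long_size = struct.calcsize("l")   # sizeof(long)
# e.g. 8 8 on the usual LP64 Unixes; 8 4 on an LLP64 platform, where
# sizeof(void*) > sizeof(long) -- the case Tim is marveling at.
print(ptr_size, long_size)
```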

even-worse-i-woke-up-today-*old*!-ly y'rs  - tim





From trentm at ActiveState.com  Thu Aug 17 20:21:22 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 11:21:22 -0700
Subject: screwin' with the time machine in Canada, eh (was: Re: [Python-Dev] PEP 214, extended print statement)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEFDHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 17, 2000 at 01:58:25PM -0400
References: <14747.63990.296049.566791@anthem.concentric.net> <LNBBLJKPBEHFEDALKOLCGEFDHAAA.tim_one@email.msn.com>
Message-ID: <20000817112122.A27284@ActiveState.com>

On Thu, Aug 17, 2000 at 01:58:25PM -0400, Tim Peters wrote:
> 
> OTOH, there is *some* evidence that an amateur went mucking with the time
> machine! No 19-bit architectures, but somewhere in a reality distortion
> field around Vancouver, it appears that AIX actually survived long enough to
> see the 64-bit world, and that some yahoo vendor decided to make a version
> of C where sizeof(void*) > sizeof(long).  There's no way either of those
> could have happened naturally.
> 

And though this place is supposed to be one of the more successful pot havens on
the planet I just can't seem to compete with the stuff those "vendors" in
Austin (AIX) and Seattle must have been smokin'.

<puff>-<inhale>-if-i-wasn't-seeing-flying-bunnies-i-would-swear-that-compiler
is-from-SCO-ly-y'rs - trent


> even-worse-i-woke-up-today-*old*!-ly y'rs  - tim

Come on up for a visit and we'll make you feel young again. :)


Trent

-- 
Trent Mick



From akuchlin at mems-exchange.org  Thu Aug 17 22:40:35 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 17 Aug 2000 16:40:35 -0400
Subject: [Python-Dev] Cookie.py module, and Web PEP
Message-ID: <E13PWSp-0006w9-00@kronos.cnri.reston.va.us>

Tim O'Malley finally mailed me the correct URL for the latest version
of the cookie module: http://www.timo-tasi.org/python/Cookie.py 

*However*...  I think the Web support in Python needs more work
generally, and certainly more than can be done for 2.0.  One of my
plans for the not-too-distant future is to start writing a Python/CGI
guide, and the process of writing it is likely to shake out more
ugliness that should be fixed.

I'd like to propose a 'Web Library Enhancement PEP', and offer to
champion and write it.  Its goal would be to identify missing features
and specify them, and list other changes to improve Python as a
Web/CGI language.  Possibly the PEP would also drop
backward-compatibility cruft.

(Times like this I wish the Web-SIG hadn't been killed...)

--amk



From bwarsaw at beopen.com  Thu Aug 17 23:05:10 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 17:05:10 -0400 (EDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us>
Message-ID: <14748.21382.305979.784637@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin at mems-exchange.org> writes:

    AK> Tim O'Malley finally mailed me the correct URL for the latest
    AK> version of the cookie module:
    AK> http://www.timo-tasi.org/python/Cookie.py

    AK> *However*...  I think the Web support in Python needs more
    AK> work generally, and certainly more than can be done for
    AK> 2.0.

I agree, but I still think Cookie.py should go in the stdlib for 2.0.

-Barry



From akuchlin at mems-exchange.org  Thu Aug 17 23:13:52 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 17 Aug 2000 17:13:52 -0400
Subject: [Python-Dev] Cookie.py module, and Web PEP
In-Reply-To: <14748.21382.305979.784637@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 05:05:10PM -0400
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us> <14748.21382.305979.784637@anthem.concentric.net>
Message-ID: <20000817171352.B26730@kronos.cnri.reston.va.us>

On Thu, Aug 17, 2000 at 05:05:10PM -0400, Barry A. Warsaw wrote:
>I agree, but I still think Cookie.py should go in the stdlib for 2.0.

Fine.  Shall I just add it as-is?  (Opinion was generally positive as
I recall, unless the BDFL wants to exercise his veto for some reason.)

--amk



From thomas at xs4all.net  Thu Aug 17 23:19:42 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 23:19:42 +0200
Subject: [Python-Dev] 'import as'
Message-ID: <20000817231942.O376@xs4all.nl>

I have two remaining issues regarding the 'import as' statement, which I'm
just about ready to commit. The first one is documentation: I have
documentation patches, to the ref and the libdis sections, but I can't
really test them :P I *think* they are fine, though, and they aren't really
complicated. Should I upload a patch for them, so Fred or someone else can
look at them, or just check them in ?

The other issue is the change in semantics for 'from-import'. Currently,
'IMPORT_FROM' is a single operation that retrieves a name (possibly '*')
from the module object at TOS, and stores it directly in the local
namespace. This is contrary to 'import <module>', which pushes it onto the
stack and uses a normal STORE operation to store it. It's also necessary for
'from ... import *', which can load any number of objects.

After the patch, 'IMPORT_FROM' is only used to load normal names, and a new
opcode, 'IMPORT_STAR' (with no argument) is used for 'from module import *'.
'IMPORT_FROM' pushes the result on the stack, instead of modifying the local
namespace directly, so that it's possible to store it to a different name.
This also means that a 'global' statement now has effect on objects
'imported from' a module, *except* those imported by '*'.

I don't think that's a big issue. 'global' is not that heavily used, and old
code mixing 'import from' and 'global' statements on the same identifier
would not have been doing what the programmer intended. However, if it *is*
a big issue, I can revert to an older version of the patch, that added a new
bytecode to handle 'from x import y as z', and leave the bytecode for the
currently valid cases unchanged. That would mean that only the '... as z'
would be affected by 'global' statements.
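The post-patch semantics can be sketched like this (it is also how later
Pythons ended up behaving: since IMPORT_FROM pushes the value and a plain
STORE_GLOBAL stores it, 'global' works with from-import):

```python
def grab_version():
    global version
    from sys import version   # with the patch, this honours the 'global'

grab_version()
import sys
print(version == sys.version)   # -> True: bound at module level, not locally
```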

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From trentm at ActiveState.com  Thu Aug 17 23:22:07 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 14:22:07 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 16, 2000 at 11:34:12PM -0400
References: <20000816172425.A32338@ActiveState.com> <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>
Message-ID: <20000817142207.A5592@ActiveState.com>

On Wed, Aug 16, 2000 at 11:34:12PM -0400, Tim Peters wrote:
> [Trent Mick]
> > I am porting Python to Monterey (64-bit AIX) and have a small
> > (hopefully) question about POSIX threads.
> 
> POSIX threads. "small question".  HAHAHAHAHAHA.  Thanks, that felt good
> <wink>.

Happy to provide you with cheer. <grumble>



> > Does the POSIX threads spec specify a C type or minimum size for
> > pthread_t?
> 
> or user-space arrays of structs.  So I think it's *safe* to assume it will
> always fit in an integral type large enough to hold a pointer, but not
> guaranteed.  Plain "long" certainly isn't safe in theory.

Not for pthread ports to Win64 anyway. But that is not my concern right now.
I'll let the pthreads-on-Windows fans worry about that when the time comes.


> > this up. On Linux (mine at least):
> >   /usr/include/bits/pthreadtypes.h:120:typedef unsigned long int
> > pthread_t;
> 
> And this is a 32- or 64-bit Linux?

That was 32-bit Linux. My 64-bit Linux box is down right now, I can tell
later if you really want to know.


> > WHAT IS UP WITH THAT return STATEMENT?
> >   return (long) *(long *) &threadid;
> 
<snip>
> 
> So, here's the scoop:
> 
<snip>

Thanks for trolling the cvs logs, Tim!

> 
> So one of two things can be done:
> 
> 1. Bite the bullet and do it correctly.  For example, maintain a static
>    dict mapping the native pthread_self() return value to Python ints,
>    and return the latter as Python's thread.get_ident() value.  Much
>    better would to implement a x-platform thread-local storage
>    abstraction, and use that to hold a Python-int ident value.
> 
> 2. Continue in the tradition already established <wink>, and #ifdef the
>    snot out of it for Monterey.
> 
> In favor of #2, the code is already so hosed that making it hosier won't be
> a significant relative increase in its inherent hosiness.
> 
> spoken-like-a-true-hoser-ly y'rs  - tim
> 

I'm all for being a hoser then. #ifdef's a-comin' down the pipe. One thing,
the only #define that I know I have a handle on for Monterey is '_LP64'. Do
you have an objection to that (seeing as it is kind of misleading)? I will
accompany it with an explicative comment of course.


take-off-you-hoser-ly y'rs - wannabe Bob & Doug fan

-- 
Trent Mick
TrentM at ActiveState.com



From fdrake at beopen.com  Thu Aug 17 23:35:14 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 17:35:14 -0400 (EDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817231942.O376@xs4all.nl>
References: <20000817231942.O376@xs4all.nl>
Message-ID: <14748.23186.372772.48426@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > really test them :P I *think* they are fine, though, and they aren't really
 > complicated. Should I upload a patch for them, so Fred or someone else can
 > look at them, or just check them in ?

  Just check them in; I'll catch problems before anyone else tries to
format the stuff at any rate.
  With regard to your semantics question, I think your proposed
solution is fine.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Thu Aug 17 23:38:21 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 23:38:21 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817231942.O376@xs4all.nl>; from thomas@xs4all.net on Thu, Aug 17, 2000 at 11:19:42PM +0200
References: <20000817231942.O376@xs4all.nl>
Message-ID: <20000817233821.P376@xs4all.nl>

On Thu, Aug 17, 2000 at 11:19:42PM +0200, Thomas Wouters wrote:

> This also means that a 'global' statement now has effect on objects
> 'imported from' a module, *except* those imported by '*'.

And while I was checking my documentation patches, I found this:

Names bound by \keyword{import} statements may not occur in
\keyword{global} statements in the same scope.
\stindex{global}

But there doesn't seem to be anything to prevent it ! On my RedHat supplied
Python 1.5.2:

>>> def test():
...     global sys
...     import sys
... 
>>> test()
>>> sys
<module 'sys' (built-in)>

And on a few weeks old CVS Python:

>>> def test():
...     global sys
...     import sys
...
>>> test()
>>> sys
<module 'sys' (built-in)>

Also, mixing 'global' and 'from-import' wasn't illegal, it was just
ineffective. (That is, it didn't make the variable 'global', but it didn't
raise an exception either!)

How about making 'from module import *' a special case in this regard, and
letting 'global' operate fine on normal 'import' and 'from-import'
statements? I can definitely see a use for it, anyway. Is this workable
(and relevant) for JPython / #Py ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From trentm at ActiveState.com  Thu Aug 17 23:41:04 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 14:41:04 -0700
Subject: [Python-Dev] [Fwd: segfault in sre on 64-bit plats]
In-Reply-To: <399B3D36.6921271@per.dem.csiro.au>; from m.favas@per.dem.csiro.au on Thu, Aug 17, 2000 at 09:17:42AM +0800
References: <399B3D36.6921271@per.dem.csiro.au>
Message-ID: <20000817144104.B7658@ActiveState.com>

On Thu, Aug 17, 2000 at 09:17:42AM +0800, Mark Favas wrote:
> [Trent]
> > This test on Win32 and Linux32 hits the recursion limit check of 10000 in
> > SRE_MATCH(). However, on Linux64 the segfault occurs at a recursion depth of
> > 7500. I don't want to just willy-nilly drop the recursion limit down to make
> > the problem go away.
> > 
> 
> Sorry for the delay - yes, I had these segfaults due to exceeding the
> stack size on Tru64 Unix (which, by default, is 2048 kbytes) before
> Fredrik introduced the recursion limit of 10000 in _sre.c. You'd expect
> a 64-bit OS to use a few more bytes of the stack when handling recursive
> calls, but your 7500 down from 10000 sounds a bit much - unless the

Actually, with pointers being twice the size, the stack will presumably get
consumed more quickly (right?), so all other things being equal the earlier
stack overflow is expected.
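Back-of-the-envelope, assuming the 8 MB default stack limit (measured below)
and that the crash is pure C-stack exhaustion:

```python
MB = 2 ** 20
stack = 8 * MB                  # default soft stack limit on both boxes

frame32 = stack / 10000         # ~839 bytes per SRE_MATCH frame, 32-bit
frame64 = stack / 7500          # ~1118 bytes per frame, 64-bit
print(round(frame64 / frame32, 2))  # -> 1.33: well under 2x, plausible
                                    # since only the pointers double in size
```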

> stack size limit you're using on Linux64 is smaller than that for
> Linux32 - what are they?

------------------- snip --------- snip ----------------------
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    struct rlimit lims;
    if (getrlimit(RLIMIT_STACK, &lims) != 0) {
        printf("error in getrlimit\n");
        exit(1);
    }
    printf("cur stack limit = %ld, max stack limit = %ld\n",
        (long)lims.rlim_cur, (long)lims.rlim_max);
    return 0;
}
------------------- snip --------- snip ----------------------

On Linux32:

    cur stack limit = 8388608, max stack limit = 2147483647

On Linux64:

    cur stack limit = 8388608, max stack limit = -1
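(For the curious, the same check can be done from Python itself via the
resource module -- a Unix-only sketch:)

```python
import resource

# Soft and hard limits for the process stack, in bytes; -1 here is
# RLIM_INFINITY, just as the C program prints once rlim_t is cast to long.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print(soft, hard)
```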


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From cgw at fnal.gov  Thu Aug 17 23:43:38 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 16:43:38 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <20000817125903.2C29E1D0F5@dinsdale.python.org>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
Message-ID: <14748.23690.632808.944375@buffalo.fnal.gov>

This has probably been noted by somebody else already - somehow a
config.h showed up in the Include directory when I did a cvs update
today.  I assume this is an error.  It certainly keeps Python from
building on my system!



From thomas at xs4all.net  Thu Aug 17 23:46:07 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 23:46:07 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817233821.P376@xs4all.nl>; from thomas@xs4all.net on Thu, Aug 17, 2000 at 11:38:21PM +0200
References: <20000817231942.O376@xs4all.nl> <20000817233821.P376@xs4all.nl>
Message-ID: <20000817234607.Q376@xs4all.nl>

On Thu, Aug 17, 2000 at 11:38:21PM +0200, Thomas Wouters wrote:
> On Thu, Aug 17, 2000 at 11:19:42PM +0200, Thomas Wouters wrote:
> 
> > This also means that a 'global' statement now has effect on objects
> > 'imported from' a module, *except* those imported by '*'.
> 
> And while I was checking my documentation patches, I found this:

> Names bound by \keyword{import} statements may not occur in
> \keyword{global} statements in the same scope.
> \stindex{global}

And about five lines lower, I saw this:

(The current implementation does not enforce the latter two
restrictions, but programs should not abuse this freedom, as future
implementations may enforce them or silently change the meaning of the
program.)

My only excuse is that all that TeX stuff confuzzles my eyes ;) In any case,
my point still stands: 1) can we change this behaviour even if it's
documented to be impossible, and 2) should it be documented differently,
allowing mixing of 'global' and 'import' ?

Multip-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Thu Aug 17 23:52:28 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 17:52:28 -0400 (EDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.23690.632808.944375@buffalo.fnal.gov>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
Message-ID: <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>

Charles G Waldman writes:
 > This has probably been noted by somebody else already - somehow a
 > config.h showed up in the Include directory when I did a cvs update
 > today.  I assume this is an error.  It certainly keeps Python from
 > building on my system!

  This doesn't appear to be in CVS.  If you delete the file and then do
a CVS update, does it reappear?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From cgw at fnal.gov  Thu Aug 17 23:56:55 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 16:56:55 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
	<14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
Message-ID: <14748.24487.903334.663705@buffalo.fnal.gov>


And it's not that sticky date, either (no idea how that got set!)

buffalo:Include$  cvs update -A
cvs server: Updating .
U config.h

buffalo:Include$ cvs status config.h 
===================================================================
File: config.h          Status: Up-to-date

   Working revision:    2.1
   Repository revision: 2.1     /cvsroot/python/python/dist/src/Include/Attic/config.h,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)




From cgw at fnal.gov  Thu Aug 17 23:58:40 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 16:58:40 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
	<14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
Message-ID: <14748.24592.448009.515511@buffalo.fnal.gov>


Fred L. Drake, Jr. writes:
 > 
 >   This doesn't appear to be in CVS.  If you delete the file and then do
 > a CVS update, does it reappear?
 > 

Yes.

buffalo:src$ pwd
/usr/local/src/Python-CVS/python/dist/src

buffalo:src$ cd Include/

buffalo:Include$ cvs update
cvs server: Updating .
U config.h

buffalo:Include$ cvs status config.h
===================================================================
File: config.h          Status: Up-to-date

   Working revision:    2.1
   Repository revision: 2.1     /cvsroot/python/python/dist/src/Include/Attic/config.h,v
   Sticky Tag:          (none)
   Sticky Date:         2000.08.17.05.00.00
   Sticky Options:      (none)




From fdrake at beopen.com  Fri Aug 18 00:02:39 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 18:02:39 -0400 (EDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24487.903334.663705@buffalo.fnal.gov>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
	<14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
	<14748.24487.903334.663705@buffalo.fnal.gov>
Message-ID: <14748.24831.313742.340896@cj42289-a.reston1.va.home.com>

Charles G Waldman writes:
 > And it's not that sticky date, either (no idea how that got set!)

  -sigh-  Is there an entry for config.h in the CVS/entries file?  If
so, surgically remove it, then delete the config.h, then try the
update again.
  *This* is getting mysterious.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From cgw at fnal.gov  Fri Aug 18 00:07:28 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 17:07:28 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24831.313742.340896@cj42289-a.reston1.va.home.com>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
	<14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
	<14748.24487.903334.663705@buffalo.fnal.gov>
	<14748.24831.313742.340896@cj42289-a.reston1.va.home.com>
Message-ID: <14748.25120.807735.628798@buffalo.fnal.gov>

Fred L. Drake, Jr. writes:

 >   -sigh-  Is there an entry for config.h in the CVS/entries file?  If
 > so, surgically remove it, then delete the config.h, then try the
 > update again.

Yes, this entry was present, I removed it as you suggested.

Now, when I do cvs update the config.h doesn't reappear, but I still
see "needs checkout" if I ask for cvs status:


buffalo:Include$ cvs status config.h
===================================================================
File: no file config.h          Status: Needs Checkout

   Working revision:    No entry for config.h
   Repository revision: 2.1     /cvsroot/python/python/dist/src/Include/Attic/config.h,v

I keep my local CVS tree updated daily, I never use any kind of sticky
tags, and haven't seen this sort of problem at all, up until today.
Today I also noticed the CVS server responding very slowly, so I
suspect that something may be wrong with the server.





From fdrake at beopen.com  Fri Aug 18 00:13:28 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 18:13:28 -0400 (EDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.25120.807735.628798@buffalo.fnal.gov>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
	<14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
	<14748.24487.903334.663705@buffalo.fnal.gov>
	<14748.24831.313742.340896@cj42289-a.reston1.va.home.com>
	<14748.25120.807735.628798@buffalo.fnal.gov>
Message-ID: <14748.25480.976849.825016@cj42289-a.reston1.va.home.com>

Charles G Waldman writes:
 > Now, when I do cvs update the config.h doesn't reappear, but I still
 > see "needs checkout" if I ask for cvs status:
[...output elided...]

  I get exactly the same output from "cvs status", and "cvs update"
doesn't produce the file.
  Now, if I say "cvs update config.h", it shows up and doesn't get
deleted by "cvs update", but after removing the line from CVS/Entries
and removing the file, it doesn't reappear.  So you're probably set
for now.

 > I keep my local CVS tree updated daily, I never use any kind of sticky
 > tags, and haven't seen this sort of problem at all, up until today.
 > Today I also noticed the CVS server responding very slowly, so I
 > suspect that something may be wrong with the server.

  This is weird, but that doesn't sound like the problem; the SF
servers can be very slow some days, but we suspect it's just load.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From trentm at ActiveState.com  Fri Aug 18 00:15:08 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 15:15:08 -0700
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000817085541.K376@xs4all.nl>; from thomas@xs4all.net on Thu, Aug 17, 2000 at 08:55:41AM +0200
References: <20000816165542.D29260@ActiveState.com> <20000817085541.K376@xs4all.nl>
Message-ID: <20000817151508.C7658@ActiveState.com>

On Thu, Aug 17, 2000 at 08:55:41AM +0200, Thomas Wouters wrote:
> On Wed, Aug 16, 2000 at 04:55:42PM -0700, Trent Mick wrote:
> 
> > I am currently trying to port Python to Monterey (64-bit AIX) and I need
> > to add a couple of Monterey specific options to CFLAGS and LDFLAGS (or to
> > whatever appropriate variables for all 'cc' and 'ld' invocations) but it
> > is not obvious *at all* how to do that in configure.in. Can anybody help me
> > on that?
> 
> You'll have to write a shell 'case' for AIX Monterey, checking to make sure
> it is monterey, and setting LDFLAGS accordingly. If you look around in
> configure.in, you'll see a few other 'special cases', all to tune the
> way the compiler is called. Depending on what you need to do to detect
> monterey, you could fit it in one of those. Just search for 'Linux' or
> 'bsdos' to find a couple of those cases.

Right, thanks. I was looking at first to modify CFLAGS and LDFLAGS (as I
thought would be cleaner) but I have got it working by just modifying CC and
LINKCC instead (following the crowd on that one).
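The shape of the per-platform case block Thomas points at can be sketched at the shell level like this; the PLATFORM variable and the Monterey flags below are illustrative stand-ins, not the actual configure.in spellings:

```shell
# Hypothetical sketch of a per-platform case block like the ones already
# in configure.in for Linux and BSDI; PLATFORM stands in for whatever
# configure detects, and the -q64 flags are assumptions.
PLATFORM="Monterey64"
case $PLATFORM in
Monterey*)
    CC="cc -q64"        # hypothetical 64-bit compile driver
    LINKCC="$CC"        # follow the crowd: tune CC/LINKCC, not CFLAGS
    ;;
*)
    CC="cc"
    LINKCC="$CC"
    ;;
esac
echo "CC=$CC LINKCC=$LINKCC"
```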



[Trent blames placing *.a on the cc command line for his problems and Thomas
and Barry, etc. tell Trent that that cannot be]

Okay, I don't know what I was on. I think I was flailing for things to blame.
I have got it working with simply listing the .a on the command line.



Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From bwarsaw at beopen.com  Fri Aug 18 00:26:42 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 18:26:42 -0400 (EDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us>
	<14748.21382.305979.784637@anthem.concentric.net>
	<20000817171352.B26730@kronos.cnri.reston.va.us>
Message-ID: <14748.26274.949428.733639@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin at mems-exchange.org> writes:

    AK> Fine.  Shall I just add it as-is?  (Opinion was generally
    AK> positive as I recall, unless the BDFL wants to exercise his
    AK> veto for some reason.)

Could you check and see if there are any substantial differences
between the version you've got and the version in the Mailman tree?
If there are none, then I'm +1.

Let me know if you want me to email it to you.
-Barry



From MarkH at ActiveState.com  Fri Aug 18 01:07:38 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 18 Aug 2000 09:07:38 +1000
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.57236.264324.165612@beluga.mojam.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEFODFAA.MarkH@ActiveState.com>

> I don't realize that because a bug wasn't noticed for a long time was any
> reason not to fix it.  Guido was also involved in the repair of
> the bug, and

I think most people agreed that the new semantics were preferable to the
old.  I believe Tim was just having a dig at the fact the documentation was
not changed, and also wearing his grumpy-conservative hat (well, it is
election fever in the US!)

But remember - the original question was if the new semantics should return
the trailing "\\" as part of the common prefix, due to the demonstrated
fact that at least _some_ code out there depends on it.

Tim wanted a bug filed, but a few other people have chimed in saying
nothing needs fixing.

So what is it?  Do I file the bug as Tim requested?   Maybe I should just
do it, and assign the bug to Guido - at least that way he can make a quick
decision?

At-least-my-code-works-again ly,

Mark.




From akuchlin at cnri.reston.va.us  Fri Aug 18 01:27:06 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 17 Aug 2000 19:27:06 -0400
Subject: [Python-Dev] Cookie.py module, and Web PEP
In-Reply-To: <14748.26274.949428.733639@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 06:26:42PM -0400
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us> <14748.21382.305979.784637@anthem.concentric.net> <20000817171352.B26730@kronos.cnri.reston.va.us> <14748.26274.949428.733639@anthem.concentric.net>
Message-ID: <20000817192706.A28225@newcnri.cnri.reston.va.us>

On Thu, Aug 17, 2000 at 06:26:42PM -0400, Barry A. Warsaw wrote:
>Could you check and see if there are any substantial differences
>between the version you've got and the version in the Mailman tree?
>If there are none, then I'm +1.

If you're referring to misc/Cookie.py in Mailman, the two files are
vastly different (though not necessarily incompatible).  The Mailman
version derives from a version of Cookie.py dating from 1998,
according to the CVS tree.  Timo's current version has three different
flavours of cookie, the Mailman version doesn't, so you wind up with a
1000-line long diff between the two.

--amk




From tim_one at email.msn.com  Fri Aug 18 01:29:16 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 19:29:16 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEFODFAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEGGHAAA.tim_one@email.msn.com>

[Skip, as quoted by MarkH]
> I don't realize that because a bug wasn't noticed for a long
> time was any reason not to fix it.  Guido was also involved in the
> repair of the bug, and

[MarkH]
> I think most people agreed that the new semantics were preferable to the
> old.  I believe Tim was just having a dig at the fact the  documentation
> was not changed, and also wearing his grumpy-conservative hat (well, it is
> election fever in the US!)

Not at all, I meant it.  When the code and the docs have matched for more
than 6 years, there is no bug by any rational definition of the term, and
you can be certain that changing the library semantics then will break
existing code.  Presuming to change it anyway is developer arrogance of the
worst kind, no matter how many developers cheer it on.  The docs are a
contract, and if they were telling the truth, we have a responsibility to
stand by them -- and whether we like it or not (granted, I am overly
sensitive to contractual matters these days <0.3 wink>).

The principled solution is to put the new functionality in a new function.
Then nobody's code breaks, no user feels abused, and everyone gets what they
want.  If you despise what the old function did, that's fine too, deprecate
it -- but don't screw people who were using it happily for what it was
documented to do.

> But remember - the original question was if the new semantics
> should return the trailing "\\" as part of the common prefix, due
> to the demonstrated fact that at least _some_ code out there
> depends on it.
>
> Tim wanted a bug filed, but a few other people have chimed in saying
> nothing needs fixing.
>
> So what is it?  Do I file the bug as Tim requested?   Maybe I should just
> do it, and assign the bug to Guido - at least that way he can make a quick
> decision?

By my count, Unix and Windows people have each voted for both answers, and
the Mac contingent is silently laughing <wink>.

hell-stick-in-fifty-new-functions-if-that's-what-it-takes-but-leave-
    the-old-one-alone-ly y'rs  - tim





From gstein at lyra.org  Fri Aug 18 01:41:37 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 16:41:37 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 16, 2000 at 11:34:12PM -0400
References: <20000816172425.A32338@ActiveState.com> <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>
Message-ID: <20000817164137.U17689@lyra.org>

On Wed, Aug 16, 2000 at 11:34:12PM -0400, Tim Peters wrote:
>...
> So one of two things can be done:
> 
> 1. Bite the bullet and do it correctly.  For example, maintain a static
>    dict mapping the native pthread_self() return value to Python ints,
>    and return the latter as Python's thread.get_ident() value.  Much
>    better would be to implement a x-platform thread-local storage
>    abstraction, and use that to hold a Python-int ident value.
> 
> 2. Continue in the tradition already established <wink>, and #ifdef the
>    snot out of it for Monterey.
> 
> In favor of #2, the code is already so hosed that making it hosier won't be
> a significant relative increase in its inherent hosiness.

The x-plat thread-local storage idea is the best thing to do. That will be
needed for some of the free-threading work in Python.

IOW, an x-plat TLS is going to be done at some point. If you need it now,
then please do it now. That will help us immeasurably in the long run.
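Tim's option 1 (a static dict mapping the native pthread_self() value to small Python ints) can be sketched in a few lines; the names here are illustrative, not CPython's:

```python
import threading

# Sketch of option 1: hand out small sequential ints as public thread
# idents, keyed on the native thread id (which on some platforms does
# not fit a C long).  Illustrative only; not CPython's implementation.
_ident_map = {}
_ident_lock = threading.Lock()
_next_ident = 0

def get_ident(native_id):
    global _next_ident
    with _ident_lock:
        ident = _ident_map.get(native_id)
        if ident is None:
            ident = _next_ident          # first sighting: assign next int
            _next_ident += 1
            _ident_map[native_id] = ident
        return ident                     # stable for a given native id
```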

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From MarkH at ActiveState.com  Fri Aug 18 01:59:18 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 18 Aug 2000 09:59:18 +1000
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817164137.U17689@lyra.org>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>

> IOW, an x-plat TLS is going to be done at some point. If you need it now,
> then please do it now. That will help us immeasurably in the long run.

I just discovered the TLS code in the Mozilla source tree.  This could be a
good place to start.

The definitions are in mozilla/nsprpub/pr/include/prthread.h, and I include
some of this file below...  I can confirm this code works _with_ Python -
but I have no idea how hard it would be to distill it _into_ Python!

Then-we-just-need-Tim-to-check-the-license-for-us<wink> ly,

Mark.

/*
 * The contents of this file are subject to the Netscape Public License
 * Version 1.1 (the "NPL"); you may not use this file except in
 * compliance with the NPL.  You may obtain a copy of the NPL at
 * http://www.mozilla.org/NPL/
 *

[MarkH - it looks good to me - very open license]
...

/*
** This routine returns a new index for per-thread-private data table.
** The index is visible to all threads within a process. This index can
** be used with the PR_SetThreadPrivate() and PR_GetThreadPrivate()
** routines to save and retrieve data associated with the index for a
** thread.
**
** Each index is associated with a destructor function ('dtor'). The
** function may be specified as NULL when the index is created. If it is
** not NULL, the function will be called when:
**      - the thread exits and the private data for the associated index
**        is not NULL,
**      - new thread private data is set and the current private data is
**        not NULL.
**
** The index independently maintains specific values for each binding
** thread. A thread can only get access to its own thread-specific-data.
**
** Upon a new index return the value associated with the index for all
** threads is NULL, and upon thread creation the value associated with
** all indices for that thread is NULL.
**
** Returns PR_FAILURE if the total number of indices will exceed the
** maximum allowed.
*/
typedef void (PR_CALLBACK *PRThreadPrivateDTOR)(void *priv);

NSPR_API(PRStatus) PR_NewThreadPrivateIndex(
    PRUintn *newIndex, PRThreadPrivateDTOR destructor);

/*
** Define some per-thread-private data.
**     "tpdIndex" is an index into the per-thread private data table
**     "priv" is the per-thread-private data
**
** If the per-thread private data table has a previously registered
** destructor function and a non-NULL per-thread-private data value,
** the destructor function is invoked.
**
** This can return PR_FAILURE if the index is invalid.
*/
NSPR_API(PRStatus) PR_SetThreadPrivate(PRUintn tpdIndex, void *priv);

/*
** Recover the per-thread-private data for the current thread. "tpdIndex"
** is the index into the per-thread private data table.
**
** The returned value may be NULL which is indistinguishable from an error
** condition.
**
** A thread can only get access to its own thread-specific-data.
*/
NSPR_API(void*) PR_GetThreadPrivate(PRUintn tpdIndex);




From gstein at lyra.org  Fri Aug 18 02:19:17 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 17:19:17 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 18, 2000 at 09:59:18AM +1000
References: <20000817164137.U17689@lyra.org> <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
Message-ID: <20000817171917.V17689@lyra.org>

On Fri, Aug 18, 2000 at 09:59:18AM +1000, Mark Hammond wrote:
> > IOW, an x-plat TLS is going to be done at some point. If you need it now,
> > then please do it now. That will help us immeasurably in the long run.
> 
> I just discovered the TLS code in the Mozilla source tree.  This could be a
> good place to start.
> 
> The definitions are in mozilla/nsprpub/pr/include/prthread.h, and I include
> some of this file below...  I can confirm this code works _with_ Python -
> but I have no idea how hard it would be to distill it _into_ Python!
> 
> Then-we-just-need-Tim-to-check-the-license-for-us<wink> ly,

The NPL is not compatible with the Python license. While we could use their
API as a guide for our own code, we cannot use their code.


The real question is whether somebody has the time/inclination to sit down
now and write an x-plat TLS for Python. Always the problem :-)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From tim_one at email.msn.com  Fri Aug 18 02:18:08 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 20:18:08 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGIHAAA.tim_one@email.msn.com>

[MarkH]
> I just discovered the TLS code in the Mozilla source tree.  This
> could be a good place to start.
> ...
> Then-we-just-need-Tim-to-check-the-license-for-us<wink> ly,

Jesus, Mark, I haven't even been able to figure what the license means by
"you" yet:

    1. Definitions
    ...
    1.12. "You'' (or "Your") means an individual or a legal entity
    exercising rights under, and complying with all of the terms of,
    this License or a future version of this License issued under
    Section 6.1. For legal entities, "You'' includes any entity which
    controls, is controlled by, or is under common control with You.
    For purposes of this definition, "control'' means (a) the power,
    direct or indirect, to cause the direction or management of such
    entity, whether by contract or otherwise, or (b) ownership of more
    than fifty percent (50%) of the outstanding shares or beneficial
    ownership of such entity.

at-least-they-left-little-doubt-about-the-meaning-of-"fifty-percent"-ly
    y'rs  - tim (tee eye em, and neither you nor You.  I think.)





From bwarsaw at beopen.com  Fri Aug 18 02:18:34 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 20:18:34 -0400 (EDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us>
	<14748.21382.305979.784637@anthem.concentric.net>
	<20000817171352.B26730@kronos.cnri.reston.va.us>
	<14748.26274.949428.733639@anthem.concentric.net>
	<20000817192706.A28225@newcnri.cnri.reston.va.us>
Message-ID: <14748.32986.835733.255687@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin at cnri.reston.va.us> writes:

    >> Could you check and see if there are any substantial
    >> differences between the version you've got and the version in
    >> the Mailman tree?  If there are none, then I'm +1.

    AK> If you're referring to misc/Cookie.py in Mailman,

That's the one.
    
    AK> the two files are vastly different (though not necessarily
    AK> incompatible).  The Mailman version derives from a version of
    AK> Cookie.py dating from 1998, according to the CVS tree.  Timo's
    AK> current version has three different flavours of cookie, the
    AK> Mailman version doesn't, so you wind up with a 1000-line long
    AK> diff between the two.

Okay, don't sweat it.  If the new version makes sense to you, I'll
just be sure to make any Mailman updates that are necessary.  I'll
take a look once it's been checked in.

-Barry



From tim_one at email.msn.com  Fri Aug 18 02:24:04 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 20:24:04 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817171917.V17689@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEGIHAAA.tim_one@email.msn.com>

[Greg Stein]
> The NPL is not compatible with the Python license.

Or human comprehensibility either, far as I can tell.

> While we could use their API as a guide for our own code, we cannot
> use their code.
>
> The real question is whether somebody has the time/inclination to sit
> down now and write an x-plat TLS for Python. Always the problem :-)

The answer to Trent's original question is determined by whether he wants to
get a Monterey hack in as a bugfix for 2.0, or can wait a few years <0.9
wink> (the 2.0 feature set is frozen now).

If somebody wants to *buy* the time/inclination to get x-plat TLS, I'm sure
BeOpen or ActiveState would be keen to cash the check.  Otherwise ... don't
know.

all-it-takes-is-50-people-to-write-50-one-platform-packages-and-
    then-50-years-to-iron-out-their-differences-ly y'rs  - tim





From bwarsaw at beopen.com  Fri Aug 18 02:26:10 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 20:26:10 -0400 (EDT)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
References: <20000817164137.U17689@lyra.org>
	<ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
	<20000817171917.V17689@lyra.org>
Message-ID: <14748.33442.7609.588513@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> The NPL is not compatible with the Python license. While we
    GS> could use their API as a guide for our own code, we cannot use
    GS> their code.

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> Jesus, Mark, I haven't even been able to figure what the
    TP> license means by "you" yet:

Is the NPL compatible with /anything/? :)

-Barry



From trentm at ActiveState.com  Fri Aug 18 02:41:37 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 17:41:37 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <14748.33442.7609.588513@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 08:26:10PM -0400
References: <20000817164137.U17689@lyra.org> <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com> <20000817171917.V17689@lyra.org> <14748.33442.7609.588513@anthem.concentric.net>
Message-ID: <20000817174137.B18811@ActiveState.com>

On Thu, Aug 17, 2000 at 08:26:10PM -0400, Barry A. Warsaw wrote:
> 
> Is the NPL compatible with /anything/? :)
> 


Mozilla will be dual licenced with the GPL. But you already read that.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From gstein at lyra.org  Fri Aug 18 02:55:56 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 17:55:56 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <14748.33442.7609.588513@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 08:26:10PM -0400
References: <20000817164137.U17689@lyra.org> <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com> <20000817171917.V17689@lyra.org> <14748.33442.7609.588513@anthem.concentric.net>
Message-ID: <20000817175556.Y17689@lyra.org>

On Thu, Aug 17, 2000 at 08:26:10PM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "GS" == Greg Stein <gstein at lyra.org> writes:
> 
>     GS> The NPL is not compatible with the Python license. While we
>     GS> could use their API as a guide for our own code, we cannot use
>     GS> their code.
> 
> >>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:
> 
>     TP> Jesus, Mark, I haven't even been able to figure what the
>     TP> license means by "you" yet:
> 
> Is the NPL compatible with /anything/? :)

All kinds of stuff. It is effectively a non-viral GPL. Any changes to the
NPL/MPL licensed stuff must be released. It does not affect the stuff that
it is linked/dist'd with.

However, I was talking about the Python source code base. The Python license
and the NPL/MPL are definitely compatible. I mean that we don't want both
licenses in the Python code base.

Hmm. Should have phrased that differently.

And one nit: the NPL is very different from the MPL. NPL x.x is nasty, while
MPL 1.1 is very nice.

Note the whole MPL/GPL dual-license stuff that you see (Expat and now
Mozilla) is not because they are trying to be nice, but because they are
trying to compensate for the GPL's nasty viral attitude. You cannot use MPL
code in a GPL product because the *GPL* says so. The MPL would be perfectly
happy, but no... Therefore, people dual-license so that you can choose the
GPL when linking with GPL code.

Ooops. I'll shut up now. :-)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From bwarsaw at beopen.com  Fri Aug 18 02:49:17 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 20:49:17 -0400 (EDT)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
References: <20000817164137.U17689@lyra.org>
	<ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
	<20000817171917.V17689@lyra.org>
	<14748.33442.7609.588513@anthem.concentric.net>
	<20000817174137.B18811@ActiveState.com>
Message-ID: <14748.34829.130052.124407@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

    TM> Mozilla will be dual licenced with the GPL. But you already
    TM> read that.

Yup, but it'll still be a big hurdle to include any GPL'd code in
Python.

-Barry



From MarkH at ActiveState.com  Fri Aug 18 02:55:02 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 18 Aug 2000 10:55:02 +1000
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817175556.Y17689@lyra.org>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEGDDFAA.MarkH@ActiveState.com>

[Greg]
> However, I was talking about the Python source code base. The
> Python license
> and the NPL/MPL are definitely compatible.

Phew.  Obviously IANAL, but I thought I was going senile.  I didn't seek
clarification for fear of further demonstrating my ignorance :-)

> I mean that we don't want both licenses in the Python code base.

That makes much more sense to me!

Thanks for the clarification.

Mark.




From greg at cosc.canterbury.ac.nz  Fri Aug 18 03:01:17 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 13:01:17 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <20000817090942.L376@xs4all.nl>
Message-ID: <200008180101.NAA15496@s454.cosc.canterbury.ac.nz>

Thomas Wouters:

> Bzzzt. This is unfortunately not true. Observe:
>
> daemon2:~/python > rmdir perl/
> rmdir: perl/: Is a directory

I'd say that's a bug in rmdir in whatever Unix you're using.
Solaris doesn't have the problem:

s454% cd ~/tmp
s454% mkdir foo
s454% rmdir foo/
s454% 

There's always room for a particular program to screw up.  However,
the usual principle in Unices is that trailing slashes are optional.

> Note that the trailing slash is added by all tab-completing shells that I
> know.

This is for the convenience of the user, who is probably going to type
another pathname component, and also to indicate that the object found
is a directory. It makes sense in an interactive tool, but not
necessarily in other places.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Fri Aug 18 03:27:33 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 13:27:33 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <1245608987-154490918@hypernet.com>
Message-ID: <200008180127.NAA15502@s454.cosc.canterbury.ac.nz>

Gordon:

> os.chdir(os.pardir)

Ah, I missed that somehow. Probably I was looking in os.path
instead of os.

Shouldn't everything to do with pathname semantics be in os.path?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Fri Aug 18 03:52:32 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 13:52:32 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.55394.783997.167234@beluga.mojam.com>
Message-ID: <200008180152.NAA15507@s454.cosc.canterbury.ac.nz>

Skip:

> Since we already have os.path.commonprefix and it's not going away,

If it's to stay the way it is, we need another function to
do what it should have been designed to do in the first place.
That means two new functions, one to find a common prefix,
and one to remove a given prefix.

But it's not clear exactly what a function such as

   removeprefix(prefix, path)

should do. What happens, for instance, if 'prefix' is not actually a
prefix of 'path', or only part of it is a prefix?

A reasonable definition might be that however much of 'prefix' is
a prefix of 'path' is removed. But that requires finding the common
prefix of the prefix and the path, which is intruding on commonprefix's 
territory!

This is what led me to think of combining the two operations
into one, which would have a clear, unambiguous definition
covering all cases.
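A component-wise sketch of that combined operation (the name splitcommonprefix is from later in this message; the logic is illustrative, and POSIX-style '/' paths are assumed for brevity):

```python
# Split paths into the longest common *directory* prefix plus each
# path's remainder, so the prefix always ends on a component boundary.
def splitcommonprefix(paths):
    parts = [p.split("/") for p in paths]
    common = []
    for components in zip(*parts):
        if len(set(components)) != 1:
            break                        # components diverge here
        common.append(components[0])
    prefix = "/".join(common)
    rests = ["/".join(p[len(common):]) for p in parts]
    return prefix, rests
```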

> there's nothing in the name factorize that suggests that it would
> split the paths at the common prefix.

I'm not particularly attached to that name. Call it
'splitcommonprefix' or something if you like.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Fri Aug 18 04:02:09 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 14:02:09 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.57236.264324.165612@beluga.mojam.com>
Message-ID: <200008180202.OAA15511@s454.cosc.canterbury.ac.nz>

Skip:

> maybe we should move it to some other module
> that has no directory path implications.

I agree!

> Perhaps string?  Oh, that's deprecated.

Is the whole string module deprecated, or only those parts
which are now available as string methods? I think trying to
eliminate the string module altogether would be a mistake,
since it would leave nowhere for string operations that don't
make sense as methods of a string.

The current version of commonprefix is a case in point,
since it operates symmetrically on a collection of strings.
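That character-wise, symmetric behaviour is easy to demonstrate: commonprefix is a pure string operation, which is exactly why it can return something that is not a directory:

```python
import os.path

# commonprefix compares character by character, so the "prefix" it finds
# need not fall on a path-component boundary:
print(os.path.commonprefix(["/usr/local", "/usr/lib"]))  # -> /usr/l
```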

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+





From gmcm at hypernet.com  Fri Aug 18 04:07:04 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Thu, 17 Aug 2000 22:07:04 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817231942.O376@xs4all.nl>
Message-ID: <1245558070-157553278@hypernet.com>

Thomas Wouters wrote:

> The other issue is the change in semantics for 'from-import'.

Um, maybe I'm not seeing something, but isn't the effect of 
"import goom.bah as snarf" the same as "from goom import 
bah as snarf"? Both forms mean that we don't end up looking 
for (the aliased) bah in another namespace, (thus both forms 
fall prey to the circular import problem).

Why not just disallow "from ... import ... as ..."?
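The comparison can be made concrete with today's syntax (the very syntax this thread was designing), using Gordon's hypothetical names and faking the package in sys.modules so nothing need exist on disk:

```python
import sys
import types

# Fabricate the hypothetical package goom with submodule bah.
goom = types.ModuleType("goom")
bah = types.ModuleType("goom.bah")
goom.bah = bah
sys.modules["goom"] = goom
sys.modules["goom.bah"] = bah

import goom.bah as snarf1        # binds the submodule goom.bah to snarf1
from goom import bah as snarf2   # binds goom's attribute bah to snarf2

# For a submodule the two spellings bind the same object; the from-form
# additionally works when bah is a plain attribute rather than a module.
assert snarf1 is snarf2 is bah
```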



- Gordon



From fdrake at beopen.com  Fri Aug 18 04:13:25 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 22:13:25 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180202.OAA15511@s454.cosc.canterbury.ac.nz>
References: <14747.57236.264324.165612@beluga.mojam.com>
	<200008180202.OAA15511@s454.cosc.canterbury.ac.nz>
Message-ID: <14748.39877.3411.744665@cj42289-a.reston1.va.home.com>

Skip:
 > Perhaps string?  Oh, that's deprecated.

Greg Ewing writes:
 > Is the whole string module deprecated, or only those parts
 > which are now available as string methods? I think trying to

  I wasn't aware of any actual deprecation, just a shift of
preference.  There's not a notice of the deprecation in the docs.  ;)
In fact, there are things that are in the module that are not
available as string methods.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Fri Aug 18 04:38:06 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 22:38:06 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180127.NAA15502@s454.cosc.canterbury.ac.nz>
References: <1245608987-154490918@hypernet.com>
	<200008180127.NAA15502@s454.cosc.canterbury.ac.nz>
Message-ID: <14748.41358.61606.202184@cj42289-a.reston1.va.home.com>

Greg Ewing writes:
 > Gordon:
 > 
 > > os.chdir(os.pardir)
 > 
 > Ah, I missed that somehow. Probably I was looking in os.path
 > instead of os.
 > 
 > Shouldn't everything to do with pathname semantics be in os.path?

  Should be, yes.  I'd vote that curdir, pardir, sep, altsep, and
pathsep be added to the *path modules, and os could pick them up from
there.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From amk at s154.tnt3.ann.va.dialup.rcn.com  Fri Aug 18 04:46:32 2000
From: amk at s154.tnt3.ann.va.dialup.rcn.com (A.M. Kuchling)
Date: Thu, 17 Aug 2000 22:46:32 -0400
Subject: [Python-Dev] Request for help w/ bsddb module
Message-ID: <20000817224632.A525@207-172-146-154.s154.tnt3.ann.va.dialup.rcn.com>

[CC'ed to python-dev, python-list]

I've started writing a straight C version of Greg Smith's BSDDB3
module (http://electricrain.com/greg/python/bsddb3/), which currently
uses SWIG.  The code isn't complete enough to do anything yet, though
it does compile.  

Now I'm confronted with writing around 60 different methods for 3
different types; the job doesn't look difficult, but it does look
tedious and lengthy.  Since the task will parallelize well, I'm asking
if anyone wants to help out by writing the code for one of the types.

If you want to help, grab Greg's code from the above URL, and my
incomplete module from
ftp://starship.python.net/pub/crew/amk/new/_bsddb.c.  Send me an
e-mail telling me which set of methods (those for the DB, DBC, DB_Env
types) you want to implement before starting to avoid duplicating
work.  I'll coordinate, and will debug the final product.

(Can this get done in time for Python 2.0?  Probably.  Can it get
tested in time for 2.0?  Ummm....)

--amk









From greg at cosc.canterbury.ac.nz  Fri Aug 18 04:45:46 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 14:45:46 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEGGHAAA.tim_one@email.msn.com>
Message-ID: <200008180245.OAA15517@s454.cosc.canterbury.ac.nz>

Tim Peters:

> The principled solution is to put the new functionality in a new
> function.

I agree with that.

> By my count, Unix and Windows people have each voted for both answers, and
> the Mac contingent is silently laughing <wink>.

The Mac situation is somewhat complicated. Most of the time
a single trailing colon makes no difference, but occasionally
it does. For example, "abc" is a relative pathname, but
"abc:" is an absolute pathname!

The best way to resolve this, I think, is to decree that it
should do the same as what os.path.split does, on all
platforms. That function seems to know how to deal with 
all the tricky cases correctly.

Don't-even-think-of-asking-about-VMS-ly,

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From fdrake at beopen.com  Fri Aug 18 04:55:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 22:55:59 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180245.OAA15517@s454.cosc.canterbury.ac.nz>
References: <LNBBLJKPBEHFEDALKOLCAEGGHAAA.tim_one@email.msn.com>
	<200008180245.OAA15517@s454.cosc.canterbury.ac.nz>
Message-ID: <14748.42431.165537.946022@cj42289-a.reston1.va.home.com>

Greg Ewing writes:
 > Don't-even-think-of-asking-about-VMS-ly,

  Really!  I looked at some docs for the path names on that system,
and didn't come away so much as convinced DEC/Compaq knew what they
looked like.  Or where they stopped.  Or started.
  I think a fully general path algebra will be *really* hard to do,
but it's something I've thought about a little.  Don't know when I'll
have time to dig back into it.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From greg at cosc.canterbury.ac.nz  Fri Aug 18 04:57:34 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 14:57:34 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <399B94EB.E95260EE@lemburg.com>
Message-ID: <200008180257.OAA15523@s454.cosc.canterbury.ac.nz>

M.-A. Lemburg:

> By dropping the trailing slash from the path
> you are removing important information from the path information.

No, you're not. A trailing slash on a Unix pathname doesn't
tell you anything about whether it refers to a directory.
Actually, it doesn't tell you anything at all. Slashes
simply delimit pathname components, nothing more.

A demonstration of this:

s454% cat > foo/
asdf
s454% cat foo/
asdf
s454% 

A few utilities display pathnames with trailing slashes in
order to indicate that they refer to directories, but that's
a special convention confined to those tools. It doesn't
apply in general.

The only sure way to find out whether a given pathname refers 
to a directory or not is to ask the filesystem. And if the 
object referred to doesn't exist, the question of whether it's 
a directory is meaningless.
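For illustration, a small sketch in today's Python (the filenames here are made up; os.path.isdir asks the filesystem and simply answers false for things that don't exist):

```python
import os
import tempfile

# Asking the filesystem is the only reliable test: isdir() answers
# for things that exist, and is False for things that don't.
d = tempfile.mkdtemp()          # a real directory
f = os.path.join(d, "foo")      # a real file, no extension
open(f, "w").close()

print(os.path.isdir(d))                          # True
print(os.path.isdir(f))                          # False
print(os.path.isdir(os.path.join(d, "ghost")))   # False: nonexistent
```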

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Fri Aug 18 05:34:57 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 15:34:57 +1200 (NZST)
Subject: [Python-Dev] 'import as'
In-Reply-To: <1245558070-157553278@hypernet.com>
Message-ID: <200008180334.PAA15543@s454.cosc.canterbury.ac.nz>

Gordon McMillan <gmcm at hypernet.com>:

> isn't the effect of "import goom.bah as snarf" the same as "from goom
> import bah as snarf"?

Only if goom.bah is a submodule or subpackage, I think.
Otherwise "import goom.bah" doesn't work in the first place.

I'm not sure that "import goom.bah as snarf" should
be allowed, even if goom.bah is a module. Should the
resulting object be referred to as snarf, snarf.bah
or goom.snarf?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Fri Aug 18 05:39:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 23:39:29 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817142207.A5592@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHCHAAA.tim_one@email.msn.com>

[Trent Mick]
> ...
> I'm all for being a hoser then.

Canadian <wink>.

> #ifdef's a-comin' down the pipe.
> One thing, the only #define that I know I have a handle on for
> Monterey is '_LP64'. Do you have an objection to that (seeing as
> it is kind of misleading)? I will accompany it with an explicative
> comment of course.

Hmm!  I hate "mystery #defines", even when they do make sense.  In my last
commercial project, we had a large set of #defines in its equivalent of
pyport.h, along the lines of Py_COMPILER_MSVC, Py_COMPILER_GCC, Py_ARCH_X86,
Py_ARCH_KATMAI, etc etc.  Over time, *nobody* can remember what goofy
combinations of mystery preprocessor symbols vendors define, and vendors
come and go, and you're left with piles of code you can't make head or tail
of.  "#ifdef __SC__" -- what?

So there was A Rule that vendor-supplied #defines could *only* appear in
(that project's version of) pyport.h, used there to #define symbols whose
purpose was clear from extensive comments and naming conventions.  That
proved to be an excellent idea over years of practice!

So I'll force Python to do that someday too.  In the meantime, _LP64 is a
terrible name for this one, because its true *meaning* (the way you want to
use it) appears to be "sizeof(pthread_t) < sizeof(long)", and that's
certainly not a property of all LP64 platforms.  So how about a runtime test
for what's actually important (and it's not Monterey!)?

	if (sizeof(threadid) <= sizeof(long))
		return (long)threadid;

End of problem, right?  It's a cheap runtime test in a function whose speed
isn't critical anyway.  And it will leave the God-awful casting to the one
platform where it appears to be needed -- while also (I hope) making it
clearer that that's absolutely the wrong thing to be doing on that platform
(throwing away half the bits in the threadid value is certain to make
get_ident return the same value for two distinct threads sooner or later
...).

less-preprocessor-more-sense-ly y'rs  - tim





From tim_one at email.msn.com  Fri Aug 18 05:58:13 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 23:58:13 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180257.OAA15523@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHFHAAA.tim_one@email.msn.com>

[Greg Ewing]
> ...
> A trailing slash on a Unix pathname doesn't tell you anything
> about whether it refers to a directory.

It does if it's also the only character in the pathname <0.5 wink>.  The
same thing bites people on Windows, except even worse, because in UNC
pathnames the leading

   \\machine\volume

"acts like a root", and the presence or absence of a trailing backslash
there makes a world of difference too.

> ...
> The only sure way to find out whether a given pathname refers
> to a directory or not is to ask the filesystem.

On Windows again,

>>> from os import path
>>> path.exists("/python16")
1
>>> path.exists("/python16/")
0
>>>

This insane behavior is displayed by the MS native APIs too, but isn't
documented (at least not last time I peed away hours looking for it).

just-more-evidence-that-windows-weenies-shouldn't-get-a-vote!-ly
    y'rs  - tim





From moshez at math.huji.ac.il  Fri Aug 18 06:39:18 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 18 Aug 2000 07:39:18 +0300 (IDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
In-Reply-To: <E13PWSp-0006w9-00@kronos.cnri.reston.va.us>
Message-ID: <Pine.GSO.4.10.10008180738100.23483-100000@sundial>

On Thu, 17 Aug 2000, Andrew Kuchling wrote:

> Tim O'Malley finally mailed me the correct URL for the latest version
> of the cookie module: http://www.timo-tasi.org/python/Cookie.py 
> 
> *However*...  I think the Web support in Python needs more work in
> generally, and certainly more than can be done for 2.0.

This is certainly true, but is that reason enough to keep Cookie.py 
out of 2.0?

(+1 on enhancing the Python standard library, of course)
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Fri Aug 18 07:26:51 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 01:26:51 -0400
Subject: indexing, indices(), irange(), list.items() (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <399BE124.9920B0B6@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>

Note that Guido rejected all the loop-gimmick proposals ("indexing",
indices(), irange(), and list.items()) on Thursday, so let's stifle this
debate until after 2.0 (or, even better, until after I'm dead <wink>).

hope-springs-eternal-ly y'rs  - tim





From tim_one at email.msn.com  Fri Aug 18 07:43:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 01:43:14 -0400
Subject: [Python-Dev] PyErr_NoMemory
In-Reply-To: <200008171509.RAA20891@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHJHAAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> The current PyErr_NoMemory() function reads:
>
> PyObject *
> PyErr_NoMemory(void)
> {
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
>         else
>                 /* this will probably fail since there's no memory
>                    and hee, hee, we have to instantiate this class
>                 */
>                 PyErr_SetNone(PyExc_MemoryError);
>
>         return NULL;
> }
>
> thus overriding any previous exceptions unconditionally. This is a
> problem when the current exception already *is* PyExc_MemoryError,
> notably when we have a chain (cascade) of memory errors. It is a
> problem because the original memory error and eventually its error
> message is lost.
>
> I suggest to make this code look like:
>
> PyObject *
> PyErr_NoMemory(void)
> {
> 	if (PyErr_ExceptionMatches(PyExc_MemoryError))
> 		/* already current */
> 		return NULL;
>
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
> ...
>
>
> If nobody sees a problem with this, I'm very tempted to check it in.
> Any objections?

Looks good to me.  And if it breaks something, it will be darned hard to
tell <wink>.





From nowonder at nowonder.de  Fri Aug 18 10:06:23 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Fri, 18 Aug 2000 08:06:23 +0000
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>
Message-ID: <399CEE7F.F2B865D2@nowonder.de>

Tim Peters wrote:
> 
> Note that Guido rejected all the loop-gimmick proposals ("indexing",
> indices(), irange(), and list.items()) on Thursday, so let's stifle this
> debate until after 2.0 (or, even better, until after I'm dead <wink>).

That's sad. :-/

One of the reasons I implemented .items() is that I wanted
to increase the probability that at least *something* is
available instead of:

  for i in range(len(list)):
      e = list[i]
      ...

or

  for i, e in zip(range(len(list)), list):
      ...

I'm going to teach Python to a lot of newbies (ca. 30) in
October. From my experience (I already tried my luck on two
individuals from that group) 'range(len(list))' is one
of the harder concepts to get across. Even indices(list)
would help here.
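A sketch of the idioms at stake; 'indices' and 'items' are hypothetical helpers standing in for the rejected proposals, written out here as plain functions:

```python
# Hypothetical helpers mirroring the proposed indices()/list.items().
def indices(seq):
    return range(len(seq))

def items(seq):
    return zip(range(len(seq)), seq)

colors = ["red", "green", "blue"]

# The idiom newbies stumble over:
pairs1 = [(i, colors[i]) for i in range(len(colors))]

# What the proposals would have let them write instead:
pairs2 = [(i, e) for i, e in items(colors)]

print(pairs1 == pairs2)  # True
```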

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From moshez at math.huji.ac.il  Fri Aug 18 08:12:39 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 18 Aug 2000 09:12:39 +0300 (IDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180127.NAA15502@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.GSO.4.10.10008180911430.23483-100000@sundial>

On Fri, 18 Aug 2000, Greg Ewing wrote:

> Gordon:
> 
> > os.chdir(os.pardir)
> 
> Ah, I missed that somehow. Probably I was looking in os.path
> instead of os.
> 
> Shouldn't everything to do with pathname semantics be in os.path?

Nope, os.path is just for "abstract pathname algebra". Anything
dealing with real paths on a real machine belongs elsewhere (os, probably).
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Fri Aug 18 08:30:40 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 02:30:40 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817164137.U17689@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEHNHAAA.tim_one@email.msn.com>

[Greg Stein]
> ...
> IOW, an x-plat TLS is going to be done at some point. If you need it now,
> then please do it now. That will help us immeasurably in the long run.

It appears that a correct thread.get_ident() for DEC threads needed it 6
years ago (or at least that would have been-- and remains --the most elegant
way to solve it).  Trent doesn't need it to fix Monterey, though -- his only
problem there is that the Alpha hack doesn't work on his platform, due to
the former's utter bogosity.  From Trent's POV, I bet the one-liner
workaround sounds more appealing.





From cgw at fnal.gov  Fri Aug 18 09:01:59 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 18 Aug 2000 02:01:59 -0500 (CDT)
Subject: [Python-Dev] [Bug #111620] lots of use of send() without verifying amount
 of data sent
Message-ID: <14748.57191.25642.168078@buffalo.fnal.gov>

I'm jumping in late to this discussion to mention that even
for sockets in blocking mode, you can do sends with the MSG_DONTWAIT
flag:

sock.send(msg, socket.MSG_DONTWAIT)

and this will send only as much data as can be written immediately.
I.e., a per-message non-blocking write, without putting the socket
into non-blocking mode.

So if somebody decides to raise an exception on short TCP writes, they
need to be aware of this.  Personally I think it's a bad idea to be
raising an exception at all for short writes.
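A sketch of the mechanism in today's Python, assuming a POSIX platform (MSG_DONTWAIT does not exist on Windows); the helper name is made up:

```python
import socket

# On a socket left in blocking mode, MSG_DONTWAIT makes just this
# one call non-blocking: it writes what it can and returns the byte
# count, raising BlockingIOError only if nothing can be written.
def send_what_fits(sock, data):
    try:
        return sock.send(data, socket.MSG_DONTWAIT)
    except BlockingIOError:
        return 0  # buffer full right now; caller decides what to do

a, b = socket.socketpair()       # a connected pair, both blocking
n = send_what_fits(a, b"hello")
print(n, b.recv(16))
```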




From thomas at xs4all.net  Fri Aug 18 09:07:43 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 09:07:43 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <1245558070-157553278@hypernet.com>; from gmcm@hypernet.com on Thu, Aug 17, 2000 at 10:07:04PM -0400
References: <20000817231942.O376@xs4all.nl> <1245558070-157553278@hypernet.com>
Message-ID: <20000818090743.S376@xs4all.nl>

On Thu, Aug 17, 2000 at 10:07:04PM -0400, Gordon McMillan wrote:
> Thomas Wouters wrote:

> > The other issue is the change in semantics for 'from-import'.

> Um, maybe I'm not seeing something, but isn't the effect of "import
> goom.bah as snarf" the same as "from goom import bah as snarf"?

I don't understand what you're saying here. 'import goom.bah' imports goom,
then bah, and the resulting module in the local namespace is 'goom'. That's
existing behaviour (which I find perplexing, but had never run into before
;) which has changed in a reliable way: the local name being stored,
whatever it would have been in a normal import, is changed into the
"as-name" by "as <name>".

If you're saying that 'import goom.bah.baz as b' won't do what people
expect, I agree. (But neither does 'import goom.bah.baz', I think :-)

> Both forms mean that we don't end up looking for (the aliased) bah in
> another namespace, (thus both forms fall prey to the circular import
> problem).

Maybe it's the early hour, but I really don't understand the problem here.
Of course we end up looking up 'bah' in the other namespace; we have to
import it. And I don't know what it has to do with circular import either ;P

> Why not just disallow "from ... import ... as ..."?

That would kind of defeat the point of this change. I don't see any
unexpected behaviour with 'from .. import .. as ..'; the object mentioned
after 'import' and before 'as' is the object stored with the local name
which follows 'as'. Why would we want to disallow that ?
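For what it's worth, the binding rule described here can be seen directly in modern Python (where the 2.0-era behaviour of dotted 'import ... as' was later revised to bind the rightmost submodule):

```python
import os.path as p        # binds only 'p', to the submodule os.path
from os import path as q   # binds only 'q', to the same object

print(p is q)              # True: one and the same module object
print('os' in globals())   # False: the as-name replaced the usual binding
```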

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug 18 09:17:03 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 09:17:03 +0200
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEHCHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 17, 2000 at 11:39:29PM -0400
References: <20000817142207.A5592@ActiveState.com> <LNBBLJKPBEHFEDALKOLCCEHCHAAA.tim_one@email.msn.com>
Message-ID: <20000818091703.T376@xs4all.nl>

On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:

> So how about a runtime test for what's actually important (and it's not
> Monterey!)?
> 
> 	if (sizeof(threadid) <= sizeof(long))
> 		return (long)threadid;
> 
> End of problem, right?  It's a cheap runtime test in a function whose speed
> isn't critical anyway.

Note that this is what autoconf is for. It also helps to group all that
behaviour-testing code together, in one big lump no one pretends to
understand ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From effbot at telia.com  Fri Aug 18 09:35:17 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 09:35:17 +0200
Subject: [Python-Dev] 'import as'
References: <20000817231942.O376@xs4all.nl>
Message-ID: <001901c008e6$dc222760$f2a6b5d4@hagrid>

thomas wrote:
> I have two remaining issues regarding the 'import as' statement, which I'm
> just about ready to commit.

has this been tested with import hooks?

what's passed to the __import__ function's fromlist argument
if you do "from spam import egg as bacon"?

</F>




From thomas at xs4all.net  Fri Aug 18 09:30:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 09:30:49 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <001901c008e6$dc222760$f2a6b5d4@hagrid>; from effbot@telia.com on Fri, Aug 18, 2000 at 09:35:17AM +0200
References: <20000817231942.O376@xs4all.nl> <001901c008e6$dc222760$f2a6b5d4@hagrid>
Message-ID: <20000818093049.I27945@xs4all.nl>

On Fri, Aug 18, 2000 at 09:35:17AM +0200, Fredrik Lundh wrote:
> thomas wrote:
> > I have two remaining issues regarding the 'import as' statement, which I'm
> > just about ready to commit.

> has this been tested with import hooks?

Not really, I'm afraid. I don't know how to use import hooks ;-P But nothing
substantial changed, and I took care to make sure 'find_from_args' gave the
same information, still. For what it's worth, the test suite passed fine,
but I don't know if there's a test for import hooks in there.

> what's passed to the __import__ function's fromlist argument
> if you do "from spam import egg as bacon"?

The same as 'from spam import egg', currently. Better ideas are welcome, of
course, especially if you know how to use import hooks, and how they
generally are used. Pointers towards the right sections are also welcome.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Fri Aug 18 10:02:40 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 04:02:40 -0400
Subject: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]  Lockstep iteration - eureka!)
In-Reply-To: <399CEE7F.F2B865D2@nowonder.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>

I'm stifling it, but, FWIW, I've been trying to sell "indexing" for most of
my adult life <wink -- but yes, in my experience too range(len(seq)) is
extraordinarly hard to get across to newbies at first; and I *expect*
[:len(seq)] to be at least as hard>.


> -----Original Message-----
> From: nowonder at stud.ntnu.no [mailto:nowonder at stud.ntnu.no]On Behalf Of
> Peter Schneider-Kamp
> Sent: Friday, August 18, 2000 4:06 AM
> To: Tim Peters
> Cc: python-dev at python.org
> Subject: Re: indexing, indices(), irange(), list.items() (was RE:
> [Python-Dev] Lockstep iteration - eureka!)
>
>
> Tim Peters wrote:
> >
> > Note that Guido rejected all the loop-gimmick proposals ("indexing",
> > indices(), irange(), and list.items()) on Thursday, so let's stifle this
> > debate until after 2.0 (or, even better, until after I'm dead <wink>).
>
> That's sad. :-/
>
> One of the reasons I implemented .items() is that I wanted
> to increase the probability that at least *something* is
> available instead of:
>
>   for i in range(len(list)):
>       e = list[i]
>       ...
>
> or
>
>   for i, e in zip(range(len(list)), list):
>       ...
>
> I'm going to teach Python to a lot of newbies (ca. 30) in
> October. From my experience (I already tried my luck on two
> individuals from that group) 'range(len(list))' is one
> of the harder concepts to get across. Even indices(list)
> would help here.
>
> Peter
> --
> Peter Schneider-Kamp          ++47-7388-7331
> Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
> N-7050 Trondheim              http://schneider-kamp.de





From mal at lemburg.com  Fri Aug 18 10:05:30 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 10:05:30 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <200008180101.NAA15496@s454.cosc.canterbury.ac.nz>
Message-ID: <399CEE49.8E646DC3@lemburg.com>

Greg Ewing wrote:
> 
> > Note that the trailing slash is added by all tab-completing shells that I
> > know.
> 
> This is for the convenience of the user, who is probably going to type
> another pathname component, and also to indicate that the object found
> is a directory. It makes sense in an interactive tool, but not
> necessarily in other places.

Oh, C'mon Greg... haven't you read my reply to this ?

The trailing slash contains important information which might
otherwise not be regainable or only using explicit queries to
the storage system.

The "/" tells the program that the last path component is
a directory. Removing the slash will also remove that information
from the path (and yes: files without extension are legal).

Now, since e.g. posixpath is also used as basis for fiddling
with URLs and other tools using Unix style paths, removing
the slash will result in problems... just look at what your
browser does when you request http://www.python.org/search ...
the server redirects you to search/ to make sure that the 
links embedded in the page are relative to search/ and not
www.python.org/.
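The redirect behaviour described above follows directly from how relative references resolve; a quick check in today's Python:

```python
from urllib.parse import urljoin

# With the trailing slash, relative links resolve under search/;
# without it, they resolve against the parent.
print(urljoin("http://www.python.org/search/", "results.html"))
print(urljoin("http://www.python.org/search", "results.html"))
```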

Skip, have you already undone that change in CVS ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Fri Aug 18 10:10:01 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 04:10:01 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818091703.T376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com>

-0 on autoconf for this.

I doubt that Trent ever needs to know more than in this one place the
relative sizes of threadid and a long, and this entire function is braindead
(hence will be gutted someday) anyway.  Using the explicit test makes it
obvious to everyone; winding back thru layers of autoconf crust makes it A
Project and yet another goofy preprocessor symbol cluttering the code.

> -----Original Message-----
> From: Thomas Wouters [mailto:thomas at xs4all.net]
> Sent: Friday, August 18, 2000 3:17 AM
> To: Tim Peters
> Cc: Trent Mick; python-dev at python.org
> Subject: Re: [Python-Dev] pthreads question: typedef ??? pthread_t and
> hacky return statements
>
>
> On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:
>
> > So how about a runtime test for what's actually important (and it's not
> > Monterey!)?
> >
> > 	if (sizeof(threadid) <= sizeof(long))
> > 		return (long)threadid;
> >
> > End of problem, right?  It's a cheap runtime test in a function
> > whose speed isn't critical anyway.
>
> Note that this is what autoconf is for. It also helps to group all that
> > behaviour-testing code together, in one big lump no one pretends to
> understand ;)
>
> --
> Thomas Wouters <thomas at xs4all.net>
>
> Hi! I'm a .signature virus! copy me into your .signature file to
> help me spread!





From mal at lemburg.com  Fri Aug 18 10:30:51 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 10:30:51 +0200
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>
Message-ID: <399CF43A.478D7165@lemburg.com>

Tim Peters wrote:
> 
> Note that Guido rejected all the loop-gimmick proposals ("indexing",
> indices(), irange(), and list.items()) on Thursday, so let's stifle this
> debate until after 2.0 (or, even better, until after I'm dead <wink>).

Hey, we still have mxTools which gives you most of those goodies 
and lots more ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From nowonder at nowonder.de  Fri Aug 18 13:07:43 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Fri, 18 Aug 2000 11:07:43 +0000
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
Message-ID: <399D18FF.BD807ED5@nowonder.de>

What about 'indexing' xor 'in' ? Like that:

for i indexing sequence:      # good
for e in sequence:            # good
for i indexing e in sequence: # BAD!

This might help Guido to understand what it does in the
'indexing' case. I admit that the third one may be a
bit harder to parse, so why not *leave it out*?

But then I'm sure this has also been discussed before.
Nevertheless I'll mail Barry and volunteer for a PEP
on this.

[Tim Peters about his life]
> I've been trying to sell "indexing" for most of my adult life

then-I'll-have-to-spend-another-life-on-it-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From sjoerd at oratrix.nl  Fri Aug 18 11:42:38 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Fri, 18 Aug 2000 11:42:38 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
Message-ID: <20000818094239.A3A1931047C@bireme.oratrix.nl>

Your changes for the import X as Y feature introduced a serious bug:
I can no longer run Python at all.

The problem is that in line 2150 of compile.c com_addopname is called
with a NULL last argument, and the first thing com_addopname does is
indirect off of that very argument.  On my machine (and on many other
machines) that results in a core dump.

In case it helps, here is the stack trace.  The crash happens when
importing site.py.  I have not made any changes to my site.py.

>  0 com_addopname(c = 0x7fff1e20, op = 90, n = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":738, 0x1006cb58]
   1 com_import_stmt(c = 0x7fff1e20, n = 0x101e2ad0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2150, 0x10071ecc]
   2 com_node(c = 0x7fff1e20, n = 0x101e2ad0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2903, 0x10074764]
   3 com_node(c = 0x7fff1e20, n = 0x101eaf68) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2855, 0x10074540]
   4 com_node(c = 0x7fff1e20, n = 0x101e2908) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2864, 0x100745b0]
   5 com_node(c = 0x7fff1e20, n = 0x1020d450) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2855, 0x10074540]
   6 com_file_input(c = 0x7fff1e20, n = 0x101e28f0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3137, 0x10075324]
   7 compile_node(c = 0x7fff1e20, n = 0x101e28f0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3241, 0x100759c0]
   8 jcompile(n = 0x101e28f0, filename = 0x7fff2430 = "./../Lib/site.py", base = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3400, 0x10076058]
   9 PyNode_Compile(n = 0x101e28f0, filename = 0x7fff2430 = "./../Lib/site.py") ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3378, 0x10075f7c]
   10 parse_source_module(pathname = 0x7fff2430 = "./../Lib/site.py", fp = 0xfb563c8) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":632, 0x100151a4]
   11 load_source_module(name = 0x7fff28d8 = "site", pathname = 0x7fff2430 = "./../Lib/site.py", fp = 0xfb563c8) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":722, 0x100154c8]
   12 load_module(name = 0x7fff28d8 = "site", fp = 0xfb563c8, buf = 0x7fff2430 = "./../Lib/site.py", type = 1) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1199, 0x1001629c]
   13 import_submodule(mod = 0x101b8478, subname = 0x7fff28d8 = "site", fullname = 0x7fff28d8 = "site") ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1727, 0x10017dc4]
   14 load_next(mod = 0x101b8478, altmod = 0x101b8478, p_name = 0x7fff2d04, buf = 0x7fff28d8 = "site", p_buflen = 0x7fff28d0) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1583, 0x100174c0]
   15 import_module_ex(name = (nil), globals = (nil), locals = (nil), fromlist = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1434, 0x10016d04]
   16 PyImport_ImportModuleEx(name = 0x101d9450 = "site", globals = (nil), locals = (nil), fromlist = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1475, 0x10016fe0]
   17 PyImport_ImportModule(name = 0x101d9450 = "site") ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1408, 0x10016c64]
   18 initsite() ["/ufs/sjoerd/src/Python/dist/src/Python/pythonrun.c":429, 0x10053148]
   19 Py_Initialize() ["/ufs/sjoerd/src/Python/dist/src/Python/pythonrun.c":166, 0x100529c8]
   20 Py_Main(argc = 1, argv = 0x7fff2ec4) ["/ufs/sjoerd/src/Python/dist/src/Modules/main.c":229, 0x10013690]
   21 main(argc = 1, argv = 0x7fff2ec4) ["/ufs/sjoerd/src/Python/dist/src/Modules/python.c":10, 0x10012f24]
   22 __start() ["/xlv55/kudzu-apr12/work/irix/lib/libc/libc_n32_M4/csu/crt1text.s":177, 0x10012ec8]

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From fredrik at pythonware.com  Fri Aug 18 12:42:54 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 12:42:54 +0200
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
References: <20000816172425.A32338@ActiveState.com>
Message-ID: <003001c00901$11fd8ae0$0900a8c0@SPIFF>

trent mick wrote:
>     return (long) *(long *) &threadid;

from what I can tell, pthread_t is a pointer under OSF/1.

I've been using OSF/1 since the early days, and as far as I can
remember, you've never needed to use stupid hacks like that
to convert a pointer to a long integer. an ordinary (long) cast
should be sufficient.

> Could this be changed to
>   return threadid;
> safely?

safely, yes.  but since it isn't a long on all platforms, you might
get warnings from the compiler (see Mark's mail).

:::

from what I can tell, it's compatible with a long on all sane
platforms (Win64 doesn't support pthreads anyway ;-), so I guess the
right thing here is to remove volatile and simply use:

    return (long) threadid;

(Mark: can you try this out on your box?  setting up a Python 2.0
environment on our alphas would take more time than I can spare
right now...)

</F>




From gstein at lyra.org  Fri Aug 18 13:00:34 2000
From: gstein at lyra.org (Greg Stein)
Date: Fri, 18 Aug 2000 04:00:34 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 18, 2000 at 04:10:01AM -0400
References: <20000818091703.T376@xs4all.nl> <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com>
Message-ID: <20000818040034.F17689@lyra.org>

That is a silly approach, Tim. This is exactly what autoconf is for. Using
run-time logic to figure out something that is compile-time is Badness.

And the "but it will eventually be fixed" rationale is bogus. Gee, should we
just start loading bogus patches into Python, knowing that everything will
be fixed in the next version? Whoops. We forgot some. Oh, we can't change
those now. Well, gee. Maybe Py3K will fix it.

I realize that you're only -0 on this, but it should be at least +0...

Cheers,
-g

On Fri, Aug 18, 2000 at 04:10:01AM -0400, Tim Peters wrote:
> -0 on autoconf for this.
> 
> I doubt that Trent ever needs to know more than in this one place the
> relative sizes of threadid and a long, and this entire function is braindead
> (hence will be gutted someday) anyway.  Using the explicit test makes it
> obvious to everyone; winding back thru layers of autoconf crust makes it A
> Project and yet another goofy preprocessor symbol cluttering the code.
> 
> > -----Original Message-----
> > From: Thomas Wouters [mailto:thomas at xs4all.net]
> > Sent: Friday, August 18, 2000 3:17 AM
> > To: Tim Peters
> > Cc: Trent Mick; python-dev at python.org
> > Subject: Re: [Python-Dev] pthreads question: typedef ??? pthread_t and
> > hacky return statements
> >
> >
> > On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:
> >
> > > So how about a runtime test for what's actually important (and it's not
> > > Monterey!)?
> > >
> > > 	if (sizeof(threadid) <= sizeof(long))
> > > 		return (long)threadid;
> > >
> > > End of problem, right?  It's a cheap runtime test in a function
> > > whose speed isn't critical anyway.
> >
> > Note that this is what autoconf is for. It also helps to group all that
> > behaviour-testing code together, in one big lump noone pretends to
> > understand ;)
> >
> > --
> > Thomas Wouters <thomas at xs4all.net>
> >
> > Hi! I'm a .signature virus! copy me into your .signature file to
> > help me spread!
> 
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/



From gmcm at hypernet.com  Fri Aug 18 14:35:42 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 08:35:42 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818090743.S376@xs4all.nl>
References: <1245558070-157553278@hypernet.com>; from gmcm@hypernet.com on Thu, Aug 17, 2000 at 10:07:04PM -0400
Message-ID: <1245520353-159821909@hypernet.com>

Thomas Wouters wrote:
> On Thu, Aug 17, 2000 at 10:07:04PM -0400, Gordon McMillan wrote:

> > Um, maybe I'm not seeing something, but isn't the effect of
> > "import goom.bah as snarf" the same as "from goom import bah as
> > snarf"?
> 
> I don't understand what you're saying here. 'import goom.bah'
> imports goom, then bah, and the resulting module in the local
> namespace is 'goom'. That's existing behaviour (which I find
> perplexing, but I've never run into before ;) which has changed
> in a reliable way: the local name being stored, whatever it would
> have been in a normal import, is changed into the "as-name" by
> "as <name>".

A whole lot rides on what you mean by "resulting" above. If by 
"resulting" you mean "goom", then "import goom.bah as snarf" 
would result in my namespace having "snarf" as an alias for 
"goom", and I would use "bah" as "snarf.bah". In which case 
Greg Ewing is right, and it's "import <dotted name> as ..." 
that should be outlawed, (since that's not what anyone would 
expect).

OTOH, if by "resulting" you meant "bah", things are much 
worse, because it means you must have patched code you didn't 
understand ;-b.

> If you're saying that 'import goom.bah.baz as b' won't do what
> people expect, I agree. (But neither does 'import goom.bah.baz',
> I think :-)

I disagree with the parenthetical comment. Maybe some don't 
understand the first time they see it, but it has precedent 
(Java), and it's the only form that works in circular imports.
 
> Maybe it's the early hour, but I really don't understand the
> problem here. Of course we end up looking up 'bah' in the other
> namespace, we have to import it. And I don't know what it has to
> do with circular import either ;P

"goom.bah" ends up looking in "goom" when *used*. If, in a 
circular import situation, "goom" is already being imported, an 
"import goom.bah" will succeed, even though it can't access 
"bah" in "goom". The code sees it in sys.modules, sees that 
it's being imported, and says, "Oh heck, let's keep going, it'll 
be there before it gets used".

But "from goom import bah" will fail with an import error 
because "goom" is an empty shell, so there's no way to grab 
"bah" and bind it into the local namespace.
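The "empty shell" scenario can be reproduced with a throwaway package (the names `goom` and `bah` are from this discussion; the package is built in a temp directory purely for illustration, and the exact failure mode of `from goom import bah` is interpreter-version-dependent, so only the succeeding form is shown):

```python
# Recreate the circular import: goom/__init__.py imports goom.bah,
# and goom.bah imports goom back while goom is still half-initialized.
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "goom")
os.mkdir(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("import goom.bah\n")
with open(os.path.join(pkg, "bah.py"), "w") as f:
    f.write("import goom\n")  # circular, but only binds the name

sys.path.insert(0, tmp)
import goom.bah  # succeeds: attribute access on goom is deferred
print(goom.bah.__name__)
```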
 


- Gordon



From bwarsaw at beopen.com  Fri Aug 18 15:27:05 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 09:27:05 -0400 (EDT)
Subject: [Python-Dev] 'import as'
References: <1245558070-157553278@hypernet.com>
	<1245520353-159821909@hypernet.com>
Message-ID: <14749.14761.275554.898385@anthem.concentric.net>

>>>>> "Gordo" == Gordon McMillan <gmcm at hypernet.com> writes:

    Gordo> A whole lot rides on what you mean by "resulting" above. If
    Gordo> by "resulting" you mean "goom", then "import goom.bah as
    Gordo> snarf" would result in my namespace having "snarf" as an
    Gordo> alias for "goom", and I would use "bah" as "snarf.bah". In
    Gordo> which case Greg Ewing is right, and it's "import <dotted
    Gordo> name> as ..."  that should be outlawed, (since that's not
    Gordo> what anyone would expect).

Right.

    Gordo> OTOH, if by "resulting" you meant "bah", things are much 
    Gordo> worse, because it means you must have patched code you didn't 
    Gordo> understand ;-b.

But I think it /is/ useful behavior for "import <dotted name> as" to
bind the rightmost attribute to the local name.  I agree though that
if that can't be done in a sane way, it has to raise an exception.
But that will frustrate users.

-Barry



From gmcm at hypernet.com  Fri Aug 18 15:28:00 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 09:28:00 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <399CEE49.8E646DC3@lemburg.com>
Message-ID: <1245517214-160010723@hypernet.com>

M.-A. Lemburg wrote:

> ... just look at what your browser does
> when you request http://www.python.org/search ... the server
> redirects you to search/ to make sure that the links embedded in
> the page are relative to search/ and not www.python.org/.

While that seems to be what Apache does, I get 40x's from 
IIS and Netscape server. Greg Ewing's demonstrated a Unix 
where the trailing slash indicates nothing useful, Tim's 
demonstrated that Windows gets confused by a trailing slash 
unless we're talking about the root directory on a drive (and 
BTW, same results if you use backslash).

On Windows, os.path.commonprefix doesn't use normcase 
and normpath, so it's completely useless anyway. (That is, it's 
really a "string" function and has nothing to do with paths).

- Gordon



From nascheme at enme.ucalgary.ca  Fri Aug 18 15:55:41 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 18 Aug 2000 07:55:41 -0600
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <399CF43A.478D7165@lemburg.com>; from M.-A. Lemburg on Fri, Aug 18, 2000 at 10:30:51AM +0200
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com> <399CF43A.478D7165@lemburg.com>
Message-ID: <20000818075541.A14919@keymaster.enme.ucalgary.ca>

On Fri, Aug 18, 2000 at 10:30:51AM +0200, M.-A. Lemburg wrote:
> Hey, we still have mxTools which gives you most of those goodies 
> and lots more ;-)

Yes, I don't understand what's wrong with a function.  It would be nice
if it was a builtin.  IMHO, all this new syntax is a bad idea.
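The function Neil has in mind needs no new syntax at all; a minimal version (essentially the behaviour a later Python shipped as the builtin enumerate()) is:

```python
def indexing(seq):
    # Pair each index with its element; the function-based answer
    # to the proposed 'for i indexing e in seq' syntax.
    return zip(range(len(seq)), seq)

for i, e in indexing("abc"):
    print(i, e)  # prints: 0 a / 1 b / 2 c
```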

  Neil



From fdrake at beopen.com  Fri Aug 18 16:12:24 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 10:12:24 -0400 (EDT)
Subject: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]  Lockstep iteration - eureka!)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
References: <399CEE7F.F2B865D2@nowonder.de>
	<LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
Message-ID: <14749.17480.153311.549655@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > I'm stifling it, but, FWIW, I've been trying to sell "indexing" for most of
 > my adult life <wink -- but yes, in my experience too range(len(seq)) is
 > extraordinarily hard to get across to newbies at first; and I *expect*
 > [:len(seq)] to be at least as hard>.

  And "for i indexing o in ...:" is the best proposal I've seen to
resolve the whole problem in what *I* would describe as a Pythonic
way.  And it's not a new proposal.
  I haven't read the specific patch, but bugs can be fixed.  I guess a
lot of us will just have to disagree with the Guido on this one.  ;-(
  Linguistic coup, anyone?  ;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Fri Aug 18 16:17:46 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 16:17:46 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000818094239.A3A1931047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Fri, Aug 18, 2000 at 11:42:38AM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>
Message-ID: <20000818161745.U376@xs4all.nl>

On Fri, Aug 18, 2000 at 11:42:38AM +0200, Sjoerd Mullender wrote:

> Your changes for the import X as Y feature introduced a serious bug:
> I can no longer run Python at all.

> The problem is that in line 2150 of compile.c com_addopname is called
> with a NULL last argument, and the first thing com_addopname does is
> indirect off of that very argument.  On my machine (and on many other
> machines) that results in a core dump.

Hm. That's very strange. Line 2150 of compile.c calls com_addopname with
'CHILD(CHILD(subn, 0), 0)' as argument. 'subn' is supposed to be a
'dotted_as_name', which always has at least one child (a dotted_name), which
also always has at least one child (a NAME). I don't see how dotted_as_name
and dotted_name can be valid nodes, but the first child of dotted_name be
NULL.

Can you confirm that the tree is otherwise unmodified ? If you have local
patches, can you try to compile and test a 'clean' tree ? I can't reproduce
this on the machines I have access to, so if you could find out what
statement exactly is causing this behaviour, I'd be very grateful. Something
like this should do the trick, changing:

                        } else
                                com_addopname(c, STORE_NAME,
                                              CHILD(CHILD(subn, 0),0));

into

                        } else {
                                if (CHILD(CHILD(subn, 0), 0) == NULL) {
                                        com_error(c, PyExc_SyntaxError,
                                                  "NULL name for import");
                                        return;
                                }
                                com_addopname(c, STORE_NAME,
                                              CHILD(CHILD(subn, 0),0));
                        }

And then recompile, and remove site.pyc if there is one. (Unlikely, if a
crash occurred while compiling site.py, but possible.) This should raise a
SyntaxError on or about the appropriate line, at least identifying what the
problem *could* be ;)

If that doesn't yield anything obvious, and you have the time for it (and
want to do it), some 'print' statements in the debugger might help. (I'm
assuming it works more or less like GDB, in which case 'print n', 'print
n->n_child[1]', 'print subn', 'print subn->n_child[0]' and 'print
subn->n_child[1]' would be useful. I'm also assuming there isn't an easier
way to debug this, like you sending me a corefile, because corefiles
normally aren't very portable :P If it *is* portable, that'd be great.)

> In case it helps, here is the stack trace.  The crash happens when
> importing site.py.  I have not made any changes to my site.py.

Oh, it's probably worth it to re-make the Grammar (just to be sure) and
remove Lib/*.pyc. The bytecode magic changes in the patch, so that last
measure should be unnecessary, but who knows :P

breaky-breaky-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug 18 16:21:20 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 10:21:20 -0400 (EDT)
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
In-Reply-To: <399D18FF.BD807ED5@nowonder.de>
References: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
	<399D18FF.BD807ED5@nowonder.de>
Message-ID: <14749.18016.323403.295212@cj42289-a.reston1.va.home.com>

Peter Schneider-Kamp writes:
 > What about 'indexing' xor 'in' ? Like that:
 > 
 > for i indexing sequence:      # good
 > for e in sequence:            # good
 > for i indexing e in sequence: # BAD!
 > 
 > This might help Guido to understand what it does in the
 > 'indexing' case. I admit that the third one may be a
 > bit harder to parse, so why not *leave it out*?

  I hadn't considered *not* using an "in" clause, but that is actually
pretty neat.  I'd like to see all of these allowed; disallowing "for i
indexing e in ...:" reduces the intended functionality substantially.
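For reference, each proposed spelling has a straightforward equivalent in the existing language, which is roughly what the forms would desugar to:

```python
seq = ["a", "b", "c"]

# for i indexing seq:           -- index only
indices = [i for i in range(len(seq))]

# for e in seq:                 -- element only (already legal)
elements = [e for e in seq]

# for i indexing e in seq:      -- both at once
pairs = [(i, e) for i, e in zip(range(len(seq)), seq)]

print(indices, elements, pairs)
```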


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gmcm at hypernet.com  Fri Aug 18 16:42:20 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 10:42:20 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <14749.14761.275554.898385@anthem.concentric.net>
Message-ID: <1245512755-160278942@hypernet.com>

Barry "5 String" Warsaw wrote:

> But I think it /is/ useful behavior for "import <dotted name> as"
> to bind the rightmost attribute to the local name. 

That is certainly what I would expect (and thus the confusion 
created by my original post).

> I agree
> though that if that can't be done in a sane way, it has to raise
> an exception. But that will frustrate users.

"as" is minorly useful in dealing with name clashes between 
packages, and with reallyreallylongmodulename.

Unfortunately, it's also yet another way of screwing up circular 
imports and reloads, (unless you go whole hog, and use Jim 
Fulton's idea of an association object).

Then there's all the things that can go wrong with relative 
imports (loading the same module twice; masking anything 
outside the package with the same name).

It's not surprising that most import problems posted to c.l.py 
get more wrong answers than right. Unfortunately, there's no 
way to fix it in a backwards compatible way.

So I'm -0: it just adds complexity to an already overly complex 
area.

- Gordon



From bwarsaw at beopen.com  Fri Aug 18 16:55:18 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 10:55:18 -0400 (EDT)
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev] Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>
	<399CF43A.478D7165@lemburg.com>
	<20000818075541.A14919@keymaster.enme.ucalgary.ca>
Message-ID: <14749.20054.495550.467507@anthem.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

    NS> On Fri, Aug 18, 2000 at 10:30:51AM +0200, M.-A. Lemburg wrote:
    >> Hey, we still have mxTools which gives you most of those
    >> goodies and lots more ;-)

    NS> Yes, I don't understand what's wrong with a function.  It
    NS> would be nice if it was a builtin.  IMHO, all this new syntax
    NS> is a bad idea.

I agree, but Guido nixed even the builtin.  Let's move on; there's
always Python 2.1.

-Barry



From akuchlin at mems-exchange.org  Fri Aug 18 17:00:37 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 11:00:37 -0400
Subject: [Python-Dev] Adding insint() function
Message-ID: <20000818110037.C27419@kronos.cnri.reston.va.us>

Four modules define insint() functions to insert an integer into a
dictionary in order to initialize constants in their module
dictionaries:

kronos Modules>grep -l insint *.c
pcremodule.c
shamodule.c
socketmodule.c
zlibmodule.c
kronos Modules>          

(Hm... I was involved with 3 of them...)  Other modules don't use a
helper function, but just do PyDict_SetItemString(d, "foo",
PyInt_FromLong(...)) directly.  

This duplication bugs me.  Shall I submit a patch to add an API
convenience function to do this, and change the modules to use it?
Suggested prototype and name: PyDict_InsertInteger( dict *, string,
long)
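A sketch of what such a helper might look like (a fragment against the 2.0-era CPython C API, not a standalone program; the name follows Andrew's suggested prototype, and the error check plus Py_DECREF is exactly the part the open-coded versions tend to get wrong):

```c
/* Hypothetical helper per the suggested prototype; assumes Python.h.
   Returns 0 on success, -1 on error with an exception set. */
static int
PyDict_InsertInteger(PyObject *dict, const char *name, long value)
{
    PyObject *v = PyInt_FromLong(value);
    int rc;

    if (v == NULL)
        return -1;
    rc = PyDict_SetItemString(dict, name, v);
    Py_DECREF(v);   /* the dict holds its own reference */
    return rc;
}
```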

--amk






From bwarsaw at beopen.com  Fri Aug 18 17:06:11 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 11:06:11 -0400 (EDT)
Subject: [Python-Dev] 'import as'
References: <14749.14761.275554.898385@anthem.concentric.net>
	<1245512755-160278942@hypernet.com>
Message-ID: <14749.20707.347217.763385@anthem.concentric.net>

>>>>> "Gordo" == Gordon "Punk Cellist" McMillan <gmcm at hypernet.com> writes:

    Gordo> So I'm -0: it just adds complexity to an already overly
    Gordo> complex area.

I agree, -0 from me too.



From sjoerd at oratrix.nl  Fri Aug 18 17:06:37 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Fri, 18 Aug 2000 17:06:37 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: Your message of Fri, 18 Aug 2000 16:17:46 +0200.
             <20000818161745.U376@xs4all.nl> 
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> 
            <20000818161745.U376@xs4all.nl> 
Message-ID: <20000818150639.6685C31047C@bireme.oratrix.nl>

Ok, problem solved.

The problem was that because of your (I think it was you :-) earlier
change to have a Makefile in Grammar, I had an old graminit.c lying
around in my build directory.  I don't build in the source directory
and the changes for a Makefile in Grammar resulted in a file
graminit.c in the wrong place.  My subsequent change to this part of
the build process resulted in a different place for graminit.c and I
never removed the bogus graminit.c (because I didn't know about it).
However, the compiler found the bogus one so that's why python
crashed.

On Fri, Aug 18 2000 Thomas Wouters wrote:

> On Fri, Aug 18, 2000 at 11:42:38AM +0200, Sjoerd Mullender wrote:
> 
> > Your changes for the import X as Y feature introduced a serious bug:
> > I can no longer run Python at all.
> 
> > The problem is that in line 2150 of compile.c com_addopname is called
> > with a NULL last argument, and the first thing com_addopname does is
> > indirect off of that very argument.  On my machine (and on many other
> > machines) that results in a core dump.
> 
> Hm. That's very strange. Line 2150 of compile.c calls com_addopname with
> 'CHILD(CHILD(subn, 0), 0)' as argument. 'subn' is supposed to be a
> 'dotted_as_name', which always has at least one child (a dotted_name), which
> also always has at least one child (a NAME). I don't see how dotted_as_name
> and dotted_name can be valid nodes, but the first child of dotted_name be
> NULL.
> 
> Can you confirm that the tree is otherwise unmodified ? If you have local
> patches, can you try to compile and test a 'clean' tree ? I can't reproduce
> this on the machines I have access to, so if you could find out what
> statement exactly is causing this behaviour, I'd be very grateful. Something
> like this should do the trick, changing:
> 
>                         } else
>                                 com_addopname(c, STORE_NAME,
>                                               CHILD(CHILD(subn, 0),0));
> 
> into
> 
>                         } else {
>                                 if (CHILD(CHILD(subn, 0), 0) == NULL) {
>                                         com_error(c, PyExc_SyntaxError,
>                                                   "NULL name for import");
>                                         return;
>                                 }
>                                 com_addopname(c, STORE_NAME,
>                                               CHILD(CHILD(subn, 0),0));
>                         }
> 
> And then recompile, and remove site.pyc if there is one. (Unlikely, if a
> crash occurred while compiling site.py, but possible.) This should raise a
> SyntaxError on or about the appropriate line, at least identifying what the
> problem *could* be ;)
> 
> If that doesn't yield anything obvious, and you have the time for it (and
> want to do it), some 'print' statements in the debugger might help. (I'm
> assuming it works more or less like GDB, in which case 'print n', 'print
> n->n_child[1]', 'print subn', 'print subn->n_child[0]' and 'print
> subn->n_child[1]' would be useful. I'm also assuming there isn't an easier
> way to debug this, like you sending me a corefile, because corefiles
> normally aren't very portable :P If it *is* portable, that'd be great.)
> 
> > In case it helps, here is the stack trace.  The crash happens when
> > importing site.py.  I have not made any changes to my site.py.
> 
> Oh, it's probably worth it to re-make the Grammar (just to be sure) and
> remove Lib/*.pyc. The bytecode magic changes in the patch, so that last
> measure should be unnecessary, but who knows :P
> 
> breaky-breaky-ly y'rs,
> -- 
> Thomas Wouters <thomas at xs4all.net>
> 
> Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
> 

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From bwarsaw at beopen.com  Fri Aug 18 17:27:41 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 11:27:41 -0400 (EDT)
Subject: [Python-Dev] Adding insint() function
References: <20000818110037.C27419@kronos.cnri.reston.va.us>
Message-ID: <14749.21997.872741.463566@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin at mems-exchange.org> writes:

    AK> Four modules define insint() functions to insert an integer
    AK> into a dictionary in order to initialize constants in their
    AK> module dictionaries:

    | kronos Modules>grep -l insint *.c
    | pcremodule.c
    | shamodule.c
    | socketmodule.c
    | zlibmodule.c
    | kronos Modules>          

    AK> (Hm... I was involved with 3 of them...)  Other modules don't
    AK> use a helper function, but just do PyDict_SetItemString(d,
    AK> "foo", PyInt_FromLong(...)) directly.

    AK> This duplication bugs me.  Shall I submit a patch to add an
    AK> API convenience function to do this, and change the modules to
    AK> use it?  Suggested prototype and name: PyDict_InsertInteger(
    AK> dict *, string, long)

+0, but it should probably be called PyDict_SetItemSomething().  It
seems more related to the other PyDict_SetItem*() functions, even
though in those cases the `*' refers to the type of the key, not the
value.

-Barry



From akuchlin at mems-exchange.org  Fri Aug 18 17:29:47 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 11:29:47 -0400
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <14749.21997.872741.463566@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 11:27:41AM -0400
References: <20000818110037.C27419@kronos.cnri.reston.va.us> <14749.21997.872741.463566@anthem.concentric.net>
Message-ID: <20000818112947.F27419@kronos.cnri.reston.va.us>

On Fri, Aug 18, 2000 at 11:27:41AM -0400, Barry A. Warsaw wrote:
>+0, but it should probably be called PyDict_SetItemSomething().  It
>seems more related to the other PyDict_SetItem*() functions, even
>though in those cases the `*' refers to the type of the key, not the
>value.

PyDict_SetItemInteger seems misleading; PyDict_SetItemStringToInteger 
is simply long.  PyDict_SetIntegerItem, maybe?  :)

Anyway, I'll start working on a patch and change the name later once
there's a good suggestion.

--amk



From mal at lemburg.com  Fri Aug 18 17:41:14 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 17:41:14 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <1245517214-160010723@hypernet.com>
Message-ID: <399D591A.F909CF9D@lemburg.com>

Gordon McMillan wrote:
> 
> M.-A. Lemburg wrote:
> 
> > ... just look at what your browser does
> > when you request http://www.python.org/search ... the server
> > redirects you to search/ to make sure that the links embedded in
> > the page are relative to search/ and not www.python.org/.
> 
> While that seems to be what Apache does, I get 40x's from
> IIS and Netscape server. Greg Ewing's demonstrated a Unix
> where the trailing slash indicates nothing useful, Tim's
> demonstrated that Windows gets confused by a trailing slash
> unless we're talking about the root directory on a drive (and
> BTW, same results if you use backslash).
> 
> > On Windows, os.path.commonprefix doesn't use normcase
> and normpath, so it's completely useless anyway. (That is, it's
> really a "string" function and has nothing to do with paths).

I still don't get it: what's the point in carelessly dropping
valid and useful information for no obvious reason at all ?

Besides the previous behaviour was documented and most probably
used in some apps. Why break those ?

And last not least: what if the directory in question doesn't
even exist anywhere and is only encoded in the path by the fact
that there is a slash following it ?

Puzzled by needless discussions ;-),
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Fri Aug 18 17:42:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 11:42:59 -0400 (EDT)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <14749.21997.872741.463566@anthem.concentric.net>
References: <20000818110037.C27419@kronos.cnri.reston.va.us>
	<14749.21997.872741.463566@anthem.concentric.net>
Message-ID: <14749.22915.717712.613834@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > +0, but it should probably be called PyDict_SetItemSomething().  It
 > seems more related to the other PyDict_SetItem*() functions, even
 > though in those cases the `*' refers to the type of the key, not the
 > value.

  Hmm... How about PyDict_SetItemStringInt() ?  It's still long, but I
don't think that's actually a problem.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Fri Aug 18 18:22:46 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 18:22:46 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <14749.20707.347217.763385@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 11:06:11AM -0400
References: <14749.14761.275554.898385@anthem.concentric.net> <1245512755-160278942@hypernet.com> <14749.20707.347217.763385@anthem.concentric.net>
Message-ID: <20000818182246.V376@xs4all.nl>

On Fri, Aug 18, 2000 at 11:06:11AM -0400, Barry A. Warsaw wrote:

>     Gordo> So I'm -0: it just adds complexity to an already overly
>     Gordo> complex area.

> I agree, -0 from me too.

What are we voting on, here ?

import <name> as <localname> (in general)

or

import <name1>.<nameN>+ as <localname> (where localname turns out an alias
for name1)

or

import <name1>.<nameN>*.<nameX> as <localname> (where localname turns out an
alias for nameX, that is, the last part of the dotted name that's being
imported)

? 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Fri Aug 18 18:28:49 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 12:28:49 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <399D591A.F909CF9D@lemburg.com>
Message-ID: <1245506365-160663281@hypernet.com>

M.-A. Lemburg wrote:
> Gordon McMillan wrote:
> > 
> > M.-A. Lemburg wrote:
> > 
> > > ... just look at what your browser does
> > > when you request http://www.python.org/search ... the server
> > > redirects you to search/ to make sure that the links embedded
> > > in the page are relative to search/ and not www.python.org/.
> > 
> > While that seems to be what Apache does, I get 40x's from
> > IIS and Netscape server. Greg Ewing's demonstrated a Unix
> > where the trailing slash indicates nothing useful, Tim's
> > demonstrated that Windows gets confused by a trailing slash
> > unless we're talking about the root directory on a drive (and
> > BTW, same results if you use backslash).
> > 
> > On Windows, os.path.commonprefix doesn't use normcase
> > and normpath, so it's completely useless anyway. (That is, it's
> > really a "string" function and has nothing to do with paths).
> 
> I still don't get it: what's the point in carelessly dropping
> valid and useful information for no obvious reason at all ?

I wasn't advocating anything. I was pointing out that it's not 
necessarily "valid" or "useful" information in all contexts.
 
> Besides the previous behaviour was documented and most probably
> used in some apps. Why break those ?

I don't think commonprefix should be changed, precisely 
because it might break apps. I also think it should not live in 
os.path, because it is not an abstract path operation. It's just 
a string operation. But it's there, so the best I can advise is 
not to use it.
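The "string function" point is easy to demonstrate: commonprefix compares character by character, so it happily returns a prefix that isn't a path component of either input:

```python
import os.path

# Character-wise, not component-wise: "/usr/l" is not a directory
# in either of the inputs.
prefix = os.path.commonprefix(["/usr/lib", "/usr/local/lib"])
print(prefix)  # -> /usr/l
```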
 
> And last not least: what if the directory in question doesn't
> even exist anywhere and is only encoded in the path by the fact
> that there is a slash following it ?

If it doesn't exist, it's not a directory with or without a slash 
following it. The fact that Python largely successfully reuses 
os.path code to deal with URLs does not mean that the 
syntax of URLs should be mistaken for the syntax of file 
systems, even at an abstract level. At the level where 
commonprefix operates, abstraction isn't even a concept.

- Gordon



From gmcm at hypernet.com  Fri Aug 18 18:33:12 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 12:33:12 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818182246.V376@xs4all.nl>
References: <14749.20707.347217.763385@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 11:06:11AM -0400
Message-ID: <1245506103-160679086@hypernet.com>

Thomas Wouters wrote:
> On Fri, Aug 18, 2000 at 11:06:11AM -0400, Barry A. Warsaw wrote:
> 
> >     Gordo> So I'm -0: it just adds complexity to an already
> >     overly Gordo> complex area.
> 
> > I agree, -0 from me too.
> 
> What are we voting on, here ?
> 
> import <name> as <localname> (in general)

 -0

> import <name1>.<nameN>+ as <localname> (where localname turns out
> an alias for name1)

-1000

> import <name1>.<nameN>*.<nameX> as <localname> (where localname
> turns out an alias for nameX, that is, the last part of the
> dotted name that's being imported)

-0



- Gordon



From thomas at xs4all.net  Fri Aug 18 18:41:31 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 18:41:31 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000818150639.6685C31047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Fri, Aug 18, 2000 at 05:06:37PM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl>
Message-ID: <20000818184131.W376@xs4all.nl>

On Fri, Aug 18, 2000 at 05:06:37PM +0200, Sjoerd Mullender wrote:
> Ok, problem solved.

> The problem was that because of your (I think it was you :-) earlier
> change to have a Makefile in Grammar, I had an old graminit.c lying
> around in my build directory. 

Right. That patch was mine, and I think we should remove it again :P We
aren't changing Grammar *that* much, and we'll just have to 'make Grammar'
individually. Grammar now also gets re-made much too often (though that
doesn't really present any problems, it's just a tad sloppy.) Do we really
want that in the released package ?

The Grammar dependency can't really be solved until dependencies in general
are handled better (or at all), especially between directories. It'll only
create more Makefile spaghetti :P
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug 18 18:41:35 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 12:41:35 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <1245506365-160663281@hypernet.com>
References: <399D591A.F909CF9D@lemburg.com>
	<1245506365-160663281@hypernet.com>
Message-ID: <14749.26431.198802.970572@cj42289-a.reston1.va.home.com>

Gordon McMillan writes:
 > I don't think commonprefix should be changed, precisely 
 > because it might break apps. I also think it should not live in 
 > os.path, because it is not an abstract path operation. It's just 
 > a string operation. But it's there, so the best I can advise is 
 > not to use it.

  This works.  Let's accept (some variant of) Skip's desired
functionality as os.path.splitprefix(); this avoids breaking existing
code and uses a name that's consistent with the others.  The result
can be (prefix, [list of suffixes]).  Trailing slashes should be
handled so that os.path.join(prefix, suffix) does the "right thing".


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Fri Aug 18 18:37:02 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 12:37:02 -0400 (EDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818182246.V376@xs4all.nl>
References: <14749.14761.275554.898385@anthem.concentric.net>
	<1245512755-160278942@hypernet.com>
	<14749.20707.347217.763385@anthem.concentric.net>
	<20000818182246.V376@xs4all.nl>
Message-ID: <14749.26158.777771.458507@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > What are we voting on, here ?

  We should be really clear about this, since it is confusing.

 > import <name> as <localname> (in general)

  +1 for this basic usage.

 > import <name1>.<nameN>+ as <localname> (where localname turns out an alias
 > for name1)

  -1, because it's confusing for users

 > import <name1>.<nameN>*.<nameX> as <localname> (where localname turns out an
 > alias for nameX, that is, the last part of the dotted name that's being
 > imported)

  +1 on the idea, but the circular import issue is very real and I'm
not sure of the best way to solve it.
  For now, let's support:

	import name1 as localname

where neither name1 nor localname can be dotted.  The dotted-name1
case can be added when the circular import issue can be dealt with.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From trentm at ActiveState.com  Fri Aug 18 18:54:12 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 09:54:12 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEHNHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 18, 2000 at 02:30:40AM -0400
References: <20000817164137.U17689@lyra.org> <LNBBLJKPBEHFEDALKOLCGEHNHAAA.tim_one@email.msn.com>
Message-ID: <20000818095412.C11316@ActiveState.com>

On Fri, Aug 18, 2000 at 02:30:40AM -0400, Tim Peters wrote:
> [Greg Stein]
> > ...
> > IOW, an x-plat TLS is going to be done at some point. If you need it now,
> > then please do it now. That will help us immeasurably in the long run.
> 
> the former's utter bogosity.  From Trent's POV, I bet the one-liner
> workaround sounds more appealing.
> 

Yes.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Fri Aug 18 18:56:24 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 09:56:24 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818040034.F17689@lyra.org>; from gstein@lyra.org on Fri, Aug 18, 2000 at 04:00:34AM -0700
References: <20000818091703.T376@xs4all.nl> <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com> <20000818040034.F17689@lyra.org>
Message-ID: <20000818095624.D11316@ActiveState.com>

On Fri, Aug 18, 2000 at 04:00:34AM -0700, Greg Stein wrote:
> That is a silly approach, Tim. This is exactly what autoconf is for. Using
> run-time logic to figure out something that is compile-time is Badness.
> 
> > > On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:
> > >
> > > > So how about a runtime test for what's actually important (and it's not
> > > > Monterey!)?
> > > >
> > > > 	if (sizeof(threadid) <= sizeof(long))
> > > > 		return (long)threadid;
> > > >
> > > > End of problem, right?  It's a cheap runtime test in a function
> > > > whose speed isn't critical anyway.
> > >

I am inclined to agree with Thomas and Greg on this one. Why not check for
sizeof(pthread_t) if pthread.h exists and test:

#if SIZEOF_PTHREAD_T < SIZEOF_LONG
    return (long)threadid;
#endif


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Fri Aug 18 19:09:05 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 13:09:05 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818040034.F17689@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEIPHAAA.tim_one@email.msn.com>

[Greg Stein]
> That is a silly approach, Tim. This is exactly what autoconf is for.
> Using run-time logic to figure out something that is compile-time
> is Badness.

Remain -0.  autoconf may work slick as snot on Unix derivatives, but each
new symbol it introduces also serves to make people on other platforms
scratch their heads about what it means and what they're supposed to do with
it in their manual config files.  In this case, the alternative is an
obvious and isolated 1-liner that's transparent on inspection regardless of
the reader's background.  You haven't noted a *downside* to that approach
that I can see, and your technical objection is incorrect:  sizeof is not a
compile-time operation (try it in an #if, but make very sure it does what
you think it does <wink>).





From tim_one at email.msn.com  Fri Aug 18 19:09:07 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 13:09:07 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <003001c00901$11fd8ae0$0900a8c0@SPIFF>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIPHAAA.tim_one@email.msn.com>

[/F]
> from what I can tell, it's compatible with a long on all sane plat-
> forms (Win64 doesn't support pthreads anyway ;-), so I guess the
> right thing here is to remove volatile and simply use:
>
>     return (long) threadid;

That's what the code originally did, and the casting was introduced in
version 2.5.  As for the "volatile", Vladimir reported that he needed that.

This isn't worth the brain cell it's getting.  Put in the hack and move on
already!





From trentm at ActiveState.com  Fri Aug 18 19:23:39 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 10:23:39 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
In-Reply-To: <200008180501.WAA28237@slayer.i.sourceforge.net>; from bwarsaw@users.sourceforge.net on Thu, Aug 17, 2000 at 10:01:22PM -0700
References: <200008180501.WAA28237@slayer.i.sourceforge.net>
Message-ID: <20000818102339.E11316@ActiveState.com>

On Thu, Aug 17, 2000 at 10:01:22PM -0700, Barry Warsaw wrote:
> Update of /cvsroot/python/python/dist/src/Objects
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv28173
> 
> Modified Files:
> 	object.c 
> Log Message:
> make_pair(): When comparing the pointers, they must be cast to integer
> types (i.e. Py_uintptr_t, our spelling of C9X's uintptr_t).  ANSI
> specifies that pointer compares other than == and != to non-related
> structures are undefined.  This quiets an Insure portability warning.
> 
> 
> Index: object.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Objects/object.c,v
> retrieving revision 2.95
> retrieving revision 2.96
> diff -C2 -r2.95 -r2.96
> *** object.c	2000/08/16 12:24:51	2.95
> --- object.c	2000/08/18 05:01:19	2.96
> ***************
> *** 372,375 ****
> --- 372,377 ----
>   {
>   	PyObject *pair;
> + 	Py_uintptr_t iv = (Py_uintptr_t)v;
> + 	Py_uintptr_t iw = (Py_uintptr_t)w;
>   
>   	pair = PyTuple_New(2);
> ***************
> *** 377,381 ****
>   		return NULL;
>   	}
> ! 	if (v <= w) {
>   		PyTuple_SET_ITEM(pair, 0, PyLong_FromVoidPtr((void *)v));
>   		PyTuple_SET_ITEM(pair, 1, PyLong_FromVoidPtr((void *)w));
> --- 379,383 ----
>   		return NULL;
>   	}
> ! 	if (iv <= iw) {
>   		PyTuple_SET_ITEM(pair, 0, PyLong_FromVoidPtr((void *)v));
>   		PyTuple_SET_ITEM(pair, 1, PyLong_FromVoidPtr((void *)w));
> ***************
> *** 488,492 ****
>   	}
>   	if (vtp->tp_compare == NULL) {
> ! 		return (v < w) ? -1 : 1;
>   	}
>   	_PyCompareState_nesting++;
> --- 490,496 ----
>   	}
>   	if (vtp->tp_compare == NULL) {
> ! 		Py_uintptr_t iv = (Py_uintptr_t)v;
> ! 		Py_uintptr_t iw = (Py_uintptr_t)w;
> ! 		return (iv < iw) ? -1 : 1;
>   	}
>   	_PyCompareState_nesting++;
> 

Can't you just do the cast for the comparison instead of making new
variables?

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From bwarsaw at beopen.com  Fri Aug 18 19:41:50 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 13:41:50 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
References: <200008180501.WAA28237@slayer.i.sourceforge.net>
	<20000818102339.E11316@ActiveState.com>
Message-ID: <14749.30046.345520.779328@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

    TM> Can't you just do the cast for the comparison instead of
    TM> making new variables?

Does it matter?



From trentm at ActiveState.com  Fri Aug 18 19:47:52 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 10:47:52 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
In-Reply-To: <14749.30046.345520.779328@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 01:41:50PM -0400
References: <200008180501.WAA28237@slayer.i.sourceforge.net> <20000818102339.E11316@ActiveState.com> <14749.30046.345520.779328@anthem.concentric.net>
Message-ID: <20000818104752.A15002@ActiveState.com>

On Fri, Aug 18, 2000 at 01:41:50PM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:
> 
>     TM> Can't you just do the cast for the comparison instead of
>     TM> making new variables?
> 
> Does it matter?

No, I guess not. Just being a nitpicker first thing in the morning. Revving
up for real work. :)

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Fri Aug 18 19:52:20 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 13:52:20 -0400
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <20000818110037.C27419@kronos.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEJCHAAA.tim_one@email.msn.com>

[Andrew Kuchling]
> Four modules define insint() functions to insert an integer into a

  Five                         or macro

> dictionary in order to initialize constants in their module
> dictionaries:
>
> kronos Modules>grep -l insint *.c
> pcremodule.c
> shamodule.c
> socketmodule.c
> zlibmodule.c
> kronos Modules>

It's actually a macro in shamodule.  Missing is _winreg.c (in the PC
directory).  The perils of duplication manifest in subtle differences among
these guys (like _winreg's inserts a long while the others insert an int --
and _winreg is certainly more correct here because a Python int *is* a C
long; and they differ in treatment of errors, and it's not at all clear
that's intentional).

> ...
> This duplication bugs me.  Shall I submit a patch to add an API
> convenience function to do this, and change the modules to use it?
> Suggested prototype and name: PyDict_InsertInteger( dict *, string,
> long)

+1, provided the treatment of errors is clearly documented.





From akuchlin at mems-exchange.org  Fri Aug 18 19:58:33 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 13:58:33 -0400
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEJCHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 18, 2000 at 01:52:20PM -0400
References: <20000818110037.C27419@kronos.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCIEJCHAAA.tim_one@email.msn.com>
Message-ID: <20000818135833.K27419@kronos.cnri.reston.va.us>

On Fri, Aug 18, 2000 at 01:52:20PM -0400, Tim Peters wrote:
>+1, provided the treatment of errors is clearly documented.

The treatment of errors in module init functions seems to be simply
charge ahead and do the inserts, and then do 'if (PyErr_Occurred())
Py_FatalError())'.  The new function will probably return NULL if
there's an error, but I doubt anyone will check it; it's too ungainly
to write 
  if ( (PyDict_SetItemStringInt(d, "foo", FOO)) == NULL ||
       (PyDict_SetItemStringInt(d, "bar", BAR)) == NULL || 
       ... repeat for 187 more constants ...

--amk
       




From Vladimir.Marangozov at inrialpes.fr  Fri Aug 18 20:17:53 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 18 Aug 2000 20:17:53 +0200 (CEST)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <20000818135833.K27419@kronos.cnri.reston.va.us> from "Andrew Kuchling" at Aug 18, 2000 01:58:33 PM
Message-ID: <200008181817.UAA07799@python.inrialpes.fr>

Andrew Kuchling wrote:
> 
> On Fri, Aug 18, 2000 at 01:52:20PM -0400, Tim Peters wrote:
> >+1, provided the treatment of errors is clearly documented.
> 
> The treatment of errors in module init functions seems to be simply
> charge ahead and do the inserts, and then do 'if (PyErr_Occurred())
> Py_FatalError())'.  The new function will probably return NULL if
> there's an error, but I doubt anyone will check it; it's too ungainly
> to write 
>   if ( (PyDict_SetItemStringInt(d, "foo", FOO)) == NULL ||
>        (PyDict_SetItemStringInt(d, "bar", BAR)) == NULL || 
>        ... repeat for 187 more constants ...

:-)

So name it PyModule_AddConstant(module, name, constant),
which fails with "can't add constant to module" err msg.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tim_one at email.msn.com  Fri Aug 18 20:24:57 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 14:24:57 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818095624.D11316@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEJGHAAA.tim_one@email.msn.com>

[Trent Mick]
> I am inclined to agrre with Thomas and Greg on this one. Why not check for
> sizeof(pthread_t) if pthread.h exists and test:
>
> #if SIZEOF_PTHREAD_T < SIZEOF_LONG
>     return (long)threadid;
> #endif

Change "<" to "<=" and I won't gripe.





From fdrake at beopen.com  Fri Aug 18 20:40:48 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 14:40:48 -0400 (EDT)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <200008181817.UAA07799@python.inrialpes.fr>
References: <20000818135833.K27419@kronos.cnri.reston.va.us>
	<200008181817.UAA07799@python.inrialpes.fr>
Message-ID: <14749.33584.683341.684523@cj42289-a.reston1.va.home.com>

Vladimir Marangozov writes:
 > So name it PyModule_AddConstant(module, name, constant),
 > which fails with "can't add constant to module" err msg.

  Even better!  I expect there should be at least a couple of these;
one for ints, one for strings.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Fri Aug 18 20:37:19 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 14:37:19 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
In-Reply-To: <20000818102339.E11316@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJGHAAA.tim_one@email.msn.com>

[Trent Mick]
> > ...
> >   	if (vtp->tp_compare == NULL) {
> > ! 		Py_uintptr_t iv = (Py_uintptr_t)v;
> > ! 		Py_uintptr_t iw = (Py_uintptr_t)w;
> > ! 		return (iv < iw) ? -1 : 1;
> >   	}
> Can't you just do the cast for the comparison instead of making new
> variables?

Any compiler worth beans will optimize them out of existence.  In the
meantime, it makes each line (to my eyes) short, clear, and something I can
set a debugger breakpoint on in debug mode if I suspect the cast isn't
working as intended.





From effbot at telia.com  Fri Aug 18 20:42:34 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 20:42:34 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>             <20000818161745.U376@xs4all.nl>  <20000818150639.6685C31047C@bireme.oratrix.nl>
Message-ID: <000001c00945$a8d37e40$f2a6b5d4@hagrid>

sjoerd wrote:

> The problem was that because of your (I think it was you :-) earlier
> change to have a Makefile in Grammar, I had an old graminit.c lying
> around in my build directory.  I don't build in the source directory
> and the changes for a Makefile in Grammar resulted in a file
> graminit.c in the wrong place.

is the Windows build system updated to generate new
graminit files if the Grammar is updated?

or is python development a unix-only thingie these days?

</F>




From tim_one at email.msn.com  Fri Aug 18 21:05:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 15:05:29 -0400
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <000001c00945$a8d37e40$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEJIHAAA.tim_one@email.msn.com>

[/F]
> is the Windows build system updated to generate new
> graminit files if the Grammar are updated?

No, not yet.

> or is python development a unix-only thingie these days?

It pretty much always has been!  Just ask Jack <wink>.  It's unreasonable to
expect Unix(tm) weenies to keep the other builds working (although vital
that they tell Python-Dev when they do something "new & improved"), and
counterproductive to imply that their inability to do so should deter them
from improving the build process on their platform.  In some ways, building
is more pleasant under Windows, and if turnabout is fair play the Unix
droids could insist we build them a honking powerful IDE <wink>.





From m.favas at per.dem.csiro.au  Fri Aug 18 21:08:36 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sat, 19 Aug 2000 03:08:36 +0800
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky 
 return statements
References: <20000816172425.A32338@ActiveState.com> <003001c00901$11fd8ae0$0900a8c0@SPIFF>
Message-ID: <399D89B4.476FB5EF@per.dem.csiro.au>

OK - 

return (long) threadid;

compiles without warnings, and all tests pass on OSF/1 (aka Tru64 Unix).
Removing the "volatile" is also fine for me, but may affect Vladimir.
I'm still a bit (ha!) confused by Tim's comments that the function is
bogus for OSF/1 because it throws away half the bits, and will therefore
result in id collisions - this will only happen on platforms where
sizeof(long) is less than sizeof(pointer), which is not OSF/1 (but is
Win64). Also, one of the suggested tests only cast the pointer to a long
if SIZEOF_PTHREAD_T < SIZEOF_LONG - that should surely be <= ...

In summary, whatever issue there was for OSF/1 six (or so) years ago
appears to be no longer relevant - but there will be the truncation
issue for Win64-like platforms.

Mark

Fredrik Lundh wrote:
> 
> trent mick wrote:
> >     return (long) *(long *) &threadid;
> 
> from what I can tell, pthread_t is a pointer under OSF/1.
> 
> I've been using OSF/1 since the early days, and as far as I can
> remember, you've never needed to use stupid hacks like that
> to convert a pointer to a long integer. an ordinary (long) cast
> should be sufficient.
> 
> > Could this be changed to
> >   return threadid;
> > safely?
> 
> safely, yes.  but since it isn't a long on all platforms, you might
> get warnings from the compiler (see Mark's mail).
> 
> :::
> 
> from what I can tell, it's compatible with a long on all sane plat-
> forms (Win64 doesn't support pthreads anyway ;-), so I guess the
> right thing here is to remove volatile and simply use:
> 
>     return (long) threadid;
> 
> (Mark: can you try this out on your box?  setting up a Python 2.0
> environment on our alphas would take more time than I can spare
> right now...)
> 
> </F>

-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913



From Vladimir.Marangozov at inrialpes.fr  Fri Aug 18 21:09:48 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 18 Aug 2000 21:09:48 +0200 (CEST)
Subject: [Python-Dev] Introducing memprof (was PyErr_NoMemory)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEHJHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 18, 2000 01:43:14 AM
Message-ID: <200008181909.VAA08003@python.inrialpes.fr>

[Tim, on PyErr_NoMemory]
>
> Looks good to me.  And if it breaks something, it will be darned hard to
> tell <wink>.

It's easily demonstrated with the memprof.c module I'd like to introduce
quickly here.

Note: I'll be out of town next week, so if someone wants to
play with this, tell me what to do quickly: upload a (postponed) patch
which goes together with obmalloc.c, put it in a web page, or remain quiet.

The object allocator is well tested, the memory profiler is not so
thoroughly tested... The interface needs more work IMHO, but it's
already quite useful and fun in its current state <wink>.


Demo:


~/python/dev>python -p
Python 2.0b1 (#9, Aug 18 2000, 20:11:29)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> 
>>> # Note the -p option -- it starts any available profilers through
... # a newly introduced Py_ProfileFlag. Otherwise you'll get funny results
... # if you start memprof in the middle of an execution
... 
>>> import memprof
>>> memprof.__doc__
'This module provides access to the Python memory profiler.'
>>> 
>>> dir(memprof)
['ALIGNMENT', 'ERROR_ABORT', 'ERROR_IGNORE', 'ERROR_RAISE', 'ERROR_REPORT', 'ERROR_STOP', 'MEM_CORE', 'MEM_OBCLASS', 'MEM_OBJECT', '__doc__', '__name__', 'geterrlevel', 'getpbo', 'getprofile', 'getthreshold', 'isprofiling', 'seterrlevel', 'setpbo', 'setproftype', 'setthreshold', 'start', 'stop']
>>> 
>>> memprof.isprofiling()
1
>>> # It's running -- cool. We're now ready to get the current memory profile
... 
>>> print memprof.getprofile.__doc__
getprofile([type]) -> object

Return a snapshot of the current memory profile of the interpreter.
An optional type argument may be provided to request the profile of
a specific memory layer. It must be one of the following constants:

        MEM_CORE    - layer 1: Python core memory
        MEM_OBJECT  - layer 2: Python object memory
        MEM_OBCLASS - layer 3: Python object-specific memory 

If a type argument is not specified, the default profile is returned.
The default profile type can be set with the setproftype() function.
>>> 
>>> mp = memprof.getprofile()
>>> mp
<global memory profile, layer 2, detailed in 33 block size classes>
>>> 
>>> # now see how much mem we're using, it's a 3 tuple
... # (requested mem, minimum allocated mem, estimated mem)
... 
>>> mp.memory
(135038, 142448, 164792)
>>> mp.peakmemory
(137221, 144640, 167032)
>>> 
>>> # indeed, peak values are important. Now let's see what this gives in
... # terms of memory blocks
... 
>>> mp.blocks
(2793, 2793)
>>> mp.peakblocks
(2799, 2799)
>>> 
>>> # Again this is a 2-tuple (requested blocks, allocated blocks)
... # Now let's see the stats of the calls to the allocator.
... 
>>> mp.malloc
(4937, 0, 0)
>>> mp.calloc
(0, 0, 0)
>>> mp.realloc
(43, 0, 0)
>>> mp.free
(2144, 0, 0)
>>> 
>>> # A 3-tuple (nb of calls, nb of errors, nb of warnings by memprof)
... #
... # Good. Now let's see the memory profile detailed by size classes
... they're memory profile objects too, similar to the global profile:
>>>
>>> mp.sizeclass[0]
<size class memory profile, layer 2, block size range [1..8]>
>>> mp.sizeclass[1]
<size class memory profile, layer 2, block size range [9..16]>
>>> mp.sizeclass[2]
<size class memory profile, layer 2, block size range [17..24]>
>>> len(mp.sizeclass)
33
>>> mp.sizeclass[-1]
<size class memory profile, layer 2, block size range [257..-1]>
>>> 
>>> # The last one is for big blocks: 257 bytes and up.
... # Now let's see the detailed memory picture:
>>>
>>> for s in mp.sizeclass:                                                     
...     print "%.2d - " % s.sizeclass, "%8d %8d %8d" % s.memory
... 
00 -         0        0        0
01 -      3696     3776     5664
02 -       116      120      160
03 -     31670    34464    43080
04 -     30015    32480    38976
05 -     10736    11760    13720
06 -     10846    11200    12800
07 -      2664     2816     3168
08 -      1539     1584     1760
09 -      1000     1040     1144
10 -      2048     2112     2304
11 -      1206     1248     1352
12 -       598      624      672
13 -       109      112      120
14 -       575      600      640
15 -       751      768      816
16 -       407      408      432
17 -       144      144      152
18 -       298      304      320
19 -       466      480      504
20 -       656      672      704
21 -       349      352      368
22 -       542      552      576
23 -       188      192      200
24 -       392      400      416
25 -       404      416      432
26 -       640      648      672
27 -       441      448      464
28 -         0        0        0
29 -       236      240      248
30 -       491      496      512
31 -       501      512      528
32 -     31314    31480    31888
>>>
>>> for s in mp.sizeclass:
...     print "%.2d - " % s.sizeclass, "%8d %8d" % s.blocks
... 
00 -         0        0
01 -       236      236
02 -         5        5
03 -      1077     1077
04 -       812      812
05 -       245      245
06 -       200      200
07 -        44       44
08 -        22       22
09 -        13       13
10 -        24       24
11 -        13       13
12 -         6        6
13 -         1        1
14 -         5        5
15 -         6        6
16 -         3        3
17 -         1        1
18 -         2        2
19 -         3        3
20 -         4        4
21 -         2        2
22 -         3        3
23 -         1        1
24 -         2        2
25 -         2        2
26 -         3        3
27 -         2        2
28 -         0        0
29 -         1        1
30 -         2        2
31 -         2        2
32 -        51       51
>>>
>>> # Note that we just started the interpreter and analysed its initial
... # memory profile. You can repeat this game at any point of time,
... # look at the stats and enjoy a builtin memory profiler.
... #
... # Okay, now to the point on PyErr_NoMemory: but we need to restart
... # Python without "-p"
>>>
~/python/dev>python 
Python 2.0b1 (#9, Aug 18 2000, 20:11:29)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> 
>>> import memprof
>>> memprof.isprofiling()
0
>>> memprof.start()
memprof: freeing unknown block (0x40185e60)
memprof: freeing unknown block (0x40175098)
memprof: freeing unknown block (0x40179288)
>>>
>>> # See? We're freeing unknown blocks for memprof.
... # Okay, enough. See the docs for more:
... 
>>> print memprof.seterrlevel.__doc__
seterrlevel(flags) -> None

Set the error level of the profiler. The provided argument instructs the
profiler on how tolerant it should be against any detected simple errors
or memory corruption. The following non-exclusive values are recognized:

    ERROR_IGNORE - ignore silently any detected errors
    ERROR_REPORT - report all detected errors to stderr
    ERROR_STOP   - stop the profiler on the first detected error
    ERROR_RAISE  - raise a MemoryError exception for all detected errors
    ERROR_ABORT  - report the first error as fatal and abort immediately

The default error level is ERROR_REPORT.
>>> 
>>> # So here's your PyErr_NoMemory effect:
... 
>>> memprof.seterrlevel(memprof.ERROR_REPORT | memprof.ERROR_RAISE)
>>> 
>>> import test.regrtest
memprof: resizing unknown block (0x82111b0)
memprof: raised MemoryError.
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "./Lib/test/regrtest.py", line 39, in ?
    import random
  File "./Lib/random.py", line 23, in ?
    import whrandom
  File "./Lib/whrandom.py", line 40, in ?
    class whrandom:
MemoryError: memprof: resizing unknown block (0x82111b0)
>>> 
>>> # Okay, gotta run. There are no docs for the moment. Just the source
... and function docs. (and to avoid another exception...)
>>>
>>> memprof.seterrlevel(memprof.ERROR_IGNORE)
>>>
>>> for i in dir(memprof):
...     x = memprof.__dict__[i]
...     if hasattr(x, "__doc__"):
...             print ">>>>>>>>>>>>>>>>>>>>>>>>>>>>> [%s]" % i
...             print x.__doc__
...             print '='*70
... 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [geterrlevel]
geterrlevel() -> errflags

Get the current error level of the profiler.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [getpbo]
getpbo() -> int

Return the fixed per block overhead (pbo) used for estimations.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [getprofile]
getprofile([type]) -> object

Return a snapshot of the current memory profile of the interpreter.
An optional type argument may be provided to request the profile of
a specific memory layer. It must be one of the following constants:

        MEM_CORE    - layer 1: Python core memory
        MEM_OBJECT  - layer 2: Python object memory
        MEM_OBCLASS - layer 3: Python object-specific memory 

If a type argument is not specified, the default profile is returned.
The default profile type can be set with the setproftype() function.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [getthreshold]
getthreshold() -> int

Return the size threshold (in bytes) between small and big blocks.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [isprofiling]
isprofiling() -> 1 if profiling is currently in progress, 0 otherwise.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [seterrlevel]
seterrlevel(flags) -> None

Set the error level of the profiler. The provided argument instructs the
profiler on how tolerant it should be against any detected simple errors
or memory corruption. The following non-exclusive values are recognized:

    ERROR_IGNORE - ignore silently any detected errors
    ERROR_REPORT - report all detected errors to stderr
    ERROR_STOP   - stop the profiler on the first detected error
    ERROR_RAISE  - raise a MemoryError exception for all detected errors
    ERROR_ABORT  - report the first error as fatal and abort immediately

The default error level is ERROR_REPORT.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [setpbo]
setpbo(int) -> None

Set the fixed per block overhead (pbo) used for estimations.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [setproftype]
setproftype(type) -> None

Set the default profile type returned by getprofile() without arguments.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [setthreshold]
setthreshold(int) -> None

Set the size threshold (in bytes) between small and big blocks.
The maximum is 256. The argument is rounded up to the ALIGNMENT.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [start]
start() -> None

Start the profiler. If it has been started, this function has no effect.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [stop]
stop() -> None

Stop the profiler. If it has been stopped, this function has no effect.
======================================================================
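The error levels documented above are non-exclusive bit flags, which is why the session earlier combines them with bitwise OR (`ERROR_REPORT | ERROR_RAISE`). A toy sketch of how such flags compose -- the numeric values here are invented stand-ins, not memprof's actual constants:

```python
# Illustrative stand-ins for memprof's error-level constants; the real
# values are defined by the extension module and are not documented here.
ERROR_IGNORE = 0x0
ERROR_REPORT = 0x1
ERROR_STOP   = 0x2
ERROR_RAISE  = 0x4
ERROR_ABORT  = 0x8

# Non-exclusive flags combine with bitwise OR, as in the session above:
# report every detected error to stderr AND raise MemoryError for it.
level = ERROR_REPORT | ERROR_RAISE
```

Testing membership is then a bitwise AND against the combined value.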


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tim_one at email.msn.com  Fri Aug 18 21:11:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 15:11:11 -0400
Subject: [Python-Dev] RE: Introducing memprof (was PyErr_NoMemory)
In-Reply-To: <200008181909.VAA08003@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEJJHAAA.tim_one@email.msn.com>

[Tim, on PyErr_NoMemory]
> Looks good to me.  And if it breaks something, it will be darned hard to
> tell <wink>.

[Vladimir Marangozov]
> It's easily demonstrated with the memprof.c module I'd like to introduce
> quickly here.
>
> Note: I'll be out of town next week and if someone wants to
> play with this, tell me what to do quickly: upload a (postponed) patch
> which goes in pair with obmalloc.c, put it in a web page or remain
> quiet.
>
> The object allocator is well tested, the memory profiler is not so
> thoroughly tested... The interface needs more work IMHO, but it's
> already quite useful and fun in its current state <wink>.
> ...

My bandwidth is consumed by 2.0 issues, so I won't look at it.  On the
chance that Guido gets hit by a bus, though, and I have time to kill at his
funeral, it would be nice to have it available on SourceForge.  Uploading a
postponed patch sounds fine!





From tim_one at email.msn.com  Fri Aug 18 21:26:31 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 15:26:31 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky  return statements
In-Reply-To: <399D89B4.476FB5EF@per.dem.csiro.au>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEJLHAAA.tim_one@email.msn.com>

[Mark Favas]
> return (long) threadid;
>
> compiles without warnings, and all tests pass on OSF/1 (aka Tru64 Unix).
> Removing the "volatile" is also fine for me, but may affect Vladimir.
> I'm still a bit (ha!) confused by Tim's comments that the function is
> bogus for OSF/1 because it throws away half the bits, and will therefore
> result in id collisions - this will only happen on platforms where
> sizeof(long) is less than sizeof(pointer), which is not OSF/1

Pure guess on my part -- couldn't imagine why a compiler would warn *unless*
bits were being lost.  Are you running this on an Alpha?  The comment in the
code specifically names "Alpha OSF/1" as the culprit.  I don't know anything
about OSF/1; perhaps "Alpha" is implied.

> ...
> In summary, whatever issue there was for OSF/1 six (or so) years ago
> appears to be no longer relevant - but there will be the truncation
> issue for Win64-like platforms.

And there's Vladimir's "volatile" hack.
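For readers following the truncation point: when a pointer-sized thread id is narrowed to a 32-bit long, two distinct ids whose low 32 bits happen to coincide collapse to the same value. A small illustration (the 64-bit id values below are made up):

```python
def truncate_to_long32(thread_id):
    # Model a cast from a 64-bit pointer-sized thread id down to a
    # 32-bit long: only the low 32 bits survive the cast.
    return thread_id & 0xFFFFFFFF

# Two made-up 64-bit ids that differ only in their high bits...
id_a = 0x0000000182111B00
id_b = 0x0000000282111B00

# ...become indistinguishable after truncation: an id collision.
collision = truncate_to_long32(id_a) == truncate_to_long32(id_b)
```

This is exactly the Win64-like scenario Mark mentions, where sizeof(long) is smaller than sizeof(pointer).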





From effbot at telia.com  Fri Aug 18 21:37:36 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 21:37:36 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
References: <LNBBLJKPBEHFEDALKOLCOEJIHAAA.tim_one@email.msn.com>
Message-ID: <00e501c0094b$c9ee2ac0$f2a6b5d4@hagrid>

tim peters wrote:


> [/F]
> > is the Windows build system updated to generate new
> > graminit files if the Grammar are updated?
> 
> No, not yet.
> 
> > or is python development a unix-only thingie these days?
> 
> It pretty much always has been!  Just ask Jack <wink>.  It's unreasonable to
> expect Unix(tm) weenies to keep the other builds working (although vital
> that they tell Python-Dev when they do something "new & improved"), and
> counterproductive to imply that their inability to do so should deter them
> from improving the build process on their platform. 

well, all I expect from them is that the repository should
be in a consistent state at all times.

(in other words, never assume that just because generated
files are rebuilt by the unix makefiles, they don't have to be
checked into the repository).

for a moment, sjoerd's problem report made me think that
someone had messed up here...  but I just checked, and
he hadn't ;-)

</F>




From thomas at xs4all.net  Fri Aug 18 21:43:34 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 21:43:34 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <000001c00945$a8d37e40$f2a6b5d4@hagrid>; from effbot@telia.com on Fri, Aug 18, 2000 at 08:42:34PM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid>
Message-ID: <20000818214333.X376@xs4all.nl>

On Fri, Aug 18, 2000 at 08:42:34PM +0200, Fredrik Lundh wrote:
> sjoerd wrote:

> > The problem was that because of your (I think it was you :-) earlier
> > change to have a Makefile in Grammar, I had an old graminit.c lying
> > around in my build directory.  I don't build in the source directory
> > and the changes for a Makefile in Grammar resulted in a file
> > graminit.c in the wrong place.

> is the Windows build system updated to generate new
> graminit files if the Grammar are updated?

No, and that's one more reason to reverse my patch ! :-) Note that I didn't
*add* the Makefile, I only added Grammar to the
directories-to-run-make-in-by-default. If the Grammar is changed, you
already need a way to run pgen (of which the source rests in Parser/) to
generate the new graminit.c/graminit.h files. I have no way of knowing
whether that is the case for the windows build files. The CVS tree should
always contain up to date graminit.c/.h files!

The reason it was added was the multitude of Grammar-changing patches on SF,
and the number of people that forgot to run 'make' in Grammar/ after
applying them. I mentioned adding Grammar/ to the directories to be made,
Guido said it was a good idea, and no one complained about it until after it
was added ;P I think we can drop the idea, though, at least for (alpha, beta,
final) releases.

> or is python development a unix-only thingie these days?

Well, *my* python development is a unix-only thingie, mostly because I don't
have a compiler for under Windows. If anyone wants to send me or point me to
the canonical Windows compiler & debugger and such, in a way that won't set
me back a couple of megabucks, I'd be happy to test & debug windows as well.
Gives me *two* uses for Windows: games and Python ;)

My-boss-doesn't-pay-me-to-work-on-Windows-ly y'rs,

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From m.favas at per.dem.csiro.au  Fri Aug 18 22:33:21 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sat, 19 Aug 2000 04:33:21 +0800
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky  
 return statements
References: <LNBBLJKPBEHFEDALKOLCEEJLHAAA.tim_one@email.msn.com>
Message-ID: <399D9D91.3E76ED8D@per.dem.csiro.au>

Tim Peters wrote:
> 
> [Mark Favas]
> > return (long) threadid;
> >
> > compiles without warnings, and all tests pass on OSF/1 (aka Tru64 Unix).
> > Removing the "volatile" is also fine for me, but may affect Vladimir.
> > I'm still a bit (ha!) confused by Tim's comments that the function is
> > bogus for OSF/1 because it throws away half the bits, and will therefore
> > result in id collisions - this will only happen on platforms where
> > sizeof(long) is less than sizeof(pointer), which is not OSF/1
> 
> Pure guess on my part -- couldn't imagine why a compiler would warn *unless*
> bits were being lost.  Are you running this on an Alpha?  The comment in the
> code specifically names "Alpha OSF/1" as the culprit.  I don't know anything
> about OSF/1; perhaps "Alpha" is implied.

Yep - I'm running on an Alpha. The name of the OS has undergone a couple
of, um, appellation transmogrifications since the first Alpha was
produced by DEC: OSF/1 -> Digital Unix -> Tru64 Unix, although uname has
always reported "OSF1". (I don't think that there's any other
implementation of OSF/1 left these days... none that uses the name,
anyway.)

> 
> > ...
> > In summary, whatever issue there was for OSF/1 six (or so) years ago
> > appears to be no longer relevant - but there will be the truncation
> > issue for Win64-like platforms.
> 
> And there's Vladimir's "volatile" hack.

Wonder if that also is still relevant (was it required because of the
long * long * cast?)...

-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913



From bwarsaw at beopen.com  Fri Aug 18 22:45:03 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 16:45:03 -0400 (EDT)
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>
	<20000818161745.U376@xs4all.nl>
	<20000818150639.6685C31047C@bireme.oratrix.nl>
	<000001c00945$a8d37e40$f2a6b5d4@hagrid>
	<20000818214333.X376@xs4all.nl>
Message-ID: <14749.41039.166847.942483@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> No, and that's one more reason to reverse my patch ! :-) Note
    TW> that I didn't *add* the Makefile, I only added Grammar to the
    TW> directories-to-run-make-in-by-default. If the Grammar is
    TW> changed, you already need a way to run pgen (of which the
    TW> source rests in Parser/) to generate the new
    TW> graminit.c/graminit.h files. I have no way of knowing whether
    TW> that is the case for the windows build files. The CVS tree
    TW> should always contain up to date graminit.c/.h files!

I don't think you need to reverse your patch because of this, although
I haven't been following this thread closely.  Just make sure that if
you commit a Grammar change, you must commit the newly generated
graminit.c and graminit.h files.

This is no different than if you change configure.in; you must commit
both that file and the generated configure file.

-Barry



From akuchlin at mems-exchange.org  Fri Aug 18 22:48:37 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 16:48:37 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101055] Cookie.py
In-Reply-To: <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Fri, Aug 18, 2000 at 04:06:20PM -0400
References: <200008181951.MAA30358@bush.i.sourceforge.net> <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
Message-ID: <20000818164837.A8423@kronos.cnri.reston.va.us>

[Overquoting for the sake of python-dev readers]

On Fri, Aug 18, 2000 at 04:06:20PM -0400, Fred L. Drake, Jr. wrote:
>amk writes:
> > I have a copy of Tim O'Malley's Cookie.py,v file (in order to preserve the
> > version history).  I can either ask the SF admins to drop it into
> > the right place in the CVS repository, but will that screw up the
> > Python 1.6 tagging in some way?  (I would expect not, but who
> > knows?)
>
>  That would have no effect on any of the Python tagging.  It's
>probably worthwhile making sure there are no tags in the ,v file, but
>that can be done after it gets dropped in place.
>  Now, Greg Stein will tell us that dropping this into place is the
>wrong thing to do.  What it *will* screw up is people asking for the
>state of Python at a specific date before the file was actually added;
>they'll get this file even for when it wasn't in the Python CVS tree.
>I can live with that, but we should make a policy decision for the
>Python tree regarding this sort of thing.

Excellent point.  GvR's probably the only person whose ruling matters
on this point; I'll try to remember it and bring it up whenever he
gets back (next week, right?).

>  Don't -- it's not worth it.

I hate throwing away information that stands even a tiny chance of
being useful; good thing the price of disk storage keeps dropping, eh?
The version history might contain details that will be useful in
future debugging or code comprehension, so I dislike losing it
forever.

(My minimalist side is saying that the enhanced Web tools should be a
separately downloadable package.  But you probably guessed that
already...)

--amk



From thomas at xs4all.net  Fri Aug 18 22:56:07 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 22:56:07 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <14749.41039.166847.942483@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 04:45:03PM -0400
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000818214333.X376@xs4all.nl> <14749.41039.166847.942483@anthem.concentric.net>
Message-ID: <20000818225607.Z376@xs4all.nl>

On Fri, Aug 18, 2000 at 04:45:03PM -0400, Barry A. Warsaw wrote:

> This is no different than if you change configure.in; you must commit
> both that file and the generated configure file.

Yes, but more critically so, since it'll screw up more than a couple of
defines on a handful of systems :-) However, this particular change in the
make process doesn't address this at all. It would merely serve to mask this
problem, in the event of someone committing a change to Grammar but not to
graminit.*. The reasoning behind the change was "if you change
Grammar/Grammar, and then type 'make', graminit.* should be regenerated
automatically, before they are used in other files." I thought the change
was a small and reasonable one, but now I don't think so anymore ;P On the
other hand, perhaps the latest changes (not mine) fixed it for real.

But I still think that if this particular makefile setup is used in
releases, 'pgen' should at least be made a tad less verbose.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug 18 23:04:01 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 23:04:01 +0200
Subject: [Python-Dev] [Patch #101055] Cookie.py
In-Reply-To: <20000818164837.A8423@kronos.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Fri, Aug 18, 2000 at 04:48:37PM -0400
References: <200008181951.MAA30358@bush.i.sourceforge.net> <14749.38716.228254.649957@cj42289-a.reston1.va.home.com> <20000818164837.A8423@kronos.cnri.reston.va.us>
Message-ID: <20000818230401.A376@xs4all.nl>

On Fri, Aug 18, 2000 at 04:48:37PM -0400, Andrew Kuchling wrote:

[ About adding Cookie.py including CVS history ]

> I hate throwing away information that stands even a tiny chance of
> being useful; good thing the price of disk storage keeps dropping, eh?
> The version history might contain details that will be useful in
> future debugging or code comprehension, so I dislike losing it
> forever.

It would be moderately nice to have the versioning info, though I think Fred
has a point when he says that it might be confusing for people: it would
look like the file had been in the CVS repository the whole time, and it
would be very hard to see where the file had been added to Python. And what
about new versions ? Would this file be adopted by Python, would changes by
the original author be incorporated ? How about version history for those
changes ? The way it usually goes (as far as my experience goes) is that
such files are updated only periodically. Would those updates incorporate
the history of those changes as well ?

Unless Cookie.py is really split off, and we're going to maintain a separate
version, I don't think it's worth worrying about the version history as
such. Pointing to the 'main' version and its history should be enough.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From bwarsaw at beopen.com  Fri Aug 18 23:13:31 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:13:31 -0400 (EDT)
Subject: [Python-Dev] gettext in the standard library
Message-ID: <14749.42747.411862.940207@anthem.concentric.net>

Apologies for duplicates to those of you already on python-dev...

I've been working on merging all the various implementations of Python
interfaces to the GNU gettext libraries.  I've worked from code
contributed by Martin, James, and Peter.  I now have something that
seems to work fairly well so I thought I'd update you all.

After looking at all the various wizzy and experimental stuff in these
implementations, I opted for simplicity, mostly just so I could get my
head around what was needed.  My goal was to build a fast C wrapper
module around the C library, and to provide a pure Python
implementation of an identical API for platforms without GNU gettext.

I started with Martin's libintlmodule, renamed it _gettext and cleaned
up the C code a bit.  This provides gettext(), dgettext(),
dcgettext(), textdomain(), and bindtextdomain() functions.  The
gettext.py module imports these, and if it succeeds, it's done.

If that fails, then there's a bunch of code, mostly derived from
Peter's fintl.py module, that reads the binary .mo files and does the
look ups itself.  Note that Peter's module only supported the GNU
gettext binary format, and that's all mine does too.  It should be
easy to support other binary formats (Solaris?) by overriding one
method in one class, and contributions are welcome.
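The C-extension-with-Python-fallback arrangement Barry describes can be sketched roughly as follows. The fallback body here is invented for illustration; the real module parses GNU-format .mo files to build its catalog:

```python
# Rough sketch of the import-time fallback described above: prefer the
# fast _gettext C wrapper when it was built, otherwise fall back to a
# pure-Python lookup.  The fallback body is invented; the real module
# reads binary .mo files to populate the catalog.
try:
    from _gettext import gettext          # fast C wrapper, if available
except ImportError:
    _catalog = {}                         # message -> translated message

    def gettext(message):
        # Pure-Python fallback: catalog lookup, returning the original
        # string when no translation is available.
        return _catalog.get(message, message)
```

Because both paths expose the same API, callers never need to know which implementation they got.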

James's stuff looked cool too, what I grokked of it :) but I think
those should be exported as higher level features.  I didn't include
the ability to write .mo files or the exported Catalog objects.  I
haven't used the I18N services enough to know whether these are
useful.

I added one convenience function, gettext.install().  If you call
this, it inserts the gettext.gettext() function into the builtins
namespace as `_'.  You'll often want to do this, based on the I18N
translatable strings marking conventions.  Note that importing gettext
does /not/ install by default!

And since (I think) you'll almost always want to call bindtextdomain()
and textdomain(), you can pass the domain and localedir in as
arguments to install.  Thus, the simple and quick usage pattern is:

    import gettext
    gettext.install('mydomain', '/my/locale/dir')

    print _('this is a localized message')
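What install() does with `_' can be sketched like this. The identity function is a stand-in for the real translation lookup, and `builtins` is the Python 3 spelling of the `__builtin__` module discussed in this thread:

```python
# Sketch of gettext.install(): expose the translation function globally
# under the name _.  The stand-in below just returns its argument; the
# real install() binds the actual gettext() lookup (and optionally calls
# bindtextdomain()/textdomain() first).
import builtins   # spelled __builtin__ in the Python 2 code under discussion

def _stand_in_gettext(message):
    return message

def install():
    builtins.__dict__['_'] = _stand_in_gettext

install()
```

After install(), `_('...')` resolves everywhere via the builtins namespace, which is why importing gettext alone must not do this implicitly.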

I think it'll be easier to critique this stuff if I just check it in.
Before I do, I still need to write up a test module and hack together
docos.  In the meantime, here's the module docstring for gettext.py.
Talk amongst yourselves. :)

-Barry

"""Internationalization and localization support.

This module provides internationalization (I18N) and localization (L10N)
support for your Python programs by providing an interface to the GNU gettext
message catalog library.

I18N refers to the operation by which a program is made aware of multiple
languages.  L10N refers to the adaptation of your program, once
internationalized, to the local language and cultural habits.  In order to
provide multilingual messages for your Python programs, you need to take the
following steps:

    - prepare your program by specially marking translatable strings
    - run a suite of tools over your marked program files to generate raw
      messages catalogs
    - create language specific translations of the message catalogs
    - use this module so that message strings are properly translated

In order to prepare your program for I18N, you need to look at all the strings
in your program.  Any string that needs to be translated should be marked by
wrapping it in _('...') -- i.e. a call to the function `_'.  For example:

    filename = 'mylog.txt'
    message = _('writing a log message')
    fp = open(filename, 'w')
    fp.write(message)
    fp.close()

In this example, the string `writing a log message' is marked as a candidate
for translation, while the strings `mylog.txt' and `w' are not.

The GNU gettext package provides a tool called xgettext that scans C and C++
source code looking for these specially marked strings.  xgettext generates
what are called `.pot' files, essentially structured human readable files
which contain every marked string in the source code.  These .pot files are
copied and handed over to translators who write language-specific versions for
every supported language.

For I18N Python programs, however, xgettext won't work; it doesn't understand
the myriad of string types supported by Python.  The standard Python
distribution provides a tool called pygettext that does though (usually in the
Tools/i18n directory).  This is a command line script that supports a similar
interface as xgettext; see its documentation for details.  Once you've used
pygettext to create your .pot files, you can use the standard GNU gettext
tools to generate your machine-readable .mo files, which are what's used by
this module and the GNU gettext libraries.

In the simple case, to use this module then, you need only add the following
bit of code to the main driver file of your application:

    import gettext
    gettext.install()

This sets everything up so that your _('...') function calls Just Work.  In
other words, it installs `_' in the builtins namespace for convenience.  You
can skip this step and do it manually by the equivalent code:

    import gettext
    import __builtin__
    __builtin__.__dict__['_'] = gettext.gettext

Once you've done this, you probably want to call bindtextdomain() and
textdomain() to get the domain set up properly.  Again, for convenience, you
can pass the domain and localedir to install to set everything up in one fell
swoop:

    import gettext
    gettext.install('mydomain', '/my/locale/dir')

"""



From tim_one at email.msn.com  Fri Aug 18 23:13:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 17:13:29 -0400
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000818214333.X376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEKCHAAA.tim_one@email.msn.com>

[Thomas Wouters]
> No, and that's one more reason to reverse my patch ! :-) Note
> that I didn't *add* the Makefile, I only added Grammar to the
> directories-to-run-make-in-by-default.
> ...
> The reason it was added was the multitude of Grammar-changing
> patches on SF, and the number of people that forgot to run 'make'
> in Grammar/ after applying them. I mentioned adding Grammar/ to
> the directories to be made, Guido said it was a good idea, and
> no one complained about it until after it was added ;P

And what exactly is the complaint?  It's nice to build things that are out
of date;  I haven't used Unix(tm) for some time, but I vaguely recall that
was "make"'s purpose in life <wink>.  Or is it that the grammar files are
getting rebuilt unnecessarily?

> ...
> My-boss-doesn't-pay-me-to-work-on-Windows-ly y'rs,

Your boss *pays* you?!  Wow.  No wonder you get so much done.





From bwarsaw at beopen.com  Fri Aug 18 23:16:07 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:16:07 -0400 (EDT)
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>
	<20000818161745.U376@xs4all.nl>
	<20000818150639.6685C31047C@bireme.oratrix.nl>
	<000001c00945$a8d37e40$f2a6b5d4@hagrid>
	<20000818214333.X376@xs4all.nl>
	<14749.41039.166847.942483@anthem.concentric.net>
	<20000818225607.Z376@xs4all.nl>
Message-ID: <14749.42903.342401.245594@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> Yes, but more critically so, since it'll screw up more than a
    TW> couple of defines on a handful of systems :-)

Yes, but a "cvs update" should always clue you in that those files
need committing too.  Everyone always does a "cvs update" before
committing any files, right? :)
    
    TW> But I still think that if this particular makefile setup is
    TW> used in releases, 'pgen' should at least be made a tad less
    TW> verbose.

pgen also leaks like a sieve, but it's not worth patching. ;}

-Barry



From bwarsaw at beopen.com  Fri Aug 18 23:17:14 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:17:14 -0400 (EDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101055] Cookie.py
References: <200008181951.MAA30358@bush.i.sourceforge.net>
	<14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
	<20000818164837.A8423@kronos.cnri.reston.va.us>
Message-ID: <14749.42970.845587.90980@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin at mems-exchange.org> writes:

    AK> I hate throwing away information that stands even a tiny
    AK> chance of being useful; good thing the price of disk storage
    AK> keeps dropping, eh?  The version history might contain details
    AK> that will be useful in future debugging or code comprehension,
    AK> so I dislike losing it forever.

I agree.  Let's try to keep the revision history for Cookie.py.

-Barry



From tim_one at email.msn.com  Fri Aug 18 23:18:56 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 17:18:56 -0400
Subject: [Python-Dev] [PEP 214] FW: sys.stndout syntax
Message-ID: <LNBBLJKPBEHFEDALKOLCIEKDHAAA.tim_one@email.msn.com>

For those of you who stopped reading comp.lang.python, note that this is
*typical* of the advice users get in the absence of syntactic "print" sugar.

it's-not-exactly-intuitive-that-a-"print"-stmt-isn't-meant-to-
    do-output<wink>-ly y'rs  - tim

-----Original Message-----
From: python-list-admin at python.org
[mailto:python-list-admin at python.org]On Behalf Of Jeff Kunce
Sent: Friday, August 18, 2000 3:41 PM
To: python-list at python.org
Subject: Re: sys.stndout syntax


> on win32, to get printed output to a file i use
> >>> sys.stndout = open('file.txt', 'w')
> how do i get it to go back to the command line

#Here is the specific solution for those who remember the secret password:
sys.stdout = open('file.txt', 'w')
print 'this is written to file.txt'
sys.stdout.close()
sys.stdout = sys.__stdout__
print 'this is written to console'

#Here is the general solution if you can't be bothered with remembering
secret passwords:
original_stdout = sys.stdout
sys.stdout = open('file.txt', 'w')
print 'this is written to file.txt'
sys.stdout.close()
sys.stdout = original_stdout
print 'this is written to console'
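The general recipe above can be wrapped in a small helper that restores stdout even when the wrapped code raises (Python 3 spelling, with print as a function; the helper name is invented):

```python
import sys

def with_stdout_to(path, func):
    # Run func() with sys.stdout redirected to a file, then restore the
    # original stream no matter what -- the try/finally guarantees the
    # console comes back even if func() raises.
    original = sys.stdout
    sys.stdout = open(path, 'w')
    try:
        func()
    finally:
        sys.stdout.close()
        sys.stdout = original
```

This is the "can't be bothered with secret passwords" approach made reusable: no reliance on sys.__stdout__ at all.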


  --Jeff


--
http://www.python.org/mailman/listinfo/python-list





From mal at lemburg.com  Fri Aug 18 23:21:23 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 23:21:23 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
Message-ID: <399DA8D3.70E85C58@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> Apologies for duplicates to those of you already on python-dev...
> 
> I've been working on merging all the various implementations of Python
> interfaces to the GNU gettext libraries.  I've worked from code
> contributed by Martin, James, and Peter.  I now have something that
> seems to work fairly well so I thought I'd update you all.
> 
> After looking at all the various wizzy and experimental stuff in these
> implementations, I opted for simplicity, mostly just so I could get my
> head around what was needed.  My goal was to build a fast C wrapper
> module around the C library, and to provide a pure Python
> implementation of an identical API for platforms without GNU gettext.

Sounds cool.

I know that gettext is a standard, but from a technology POV
I would have implemented this as a codec which can then be plugged
wherever l10n is needed, since strings have the new .encode()
method which could just as well be used to convert not only the
string into a different encoding, but also a different language.
Anyway, just a thought...
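The codec idea could look something like this toy (entirely invented names; it only shows the shape of "encode to a language", with no real codec registration):

```python
# Toy illustration of the "translation as encoding" idea -- the catalog
# contents and function name are invented, and no real codec machinery
# is involved.
_CATALOGS = {
    'de': {'hello': 'hallo'},
}

def encode_language(text, language):
    # Look the string up in the target language's catalog, falling back
    # to the untranslated text -- much like gettext's own behavior.
    return _CATALOGS.get(language, {}).get(text, text)
```

A real codec-based design would register such lookups so that a hypothetical `text.encode(language)` dispatches to them.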

What I'm missing in your doc-string is a reference as to how
well gettext works together with Unicode. After all, i18n is
among other things about international character sets.
Have you done any experiments in this area ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bwarsaw at beopen.com  Fri Aug 18 23:19:12 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:19:12 -0400 (EDT)
Subject: [Python-Dev] [Patch #101055] Cookie.py
References: <200008181951.MAA30358@bush.i.sourceforge.net>
	<14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
	<20000818164837.A8423@kronos.cnri.reston.va.us>
	<20000818230401.A376@xs4all.nl>
Message-ID: <14749.43088.855537.355621@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:
    TW> It would be moderately nice to have the versioning info,
    TW> though I think Fred has a point when he says that it might be
    TW> confusing for people: it would look like the file had been in
    TW> the CVS repository the whole time, and it would be very hard
    TW> to see where the file had been added to Python.

I don't think that's true, because the file won't have the tag
information in it.  That could be a problem in and of itself, but I
dunno.

-Barry



From tim_one at email.msn.com  Fri Aug 18 23:41:18 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 17:41:18 -0400
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <14749.42903.342401.245594@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEKEHAAA.tim_one@email.msn.com>

>     TW> But I still think that if this particular makefile setup is
>     TW> used in releases, 'pgen' should at least be made a tad less
>     TW> verbose.

[Barry]
> pgen also leaks like a sieve, but it's not worth patching. ;}

Huh!  And here I thought Thomas was suggesting we shorten its name to "pge".

or-even-"p"-if-we-wanted-it-a-lot-less-verbose-ly y'rs  - tim





From bwarsaw at beopen.com  Fri Aug 18 23:49:23 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:49:23 -0400 (EDT)
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
	<399DA8D3.70E85C58@lemburg.com>
Message-ID: <14749.44899.573649.483154@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> I know that gettext is a standard, but from a technology POV I
    M> would have implemented this as a codec which can then be plugged
    M> wherever l10n is needed, since strings have the new .encode()
    M> method which could just as well be used to convert not only the
    M> string into a different encoding, but also a different
    M> language.  Anyway, just a thought...

That might be cool to play with, but I haven't done anything with
Python's Unicode stuff (and painfully little with gettext too) so
right now I don't see how they'd fit together.  My gut reaction is
that gettext could be the lower level interface to
string.encode(language).

    M> What I'm missing in your doc-string is a reference as to how
    M> well gettext works together with Unicode. After all, i18n is
    M> among other things about international character sets.
    M> Have you done any experiments in this area ?

No, but I've thought about it, and I don't think the answer is good.
The GNU gettext functions take and return char*'s, which probably
isn't very compatible with Unicode.  _gettext therefore takes and
returns PyStringObjects.

We could do better with the pure-Python implementation, and that might
be a good reason to forgo any performance gains or platform-dependent
benefits you'd get with _gettext.  Of course the trick is using the
Unicode-unaware tools to build .mo files containing Unicode strings.
I don't track GNU gettext development closely enough to know whether
they are addressing Unicode issues or not.

-Barry



From effbot at telia.com  Sat Aug 19 00:06:35 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 19 Aug 2000 00:06:35 +0200
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky   return statements
References: <LNBBLJKPBEHFEDALKOLCEEJLHAAA.tim_one@email.msn.com> <399D9D91.3E76ED8D@per.dem.csiro.au>
Message-ID: <006801c00960$944da200$f2a6b5d4@hagrid>

tim wrote:
> > Pure guess on my part -- couldn't imagine why a compiler would warn *unless*
> > bits were being lost.

the compiler doesn't warn about bits being lost -- it complained
because the code was returning a pointer from a function declared
to return a long integer.

(explicitly casting the pthread_t to a long gets rid of the warning).

mark wrote:
> > > In summary, whatever issue there was for OSF/1 six (or so) years ago
> > > appears to be no longer relevant - but there will be the truncation
> > > issue for Win64-like platforms.
> > 
> > And there's Vladimir's "volatile" hack.
> 
> Wonder if that also is still relevant (was it required because of the
> long * long * cast?)...

probably.  messing up when someone abuses pointer casts is one thing,
but if the AIX compiler cannot cast a long to a long, it's broken beyond
repair ;-)

frankly, the code is just plain broken.  instead of adding even more dumb
hacks, just fix it.  here's how it should be done:

    return (long) pthread_self(); /* look! no variables! */

or change

 /* Jump through some hoops for Alpha OSF/1 */

to

 /* Jump through some hoops because Tim Peters wants us to ;-) */

</F>




From bwarsaw at beopen.com  Sat Aug 19 00:03:24 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 18:03:24 -0400 (EDT)
Subject: [Python-Dev] [PEP 214] FW: sys.stndout syntax
References: <LNBBLJKPBEHFEDALKOLCIEKDHAAA.tim_one@email.msn.com>
Message-ID: <14749.45740.432586.615745@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> For those of you who stopped reading comp.lang.python, note
    TP> that this is *typical* of the advice users get in the absence
    TP> of syntactic "print" sugar.

Which is of course broken, if say, you print an object that has a
str() that raises an exception.



From tim_one at email.msn.com  Sat Aug 19 00:08:55 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 18:08:55 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky   return statements
In-Reply-To: <006801c00960$944da200$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEKIHAAA.tim_one@email.msn.com>

[/F]
> the compiler doesn't warn about bits being lost -- it complained
> because the code was returning a pointer from a function declared
> to return a long integer.
>
> (explicitly casting the pthread_t to a long gets rid of the warning).

For the umpty-umpth time, the code with the simple cast to long is what was
there originally.  The convoluted casting was added later to stop "Alpha
OSF/1" compiler complaints.  Perhaps the compiler no longer complains,
though, or perhaps the one or two people who have tried it since don't have
a version of the compiler that cares about it.

> ...
> frankly, the code is just plain broken.  instead of adding even more dumb
> hacks, just fix it.  here's how it should be done:
>
>     return (long) pthread_self(); /* look! no variables! */

Fine by me, provided that works on all current platforms, and it's
understood that the function is inherently hosed anyway (the cast to long is
inherently unsafe, and we're still doing nothing to meet the promise in the
docs that this function returns a non-zero integer).

> or change
>
>  /* Jump through some hoops for Alpha OSF/1 */
>
> to
>
>  /* Jump through some hoops because Tim Peters wants us to ;-) */

Also fine by me, provided that works on all current platforms, and it's
understood that the function is inherently hosed anyway (the cast to long is
inherently unsafe, and we're still doing nothing to meet the promise in the
docs that this function returns a non-zero integer).





From tim_one at email.msn.com  Sat Aug 19 00:14:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 18:14:25 -0400
Subject: [Python-Dev] [PEP 214] FW: sys.stndout syntax
In-Reply-To: <14749.45740.432586.615745@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEKJHAAA.tim_one@email.msn.com>

>     TP> For those of you who stopped reading comp.lang.python, note
>     TP> that this is *typical* of the advice users get in the absence
>     TP> of syntactic "print" sugar.

[Barry]
> Which is of course broken, if say, you print an object that has a
> str() that raises an exception.

Yes, and if you follow that thread on c.l.py, you'll find that it's also
typical for the suggestions to get more and more convoluted (for that and
other reasons).





From barry at scottb.demon.co.uk  Sat Aug 19 00:36:28 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Fri, 18 Aug 2000 23:36:28 +0100
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <20000814094440.0BC7F303181@snelboot.oratrix.nl>
Message-ID: <000501c00964$c00e0de0$060210ac@private>


> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Jack Jansen
> Sent: 14 August 2000 10:45
> To: Guido van Rossum
> Cc: Vladimir Marangozov; Python core developers
> Subject: Re: [Python-Dev] Preventing recursion core dumps
> 
> 
> Isn't the solution to this problem to just implement PyOS_CheckStack() for 
> unix?

	And for Windows...

	I still want to control the recursion depth for other reasons than
	preventing crashes. Especially when I have embedded Python inside my
	app. (Currently I have to defend against a GPF under Windows when
	def x(): x() is called.)

		Barry




From barry at scottb.demon.co.uk  Sat Aug 19 00:50:39 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Fri, 18 Aug 2000 23:50:39 +0100
Subject: [Python-Dev] Terminal programs (was: Python-dev summary: Jul 1-15)
In-Reply-To: <20000718124144.M29590@lyra.org>
Message-ID: <000601c00966$bac6f890$060210ac@private>

I can second Tera Term Pro. It is one of the few VT100 emulators that gets
the emulation right. Many terminal programs get the emulation wrong, often
badly. Without the docs for the VT series terminals, a developer won't know
the details of how the escape sequences should work, and apps will fail.

	Barry (ex-DEC VT expert)

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Greg Stein
> Sent: 18 July 2000 20:42
> To: python-dev at python.org
> Subject: [Python-Dev] Terminal programs (was: Python-dev summary: Jul
> 1-15)
> 
> 
> On Tue, Jul 18, 2000 at 10:13:21AM -0400, Andrew Kuchling wrote:
> > Thanks to everyone who made some suggestions.  The more minor
> > edits have been made, but I haven't added any of the threads I missed
> > because doing a long stretch of Emacs editing in this lame Windows terminal
> > program will drive me insane, so I just posted the summary to python-list.
> > 
> > <rant>How is it possible for Microsoft to not get a VT100-compatible
> > terminal program working correctly?  VT100s have been around since,
> > when, the 1970s?  Can anyone suggest a Windows terminal program that
> > *doesn't* suck dead bunnies through a straw?</rant>
> 
> yes.
> 
> I use "Tera Term Pro" with the SSH extensions. That gives me an excellent
> Telnet app, and it gives me SSH. I have never had a problem with it.
> 
> [ initially, there is a little tweakiness with creating the "known_hosts"
>   file, but you just hit "continue" and everything is fine after that. ]
> 
> Tera Term Pro can be downloaded from some .jp address. I think there is a
> 16-bit vs 32-bit program. I use the latter. The SSL stuff is located in Oz,
> me thinks.
> 
> I've got it on the laptop. Great stuff.
> 
> Cheers,
> -g
> 
> -- 
> Greg Stein, http://www.lyra.org/
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 



From james at daa.com.au  Sat Aug 19 02:54:30 2000
From: james at daa.com.au (James Henstridge)
Date: Sat, 19 Aug 2000 08:54:30 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399DA8D3.70E85C58@lemburg.com>
Message-ID: <Pine.LNX.4.21.0008190846110.25020-100000@james.daa.com.au>

On Fri, 18 Aug 2000, M.-A. Lemburg wrote:

> What I'm missing in your doc-string is a reference as to how
> well gettext works together with Unicode. After all, i18n is
> among other things about international character sets.
> Have you done any experiments in this area ?

At the C level, gettext supports Unicode only to the extent that the
catalog is encoded in UTF-8.

As an example, since GTK (a GUI toolkit) is moving to Pango (a library
that allows display of multiple languages at once), all the catalogs for
GTK programs will have to be re-encoded in UTF-8.

I don't know if it is worth adding explicit support to the python gettext
module though.

James.

-- 
Email: james at daa.com.au
WWW:   http://www.daa.com.au/~james/





From fdrake at beopen.com  Sat Aug 19 03:16:33 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 21:16:33 -0400 (EDT)
Subject: [Python-Dev] [Patch #101055] Cookie.py
In-Reply-To: <14749.43088.855537.355621@anthem.concentric.net>
References: <200008181951.MAA30358@bush.i.sourceforge.net>
	<14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
	<20000818164837.A8423@kronos.cnri.reston.va.us>
	<20000818230401.A376@xs4all.nl>
	<14749.43088.855537.355621@anthem.concentric.net>
Message-ID: <14749.57329.966314.171906@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > I don't think that's true, because the file won't have the tag
 > information in it.  That could be a problem in and of itself, but I
 > dunno.

  The confusion isn't from the tags, but the dates; if the ,v was
created 2 years ago, asking for the python tree as of a year ago
(using -D <date>) will include the file, even though it wasn't part of
our repository then.  Asking for a specific tag (using -r <tag>) will
properly not include it unless there's a matching tag there.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From james at daa.com.au  Sat Aug 19 03:26:44 2000
From: james at daa.com.au (James Henstridge)
Date: Sat, 19 Aug 2000 09:26:44 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <14749.42747.411862.940207@anthem.concentric.net>
Message-ID: <Pine.LNX.4.21.0008190854480.25020-100000@james.daa.com.au>

On Fri, 18 Aug 2000, Barry A. Warsaw wrote:

> 
> Apologies for duplicates to those of you already on python-dev...
> 
> I've been working on merging all the various implementations of Python
> interfaces to the GNU gettext libraries.  I've worked from code
> contributed by Martin, James, and Peter.  I now have something that
> seems to work fairly well so I thought I'd update you all.
> 
> After looking at all the various wizzy and experimental stuff in these
> implementations, I opted for simplicity, mostly just so I could get my
> head around what was needed.  My goal was to build a fast C wrapper
> module around the C library, and to provide a pure Python
> implementation of an identical API for platforms without GNU gettext.

Sounds good.  Most of the experimental stuff in my module turned out to
not be very useful.  Having a simple gettext module plus your pyxgettext
script should be enough.

> 
> I started with Martin's libintlmodule, renamed it _gettext and cleaned
> up the C code a bit.  This provides gettext(), dgettext(),
> dcgettext(), textdomain(), and bindtextdomain() functions.  The
> gettext.py module imports these, and if it succeeds, it's done.
> 
> If that fails, then there's a bunch of code, mostly derived from
> Peter's fintl.py module, that reads the binary .mo files and does the
> look ups itself.  Note that Peter's module only supported the GNU
> gettext binary format, and that's all mine does too.  It should be
> easy to support other binary formats (Solaris?) by overriding one
> method in one class, and contributions are welcome.

I think support for the Solaris big-endian .mo format would probably be
a good idea.  It is not very difficult and doesn't really add to the
complexity.

> 
> James's stuff looked cool too, what I grokked of it :) but I think
> those should be exported as higher level features.  I didn't include
> the ability to write .mo files or the exported Catalog objects.  I
> haven't used the I18N services enough to know whether these are
> useful.

As I said above, most of that turned out not to be very useful.  Did you
include any of the language selection code in the last version of my
gettext module?  It gave behaviour very close to C gettext in this
respect.  It expands the locale name given by the user using the
locale.alias files found on the system, then decomposes that into
simpler forms.  For instance, if LANG=en_GB, my gettext module would
search for catalogs by the names:
  ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']

This also allows things like expanding LANG=catalan to:
  ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
(provided the appropriate locale.alias files are found)
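The decomposition James describes can be sketched roughly as follows.
This is a hypothetical helper, not the actual code from his gettext
module; in particular, the alias table here is a stub standing in for
the system's locale.alias files.

```python
# Rough sketch of the locale-name expansion described above.
# LOCALE_ALIASES is a stub for the system's locale.alias files.
LOCALE_ALIASES = {"catalan": "ca_ES.ISO-8859-1"}

def expand_locale(lang):
    lang = LOCALE_ALIASES.get(lang, lang)
    # Split "language_TERRITORY.encoding" into its components.
    if "." in lang:
        base, encoding = lang.split(".", 1)
    else:
        base, encoding = lang, None
    if "_" in base:
        language, territory = base.split("_", 1)
    else:
        language, territory = base, None
    # Emit candidates from most to least specific, ending with "C".
    candidates = []
    if encoding and territory:
        candidates.append("%s_%s.%s" % (language, territory, encoding))
    if territory:
        candidates.append("%s_%s" % (language, territory))
    if encoding:
        candidates.append("%s.%s" % (language, encoding))
    candidates.append(language)
    candidates.append("C")
    return candidates

print(expand_locale("en_GB.ISO8859-1"))
# ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
print(expand_locale("catalan"))
# ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
```

Either call reproduces the two search lists given above.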

If you missed the version I sent you, I can send it again.  It stripped
out a lot of the experimental code, giving a much simpler module.

> 
> I added one convenience function, gettext.install().  If you call
> this, it inserts the gettext.gettext() function into the builtins
> namespace as `_'.  You'll often want to do this, based on the I18N
> translatable strings marking conventions.  Note that importing gettext
> does /not/ install by default!

That sounds like a good idea that will make things a lot easier in the
common case.

> 
> And since (I think) you'll almost always want to call bindtextdomain()
> and textdomain(), you can pass the domain and localedir in as
> arguments to install.  Thus, the simple and quick usage pattern is:
> 
>     import gettext
>     gettext.install('mydomain', '/my/locale/dir')
> 
>     print _('this is a localized message')
> 
> I think it'll be easier to critique this stuff if I just check it in.
> Before I do, I still need to write up a test module and hack together
> docos.  In the meantime, here's the module docstring for gettext.py.
> Talk amongst yourselves. :)
> 
> -Barry

James.

-- 
Email: james at daa.com.au
WWW:   http://www.daa.com.au/~james/





From Vladimir.Marangozov at inrialpes.fr  Sat Aug 19 05:27:20 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 05:27:20 +0200 (CEST)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <14749.33584.683341.684523@cj42289-a.reston1.va.home.com> from "Fred L. Drake, Jr." at Aug 18, 2000 02:40:48 PM
Message-ID: <200008190327.FAA10001@python.inrialpes.fr>

Fred L. Drake, Jr. wrote:
> 
> 
> Vladimir Marangozov writes:
>  > So name it PyModule_AddConstant(module, name, constant),
>  > which fails with "can't add constant to module" err msg.
> 
>   Even better!  I expect there should be at least a couple of these;
> one for ints, one for strings.
> 

What about something like this (untested):

------------------------------------------------------------------------
int
PyModule_AddObject(PyObject *m, char *name, PyObject *o)
{
        if (!PyModule_Check(m) || o == NULL)
                return -1;
        if (PyDict_SetItemString(((PyModuleObject *)m)->md_dict, name, o)) {
                Py_DECREF(o);   /* steal the reference to o even on failure */
                return -1;
        }
        Py_DECREF(o);
        return 0;
}

#define PyModule_AddConstant(m, x) \
        PyModule_AddObject(m, #x, PyInt_FromLong(x))

#define PyModule_AddString(m, x) \
        PyModule_AddObject(m, x, PyString_FromString(x))

------------------------------------------------------------------------
void 
initmymodule(void)
{
        int CONSTANT = 123456;
        char *STR__doc__  = "Vlad";

        PyObject *m = Py_InitModule4("mymodule"...);


 
        if (PyModule_AddString(m, STR__doc__) ||
            PyModule_AddConstant(m, CONSTANT) ||
            ... 
        {
            Py_FatalError("can't init mymodule");
        }
}           


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From cgw at fnal.gov  Sat Aug 19 05:55:21 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 18 Aug 2000 22:55:21 -0500 (CDT)
Subject: [Python-Dev] RE: compile.c: problem with duplicate argument bugfix
Message-ID: <14750.1321.978274.117748@buffalo.fnal.gov>

I'm catching up on the python-dev archives and see your message.

Note that I submitted a patch back in May to fix this same problem:

 http://www.python.org/pipermail/patches/2000-May/000638.html

There you will find a working patch, and a detailed discussion which
explains why your approach results in a core-dump.

I submitted this patch back before Python moved over to SourceForge,
there was a small amount of discussion about it and then the word from
Guido was "I'm too busy to look at this now", and the patch got
dropped on the floor.




From tim_one at email.msn.com  Sat Aug 19 06:11:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 00:11:41 -0400
Subject: [Python-Dev] RE: [Patches] [Patch #101055] Cookie.py
In-Reply-To: <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELAHAAA.tim_one@email.msn.com>

Moving this over from patches to python-dev.

My 2 cents:  The primary job of a source control system is to maintain an
accurate and easily retrieved historical record of a project.  Tim
O'Malley's ,v file records the history of his project, Python's should
record the history of its.  While a handful of people at CNRI have been able
to (or could, if they were of a common mind to) keep track of a handful of
exceptions in their heads, Python's CVS tree is available to the world now,
and approximately nobody looking at it will have any knowledge of this
discussion.  If they ask CVS for a date-based snapshot of the past, they're
using CVS for what it's *designed* for, and they should get back what they
asked for.

Have these kinds of tricks already been played in the CVS tree?  I'm mildly
concerned about that too, because whenever license or copyright issues are
in question, an accurate historical record is crucial ("Now, Mr. Kuchling,
isn't it true that you deliberately sabotaged the history of the Python
project in order to obscure your co-conspirators' theft of my client's
intellectual property?" <0.9 wink>).

let's-honor-the-past-by-keeping-it-honest-ly y'rs  - tim

> -----Original Message-----
> From: patches-admin at python.org [mailto:patches-admin at python.org]On
> Behalf Of Fred L. Drake, Jr.
> Sent: Friday, August 18, 2000 4:06 PM
> To: noreply at sourceforge.net
> Cc: akuchlin at mems-exchange.org; patches at python.org
> Subject: Re: [Patches] [Patch #101055] Cookie.py
>
>
>
> noreply at sourceforge.net writes:
>  > I have a copy of Tim O'Malley's ,v file (in order to preserve the
>  > version history).  I can either ask the SF admins to drop it into
>  > the right place in the CVS repository, but will that screw up the
>  > Python 1.6 tagging in some way?  (I would expect not, but who
>  > knows?)
>
>   That would have no effect on any of the Python tagging.  It's
> probably worthwhile making sure there are no tags in the ,v file, but
> that can be done after it gets dropped in place.
>   Now, Greg Stein will tell us that dropping this into place is the
> wrong thing to do.  What it *will* screw up is people asking for the
> state of Python at a specific date before the file was actually added;
> they'll get this file even for when it wasn't in the Python CVS tree.
> I can live with that, but we should make a policy decision for the
> Python tree regarding this sort of thing.
>
>  > The second option would be for me to retrace Cookie.py's
>  > development -- add revision 1.1, check in revision 1.2 with the
>  > right log message, check in revision 1.3, &c.  Obviously I'd prefer
>  > to not do this.
>
>   Don't -- it's not worth it.
>
>
>   -Fred
>
> --
> Fred L. Drake, Jr.  <fdrake at beopen.com>
> BeOpen PythonLabs Team Member
>
>
> _______________________________________________
> Patches mailing list
> Patches at python.org
> http://www.python.org/mailman/listinfo/patches





From cgw at fnal.gov  Sat Aug 19 06:31:06 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 18 Aug 2000 23:31:06 -0500 (CDT)
Subject: [Python-Dev] Eureka! (Re: test_fork fails --with-thread)
Message-ID: <14750.3466.34096.504552@buffalo.fnal.gov>


Last month there was a flurry of discussion, around 

http://www.python.org/pipermail/python-dev/2000-July/014208.html

about problems arising when combining threading and forking.  I've
been reading through the python-dev archives and as far as I can tell
this problem has not yet been resolved.

Well, I think I understand what's going on and I have a patch that
fixes the problem.

Contrary to some folklore, you *can* use fork() in threaded code; you
just have to be a bit careful about locks...

Rather than write up a long-winded explanation myself, allow me to
quote:

-----------------------------------------------------------------
from "man pthread_atfork":

       ... recall that fork(2) duplicates the whole memory space,
       including mutexes in their current locking state, but only the
       calling thread: other threads are not running in the child
       process. Thus, if a mutex is locked by a thread other than
       the thread calling fork, that  mutex  will  remain  locked
       forever in the child process, possibly blocking the execu-
       tion of the child process. 

and from http://www.lambdacs.com/newsgroup/FAQ.html#Q120

  Q120: Calling fork() from a thread 

  > Can I fork from within a thread ?

  Absolutely.

  > If that is not explicitly forbidden, then what happens to the
  > other threads in the child process ?

  There ARE no other threads in the child process. Just the one that
  forked. If your application/library has background threads that need
  to exist in a forked child, then you should set up an "atfork" child
  handler (by calling pthread_atfork) to recreate them. And if you use
  mutexes, and want your application/library to be "fork safe" at all,
  you also need to supply an atfork handler set to pre-lock all your
  mutexes in the parent, then release them in the parent and child
  handlers.  Otherwise, ANOTHER thread might have a mutex locked when
  one thread forks -- and because the owning thread doesn't exist in
  the child, the mutex could never be released. (And, worse, whatever
  data is protected by the mutex is in an unknown and inconsistent
  state.)

-------------------------------------------------------------------
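The hazard quoted above, and the fix strategy used below, can be
sketched in a few lines of (modern) Python.  This is an illustration of
the principle only, not the CPython patch itself: a lock held at fork()
time stays locked forever in the child, so the child replaces it with a
fresh lock instead of trying to release the inherited one.

```python
import os
import threading

lock = threading.Lock()

def reinit_after_fork():
    # Mirror of the patch's strategy: don't try to release the inherited
    # (possibly foreign-owned) lock; just allocate a brand-new one.
    global lock
    lock = threading.Lock()

lock.acquire()           # parent holds the lock across the fork
pid = os.fork()
if pid == 0:
    reinit_after_fork()  # without this, acquire() below would deadlock
    lock.acquire()       # fresh lock: succeeds immediately
    os._exit(0)
else:
    lock.release()
    _, status = os.waitpid(pid, 0)
    print("child exit status:", status)
```

Dropping the `reinit_after_fork()` call makes the child hang on
`lock.acquire()`, which is exactly the deadlock the patch prevents.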

Below is a patch (I will also post this to SourceForge)

Notes on the patch:

1) I didn't make use of pthread_atfork, because I don't know how
   portable it is.  So, if somebody uses "fork" in a C extension there
   will still be trouble.

2) I'm deliberately not cleaning up the old lock before creating 
   the new one, because the lock destructors also do error-checking.
   It might be better to add a PyThread_reset_lock function to all the
   thread_*.h files, but I'm hesitant to do this because of the amount
   of testing required.


Patch:

Index: Modules/signalmodule.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Modules/signalmodule.c,v
retrieving revision 2.53
diff -c -r2.53 signalmodule.c
*** Modules/signalmodule.c	2000/08/03 02:34:44	2.53
--- Modules/signalmodule.c	2000/08/19 03:37:52
***************
*** 667,672 ****
--- 667,673 ----
  PyOS_AfterFork(void)
  {
  #ifdef WITH_THREAD
+ 	PyEval_ReInitThreads();
  	main_thread = PyThread_get_thread_ident();
  	main_pid = getpid();
  #endif
Index: Parser/intrcheck.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Parser/intrcheck.c,v
retrieving revision 2.39
diff -c -r2.39 intrcheck.c
*** Parser/intrcheck.c	2000/07/31 15:28:04	2.39
--- Parser/intrcheck.c	2000/08/19 03:37:54
***************
*** 206,209 ****
--- 206,212 ----
  void
  PyOS_AfterFork(void)
  {
+ #ifdef WITH_THREAD
+ 	PyEval_ReInitThreads();
+ #endif
  }
Index: Python/ceval.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/ceval.c,v
retrieving revision 2.191
diff -c -r2.191 ceval.c
*** Python/ceval.c	2000/08/18 19:53:25	2.191
--- Python/ceval.c	2000/08/19 03:38:06
***************
*** 142,147 ****
--- 142,165 ----
  		Py_FatalError("PyEval_ReleaseThread: wrong thread state");
  	PyThread_release_lock(interpreter_lock);
  }
+ 
+ /* This function is called from PyOS_AfterFork to ensure that newly
+    created child processes don't hold locks referring to threads which
+    are not running in the child process.  (This could also be done using
+    pthread_atfork mechanism, at least for the pthreads implementation) */
+ void
+ PyEval_ReInitThreads(void)
+ {
+ 	if (!interpreter_lock)
+ 		return;
+ 	/*XXX Can't use PyThread_free_lock here because it does too
+ 	  much error-checking.  Doing this cleanly would require
+ 	  adding a new function to each thread_*.h.  Instead, just
+ 	  create a new lock and waste a little bit of memory */
+ 	interpreter_lock = PyThread_allocate_lock();
+ 	PyThread_acquire_lock(interpreter_lock, 1);
+ 	main_thread = PyThread_get_thread_ident();
+ }
  #endif
  
  /* Functions save_thread and restore_thread are always defined so





From esr at thyrsus.com  Sat Aug 19 07:17:17 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 19 Aug 2000 01:17:17 -0400
Subject: [Python-Dev] Request for help w/ bsddb module
In-Reply-To: <20000817224632.A525@207-172-146-154.s154.tnt3.ann.va.dialup.rcn.com>; from amk@s154.tnt3.ann.va.dialup.rcn.com on Thu, Aug 17, 2000 at 10:46:32PM -0400
References: <20000817224632.A525@207-172-146-154.s154.tnt3.ann.va.dialup.rcn.com>
Message-ID: <20000819011717.L835@thyrsus.com>

A.M. Kuchling <amk at s154.tnt3.ann.va.dialup.rcn.com>:
> (Can this get done in time for Python 2.0?  Probably.  Can it get
> tested in time for 2.0?  Ummm....)

I have zero experience with writing C extensions, so I'm probably not
best deployed on this.  But I'm willing to help with testing.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"As to the species of exercise, I advise the gun. While this gives
[only] moderate exercise to the body, it gives boldness, enterprise,
and independence to the mind.  Games played with the ball and others
of that nature, are too violent for the body and stamp no character on
the mind. Let your gun, therefore, be the constant companion to your
walks."
        -- Thomas Jefferson, writing to his teenaged nephew.



From tim_one at email.msn.com  Sat Aug 19 07:11:28 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 01:11:28 -0400
Subject: [Python-Dev] Who can make test_fork1 fail?
Message-ID: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>

Note that a patch has been posted to SourceForge that purports to solve
*some* thread vs fork problems:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101226&group_id=5470

Since nobody has made real progress on figuring out why test_fork1 fails on
some systems, would somebody who is able to make it fail please just try
this patch & see what happens?

understanding-is-better-than-a-fix-but-i'll-settle-for-the-latter-ly
    y'rs  - tim





From cgw at fnal.gov  Sat Aug 19 07:26:33 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Sat, 19 Aug 2000 00:26:33 -0500 (CDT)
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>
Message-ID: <14750.6793.342815.211141@buffalo.fnal.gov>

Tim Peters writes:

 > Since nobody has made real progress on figuring out why test_fork1 
 > fails on some systems,  would somebody who is able to make it fail
 > please just try this patch & see what happens?

Or try this program (based on Neil's example), which will fail almost
immediately unless you apply my patch:


import thread
import os, sys
import time

def doit(name):
    while 1:
        if os.fork()==0:
            print name, 'forked', os.getpid()
            os._exit(0)
        r = os.wait()

for x in range(50):
    name = 't%s'%x
    print 'starting', name
    thread.start_new_thread(doit, (name,))

time.sleep(300)




From tim_one at email.msn.com  Sat Aug 19 07:59:12 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 01:59:12 -0400
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <14750.6793.342815.211141@buffalo.fnal.gov>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELGHAAA.tim_one@email.msn.com>

[Tim]
> Since nobody has made real progress on figuring out why test_fork1
> fails on some systems,  would somebody who is able to make it fail
> please just try this patch & see what happens?

[Charles G Waldman]
> Or try this program (based on Neil's example), which will fail almost
> immediately unless you apply my patch:

Not "or", please, "both".  Without understanding the problem in detail, we
have no idea how many bugs are lurking here.  For example, Python allocates
at least two locks besides "the global lock", and "doing something" about
the latter alone may not help with all the failing test cases.  Note too
that the pthread_atfork docs were discussed earlier, and neither Guido nor I
were able to dream up a scenario that accounted for the details of most
failures people *saw*:  we both stumbled into another (and the same) failing
scenario, but it didn't match the stacktraces people posted (which showed
deadlocks/hangs in the *parent* thread; but at a fork, only the state of the
locks in the child "can" get screwed up).  The patch you posted should plug
the "deadlock in the child" scenario we did understand, but that scenario
didn't appear to be relevant in most cases.

The more info the better, let's just be careful to test *everything* that
failed before writing this off.





From ping at lfw.org  Sat Aug 19 08:43:18 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 18 Aug 2000 23:43:18 -0700 (PDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818182246.V376@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10008182338190.416-100000@skuld.lfw.org>

My $0.02.

+1 on:    import <modname> as <localmodname>
          import <pkgname> as <localpkgname>

+1 on:    from <modname> import <symname> as <localsymname>
          from <pkgname> import <modname> as <localmodname>

+1 on:    from <pkgname>.<modname> import <symname> as <localsymname>
          from <pkgname>.<pkgname> import <modname> as <localmodname>


-1 on *either* meaning of:

          import <pkgname>.<modname> as <localname>

...as it's not clear what the correct meaning is.

If the intent of this last form is to import a sub-module of a
package into the local namespace with an aliased name, then you
can just say

          from <pkgname> import <modname> as <localname>

and the meaning is then quite clear.
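For instance, with the unambiguous forms, the aliasing behaviour is easy
to demonstrate (standard library modules used purely for illustration):

```python
# Only the alias is bound in the importing namespace; neither
# 'string' nor 'os' gains a new binding from these statements.
import string as s
from os import path as p

assert s.digits == '0123456789'
assert p.basename('/tmp/spam.py') == 'spam.py'
```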



-- ?!ng




From ping at lfw.org  Sat Aug 19 08:38:10 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 18 Aug 2000 23:38:10 -0700 (PDT)
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items()
In-Reply-To: <14749.18016.323403.295212@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008182336010.416-100000@skuld.lfw.org>

On Fri, 18 Aug 2000, Fred L. Drake, Jr. wrote:
>   I hadn't considered *not* using an "in" clause, but that is actually
> pretty neat.  I'd like to see all of these allowed; disallowing "for i
> indexing e in ...:" reduces the intended functionality substantially.

I like them all as well (and had previously assumed that the "indexing"
proposal included the "for i indexing sequence" case!).

While we're sounding off on the issue, i'm quite happy (+1) on both of:

          for el in seq:
          for i indexing seq:
          for i indexing el in seq:

and

          for el in seq:
          for i in indices(seq):
          for i, el in irange(seq):

with a slight preference for the former.
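For reference, the two helpers in the second spelling are trivial to
sketch (hypothetical functions; neither name exists in the library):

```python
def indices(seq):
    # Hypothetical helper: the valid index values for seq.
    return range(len(seq))

def irange(seq):
    # Hypothetical helper: (index, element) pairs for seq.
    result = []
    for i in range(len(seq)):
        result.append((i, seq[i]))
    return result
```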


-- ?!ng




From loewis at informatik.hu-berlin.de  Sat Aug 19 09:25:20 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 19 Aug 2000 09:25:20 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399DA8D3.70E85C58@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com>
Message-ID: <200008190725.JAA26022@pandora.informatik.hu-berlin.de>

> What I'm missing in your doc-string is a reference as to how
> well gettext works together with Unicode. After all, i18n is
> among other things about international character sets.
> Have you done any experiments in this area ?

I have, to some degree. As others pointed out, gettext maps byte
arrays to byte arrays. However, in the GNU internationalization
project, it is convention to put an entry like

msgid ""
msgstr ""
"Project-Id-Version: GNU grep 2.4\n"
"POT-Creation-Date: 1999-11-13 11:33-0500\n"
"PO-Revision-Date: 1999-12-07 10:10+01:00\n"
"Last-Translator: Martin von L?wis <martin at mira.isdn.cs.tu-berlin.de>\n"
"Language-Team: German <de at li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=ISO-8859-1\n"
"Content-Transfer-Encoding: 8-bit\n"

into the catalog, which can be accessed as translation of the empty
string. It typically has a charset= element, which makes it possible to
determine what character set is used in the catalog. Of course, this is a
convention only, so it may not be present. If it is absent, and
conversion to Unicode is requested, it is probably a good idea to
assume UTF-8 (as James indicated, that will be the GNOME coded
character set for catalogs, for example).
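The convention can be exploited straightforwardly; here is a sketch
(function names are made up) of extracting the charset from that
header entry:

```python
def parse_po_header(header):
    # 'header' is the translation of the empty msgid: a block of
    # RFC 822 style "Key: value" lines.
    info = {}
    for line in header.splitlines():
        if ':' in line:
            key, value = line.split(':', 1)
            info[key.strip().lower()] = value.strip()
    return info

def charset_of(header, default='utf-8'):
    # Pull charset= out of the Content-Type line; fall back to
    # UTF-8 when the convention is not followed.
    ctype = parse_po_header(header).get('content-type', '')
    for part in ctype.split(';'):
        part = part.strip()
        if part.startswith('charset='):
            return part[len('charset='):]
    return default
```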

In any case, I think it is a good idea to support retrieval of
translated strings as Unicode objects. I can think of two alternative
interfaces:

gettext.gettext(msgid, unicode=1)
#or
gettext.unigettext(msgid)

Of course, if applications install _, they'd know whether they want
unicode or byte strings, so _ would still take a single argument.

However, I don't think that this feature must be there at the first
checkin; I'd volunteer to work on a patch after Barry has installed
his code, and after I got some indication what the interface should
be.

Regards,
Martin



From tim_one at email.msn.com  Sat Aug 19 11:19:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 05:19:23 -0400
Subject: [Python-Dev] RE: Call for reviewer!
In-Reply-To: <B5BF7652.7B39%dgoodger@bigfoot.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOELJHAAA.tim_one@email.msn.com>

[David Goodger]
> I thought the "backwards compatibility" issue might be a sticking point ;>
> And I *can* see why.
>
> So, if I were to rework the patch to remove the incompatibility, would it
> fly or still be shot down?

I'm afraid "shot down" is the answer, but it's no reflection on the quality
of your work.  Guido simply doesn't want any enhancements of any kind to
getopt to be distributed in the standard library.  He made that very clear
in a conference call with the PythonLabs team, and as the interim 2.0
release manager du jour I pass that on in his absence.

This wasn't a surprise to me, as there's a very long history of rejected
getopt patches.  There are certainly users who *want* fancier getopt
capabilities!  The problem in making them happy is threefold:  (1) most
users don't (as the lack of positive response in this thread on Python-Dev
confirms); (2) users who do want them seem unable to agree on how they
should work (witness the bifurcation in your own patch set); and, (3) Guido
actively wants to keep the core getopt simple in the absence of both demand
for, and consensus on, more than it offers already.

This needn't mean your work is dead.  It will find users if you make it
available on the web, and even within the core, Andrew Kuchling pointed
out that the Distutils folks are keen to have a fancier getopt for their
own purposes:

[Andrew]
> Note that there's Lib/distutils/fancy_getopt.py.  The docstring reads:
>
> Wrapper around the standard getopt module that provides the following
> additional features:
>  * short and long options are tied together
>  * options have help strings, so fancy_getopt could potentially
>    create a complete usage summary
>  * options set attributes of a passed-in object

So you might want to talk to Greg Ward about that too (Greg is the
Distutils Dood).

[back to David]
> ...
> BUT WAIT, THERE'S MORE! As part of the deal, you get a free
> test_getopt.py regression test module! Act now; vote +1! (Actually,
> you'll get that no matter what you vote. I'll remove the getoptdict-
> specific stuff and resubmit it if this patch is rejected.)

We don't have to ask Guido about *that*:  a test module for getopt would be
accepted with extreme (yet intangible <wink>) gratitude.  Thank you!





From mal at lemburg.com  Sat Aug 19 11:28:32 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 19 Aug 2000 11:28:32 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de>
Message-ID: <399E5340.B00811EF@lemburg.com>

Martin von Loewis wrote:
> 
> In any case, I think it is a good idea to support retrieval of
> translated strings as Unicode objects. I can think of two alternative
> interfaces:
> 
> gettext.gettext(msgid, unicode=1)
> #or
> gettext.unigettext(msgid)
> 
> Of course, if applications install _, they'd know whether they want
> unicode or byte strings, so _ would still take a single argument.

Hmm, if your catalogs are encoded in UTF-8 and use non-ASCII
chars then the traditional API would have to raise encoding
errors -- probably not a good idea since the errors would be
hard to deal with in large applications.

Perhaps the return value type of .gettext() should be given on
the .install() call: e.g. encoding='utf-8' would have .gettext()
return a string using UTF-8 while encoding='unicode' would have
it return Unicode objects.
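That install-time choice might look roughly like this (a sketch with
made-up names, not the actual gettext interface):

```python
def install(catalog, catalog_charset='ISO-8859-1', encoding=None):
    # Hypothetical sketch: pick the return type of the _() lookup
    # once, at install time.  'catalog' maps msgid strings to the
    # raw byte strings stored in the .mo file.
    def _(msgid):
        raw = catalog.get(msgid, msgid.encode('ascii'))
        if encoding == 'unicode':
            return raw.decode(catalog_charset)    # a Unicode object
        if encoding is not None:
            # Recode into the requested byte encoding.
            return raw.decode(catalog_charset).encode(encoding)
        return raw                                # raw catalog bytes
    return _
```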
 
[Which makes me think: perhaps I should add a new codec which
does pretty much the same as the unicode() call: convert the
input data to Unicode ?!]

> However, I don't think that this feature must be there at the first
> checkin; I'd volunteer to work on a patch after Barry has installed
> his code, and after I got some indication what the interface should
> be.

Right.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Sat Aug 19 11:37:28 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 19 Aug 2000 11:37:28 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
		<399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net>
Message-ID: <399E5558.C7B6029B@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     M> I know that gettext is a standard, but from a technology POV I
>     M> would have implemented this as a codec which can then be plugged
>     M> wherever l10n is needed, since strings have the new .encode()
>     M> method which could just as well be used to convert not only the
>     M> string into a different encoding, but also a different
>     M> language.  Anyway, just a thought...
> 
> That might be cool to play with, but I haven't done anything with
> Python's Unicode stuff (and painfully little with gettext too) so
> right now I don't see how they'd fit together.  My gut reaction is
> that gettext could be the lower level interface to
> string.encode(language).

Oh, codecs are not just about Unicode. Normal string objects
also have an .encode() method which can be used for these
purposes as well.
 
>     M> What I'm missing in your doc-string is a reference as to how
>     M> well gettext works together with Unicode. After all, i18n is
>     M> among other things about international character sets.
>     M> Have you done any experiments in this area ?
> 
> No, but I've thought about it, and I don't think the answer is good.
> The GNU gettext functions take and return char*'s, which probably
> isn't very compatible with Unicode.  _gettext therefore takes and
> returns PyStringObjects.

Martin mentioned the possibility of using UTF-8 for the
catalogs and then decoding them into Unicode. That should be
a reasonable way of getting .gettext() to talk Unicode :-)
 
> We could do better with the pure-Python implementation, and that might
> be a good reason to forgo any performance gains or platform-dependent
> benefits you'd get with _gettext.  Of course the trick is using the
> Unicode-unaware tools to build .mo files containing Unicode strings.
> I don't track GNU gettext developement close enough to know whether
> they are addressing Unicode issues or not.

Just dreaming a little here: I would prefer that we use some
form of XML to write the catalogs. XML comes with Unicode support
and tools for writing XML are available too. We'd only need
a way to transform XML into catalog files of some Python-specific,
platform-independent format (it should be possible to create .mo
files from XML too).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Sat Aug 19 11:44:19 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 19 Aug 2000 11:44:19 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <Pine.LNX.4.21.0008190854480.25020-100000@james.daa.com.au>
Message-ID: <399E56F3.53799860@lemburg.com>

James Henstridge wrote:
> 
> On Fri, 18 Aug 2000, Barry A. Warsaw wrote:
> 
> > I started with Martin's libintlmodule, renamed it _gettext and cleaned
> > up the C code a bit.  This provides gettext(), dgettext(),
> > dcgettext(), textdomain(), and bindtextdomain() functions.  The
> > gettext.py module imports these, and if it succeeds, it's done.
> >
> > If that fails, then there's a bunch of code, mostly derived from
> > Peter's fintl.py module, that reads the binary .mo files and does the
> > look ups itself.  Note that Peter's module only supported the GNU
> > gettext binary format, and that's all mine does too.  It should be
> > easy to support other binary formats (Solaris?) by overriding one
> > method in one class, and contributions are welcome.
> 
> I think support for Solaris big endian format .po files would probably be
> a good idea.  It is not very difficult and doesn't really add to the
> complexity.
> 
> >
> > James's stuff looked cool too, what I grokked of it :) but I think
> > those should be exported as higher level features.  I didn't include
> > the ability to write .mo files or the exported Catalog objects.  I
> > haven't used the I18N services enough to know whether these are
> > useful.
> 
> As I said above, most of that turned out not to be very useful.  Did you
> include any of the language selection code in the last version of my
> gettext module?  It gave behaviour very close to C gettext in this
> respect.  It expands the locale name given by the user using the
> locale.alias files found on the systems, then decomposes that into the
> simpler forms.  For instance, if LANG=en_GB, then my gettext module would
> search for catalogs by names:
>   ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
> 
> This also allows things like expanding LANG=catalan to:
>   ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
> (provided the appropriate locale.alias files are found)
> 
> If you missed that that version I sent you I can send it again.  It
> stripped out a lot of the experimental code giving a much simpler module.

Uhm, can't you make some use of the new APIs in locale.py
for this ?

locale.py has a whole new set of encoding aware support for
LANG variables. It supports Unix and Windows (thanks to /F).
 
--
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mwh21 at cam.ac.uk  Sat Aug 19 11:52:00 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 19 Aug 2000 10:52:00 +0100
Subject: [Python-Dev] [Patch #101055] Cookie.py
In-Reply-To: "Fred L. Drake, Jr."'s message of "Fri, 18 Aug 2000 21:16:33 -0400 (EDT)"
References: <200008181951.MAA30358@bush.i.sourceforge.net> <14749.38716.228254.649957@cj42289-a.reston1.va.home.com> <20000818164837.A8423@kronos.cnri.reston.va.us> <20000818230401.A376@xs4all.nl> <14749.43088.855537.355621@anthem.concentric.net> <14749.57329.966314.171906@cj42289-a.reston1.va.home.com>
Message-ID: <m3itsxpohr.fsf@atrus.jesus.cam.ac.uk>

"Fred L. Drake, Jr." <fdrake at beopen.com> writes:

> Barry A. Warsaw writes:
>  > I don't think that's true, because the file won't have the tag
>  > information in it.  That could be a problem in and of itself, but I
>  > dunno.
> 
>   The confusion isn't from the tags, but the dates; if the ,v was
> created 2 years ago, asking for the python tree as of a year ago
> (using -D <date>) will include the file, even though it wasn't part of
> our repository then.  Asking for a specific tag (using -r <tag>) will
> properly not include it unless there's a matching tag there.

Is it feasible to hack the dates in the ,v file so that it looks like
all the revisions happened between say

2000.08.19.10.50.00

and

2000.08.19.10.51.00

?  This probably has problems too, but they will be more subtle...

Cheers,
M.

-- 
  That's why the smartest companies use Common Lisp, but lie about it
  so all their competitors think Lisp is slow and C++ is fast.  (This
  rumor has, however, gotten a little out of hand. :)
                                        -- Erik Naggum, comp.lang.lisp




From Vladimir.Marangozov at inrialpes.fr  Sat Aug 19 12:23:12 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 12:23:12 +0200 (CEST)
Subject: [Python-Dev] RE: Introducing memprof (was PyErr_NoMemory)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEJJHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 18, 2000 03:11:11 PM
Message-ID: <200008191023.MAA11071@python.inrialpes.fr>

Tim Peters wrote:
> 
> My bandwidth is consumed by 2.0 issues, so I won't look at it.  On the
> chance that Guido gets hit by a bus, though, and I have time to kill at his
> funeral, it would be nice to have it available on SourceForge.  Uploading a
> postponed patch sounds fine!

Done. Both patches are updated and relative to current CVS:

Optional object malloc:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101104&group_id=5470

Optional memory profiler:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101229&group_id=5470

Let me insist again that these are totally optional and off by default
(lately, a recurrent wish of mine regarding proposed features). 

Since they're optional, off by default, and constitute a solid base for
further work on mem + GC, and despite the tiny imperfections I see in
the profiler, I think I'm gonna push a bit, given that I'm pretty
confident in the code and that it barely affects anything.

So while I'm out of town, my mailbox would be happy to register any
opinions that the python-dev crowd might have (I'm thinking of Barry
and Neil Schemenauer in particular). Also, when BDFL is back from
Palo Alto, give him a chance to emit a statement (although I know
he's not a memory fan <wink>).

I'll *try* to find some time for docs and test cases, but I'd like to
get some preliminary feedback first (especially if someone cares to try
this on a 64-bit machine). That's it for now.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From fdrake at beopen.com  Sat Aug 19 14:44:00 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Sat, 19 Aug 2000 08:44:00 -0400 (EDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <Pine.LNX.4.10.10008182338190.416-100000@skuld.lfw.org>
References: <20000818182246.V376@xs4all.nl>
	<Pine.LNX.4.10.10008182338190.416-100000@skuld.lfw.org>
Message-ID: <14750.33040.285051.600113@cj42289-a.reston1.va.home.com>

Ka-Ping Yee writes:
 > If the intent of this last form is to import a sub-module of a
 > package into the local namespace with an aliased name, then you
 > can just say
 > 
 >           from <pkgname> import <modname> as <localname>

  I could live with this restriction, and this expression is
unambiguous (a good thing for Python!).


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From moshez at math.huji.ac.il  Sat Aug 19 15:54:21 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sat, 19 Aug 2000 16:54:21 +0300 (IDT)
Subject: [Python-Dev] Intent to document: Cookie.py
Message-ID: <Pine.GSO.4.10.10008191645250.7468-100000@sundial>

This is just a notice that I'm currently in the middle of documenting
Cookie. I should be finished sometime today. This is just to stop anyone
else from wasting his time -- if you've got time to kill, you can write a
test suite <wink>

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From james at daa.com.au  Sat Aug 19 16:14:12 2000
From: james at daa.com.au (James Henstridge)
Date: Sat, 19 Aug 2000 22:14:12 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399E56F3.53799860@lemburg.com>
Message-ID: <Pine.LNX.4.21.0008192202520.25394-100000@james.daa.com.au>

On Sat, 19 Aug 2000, M.-A. Lemburg wrote:

> > As I said above, most of that turned out not to be very useful.  Did you
> > include any of the language selection code in the last version of my
> > gettext module?  It gave behaviour very close to C gettext in this
> > respect.  It expands the locale name given by the user using the
> > locale.alias files found on the systems, then decomposes that into the
> > simpler forms.  For instance, if LANG=en_GB, then my gettext module would
> > search for catalogs by names:
> >   ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
> > 
> > This also allows things like expanding LANG=catalan to:
> >   ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
> > (provided the appropriate locale.alias files are found)
> > 
> > If you missed that that version I sent you I can send it again.  It
> > stripped out a lot of the experimental code giving a much simpler module.
> 
> Uhm, can't you make some use of the new APIs in locale.py
> for this ?
> 
> locale.py has a whole new set of encoding aware support for
> LANG variables. It supports Unix and Windows (thanks to /F).

Well, my module does a little more than that.  It also handles the case
of a number of locales listed in the LANG environment variable.  And
locale.py doesn't look like it handles decomposition of a locale like
ll_CC.encoding@modifier into the other matching names in the correct
precedence order.

Maybe something to do this sort of decomposition would fit better in
locale.py though.

This sort of thing is very useful for people who know more than one
language, and doesn't seem to be handled by plain setlocale().
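A decomposition step of that sort could be sketched as follows
(illustrative only, and ignoring the locale.alias expansion that
would happen first):

```python
def expand_locale(loc):
    # Decompose a locale name of the form ll_CC.encoding@modifier
    # into progressively simpler fallbacks, most specific first,
    # always ending with the 'C' locale.
    candidates = []
    def add(name):
        if name not in candidates:
            candidates.append(name)
    add(loc)
    if '@' in loc:                      # strip the modifier
        loc = loc.split('@', 1)[0]
        add(loc)
    encoding = None
    if '.' in loc:                      # strip the encoding
        loc, encoding = loc.split('.', 1)
        add(loc)
    if '_' in loc:                      # strip the territory
        lang = loc.split('_', 1)[0]
        if encoding:
            add(lang + '.' + encoding)
        add(lang)
    add('C')
    return candidates
```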

>  
> --
> Marc-Andre Lemburg

James.

-- 
Email: james at daa.com.au
WWW:   http://www.daa.com.au/~james/





From fdrake at beopen.com  Sat Aug 19 16:14:27 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Sat, 19 Aug 2000 10:14:27 -0400 (EDT)
Subject: [Python-Dev] Intent to document: Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008191645250.7468-100000@sundial>
References: <Pine.GSO.4.10.10008191645250.7468-100000@sundial>
Message-ID: <14750.38467.274688.274349@cj42289-a.reston1.va.home.com>

Moshe Zadka writes:
 > This is just a notice that I'm currently in the middle of documenting
 > Cookie. I should be finished sometime today. This is just to stop anyone
 > else from wasting his time -- if you got time to kill, you can write a
 > test suite <wink>

  Great, thanks!  Just check it in as libcookie.tex when you're ready,
and I'll check the markup for details.  Someone familiar with the
module can proof it for content.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From m.favas at per.dem.csiro.au  Sat Aug 19 16:24:18 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sat, 19 Aug 2000 22:24:18 +0800
Subject: [Python-Dev] [Fwd: Who can make test_fork1 fail?]
Message-ID: <399E9892.35A1AC79@per.dem.csiro.au>

 
-------------- next part --------------
An embedded message was scrubbed...
From: Mark Favas <m.favas at per.dem.csiro.au>
Subject: Who can make test_fork1 fail?
Date: Sat, 19 Aug 2000 17:59:13 +0800
Size: 658
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000819/d63fb085/attachment.eml>

From tim_one at email.msn.com  Sat Aug 19 19:34:28 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 13:34:28 -0400
Subject: [Python-Dev] New anal crusade
Message-ID: <LNBBLJKPBEHFEDALKOLCGEMCHAAA.tim_one@email.msn.com>

Has anyone tried compiling Python under gcc with

    -Wmissing-prototypes -Wstrict-prototypes

?  Someone on Python-Help just complained about warnings under that mode,
but it's unclear to me which version of Python they were talking about.





From Vladimir.Marangozov at inrialpes.fr  Sat Aug 19 20:01:52 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 20:01:52 +0200 (CEST)
Subject: [Python-Dev] New anal crusade
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEMCHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 19, 2000 01:34:28 PM
Message-ID: <200008191801.UAA17999@python.inrialpes.fr>

Tim Peters wrote:
> 
> Has anyone tried compiling Python under gcc with
> 
>     -Wmissing-prototypes -Wstrict-prototypes
> 
> ?  Someone on Python-Help just complained about warnings under that mode,
> but it's unclear to me which version of Python they were talking about.

Just tried it. Indeed, there are a couple of warnings. Wanna list?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tim_one at email.msn.com  Sat Aug 19 20:33:57 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 14:33:57 -0400
Subject: [Python-Dev] New anal crusade
In-Reply-To: <200008191801.UAA17999@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEMGHAAA.tim_one@email.msn.com>

[Tim, on gcc -Wmissing-prototypes -Wstrict-prototypes]

[Vladimir]
> Just tried it. Indeed, there are a couple of warnings. Wanna list?

Not me personally, no.  The very subtle <wink> implied request in that was
that someone who *can* run gcc this way actually commit to doing so as a
matter of course, and fix warnings as they pop up.  But, in the absence of
joy, the occasional one-shot list is certainly better than nothing.
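For anyone who hasn't used these flags: -Wmissing-prototypes fires on
any extern function defined without a prior declaration in scope.  A
tiny illustration (function names invented):

```c
/* This definition alone draws:
   warning: no previous prototype for `init_spam'              */
int init_spam(void) { return 0; }

/* The usual fix: declare the function first (in real code the
   declaration lives in a header included by this file).       */
int init_ham(void);
int init_ham(void) { return 1; }

/* File-local helpers can instead be made static, which also
   silences the warning since they are not extern.             */
static int helper(void) { return 2; }

int init_all(void);
int init_all(void) { return init_spam() + init_ham() + helper(); }
```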





From Vladimir.Marangozov at inrialpes.fr  Sat Aug 19 20:58:18 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 20:58:18 +0200 (CEST)
Subject: [Python-Dev] New anal crusade
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEMGHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 19, 2000 02:33:57 PM
Message-ID: <200008191858.UAA31550@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Tim, on gcc -Wmissing-prototypes -Wstrict-prototypes]
> 
> [Vladimir]
> > Just tried it. Indeed, there are a couple of warnings. Wanna list?
> 
> Not me personally, no.  The very subtle <wink> implied request in that was
> that someone who *can* run gcc this way actually commit to doing so as a
> matter of course, and fix warnings as they pop up.  But, in the absence of
> joy, the occasional one-shot list is certainly better than nothing.

Sorry, I'm running after my plane (and I need to run fast <wink>) so please
find another volunteer. They're mostly ansification thingies, as expected.

Here's the list from the default ./configure, make, on Linux, so that
even someone without gcc can fix them <wink>:

----------------------------------------------------------------------------

pgenmain.c:43: warning: no previous prototype for `Py_Exit'
pgenmain.c:169: warning: no previous prototype for `PyOS_Readline'

myreadline.c:66: warning: no previous prototype for `PyOS_StdioReadline'

intrcheck.c:138: warning: no previous prototype for `PyErr_SetInterrupt'
intrcheck.c:191: warning: no previous prototype for `PyOS_FiniInterrupts'

fileobject.c:253: warning: function declaration isn't a prototype
fileobject.c:302: warning: function declaration isn't a prototype

floatobject.c:242: warning: no previous prototype for `PyFloat_AsStringEx'
floatobject.c:285: warning: no previous prototype for `PyFloat_AsString'

unicodeobject.c:548: warning: no previous prototype for `_PyUnicode_AsDefaultEncodedString'
unicodeobject.c:5142: warning: no previous prototype for `_PyUnicode_Init'
unicodeobject.c:5159: warning: no previous prototype for `_PyUnicode_Fini'

codecs.c:423: warning: no previous prototype for `_PyCodecRegistry_Init'
codecs.c:434: warning: no previous prototype for `_PyCodecRegistry_Fini'

frozenmain.c:34: warning: no previous prototype for `Py_FrozenMain'

getmtime.c:30: warning: no previous prototype for `PyOS_GetLastModificationTime'

import.c:2269: warning: no previous prototype for `initimp'

marshal.c:771: warning: no previous prototype for `PyMarshal_Init'

pyfpe.c:21: warning: no previous prototype for `PyFPE_dummy'

pythonrun.c: In function `run_pyc_file':
pythonrun.c:880: warning: function declaration isn't a prototype

dynload_shlib.c:49: warning: no previous prototype for `_PyImport_GetDynLoadFunc'

In file included from thread.c:125:
thread_pthread.h:205: warning: no previous prototype for `PyThread__exit_thread'

getopt.c:48: warning: no previous prototype for `getopt'

./threadmodule.c:389: warning: no previous prototype for `initthread'
./gcmodule.c:698: warning: no previous prototype for `initgc'
./regexmodule.c:661: warning: no previous prototype for `initregex'
./pcremodule.c:633: warning: no previous prototype for `initpcre'
./posixmodule.c:3698: warning: no previous prototype for `posix_strerror'
./posixmodule.c:5456: warning: no previous prototype for `initposix'
./signalmodule.c:322: warning: no previous prototype for `initsignal'
./_sre.c:2301: warning: no previous prototype for `init_sre'
./arraymodule.c:792: warning: function declaration isn't a prototype
./arraymodule.c:1511: warning: no previous prototype for `initarray'
./cmathmodule.c:412: warning: no previous prototype for `initcmath'
./mathmodule.c:254: warning: no previous prototype for `initmath'
./stropmodule.c:1205: warning: no previous prototype for `initstrop'
./structmodule.c:1225: warning: no previous prototype for `initstruct'
./timemodule.c:571: warning: no previous prototype for `inittime'
./operator.c:251: warning: no previous prototype for `initoperator'
./_codecsmodule.c:628: warning: no previous prototype for `init_codecs'
./unicodedata.c:277: warning: no previous prototype for `initunicodedata'
./ucnhash.c:107: warning: no previous prototype for `getValue'
./ucnhash.c:179: warning: no previous prototype for `initucnhash'
./_localemodule.c:408: warning: no previous prototype for `init_locale'
./fcntlmodule.c:322: warning: no previous prototype for `initfcntl'
./pwdmodule.c:129: warning: no previous prototype for `initpwd'
./grpmodule.c:136: warning: no previous prototype for `initgrp'
./errnomodule.c:74: warning: no previous prototype for `initerrno'
./mmapmodule.c:940: warning: no previous prototype for `initmmap'
./selectmodule.c:339: warning: no previous prototype for `initselect'
./socketmodule.c:2366: warning: no previous prototype for `init_socket'
./md5module.c:275: warning: no previous prototype for `initmd5'
./shamodule.c:550: warning: no previous prototype for `initsha'
./rotormodule.c:621: warning: no previous prototype for `initrotor'
./newmodule.c:205: warning: no previous prototype for `initnew'
./binascii.c:1014: warning: no previous prototype for `initbinascii'
./parsermodule.c:2637: warning: no previous prototype for `initparser'
./cStringIO.c:643: warning: no previous prototype for `initcStringIO'
./cPickle.c:358: warning: no previous prototype for `cPickle_PyMapping_HasKey'
./cPickle.c:2287: warning: no previous prototype for `Pickler_setattr'
./cPickle.c:4518: warning: no previous prototype for `initcPickle'

main.c:33: warning: function declaration isn't a prototype
main.c:79: warning: no previous prototype for `Py_Main'
main.c:292: warning: no previous prototype for `Py_GetArgcArgv'

getbuildinfo.c:34: warning: no previous prototype for `Py_GetBuildInfo'
./Modules/getbuildinfo.c:34: warning: no previous prototype for `Py_GetBuildInfo'


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at python.org  Fri Aug 18 21:13:14 2000
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 15:13:14 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>
Message-ID: <011301c00a1f$927e7980$7aa41718@beopen.com>

I'm reading this thread off-line. I'm feeling responsible because I gave Skip
the green light. I admit that that was a mistake: I didn't recall the
purpose of commonprefix() correctly, and didn't refresh my memory by
reading the docs. I think Tim is right: as the docs say, the function was
*intended* to work on a character basis. This doesn't mean that it doesn't
belong in os.path! Note that os.path.dirname() will reliably return the common
directory, exactly because the trailing slash is kept.

I propose:

- restore the old behavior on all platforms
- add to the docs that to get the common directory you use dirname()
- add testcases that check that this works on all platforms

- don't add commonpathprefix(), because dirname() already does it
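
A quick sketch of the distinction (example paths invented for illustration):
commonprefix() works character by character, so its result need not be a
directory at all, while dirname() of that result recovers the common
directory:

```python
import os.path

paths = ["/usr/local/lib", "/usr/lib"]

# Character-by-character prefix: comparison stops at 'o' vs 'i',
# in the middle of a path component.
prefix = os.path.commonprefix(paths)
# prefix == "/usr/l" -- not an actual directory

# Taking dirname() of the prefix yields the common directory.
common_dir = os.path.dirname(prefix)
# common_dir == "/usr"
```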

Note that I've only read email up to Thursday morning. If this has been
superseded by more recent resolution, I'll reconsider; but if it's still up
in the air this should be it.

It doesn't look like the change made it into 1.6.

--Guido





From guido at python.org  Fri Aug 18 21:20:06 2000
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 15:20:06 -0400
Subject: [Python-Dev] PEP 214, extended print statement
References: <14747.22851.266303.28877@anthem.concentric.net><Pine.GSO.4.10.10008170915050.24783-100000@sundial><20000817083023.J376@xs4all.nl> <14747.63511.725610.771162@anthem.concentric.net>
Message-ID: <011401c00a1f$92db8da0$7aa41718@beopen.com>

I'm still reading my email off-line on the plane. I've now read PEP 214 and
think I'll reverse my opinion: it's okay. Barry, check it in! (And change
the SF PM status to 'Accepted'.) I think I'll start using it for error
messages: errors should go to stderr, but it's often inconvenient, so in
minor scripts instead of doing

  sys.stderr.write("Error: can't open %s\n" % filename)

I often write

  print "Error: can't open", filename

which is incorrect but more convenient. I can now start using

  print >>sys.stderr, "Error: can't open", filename

--Guido





From guido at python.org  Fri Aug 18 21:23:37 2000
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 15:23:37 -0400
Subject: [Python-Dev] PyErr_NoMemory
References: <200008171509.RAA20891@python.inrialpes.fr>
Message-ID: <011501c00a1f$939bd060$7aa41718@beopen.com>

> The current PyErr_NoMemory() function reads:
>
> PyObject *
> PyErr_NoMemory(void)
> {
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
>         else
>                 /* this will probably fail since there's no memory
>                    and hee, hee, we have to instantiate this class
>                 */
>                 PyErr_SetNone(PyExc_MemoryError);
>
>         return NULL;
> }
>
> thus overriding any previous exceptions unconditionally. This is a
> problem when the current exception already *is* PyExc_MemoryError,
> notably when we have a chain (cascade) of memory errors. It is a
> problem because the original memory error and eventually its error
> message is lost.
>
> I suggest to make this code look like:
>
> PyObject *
> PyErr_NoMemory(void)
> {
> if (PyErr_ExceptionMatches(PyExc_MemoryError))
> /* already current */
> return NULL;
>
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
> ...
>
> If nobody sees a problem with this, I'm very tempted to check it in.
> Any objections?

+1. The cascading memory error seems a likely scenario indeed: something
returns a memory error, the error handling does some more stuff, and hits
more memory errors.

--Guido






From guido at python.org  Fri Aug 18 22:57:15 2000
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 16:57:15 -0400
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net>
Message-ID: <011601c00a1f$9923d460$7aa41718@beopen.com>

Paul Prescod wrote:

> I don't think of iterators as indexing in terms of numbers. Otherwise I
> could do this:
>
> >>> a={0:"zero",1:"one",2:"two",3:"three"}
> >>> for i in a:
> ...     print i
> ...
>
> So from a Python user's point of view, for-looping has nothing to do
> with integers. From a Python class/module creator's point of view it
> does have to do with integers. I wouldn't be either surprised nor
> disappointed if that changed one day.

Bingo!

I've long had an idea for generalizing 'for' loops using iterators. This is
more a Python 3000 thing, but I'll explain it here anyway because I think
it's relevant. Perhaps this should become a PEP?  (Maybe we should have a
series of PEPs with numbers in the 3000 range for Py3k ideas?)

The statement

  for <variable> in <object>: <block>

should translate into this kind of pseudo-code:

  # variant 1
  __temp = <object>.newiterator()
  while 1:
      try: <variable> = __temp.next()
      except ExhaustedIterator: break
      <block>

or perhaps (to avoid the relatively expensive exception handling):

  # variant 2
  __temp = <object>.newiterator()
  while 1:
      __flag, <variable> = __temp.next()
      if not __flag: break
      <block>

In variant 1, the next() method returns the next object or raises
ExhaustedIterator. In variant 2, the next() method returns a tuple (<flag>,
<value>) where <flag> is 1 to indicate that <value> is valid or 0 to
indicate that there are no more items available. I'm not crazy about the
exception, but I'm even less crazy about the more complex next() return
value (careful observers may have noticed that I'm rarely crazy about flag
variables :-).

Another argument for variant 1 is that variant 2 changes what <variable> is
after the loop is exhausted, compared to current practice: currently, it
keeps the last valid value assigned to it. Most likely, the next() method
returns None when the sequence is exhausted. It doesn't make a lot of sense
to require it to return the last item of the sequence -- there may not *be*
a last item, if the sequence is empty, and not all sequences make it
convenient to keep hanging on to the last item in the iterator, so it's best
to specify that next() returns (0, None) when the sequence is exhausted.
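
As a sketch of variant 2 (a hypothetical protocol -- none of this is an
existing API), a list iterator and its hand-expanded loop might look like:

```python
class ListIterator2:
    """Variant 2 sketch: next() returns (flag, value); (0, None) at the end."""
    def __init__(self, seq):
        self.seq = seq
        self.ind = 0
    def next(self):
        if self.ind >= len(self.seq):
            return (0, None)          # exhausted
        val = self.seq[self.ind]
        self.ind += 1
        return (1, val)

# Hand-expanded form of the loop:
out = []
__temp = ListIterator2(["a", "b", "c"])
while 1:
    __flag, x = __temp.next()
    if not __flag:
        break
    out.append(x)
# out == ["a", "b", "c"]; note that x is now None after the loop,
# unlike current practice, which is the argument against variant 2 above
```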

(It would be tempting to suggest a variant 1a where instead of raising an
exception, next() returns None when the sequence is exhausted, but this
won't fly: you couldn't iterate over a list containing some items that are
None.)

Side note: I believe that the iterator approach could actually *speed up*
iteration over lists compared to today's code. This is because currently the
iteration index is a Python integer object that is stored on the stack.
This means an integer add with overflow check, allocation, and deallocation
on each iteration! But the iterator for lists (and other basic sequences)
could easily store the index as a C int! (As long as the sequence's length
is stored in an int, the index can't overflow.)

[Warning: thinking aloud ahead!]

Once we have the concept of iterators, we could support explicit use of them
as well. E.g. we could use a variant of the for statement to iterate over an
existing iterator:

  for <variable> over <iterator>: <block>

which would (assuming variant 1 above) translate to:

  while 1:
      try: <variable> = <iterator>.next()
      except ExhaustedIterator: break
      <block>

This could be used in situations where you have a loop iterating over the
first half of a sequence and a second loop that iterates over the remaining
items:

  it = something.newiterator()
  for x over it:
      if time_to_start_second_loop(): break
      do_something()
  for x over it:
      do_something_else()

Note that the second for loop doesn't reset the iterator -- it just picks up
where the first one left off! (Detail: the x that caused the break in the
first loop doesn't get dealt with in the second loop.)
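
A sketch of that resumption behavior (ExhaustedIterator and the iterator
class are hypothetical, defined here just so the snippet runs):

```python
class ExhaustedIterator(Exception):
    pass

class ListIterator:
    """Variant 1 sketch: next() returns the next item or raises."""
    def __init__(self, seq):
        self.seq = seq
        self.ind = 0
    def next(self):
        if self.ind >= len(self.seq):
            raise ExhaustedIterator
        val = self.seq[self.ind]
        self.ind += 1
        return val

it = ListIterator([1, 2, 3, 4, 5])
first, second = [], []
while 1:                      # first loop: stops early
    try: x = it.next()
    except ExhaustedIterator: break
    if x == 3:                # time_to_start_second_loop()
        break
    first.append(x)
while 1:                      # second loop: picks up where we left off
    try: x = it.next()
    except ExhaustedIterator: break
    second.append(x)
# first == [1, 2], second == [4, 5]:
# the 3 that triggered the break is consumed by neither loop
```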

I like the iterator concept because it allows us to do things lazily. There
are lots of other possibilities for iterators. E.g. mappings could have
several iterator variants to loop over keys, values, or both, in sorted or
hash-table order. Sequences could have an iterator for traversing them
backwards, and a few other ones for iterating over their index set (cf.
indices()) and over (index, value) tuples (cf. irange()). Files could be
their own iterator where the iterator is almost the same as readline()
except it raises ExhaustedIterator at EOF instead of returning "".  A tree
datastructure class could have an associated iterator class that maintains a
"finger" into the tree.

Hm, perhaps iterators could be their own iterator? Then if 'it' were an
iterator, it.newiterator() would return a reference to itself (not a copy).
Then we wouldn't even need the 'over' alternative syntax. Maybe the method
should be called iterator() then, not newiterator(), to avoid suggesting
anything about the newness of the returned iterator.

Other ideas:

- Iterators could have a backup() method that moves the index back (or
raises an exception if this feature is not supported, e.g. when reading data
from a pipe).

- Iterators over certain sequences might support operations on the
underlying sequence at the current position of the iterator, so that you
could iterate over a sequence and occasionally insert or delete an item (or
a slice).

Of course iterators also connect to generators. The basic list iterator
doesn't need coroutines or anything; it can be done as follows:

  class Iterator:
      def __init__(self, seq):
          self.seq = seq
          self.ind = 0
      def next(self):
          if self.ind >= len(self.seq): raise ExhaustedIterator
          val = self.seq[self.ind]
          self.ind += 1
          return val

so that <list>.iterator() could just return Iterator(<list>) -- at least
conceptually.
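
Filling in the missing pieces (ExhaustedIterator doesn't exist, so it is
stubbed here), the class above drives the variant-1 expansion by hand:

```python
class ExhaustedIterator(Exception):
    pass

class Iterator:
    # Same sketch as above, repeated so this snippet runs on its own.
    def __init__(self, seq):
        self.seq = seq
        self.ind = 0
    def next(self):
        if self.ind >= len(self.seq):
            raise ExhaustedIterator
        val = self.seq[self.ind]
        self.ind += 1
        return val

# Hand-expanded "for x in [10, 20, 30]" under variant 1:
result = []
__temp = Iterator([10, 20, 30])
while 1:
    try:
        x = __temp.next()
    except ExhaustedIterator:
        break
    result.append(x)
# result == [10, 20, 30]
```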

But for other data structures the amount of state needed might be
cumbersome. E.g. a tree iterator needs to maintain a stack, and it's much
easier to code that using a recursive Icon-style generator than by using an
explicit stack. On the other hand, I remember reading an article a while ago
(in Dr. Dobbs?) by someone who argued (in a C++ context) that such recursive
solutions are very inefficient, and that an explicit stack (1) is really not
that hard to code, and (2) gives much more control over the memory and time
consumption of the code. On the third hand, some forms of iteration really
*are* expressed much more clearly using recursion. On the fourth hand, I
disagree with Matthias ("Dr. Scheme") Felleisen about recursion as the root
of all iteration. Anyway, I believe that iterators (as explained above) can
be useful even if we don't have generators (in the Icon sense, which I
believe means coroutine-style).

--Guido





From amk at s222.tnt1.ann.va.dialup.rcn.com  Sat Aug 19 23:15:53 2000
From: amk at s222.tnt1.ann.va.dialup.rcn.com (A.M. Kuchling)
Date: Sat, 19 Aug 2000 17:15:53 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
Message-ID: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>

The handwritten BSDDB3 module has just started actually functioning.
It now runs the dbtest.py script without core dumps or reported
errors.  Code is at ftp://starship.python.net/pub/crew/amk/new/ ; grab
db.py and the most recent _bsddb.c.

I started from Greg Smith's 3.1.x port of Robin Dunn's module.  You'll
have to struggle a bit with integrating it into Greg's package and
compiling it (replacing db.py with my version, and modifying Setup to
compile _bsddb.c).  I haven't integrated it more, because I'm not sure
how we want to proceed with it.  Robin/Greg, do you want to continue
to maintain the package?  ...in which case I'll contribute the code to one
or both of you.  Or, I can take over maintaining the package, or we
can try to get the module into Python 2.0, but with the feature freeze
well advanced, I'm doubtful that it'll get in.

Still missing: Cursor objects still aren't implemented -- Martin, if
you haven't started yet, let me know and I'll charge ahead with them
tomorrow.  Docstrings.  More careful type-checking of function
objects.  Finally, general tidying, re-indenting, and a careful
reading to catch any stupid errors that I made.  

--amk




From esr at thyrsus.com  Sat Aug 19 23:37:27 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 19 Aug 2000 17:37:27 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>; from amk@s222.tnt1.ann.va.dialup.rcn.com on Sat, Aug 19, 2000 at 05:15:53PM -0400
References: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
Message-ID: <20000819173727.A4015@thyrsus.com>

A.M. Kuchling <amk at s222.tnt1.ann.va.dialup.rcn.com>:
> The handwritten BSDDB3 module has just started actually functioning.
> It now runs the dbtest.py script without core dumps or reported
> errors.  Code is at ftp://starship.python.net/pub/crew/amk/new/ ; grab
> db.py and the most recent _bsddb.c.

I see I wasn't on the explicit addressee list.  But if you can get any good
use out of another pair of hands, I'm willing.

> I started from Greg Smith's 3.1.x port of Robin Dunn's module.  You'll
> have to struggle a bit with integrating it into Greg's package and
> compiling it (replacing db.py with my version, and modifying Setup to
> compile _bsddb.c).  I haven't integrated it more, because I'm not sure
> how we want to proceed with it.  Robin/Greg, do you want to continue
> to maintain the package?  ...in which I'll contribute the code to one
> or both of you.  Or, I can take over maintaining the package, or we
> can try to get the module into Python 2.0, but with the feature freeze
> well advanced, I'm doubtful that it'll get in.

I'm +1 for slipping this one in under the wire, if it matters.

I'm not just randomly pushing a feature here -- I think the multiple-reader/
one-writer atomicity guarantees this will give us will be extremely important
for CGI programmers, who often need a light-duty database facility with exactly
this kind of concurrency guarantee.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The people of the various provinces are strictly forbidden to have in their
possession any swords, short swords, bows, spears, firearms, or other types
of arms. The possession of unnecessary implements makes difficult the
collection of taxes and dues and tends to foment uprisings.
        -- Toyotomi Hideyoshi, dictator of Japan, August 1588



From martin at loewis.home.cs.tu-berlin.de  Sat Aug 19 23:52:56 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 19 Aug 2000 23:52:56 +0200
Subject: [Python-Dev] Re: BSDDB 3 module now somewhat functional
In-Reply-To: 	<20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
	(amk@s222.tnt1.ann.va.dialup.rcn.com)
References: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
Message-ID: <200008192152.XAA00691@loewis.home.cs.tu-berlin.de>

> Still missing: Cursor objects still aren't implemented -- Martin, if
> you haven't started yet, let me know and I'll charge ahead with them
> tomorrow.

No, I haven't started yet, so go ahead.

Regards,
Martin



From trentm at ActiveState.com  Sun Aug 20 01:59:40 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 19 Aug 2000 16:59:40 -0700
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 19, 2000 at 01:11:28AM -0400
References: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>
Message-ID: <20000819165940.A21864@ActiveState.com>

On Sat, Aug 19, 2000 at 01:11:28AM -0400, Tim Peters wrote:
> Note that a patch has been posted to SourceForge that purports to solve
> *some* thread vs fork problems:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101226&group_id=5470
> 
> Since nobody has made real progress on figuring out why test_fork1 fails on
> some systems, would somebody who is able to make it fail please just try
> this patch & see what happens?
> 

That patch *seems* to fix it for me. As before, I can get test_fork to fail
intermittently (i.e. it doesn't hang every time I run it) without the patch
and cannot get it to hang at all with the patch.

Would you like me to run and provide the instrumented output that I showed
last time this topic came up?


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Sun Aug 20 02:45:32 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 20:45:32 -0400
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <20000819165940.A21864@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEMPHAAA.tim_one@email.msn.com>

[Trent Mick, on
//sourceforge.net/patch/?func=detailpatch&patch_id=101226&group_id=5470
]
> That patch *seems* to fix it for me. As before, I can get test_fork
> to fail intermittently (i.e. it doesn't hang every time I run it) without
> the patch and cannot get it to hang at all with the patch.

Thanks a bunch, Trent!  (That's a Minnesotaism -- maybe that's far enough
North that it sounds natural to you, though <wink>.)

> Would you like me to run and provide the instrumented output that
> I showed last time this topic came up?

Na, it's enough to know that the problem appears to have gone away, and
since this was-- in some sense --the simplest of the test cases (just one
fork), it provides the starkest contrast we're going to get between the
behaviors people are seeing and my utter failure to account for them.  OTOH,
we knew the global lock *should be* a problem here (just not the problem we
actually saw!), and Charles is doing the right kind of thing to make that go
away.

I still encourage everyone to run all the tests that failed on all the SMP
systems they can get hold of, before and after the patch.  I'll talk with
Guido about it too (the patch is still a bit too hacky to put out in the
world with pride <wink>).





From dgoodger at bigfoot.com  Sun Aug 20 06:53:05 2000
From: dgoodger at bigfoot.com (David Goodger)
Date: Sun, 20 Aug 2000 00:53:05 -0400
Subject: [Python-Dev] Re: Call for reviewer!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELJHAAA.tim_one@email.msn.com>
Message-ID: <B5C4DC70.7D6C%dgoodger@bigfoot.com>

on 2000-08-19 05:19, Tim Peters (tim_one at email.msn.com) wrote:
> I'm afraid "shot down" is the answer...

That's too bad. Thanks for the 'gentle' explanation. This 'crusader' knows
when to give up on a lost cause. ;>

>> test_getopt.py
> 
> We don't have to ask Guido abut *that*:  a test module for getopt would be
> accepted with extreme (yet intangible <wink>) gratitude.  Thank you!

Glad to contribute. You'll find a regression test module for the current
getopt.py as revised patch #101110. I based it on some existing Lib/test/
modules, but haven't found the canonical example or instruction set. Is
there one?

FLASH: Tim's been busy. Just received the official rejections & acceptance
of test_getopt.py.

-- 
David Goodger    dgoodger at bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)




From moshez at math.huji.ac.il  Sun Aug 20 07:19:28 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 20 Aug 2000 08:19:28 +0300 (IDT)
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <20000819173727.A4015@thyrsus.com>
Message-ID: <Pine.GSO.4.10.10008200817510.13651-100000@sundial>

On Sat, 19 Aug 2000, Eric S. Raymond wrote:

> I'm +1 for slipping this one in under the wire, if it matters.
> 
> I'm not just randomly pushing a feature here -- I think the multiple-reader/
> one-writer atomicity guarantees this will give us will be extremely important
> for CGI programmers, who often need a light-duty database facility with exactly
> this kind of concurrency guarantee.

I think this is a job for PAL (aka PEP 206) -- PAL hasn't been feature-frozen
yet, which makes it the perfect place to get stuff for 2.0.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From greg at electricrain.com  Sun Aug 20 08:04:51 2000
From: greg at electricrain.com (Gregory P. Smith)
Date: Sat, 19 Aug 2000 23:04:51 -0700
Subject: [Python-Dev] Re: BSDDB 3 module now somewhat functional
In-Reply-To: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>; from amk@s222.tnt1.ann.va.dialup.rcn.com on Sat, Aug 19, 2000 at 05:15:53PM -0400
References: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
Message-ID: <20000819230451.A22669@yyz.electricrain.com>

On Sat, Aug 19, 2000 at 05:15:53PM -0400, A.M. Kuchling wrote:
> The handwritten BSDDB3 module has just started actually functioning.
> It now runs the dbtest.py script without core dumps or reported
> errors.  Code is at ftp://starship.python.net/pub/crew/amk/new/ ; grab
> db.py and the most recent _bsddb.c.
> 
> I started from Greg Smith's 3.1.x port of Robin Dunn's module.  You'll
> have to struggle a bit with integrating it into Greg's package and
> compiling it (replacing db.py with my version, and modifying Setup to
> compile _bsddb.c).  I haven't integrated it more, because I'm not sure
> how we want to proceed with it.  Robin/Greg, do you want to continue
> to maintain the package?  ...in which I'll contribute the code to one
> or both of you.  Or, I can take over maintaining the package, or we
> can try to get the module into Python 2.0, but with the feature freeze
> well advanced, I'm doubtful that it'll get in.

I just did a quick scan over your code and liked what I saw.  I was
thinking it'd be cool if someone did this (write a non-SWIG version based
on mine) but knew I wouldn't have time right now.  Thanks!  Note that I
haven't tested your module or looked closely to see if anything looks odd.

I'm happy to keep maintaining the bsddb3 module until it makes its way
into a future Python version.  I don't have a lot of time for it, but send
me updates/fixes as you make them (I'm not on python-dev at the moment).
If your C version is working well, I'll make a new release sometime next
week after I test it a bit more in our application on a few platforms
(linux, linux alpha and win98).

> Still missing: Cursor objects still aren't implemented -- Martin, if
> you haven't started yet, let me know and I'll charge ahead with them
> tomorrow.  Docstrings.  More careful type-checking of function
> objects.  Finally, general tidying, re-indenting, and a careful
> reading to catch any stupid errors that I made.  

It looked like you were keeping the same interface (good!), so I
recommend simply stealing the docstrings from mine if you haven't already
and reviewing them to make sure they make sense.  I pretty much pasted
trimmed down forms of the docs that come with BerkeleyDB in to make them
as well as using some of what Robin had from before.

Also, unless someone actually tests the Recno format databases, should
we even bother to include support for them?  I haven't tested them at all.
If nothing else, writing some Recno tests for dbtest.py would be a good
idea before including it.

Greg

-- 
Gregory P. Smith   gnupg/pgp: http://suitenine.com/greg/keys/
                   C379 1F92 3703 52C9 87C4  BE58 6CDA DB87 105D 9163



From tim_one at email.msn.com  Sun Aug 20 08:11:52 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 20 Aug 2000 02:11:52 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <Pine.GSO.4.10.10008200817510.13651-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCEENHHAAA.tim_one@email.msn.com>

[esr]
> I'm +1 for slipping this one in under the wire, if it matters.

[Moshe Zadka]
> I think this is a job for PAL (aka PEP206) -- PAL hasn't feature freezed
> yet, which makes it the perfect place to get stuff for 2.0.

I may be an asshole, but I'm not an idiot:  note that the planned release
date (PEP 200) for 2.0b1 is a week from Monday.  And since there is only one
beta cycle planned, *nothing* goes in except bugfixes after 2.0b1 is
released.  Guido won't like that, but he's not the release manager, and when
I'm replaced by the real release manager on Tuesday, he'll agree with me on
this and Guido will get noogied to death if he opposes us <wink>.

So whatever tricks you want to try to play, play 'em fast.

not-that-i-believe-the-beta-release-date-will-be-met-anyway-
    but-i-won't-admit-that-until-after-it-slips-ly y'rs  - tim





From moshez at math.huji.ac.il  Sun Aug 20 08:17:12 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 20 Aug 2000 09:17:12 +0300 (IDT)
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEENHHAAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008200915410.13651-100000@sundial>

On Sun, 20 Aug 2000, Tim Peters wrote:

> [esr]
> > I'm +1 for slipping this one in under the wire, if it matters.
> 
> [Moshe Zadka]
> > I think this is a job for PAL (aka PEP206) -- PAL hasn't feature freezed
> > yet, which makes it the perfect place to get stuff for 2.0.
> 
> I may be an asshole, but I'm not an idiot:  note that the planned release
> date (PEP 200) for 2.0b1 is a week from Monday.  And since there is only one
> beta cycle planned, *nothing* goes in except bugfixes after 2.0b1 is
> released. 

But that's irrelevant. The sumo interpreter will be a different release,
and will probably be based on 2.0 for core. So what if it's only available
a month after 2.0 is ready?

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Sun Aug 20 08:24:31 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 20 Aug 2000 02:24:31 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <Pine.GSO.4.10.10008200915410.13651-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENIHAAA.tim_one@email.msn.com>

[Moshe]
> But that's irrelevant. The sumo interpreter will be a different release,
> and will probably be based on 2.0 for core. So what if it's only available
> only a month after 2.0 is ready?

Like I said, I may be an idiot, but I'm not an asshole -- have fun!





From sjoerd at oratrix.nl  Sun Aug 20 11:22:28 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Sun, 20 Aug 2000 11:22:28 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: Your message of Fri, 18 Aug 2000 20:42:34 +0200.
             <000001c00945$a8d37e40$f2a6b5d4@hagrid> 
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> 
            <000001c00945$a8d37e40$f2a6b5d4@hagrid> 
Message-ID: <20000820092229.F3A2D31047C@bireme.oratrix.nl>

Why don't we handle graminit.c/graminit.h the same way as we currently
handle configure/config.h.in?  The person making a change to
configure.in is responsible for running autoconf and checking in the
result.  Similarly, the person making a change to Grammar should
regenerate graminit.c/graminit.h and check in the result.  In fact,
that is exactly what happened in this particular case.  I'd say there
isn't really a reason to create graminit.c/graminit.h automatically
whenever you do a build of Python.  Even worse, when you have a
read-only copy of the source and you build in a different directory
(and that used to be supported) the current setup will break since it
tries to overwrite Python/graminit.c and Include/graminit.h.

I'd say, go back to the old situation, possibly with a simple Makefile
rule added so that you *can* build graminit, but one that is not used
automatically.

On Fri, Aug 18 2000 "Fredrik Lundh" wrote:

> sjoerd wrote:
> 
> > The problem was that because of your (I think it was you :-) earlier
> > change to have a Makefile in Grammar, I had an old graminit.c lying
> > around in my build directory.  I don't build in the source directory
> > and the changes for a Makefile in Grammar resulted in a file
> > graminit.c in the wrong place.
> 
> is the Windows build system updated to generate new
> graminit files if the Grammar are updated?
> 
> or is python development a unix-only thingie these days?
> 
> </F>
> 
> 

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From thomas at xs4all.net  Sun Aug 20 11:41:14 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 11:41:14 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000820092229.F3A2D31047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Sun, Aug 20, 2000 at 11:22:28AM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl>
Message-ID: <20000820114114.A4797@xs4all.nl>

On Sun, Aug 20, 2000 at 11:22:28AM +0200, Sjoerd Mullender wrote:

> I'd say, go back to the old situation, possibly with a simple Makefile
> rule added so that you *can* build graminit, but one that is not used
> automatically.

That *is* the old situation. The rule of making graminit as a matter of
course was added for convenience while patches that change the grammar were
pending. Now that most of those have been checked in, and we've seen what
havoc and confusion making graminit automatically can cause, I'm all for
going back to that too ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From loewis at informatik.hu-berlin.de  Sun Aug 20 12:51:16 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sun, 20 Aug 2000 12:51:16 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399E5340.B00811EF@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de> <399E5340.B00811EF@lemburg.com>
Message-ID: <200008201051.MAA05259@pandora.informatik.hu-berlin.de>

> Hmm, if your catalogs are encoded in UTF-8 and use non-ASCII
> chars then the traditional API would have to raise encoding
> errors

I don't know what you mean by "traditional" here. The gettext.gettext
implementation in Barry's patch will return the UTF-8 encoded byte
string, instead of raising encoding errors - no code conversion takes
place.

> Perhaps the return value type of .gettext() should be given on
> the .install() call: e.g. encoding='utf-8' would have .gettext()
> return a string using UTF-8 while encoding='unicode' would have
> it return Unicode objects.

No. You should have the option of either receiving byte strings, or
Unicode strings. If you want byte strings, you should get the ones
appearing in the catalog.

Regards,
Martin



From loewis at informatik.hu-berlin.de  Sun Aug 20 12:59:28 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sun, 20 Aug 2000 12:59:28 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399E5558.C7B6029B@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net>
		<399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net> <399E5558.C7B6029B@lemburg.com>
Message-ID: <200008201059.MAA05292@pandora.informatik.hu-berlin.de>

> Martin mentioned the possibility of using UTF-8 for the
> catalogs and then decoding them into Unicode. That should be
> a reasonable way of getting .gettext() to talk Unicode :-)

You misunderstood. Using UTF-8 in the catalogs is independent from
using Unicode. You can have the catalogs in UTF-8, and still access
the catalog as byte strings, and you can have the catalog in Latin-1,
and convert that to unicode strings upon retrieval.
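
A tiny sketch of that independence (the byte strings are invented for
illustration): the same message stored as UTF-8 or as Latin-1 bytes can
either be handed back raw, or decoded on retrieval to the identical
Unicode string:

```python
# The same catalog entry in two on-disk encodings:
msg_utf8 = b"Virtueller Speicher ersch\xc3\xb6pft."   # UTF-8 bytes
msg_lat1 = b"Virtueller Speicher ersch\xf6pft."       # Latin-1 bytes

# Option 1: return the byte strings exactly as stored (no conversion).
raw = msg_utf8

# Option 2: decode on retrieval; both catalogs yield the same Unicode text.
u1 = msg_utf8.decode("utf-8")
u2 = msg_lat1.decode("latin-1")
# u1 == u2
```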

> Just dreaming a little here: I would prefer that we use some
> form of XML to write the catalogs. 

Well, I hope that won't happen. We have excellent tools dealing with
the catalogs, and I see no value in replacing

#: src/grep.c:183 src/grep.c:200 src/grep.c:300 src/grep.c:408 src/kwset.c:184
#: src/kwset.c:190
msgid "memory exhausted"
msgstr "Virtueller Speicher erschöpft."

with

<entry>
  <sourcelist>
    <source file="src/grep.c" line="183"/>
    <source file="src/grep.c" line="200"/>
    <source file="src/grep.c" line="300"/>
    <source file="src/grep.c" line="408"/>
    <source file="src/kwset.c" line="184"/>
    <source file="src/kwset.c" line="190"/>
  </sourcelist>
  <msgid>memory exhausted</msgid>
  <msgstr>Virtueller Speicher erschöpft.</msgstr>
</entry>

> XML comes with Unicode support and tools for writing XML are
> available too.

Well, the catalog files also "come with unicode support", meaning that
you can write them in UTF-8 if you want; and tools could be easily
extended to process UCS-2 input if anybody desires.

OTOH, the tools for writing po files are much more advanced than any
XML editor I know.

> We'd only need a way to transform XML into catalog files of some
> Python specific platform independent format (should be possible to
> create .mo files from XML too).

Or we could convert the XML catalogs into Uniforum-style catalogs, and
then use the existing tools.

Regards,
Martin



From sjoerd at oratrix.nl  Sun Aug 20 13:26:05 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Sun, 20 Aug 2000 13:26:05 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: Your message of Sun, 20 Aug 2000 11:41:14 +0200.
             <20000820114114.A4797@xs4all.nl> 
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl> 
            <20000820114114.A4797@xs4all.nl> 
Message-ID: <20000820112605.BF61431047C@bireme.oratrix.nl>

Here's another pretty serious bug.  Can you verify that this time it
isn't my configuration?

Try this:

from encodings import cp1006, cp1026

I get the error
ImportError: cannot import name cp1026
but if I try to import the two modules separately I get no error.

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From thomas at xs4all.net  Sun Aug 20 15:51:17 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 15:51:17 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000820112605.BF61431047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Sun, Aug 20, 2000 at 01:26:05PM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl> <20000820114114.A4797@xs4all.nl> <20000820112605.BF61431047C@bireme.oratrix.nl>
Message-ID: <20000820155117.C4797@xs4all.nl>

On Sun, Aug 20, 2000 at 01:26:05PM +0200, Sjoerd Mullender wrote:

> Here's another pretty serious bug.  Can you verify that this time it
> isn't my configurations?

It isn't your config; this is a genuine bug. I'll be checking in a quick fix
in a few minutes, and start thinking about a test case that would've caught
this.

> Try this:
> from encodings import cp1006, cp1026

> I get the error
> ImportError: cannot import name cp1026
> but if I try to import the two modules separately I get no error.

Yes. 'find_from_args' wasn't trying hard enough to find out what the
arguments to an import were. Previously, all it had to do was scan the
bytecodes immediately following an 'IMPORT_NAME' for IMPORT_FROM statements,
and record its names. However, now that IMPORT_FROM generates a STORE, it
stops looking after the first IMPORT_FROM. This worked fine for normal
object-retrieval imports, which don't use the list generated by
find_from_args, but not for dynamic loading tricks such as 'encodings' uses.

The fix I made was to unconditionally jump over 5 bytes after an
IMPORT_FROM, rather than 2 (2 for the oparg, 1 for the next instruction (a
STORE) and two more for the oparg of the STORE).
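The opcode pattern being scanned can be inspected with the `dis` module (modern CPython emits different offsets and details, but the IMPORT_NAME / IMPORT_FROM / STORE shape for a from-import is the same):

```python
import dis
import io

# Disassemble a from-import without executing it: each imported name
# appears as an IMPORT_FROM followed by a STORE instruction.
buf = io.StringIO()
dis.dis(compile("from encodings import cp1006, cp1026", "<demo>", "exec"),
        file=buf)
listing = buf.getvalue()

print("IMPORT_NAME" in listing, listing.count("IMPORT_FROM"))
```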

This does present a problem for the proposed change in semantics for the
'as' clause, though. If we allow all expressions that yield valid l-values
in import-as and from-import-as, we can't easily find out what the import
arguments are by examining the future bytecode stream. (It might be
possible, if we changed the POP_TOP to, say, END_IMPORT, that pops the
module from the stack and can be used to end the search for import
arguments.)

However, I find this hackery a bit appalling :) Why are we constructing a
list of import arguments at runtime, from compile-time information, if all
that information is available at compile time too ? And more easily so ?
What would break if I made IMPORT_NAME retrieve the from-arguments from a
list, which is built on the stack by com_import_stmt ? Or is there a more
convenient way of passing a variable list of strings to a bytecode ? It
won't really affect performance, since find_from_args is called for all
imports anyway.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From effbot at telia.com  Sun Aug 20 16:34:31 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sun, 20 Aug 2000 16:34:31 +0200
Subject: [Python-Dev] [ Patch #101238 ] PyOS_CheckStack for Windows
References: <20000815104723.A27306@ActiveState.com> <005401c006ec$a95a74a0$f2a6b5d4@hagrid>
Message-ID: <019801c00ab3$c59e8d20$f2a6b5d4@hagrid>

I've prepared a patch based on the PyOS_CheckStack code
I posted earlier:

http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101238&group_id=5470

among other things, this fixes the recursive __repr__/__str__
problems under Windows.  it also makes it easier to use Python
with non-standard stack sizes (e.g. when embedding).

some additional notes:

- the new function was added to pythonrun.c, mostly because
it was already declared in pythonrun.h...

- for the moment, it's only enabled if you're using MSVC.  if anyone
here knows if structured exceptions are widely supported by
Windows compilers, let me know.

- it would probably be a good idea to make it an __inline function
(and put the entire function in the header file instead), but I don't
recall if MSVC does the right thing in that case, and I haven't had
time to try it out just yet...

enjoy /F




From sjoerd.mullender at oratrix.com  Sun Aug 20 16:54:43 2000
From: sjoerd.mullender at oratrix.com (Sjoerd Mullender)
Date: Sun, 20 Aug 2000 16:54:43 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl> <20000820114114.A4797@xs4all.nl> <20000820112605.BF61431047C@bireme.oratrix.nl> <20000820155117.C4797@xs4all.nl>
Message-ID: <399FF133.63B83A52@oratrix.com>

This seems to have done the trick.  Thanks.

Thomas Wouters wrote:
> 
> On Sun, Aug 20, 2000 at 01:26:05PM +0200, Sjoerd Mullender wrote:
> 
> > Here's another pretty serious bug.  Can you verify that this time it
> > isn't my configurations?
> 
> It isn't your config, this is a genuine bug. I'll be checking in a quick fix
> in a few minutes, and start thinking about a test case that would've caught
> this.
> 
> > Try this:
> > from encodings import cp1006, cp1026
> 
> > I get the error
> > ImportError: cannot import name cp1026
> > but if I try to import the two modules separately I get no error.
> 
> Yes. 'find_from_args' wasn't trying hard enough to find out what the
> argument to an import were. Previously, all it had to do was scan the
> bytecodes immediately following an 'IMPORT_NAME' for IMPORT_FROM statements,
> and record its names. However, now that IMPORT_FROM generates a STORE, it
> stops looking after the first IMPORT_FROM. This worked fine for normal
> object-retrieval imports, which don't use the list generated by
> find_from_args, but not for dynamic loading tricks such as 'encodings' uses.
> 
> The fix I made was to unconditionally jump over 5 bytes, after an
> IMPORT_FROM, rather than 2 (2 for the oparg, 1 for the next instruction (a
> STORE) and two more for the oparg for the STORE)
> 
> This does present a problem for the proposed change in semantics for the
> 'as' clause, though. If we allow all expressions that yield valid l-values
> in import-as and from-import-as, we can't easily find out what the import
> arguments are by examining the future bytecode stream. (It might be
> possible, if we changed the POP_TOP to, say, END_IMPORT, that pops the
> module from the stack and can be used to end the search for import
> arguments.
> 
> However, I find this hackery a bit appalling :) Why are we constructing a
> list of import arguments at runtime, from compile-time information, if all
> that information is available at compile time too ? And more easily so ?
> What would break if I made IMPORT_NAME retrieve the from-arguments from a
> list, which is built on the stack by com_import_stmt ? Or is there a more
> convenient way of passing a variable list of strings to a bytecode ? It
> won't really affect performance, since find_from_args is called for all
> imports anyway.
> 
> --
> Thomas Wouters <thomas at xs4all.net>
> 
> Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev



From nascheme at enme.ucalgary.ca  Sun Aug 20 17:53:28 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sun, 20 Aug 2000 09:53:28 -0600
Subject: [Python-Dev] Re: Eureka! (Re: test_fork fails --with-thread)
In-Reply-To: <14750.3466.34096.504552@buffalo.fnal.gov>; from Charles G Waldman on Fri, Aug 18, 2000 at 11:31:06PM -0500
References: <14750.3466.34096.504552@buffalo.fnal.gov>
Message-ID: <20000820095328.A25233@keymaster.enme.ucalgary.ca>

On Fri, Aug 18, 2000 at 11:31:06PM -0500, Charles G Waldman wrote:
> Well, I think I understand what's going on and I have a patch that
> fixes the problem.

Yes!  With this patch my nasty little tests run successfully on both
single and dual CPU Linux machines.  It's still a mystery how the child
can screw up the parent after the fork.  Oh well.

  Neil



From trentm at ActiveState.com  Sun Aug 20 19:15:52 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 20 Aug 2000 10:15:52 -0700
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <B5C4DC70.7D6C%dgoodger@bigfoot.com>; from dgoodger@bigfoot.com on Sun, Aug 20, 2000 at 12:53:05AM -0400
References: <LNBBLJKPBEHFEDALKOLCOELJHAAA.tim_one@email.msn.com> <B5C4DC70.7D6C%dgoodger@bigfoot.com>
Message-ID: <20000820101552.A24181@ActiveState.com>

On Sun, Aug 20, 2000 at 12:53:05AM -0400, David Goodger wrote:
> Glad to contribute. You'll find a regression test module for the current
> getopt.py as revised patch #101110. I based it on some existing Lib/test/
> modules, but haven't found the canonical example or instruction set. Is
> there one?

I don't really think there is; it's kind of folklore. There are some good
examples to follow in the existing test suite. Skip Montanaro wrote a README
for writing tests and using the test suite (.../dist/src/Lib/test/README).

Really, the testing framework is extremely simple. Which is one of its
benefits. There is not a whole lot of depth that one has not grokked just by
writing one test_XXX.py.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Sun Aug 20 19:46:35 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 20 Aug 2000 13:46:35 -0400
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <20000820101552.A24181@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>

[David Goodger]
> Glad to contribute. You'll find a regression test module for the current
> getopt.py as revised patch #101110. I based it on some existing
> Lib/test/ modules, but haven't found the canonical example or instruction
> set. Is there one?

[Trent Mick]
> I don't really think there is. Kind of folk lore. There are some good
> examples to follow in the existing test suite. Skip Montanaro
> wrote a README for writing tests and using the test suite
> (.../dist/src/Lib/test/README).
>
> Really, the testing framework is extremely simple. Which is one of its
> benefits. There is not a whole lot of depth that one has not
> grokked just by writing one test_XXX.py.

What he said.  The README explains it well, and I think the only thing you
(David) missed in your patch was the need to generate the "expected output"
file via running regrtest once with -g on the new test case.

I'd add one thing:  people use "assert" *way* too much in the test suite.
It's usually much better to just print what you got, and rely on regrtest's
output-comparison to complain if what you get isn't what you expected.  The
primary reason for this is that asserts "vanish" when Python is run
using -O, so running regrtest in -O mode simply doesn't test *anything*
caught by an assert.
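That behaviour is easy to demonstrate (a small sketch that just runs the interpreter twice in a subprocess):

```python
import subprocess
import sys

code = "assert 1 == 2, 'boom'\nprint('assert was optimized away')"

# Without -O the assert raises AssertionError; with -O it is compiled
# away entirely, so execution reaches the print.
plain = subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True)
optimized = subprocess.run([sys.executable, "-O", "-c", code],
                           capture_output=True, text=True)

print(plain.returncode != 0, "optimized away" in optimized.stdout)
```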

A compromise is to do both:

    print what_i_expected, what_i_got
    assert what_i_expected == what_i_got

In Python 3000, I expect we'll introduce a new binary infix operator

    !!!

so that

    print x !!! y

both prints x and y and asserts that they're equal <wink>.





From mal at lemburg.com  Sun Aug 20 19:57:32 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sun, 20 Aug 2000 19:57:32 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <Pine.LNX.4.21.0008192202520.25394-100000@james.daa.com.au>
Message-ID: <39A01C0C.E6BA6CCB@lemburg.com>

James Henstridge wrote:
> 
> On Sat, 19 Aug 2000, M.-A. Lemburg wrote:
> 
> > > As I said above, most of that turned out not to be very useful.  Did you
> > > include any of the language selection code in the last version of my
> > > gettext module?  It gave behaviour very close to C gettext in this
> > > respect.  It expands the locale name given by the user using the
> > > locale.alias files found on the systems, then decomposes that into the
> > > simpler forms.  For instance, if LANG=en_GB, then my gettext module would
> > > search for catalogs by names:
> > >   ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
> > >
> > > This also allows things like expanding LANG=catalan to:
> > >   ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
> > > (provided the appropriate locale.alias files are found)
> > >
> > > If you missed that that version I sent you I can send it again.  It
> > > stripped out a lot of the experimental code giving a much simpler module.
> >
> > Uhm, can't you make some use of the new APIs in locale.py
> > for this ?
> >
> > locale.py has a whole new set of encoding aware support for
> > LANG variables. It supports Unix and Windows (thanks to /F).
> 
> Well, it can do a little more than that.  It will also handle the case of
> a number of locales listed in the LANG environment variable.  It also
> doesn't look like it handles decomposition of a locale like
> ll_CC.encoding@modifier into other matching encodings in the correct
> precedence order.
> 
> Maybe something to do this sort of decomposition would fit better in
> locale.py though.
> 
> This sort of thing is very useful for people who know more than one
> language, and doesn't seem to be handled by plain setlocale()

I'm not sure I can follow you here: are you saying that your
support in gettext.py does more or less than what's present
in locale.py ?

If it's more, I think it would be a good idea to add those
parts to locale.py.
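A hedged sketch of the decomposition James describes (the helper name and logic are mine, not the actual gettext code; a real implementation also recombines the encoding with each shorter form, as in the en_GB example quoted above):

```python
def expand_locale(loc):
    # Strip, in turn, the @modifier, .encoding and _territory parts,
    # yielding candidates from most to least specific, ending with the
    # 'C' fallback locale.
    candidates = []
    while loc:
        candidates.append(loc)
        for sep in ("@", ".", "_"):
            if sep in loc:
                loc = loc[:loc.rindex(sep)]
                break
        else:
            break
    candidates.append("C")
    return candidates

print(expand_locale("en_GB.ISO8859-1"))
```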

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bckfnn at worldonline.dk  Sun Aug 20 20:34:53 2000
From: bckfnn at worldonline.dk (Finn Bock)
Date: Sun, 20 Aug 2000 18:34:53 GMT
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>
Message-ID: <39a024b1.5036672@smtp.worldonline.dk>

[Tim Peters]

>I'd add one thing:  people use "assert" *way* too much in the test suite.

I'll second that.

>It's usually much better to just print what you got, and rely on regrtest's
>output-comparison to complain if what you get isn't what you expected.  The
>primary reason for this is that asserts "vanish" when Python is run
>using -O, so running regrtest in -O mode simply doesn't test *anything*
>caught by an assert.

It can also stop the test script from being used with JPython. A
difference that is acceptable (perhaps by necessity) will prevent the
remaining tests from executing.

regards,
finn



From amk1 at erols.com  Sun Aug 20 22:44:02 2000
From: amk1 at erols.com (A.M. Kuchling)
Date: Sun, 20 Aug 2000 16:44:02 -0400
Subject: [Python-Dev] ANN: BerkeleyDB 2.9.0 (experimental)
Message-ID: <200008202044.QAA01842@207-172-111-161.s161.tnt1.ann.va.dialup.rcn.com>

This is an experimental release of a rewritten version of the
BerkeleyDB module by Robin Dunn.  Starting from Greg Smith's version,
which supports the 3.1.x versions of Sleepycat's DB library, I've
translated the SWIG wrapper into hand-written C code.  

Warnings: this module is experimental, so don't put it to production
use.  I've only compiled the code with the current Python CVS tree;
there might be glitches with 1.5.2 which will need to be fixed.
Cursor objects are implemented, but completely untested; methods might
not work or might dump core.  (DB and DbEnv objects *are* tested, and
seem to work fine.)

Grab the code from this FTP directory: 
     ftp://starship.python.net/pub/crew/amk/new/

Please report problems to me.  Thanks!

--amk



From thomas at xs4all.net  Sun Aug 20 22:49:18 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 22:49:18 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: <200008202002.NAA13530@delerium.i.sourceforge.net>; from noreply@sourceforge.net on Sun, Aug 20, 2000 at 01:02:32PM -0700
References: <200008202002.NAA13530@delerium.i.sourceforge.net>
Message-ID: <20000820224918.D4797@xs4all.nl>

On Sun, Aug 20, 2000 at 01:02:32PM -0700, noreply at sourceforge.net wrote:

> Summary: Allow all assignment expressions after 'import something as'

> Date: 2000-Aug-19 21:29
> By: twouters
> 
> Comment:
> This absurdly simple patch (4 lines changed in 2 files) turns 'import-as'
> and 'from-import-as' into true assignment expressions: the name given
> after 'as' can be any expression that is a valid l-value:

> >>> from sys import version_info as (maj,min,pl,relnam,relno)          
> >>> maj,min,pl,relnam,relno
> (2, 0, 0, 'beta', 1)

[snip other examples]

> This looks so natural, I would almost treat this as a bugfix instead of a
> new feature ;)

> -------------------------------------------------------
> 
> Date: 2000-Aug-20 20:02
> By: nowonder
> 
> Comment:
> Looks fine. Works as I expect. Doesn't break old code. I hope Guido likes
> it (assigned to gvanrossum).

Actually, it *will* break old code. Try using some of those tricks on, say,
'encodings', like so (excessively convoluted to prove a point ;):

>>> x = {}
>>> from encodings import cp1006 as x[oct(1)], cp1026 as x[hex(20)]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ImportError: cannot import name cp1026

I've another patch waiting which I'll upload after some cleanup, which
circumvents this. The problem is that find_from_args is having a hard time
figuring out how 'import' is being called, exactly. So instead, I create a
list *before* calling import, straight from the information available at
compile-time. (It's only a list because it is currently a list, I would
prefer to make it a tuple instead, but I don't know if it would break stuff)

That patch is necessary to be able to support this new behaviour, but I
think it's worth checking in even if this patch is rejected -- it speeds up
pystone ! :-) Basically it moves the logic of finding out what the import
arguments are to com_import_stmt() (at compiletime), rather than the
'IMPORT_NAME' bytecode (at runtime).

The only downside is that it adds all 'from-import' arguments to the
co_consts list (as PyString objects) as well as where they already are, the
co_names list (as normal strings). I don't think that's a high price to pay,
myself, and mayhaps the actual storage use could be reduced by making the
one point to the data of the other. Not sure if it's worth it, though.

I've just uploaded the other patch, it can be found here:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101243&group_id=5470

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From dgoodger at bigfoot.com  Sun Aug 20 23:08:05 2000
From: dgoodger at bigfoot.com (David Goodger)
Date: Sun, 20 Aug 2000 17:08:05 -0400
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call
	for reviewer!)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>
Message-ID: <B5C5C0F4.7E0D%dgoodger@bigfoot.com>

on 2000-08-20 13:46, Tim Peters (tim_one at email.msn.com) wrote:
> What he said.  The README explains it well...

Missed that. Will read it & update the test module.

> In Python 3000, I expect we'll introduce a new binary infix operator
> 
> !!!

Looking forward to more syntax in future releases of Python. I'm sure you'll
lead the way, Tim.
-- 
David Goodger    dgoodger at bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)




From thomas at xs4all.net  Sun Aug 20 23:17:30 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 23:17:30 +0200
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <B5C5C0F4.7E0D%dgoodger@bigfoot.com>; from dgoodger@bigfoot.com on Sun, Aug 20, 2000 at 05:08:05PM -0400
References: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com> <B5C5C0F4.7E0D%dgoodger@bigfoot.com>
Message-ID: <20000820231730.F4797@xs4all.nl>

On Sun, Aug 20, 2000 at 05:08:05PM -0400, David Goodger wrote:
> on 2000-08-20 13:46, Tim Peters (tim_one at email.msn.com) wrote:

> > In Python 3000, I expect we'll introduce a new binary infix operator
> > 
> > !!!
> 
> Looking forward to more syntax in future releases of Python. I'm sure you'll
> lead the way, Tim.

I think you just witnessed some of Tim's legendary wit ;) I suspect most
Python programmers would shoot Guido, Tim, whoever wrote the patch that
added the new syntax, and then themselves, if that ever made it into Python
;)

Good-thing-I-can't-legally-carry-guns-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Mon Aug 21 01:22:10 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Sun, 20 Aug 2000 23:22:10 +0000
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment 
 expressions after 'import something as'
References: <200008202002.NAA13530@delerium.i.sourceforge.net> <20000820224918.D4797@xs4all.nl>
Message-ID: <39A06822.5360596D@nowonder.de>

Thomas Wouters wrote:
> 
> > Date: 2000-Aug-20 20:02
> > By: nowonder
> >
> > Comment:
> > Looks fine. Works as I expect. Doesn't break old code. I hope Guido likes
> > it (assigned to gvanrossum).
> 
> Actually, it *will* break old code. Try using some of those tricks on, say,
> 'encodings', like so (excessively convoluted to prove a point ;):

Actually, I meant that it won't break any existing code (because there
is no code using 'import x as y' yet).

Although I don't understand your example (because the word "encoding"
makes me want to stick my head into the sand), I am fine with your shift
of the list building to compile-time. When I realized what IMPORT_NAME
does at runtime, I wondered if this was really necessary.

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From nowonder at nowonder.de  Mon Aug 21 01:54:09 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Sun, 20 Aug 2000 23:54:09 +0000
Subject: [Python-Dev] OT: How to send CVS update mails?
Message-ID: <39A06FA1.C5EB34D1@nowonder.de>

Sorry, but I cannot figure out how to make SourceForge send
updates whenever there is a CVS commit (checkins mailing
list).

I need this for another project, so if someone remembers
how to do this, please tell me.

off-topic-and-terribly-sorri-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From greg at cosc.canterbury.ac.nz  Mon Aug 21 03:50:49 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 21 Aug 2000 13:50:49 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14749.26431.198802.970572@cj42289-a.reston1.va.home.com>
Message-ID: <200008210150.NAA15911@s454.cosc.canterbury.ac.nz>

"Fred L. Drake, Jr." <fdrake at beopen.com>:

> Let's accept (some variant of) Skip's desired functionality as
> os.path.splitprefix(); the result can be (prefix, [list of suffixes]).

+1

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From guido at beopen.com  Mon Aug 21 06:37:46 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 20 Aug 2000 23:37:46 -0500
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: Your message of "Sun, 20 Aug 2000 22:49:18 +0200."
             <20000820224918.D4797@xs4all.nl> 
References: <200008202002.NAA13530@delerium.i.sourceforge.net>  
            <20000820224918.D4797@xs4all.nl> 
Message-ID: <200008210437.XAA22075@cj20424-a.reston1.va.home.com>

> > Summary: Allow all assignment expressions after 'import something as'

-1.  Hypergeneralization.

By the way, notice that

  import foo.bar

places 'foo' in the current namespace, after ensuring that 'foo.bar'
is defined.

What should

  import foo.bar as spam

assign to spam?  I hope foo.bar, not foo.

I note that the CVS version doesn't support this latter syntax at all;
it should be fixed, even though the same effect can be had with

  from foo import bar as spam
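(A sketch, assuming an interpreter new enough to accept the dotted form: the binding Guido hopes for, foo.bar rather than foo, is indeed what later Pythons do.)

```python
# In interpreters that accept 'import foo.bar as spam', the name 'spam'
# is bound to the submodule foo.bar, not to the package foo.
import os.path as spam
import os

print(spam is os.path, spam is os)
```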

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Mon Aug 21 07:08:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 01:08:25 -0400
Subject: [Python-Dev] Py_MakePendingCalls
Message-ID: <LNBBLJKPBEHFEDALKOLCCEPMHAAA.tim_one@email.msn.com>

Does anyone ever call Py_MakePendingCalls?  It's an undocumented entry point
in ceval.c.  I'd like to get rid of it.  Guido sez:

    The only place I know that uses it was an old Macintosh module I
    once wrote to play sounds asynchronously.  I created
    Py_MakePendingCalls() specifically for that purpose.  It may be
    best to get rid of it.

It's not best if anyone is using it despite its undocumented nature, but is
best otherwise.





From moshez at math.huji.ac.il  Mon Aug 21 07:56:31 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Mon, 21 Aug 2000 08:56:31 +0300 (IDT)
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for
 reviewer!)
In-Reply-To: <20000820231730.F4797@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008210855550.8603-100000@sundial>

On Sun, 20 Aug 2000, Thomas Wouters wrote:

> I think you just witnessed some of Tim's legendary wit ;) I suspect most
> Python programmers would shoot Guido, Tim, whoever wrote the patch that
> added the new syntax, and then themselves, if that ever made it into Python
> ;)
> 
> Good-thing-I-can't-legally-carry-guns-ly y'rs,

Oh, I'm sure ESR will let you use one of his for this purpose.

it's-a-worthy-goal-ly y'rs, Z.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From nowonder at nowonder.de  Mon Aug 21 10:30:02 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Mon, 21 Aug 2000 08:30:02 +0000
Subject: [Python-Dev] Re: compile.c: problem with duplicate argument bugfix
References: <14750.1321.978274.117748@buffalo.fnal.gov>
Message-ID: <39A0E88A.2E2DB35E@nowonder.de>

Charles G Waldman wrote:
> 
> I'm catching up on the python-dev archives and see your message.
> 
> Note that I submitted a patch back in May to fix this same problem:
> 
>  http://www.python.org/pipermail/patches/2000-May/000638.html
> 
> There you will find a working patch, and a detailed discussion which
> explains why your approach results in a core-dump.

I had a look. This problem was fixed by removing the call to
PyErr_Clear() from (at that time) line 359 in Objects/object.c.

If you think your patch is a better solution/still needed, please
explain why. Thanks anyway.

> I submitted this patch back before Python moved over to SourceForge,
> there was a small amount of discussion about it and then the word from
> Guido was "I'm too busy to look at this now", and the patch got
> dropped on the floor.

a-patch-manager-can-be-a-good-thing---even-web-based-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From gstein at lyra.org  Mon Aug 21 09:57:06 2000
From: gstein at lyra.org (Greg Stein)
Date: Mon, 21 Aug 2000 00:57:06 -0700
Subject: [Python-Dev] OT: How to send CVS update mails?
In-Reply-To: <39A06FA1.C5EB34D1@nowonder.de>; from nowonder@nowonder.de on Sun, Aug 20, 2000 at 11:54:09PM +0000
References: <39A06FA1.C5EB34D1@nowonder.de>
Message-ID: <20000821005706.D11327@lyra.org>

Take a look at CVSROOT/loginfo and CVSROOT/syncmail in the Python repository.

Cheers,
-g

On Sun, Aug 20, 2000 at 11:54:09PM +0000, Peter Schneider-Kamp wrote:
> Sorry, but I cannot figure out how to make SourceForge send
> updates whenever there is a CVS commit (checkins mailing
> list).
> 
> I need this for another project, so if someone remembers
> how to do this, please tell me.
> 
> off-topic-and-terribly-sorri-ly y'rs
> Peter
> -- 
> Peter Schneider-Kamp          ++47-7388-7331
> Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
> N-7050 Trondheim              http://schneider-kamp.de
> 

-- 
Greg Stein, http://www.lyra.org/



From gstein at lyra.org  Mon Aug 21 09:58:41 2000
From: gstein at lyra.org (Greg Stein)
Date: Mon, 21 Aug 2000 00:58:41 -0700
Subject: [Python-Dev] Py_MakePendingCalls
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEPMHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Aug 21, 2000 at 01:08:25AM -0400
References: <LNBBLJKPBEHFEDALKOLCCEPMHAAA.tim_one@email.msn.com>
Message-ID: <20000821005840.E11327@lyra.org>

Torch the sucker. It is a pain for free-threading.

(and no: I don't use it, nor do I know anything that uses it)

Cheers,
-g

On Mon, Aug 21, 2000 at 01:08:25AM -0400, Tim Peters wrote:
> Does anyone ever call Py_MakePendingCalls?  It's an undocumented entry point
> in ceval.c.  I'd like to get rid of it.  Guido sez:
> 
>     The only place I know that uses it was an old Macintosh module I
>     once wrote to play sounds asynchronously.  I created
>     Py_MakePendingCalls() specifically for that purpose.  It may be
>     best to get rid of it.
> 
> It's not best if anyone is using it despite its undocumented nature, but is
> best otherwise.
> 
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/



From martin at loewis.home.cs.tu-berlin.de  Mon Aug 21 09:57:54 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Mon, 21 Aug 2000 09:57:54 +0200
Subject: [Python-Dev] ANN: BerkeleyDB 2.9.0 (experimental)
Message-ID: <200008210757.JAA08643@loewis.home.cs.tu-berlin.de>

Hi Andrew,

I just downloaded your new module, and found a few problems with it:

- bsddb3.db.hashopen does not work, as Db() is called with no
  arguments; it expects at least one argument. The same holds for btopen
  and rnopen.

- The Db() function should accept None as an argument (or no argument),
  as invoking db_create with a NULL environment creates a "standalone
  database".

- Error recovery appears to be missing; I'm not sure whether this is
  the fault of the library or the fault of the module, though:

>>> from bsddb3 import db
>>> e=db.DbEnv()
>>> e.open("/tmp/aaa",db.DB_CREATE)
>>> d=db.Db(e)
>>> d.open("foo",db.DB_HASH,db.DB_CREATE)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
_bsddb.error: (22, 'Das Argument ist ung\374ltig')
>>> d.open("foo",db.DB_HASH,db.DB_CREATE)
zsh: segmentation fault  python

BTW, I still don't know what argument exactly was invalid ...

Regards,
Martin



From mal at lemburg.com  Mon Aug 21 11:42:04 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 11:42:04 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
			<399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net> <399E5558.C7B6029B@lemburg.com> <200008201059.MAA05292@pandora.informatik.hu-berlin.de>
Message-ID: <39A0F96C.EA0D0D4B@lemburg.com>

Martin von Loewis wrote:
> 
> > Just dreaming a little here: I would prefer that we use some
> > form of XML to write the catalogs.
> 
> Well, I hope that won't happen. We have excellent tools dealing with
> the catalogs, and I see no value in replacing
> 
> #: src/grep.c:183 src/grep.c:200 src/grep.c:300 src/grep.c:408 src/kwset.c:184
> #: src/kwset.c:190
> msgid "memory exhausted"
> msgstr "Virtueller Speicher erschöpft."
> 
> with
> 
> <entry>
>   <sourcelist>
>     <source file="src/grep.c" line="183"/>
>     <source file="src/grep.c" line="200"/>
>     <source file="src/grep.c" line="300"/>
>     <source file="src/grep.c" line="408"/>
>     <source file="src/kwset.c" line="184"/>
>     <source file="src/kwset.c" line="190"/>
>   </sourcelist>
>   <msgid>memory exhausted</msgid>
>   <msgstr>Virtueller Speicher erschöpft.</msgstr>
> </entry>

Well, it's the same argument as always: better to have one format
that fits all than a new format for every application. XML
suits these tasks nicely and is becoming more and more accepted
these days.
 
> > XML comes with Unicode support and tools for writing XML are
> > available too.
> 
> Well, the catalog files also "come with unicode support", meaning that
> you can write them in UTF-8 if you want; and tools could be easily
> extended to process UCS-2 input if anybody desires.
> 
> OTOH, the tools for writing po files are much more advanced than any
> XML editor I know.
> 
> > We'd only need a way to transform XML into catalog files of some
> > Python specific platform independent format (should be possible to
> > create .mo files from XML too).
> 
> Or we could convert the XML catalogs in Uniforum-style catalogs, and
> then use the existing tools.

True.

Was just a thought...
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug 21 11:30:20 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 11:30:20 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de> <399E5340.B00811EF@lemburg.com> <200008201051.MAA05259@pandora.informatik.hu-berlin.de>
Message-ID: <39A0F6AC.B9C0FCC9@lemburg.com>

Martin von Loewis wrote:
> 
> > Hmm, if your catalogs are encoded in UTF-8 and use non-ASCII
> > chars then the traditional API would have to raise encoding
> > errors
> 
> I don't know what you mean by "traditional" here. The gettext.gettext
> implementation in Barry's patch will return the UTF-8 encoded byte
> string, instead of raising encoding errors - no code conversion takes
> place.

True.
 
> > Perhaps the return value type of .gettext() should be given on
> > the .install() call: e.g. encoding='utf-8' would have .gettext()
> > return a string using UTF-8 while encoding='unicode' would have
> > it return Unicode objects.
> 
> No. You should have the option of either receiving byte strings, or
> Unicode strings. If you want byte strings, you should get the ones
> appearing in the catalog.

So you're all for the two different API versions? After some
more thinking, I think I agree. The reason is that the lookup
itself will have to be Unicode-aware too:

gettext.unigettext(u"Löschen") would have to convert u"Löschen"
to UTF-8, then look this up and convert the returned value
back to Unicode.

gettext.gettext(u"Löschen") will fail with ASCII default encoding.
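A minimal sketch of what the two-API split could look like, in modern terms (the catalog dict and both function names here are purely illustrative, not the proposed API):

```python
# Illustrative only: a Unicode-aware lookup layered on a catalog whose
# keys and values are UTF-8 encoded byte strings.
_catalog = {b"Delete": "L\u00f6schen".encode("utf-8")}

def gettext(message):
    """Traditional API: byte strings in, byte strings out (no conversion)."""
    return _catalog.get(message, message)

def unigettext(message):
    """Unicode API: encode the key for the lookup, decode the result."""
    default = message.encode("utf-8")
    return _catalog.get(default, default).decode("utf-8")
```

unigettext("Delete") would come back as the Unicode string, while
gettext(b"Delete") hands out the raw UTF-8 bytes untouched.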

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug 21 12:04:04 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 12:04:04 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com>
Message-ID: <39A0FE94.1AF5FABF@lemburg.com>

Guido van Rossum wrote:
> 
> Paul Prescod wrote:
> 
> > I don't think of iterators as indexing in terms of numbers. Otherwise I
> > could do this:
> >
> > >>> a={0:"zero",1:"one",2:"two",3:"three"}
> > >>> for i in a:
> > ...     print i
> > ...
> >
> > So from a Python user's point of view, for-looping has nothing to do
> > with integers. From a Python class/module creator's point of view it
> > does have to do with integers. I wouldn't be either surprised nor
> > disappointed if that changed one day.
> 
> Bingo!
> 
> I've long had an idea for generalizing 'for' loops using iterators. This is
> more a Python 3000 thing, but I'll explain it here anyway because I think
> it's relevant. Perhaps this should become a PEP?  (Maybe we should have a
> series of PEPs with numbers in the 3000 range for Py3k ideas?)
> 
> The statement
> 
>   for <variable> in <object>: <block>
> 
> should translate into this kind of pseudo-code:
> 
>   # variant 1
>   __temp = <object>.newiterator()
>   while 1:
>       try: <variable> = __temp.next()
>       except ExhaustedIterator: break
>       <block>
> 
> or perhaps (to avoid the relatively expensive exception handling):
> 
>   # variant 2
>   __temp = <object>.newiterator()
>   while 1:
>       __flag, <variable> = __temp.next()
>       if not __flag: break
>       <block>
> 
> In variant 1, the next() method returns the next object or raises
> ExhaustedIterator. In variant 2, the next() method returns a tuple (<flag>,
> <variable>) where <flag> is 1 to indicate that <variable> is valid or 0 to
> indicate that there are no more items available. I'm not crazy about the
> exception, but I'm even less crazy about the more complex next() return
> value (careful observers may have noticed that I'm rarely crazy about flag
> variables :-).
> 
> Another argument for variant 1 is that variant 2 changes what <variable> is
> after the loop is exhausted, compared to current practice: currently, it
> keeps the last valid value assigned to it. Most likely, the next() method
> returns None when the sequence is exhausted. It doesn't make a lot of sense
> to require it to return the last item of the sequence -- there may not *be*
> a last item, if the sequence is empty, and not all sequences make it
> convenient to keep hanging on to the last item in the iterator, so it's best
> to specify that next() returns (0, None) when the sequence is exhausted.
> 
> (It would be tempting to suggest a variant 1a where instead of raising an
> exception, next() returns None when the sequence is exhausted, but this
> won't fly: you couldn't iterate over a list containing some items that are
> None.)

How about a third variant:

#3:
__iter = <object>.iterator()
while __iter:
   <variable> = __iter.next()
   <block>

This adds a slot call, but removes the malloc overhead introduced
by returning a tuple for every iteration (which is likely to be
a performance problem).

Another possibility would be using an iterator attribute
to get at the variable:

#4:
__iter = <object>.iterator()
while 1:
   if not __iter.next():
        break
   <variable> = __iter.value
   <block>
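For concreteness, here is a hypothetical list iterator implementing variant #4 (all names illustrative):

```python
class AttrIterator:
    """Variant #4 sketch: next() returns a truth value and the current
    item is exposed via the .value attribute."""
    def __init__(self, seq):
        self.seq = seq
        self.ind = 0
        self.value = None

    def next(self):
        if self.ind >= len(self.seq):
            return 0                  # exhausted
        self.value = self.seq[self.ind]
        self.ind += 1
        return 1                      # .value is valid

# The expanded loop from variant #4 above:
result = []
it = AttrIterator([1, 2, 3])
while 1:
    if not it.next():
        break
    result.append(it.value)
```

This avoids both the exception and the per-iteration tuple allocation, at
the cost of one extra attribute access per item.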

> Side note: I believe that the iterator approach could actually *speed up*
> iteration over lists compared to today's code. This is because currently the
> iteration index is a Python integer object that is stored on the stack.
> This means an integer add with overflow check, allocation, and deallocation
> on each iteration! But the iterator for lists (and other basic sequences)
> could easily store the index as a C int! (As long as the sequence's length
> is stored in an int, the index can't overflow.)

You might want to check out the counterobject.c approach I used
to speed up the current for-loop in Python 1.5's ceval.c:
it's basically a mutable C integer which is used instead of
the current Python integer index.

The details can be found in my old patch:

  http://starship.python.net/crew/lemburg/mxPython-1.5.patch.gz

> [Warning: thinking aloud ahead!]
> 
> Once we have the concept of iterators, we could support explicit use of them
> as well. E.g. we could use a variant of the for statement to iterate over an
> existing iterator:
> 
>   for <variable> over <iterator>: <block>
> 
> which would (assuming variant 1 above) translate to:
> 
>   while 1:
>       try: <variable> = <iterator>.next()
>       except ExhaustedIterator: break
>       <block>
> 
> This could be used in situations where you have a loop iterating over the
> first half of a sequence and a second loop that iterates over the remaining
> items:
> 
>   it = something.newiterator()
>   for x over it:
>       if time_to_start_second_loop(): break
>       do_something()
>   for x over it:
>       do_something_else()
> 
> Note that the second for loop doesn't reset the iterator -- it just picks up
> where the first one left off! (Detail: the x that caused the break in the
> first loop doesn't get dealt with in the second loop.)
> 
> I like the iterator concept because it allows us to do things lazily. There
> are lots of other possibilities for iterators. E.g. mappings could have
> several iterator variants to loop over keys, values, or both, in sorted or
> hash-table order. Sequences could have an iterator for traversing them
> backwards, and a few other ones for iterating over their index set (cf.
> indices()) and over (index, value) tuples (cf. irange()). Files could be
> their own iterator where the iterator is almost the same as readline()
> except it raises ExhaustedIterator at EOF instead of returning "".  A tree
> datastructure class could have an associated iterator class that maintains a
> "finger" into the tree.
> 
> Hm, perhaps iterators could be their own iterator? Then if 'it' were an
> iterator, it.newiterator() would return a reference to itself (not a copy).
> Then we wouldn't even need the 'over' alternative syntax. Maybe the method
> should be called iterator() then, not newiterator(), to avoid suggesting
> anything about the newness of the returned iterator.
> 
> Other ideas:
> 
> - Iterators could have a backup() method that moves the index back (or
> raises an exception if this feature is not supported, e.g. when reading data
> from a pipe).
> 
> - Iterators over certain sequences might support operations on the
> underlying sequence at the current position of the iterator, so that you
> could iterate over a sequence and occasionally insert or delete an item (or
> a slice).

FYI, I've attached a module which I've been using a while for
iteration. The code is very simple and implements the #4 variant
described above.
 
> Of course iterators also connect to generators. The basic list iterator
> doesn't need coroutines or anything, it can be done as follows:
> 
>   class Iterator:
>       def __init__(self, seq):
>           self.seq = seq
>           self.ind = 0
>       def next(self):
>           if self.ind >= len(self.seq): raise ExhaustedIterator
>           val = self.seq[self.ind]
>           self.ind += 1
>           return val
> 
> so that <list>.iterator() could just return Iterator(<list>) -- at least
> conceptually.
> 
> But for other data structures the amount of state needed might be
> cumbersome. E.g. a tree iterator needs to maintain a stack, and it's much
> easier to code that using a recursive Icon-style generator than by using an
> explicit stack. On the other hand, I remember reading an article a while ago
> (in Dr. Dobbs?) by someone who argued (in a C++ context) that such recursive
> solutions are very inefficient, and that an explicit stack (1) is really not
> that hard to code, and (2) gives much more control over the memory and time
> consumption of the code. On the third hand, some forms of iteration really
> *are* expressed much more clearly using recursion. On the fourth hand, I
> disagree with Matthias ("Dr. Scheme") Felleisen about recursion as the root
> of all iteration. Anyway, I believe that iterators (as explained above) can
> be useful even if we don't have generators (in the Icon sense, which I
> believe means coroutine-style).
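The explicit-stack style mentioned for trees can be sketched like so (a hypothetical binary-tree in-order iterator, not part of any proposal):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

class TreeIterator:
    """In-order traversal using an explicit stack instead of recursion."""
    def __init__(self, root):
        self.stack = []
        self._push_left(root)

    def _push_left(self, node):
        # Descend along left children, remembering each node on the stack.
        while node is not None:
            self.stack.append(node)
            node = node.left

    def next(self):
        node = self.stack.pop()
        self._push_left(node.right)
        return node.value

# Usage: the stack doubles as the exhaustion test.
values = []
it = TreeIterator(Node(2, Node(1), Node(3)))
while it.stack:
    values.append(it.next())
```

The state that would otherwise live in recursive activation records is
exactly the contents of `self.stack`.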

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Iterator.py
Type: text/python
Size: 2858 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000821/8f8fec76/attachment.bin>

From loewis at informatik.hu-berlin.de  Mon Aug 21 12:25:01 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Mon, 21 Aug 2000 12:25:01 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <39A0F6AC.B9C0FCC9@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de> <399E5340.B00811EF@lemburg.com> <200008201051.MAA05259@pandora.informatik.hu-berlin.de> <39A0F6AC.B9C0FCC9@lemburg.com>
Message-ID: <200008211025.MAA14212@pandora.informatik.hu-berlin.de>

> So you're all for the two different API versions? After some
> more thinking, I think I agree. The reason is that the lookup
> itself will have to be Unicode-aware too:
> 
> gettext.unigettext(u"Löschen") would have to convert u"Löschen"
> to UTF-8, then look this up and convert the returned value
> back to Unicode.

I did not even think of using Unicode as *keys* to the lookup. The GNU
translation project recommends that messages in the source code be in
English. This is good advice, as translators producing, say, a Japanese
translation are likely to have more problems with German input than with
English.

So I'd say that message ids can safely be byte strings, especially as
I believe that the gettext tools treat them that way, as well. If
authors really have to put non-ASCII into message ids, they should use
\x escapes. I have never seen such a message, though (and I have
translated a number of message catalogs).

Regards,
Martin




From loewis at informatik.hu-berlin.de  Mon Aug 21 12:31:25 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Mon, 21 Aug 2000 12:31:25 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <39A0F96C.EA0D0D4B@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net>
			<399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net> <399E5558.C7B6029B@lemburg.com> <200008201059.MAA05292@pandora.informatik.hu-berlin.de> <39A0F96C.EA0D0D4B@lemburg.com>
Message-ID: <200008211031.MAA14260@pandora.informatik.hu-berlin.de>

> Well, it's the same argument as always: better have one format
> which fits all than a new format for every application. XML
> suits these tasks nicely and is becoming more and more accepted
> these days.

I believe this is a misleading claim. First, XML is not *one* format; it
is rather a "meta format"; you still need the document type definition
(valid vs. well-formed). Furthermore, the XML argument is good if
there is no established data format for some application. If there is
already an accepted format, I see no value in converting that to XML.

Regards,
Martin



From fredrik at pythonware.com  Mon Aug 21 12:43:47 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 21 Aug 2000 12:43:47 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com>
Message-ID: <020401c00b5c$b07f1870$0900a8c0@SPIFF>

mal wrote:
> How about a third variant:
> 
> #3:
> __iter = <object>.iterator()
> while __iter:
>    <variable> = __iter.next()
>    <block>

how does that one terminate?

maybe you meant something like:

    __iter = <object>.iterator()
    while __iter:
        <variable> = __iter.next()
        if <variable> is <sentinel>:
            break
        <block>

(where <sentinel> could be __iter itself...)

</F>




From thomas at xs4all.net  Mon Aug 21 12:59:44 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 12:59:44 +0200
Subject: [Python-Dev] iterators
In-Reply-To: <020401c00b5c$b07f1870$0900a8c0@SPIFF>; from fredrik@pythonware.com on Mon, Aug 21, 2000 at 12:43:47PM +0200
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <020401c00b5c$b07f1870$0900a8c0@SPIFF>
Message-ID: <20000821125944.K4797@xs4all.nl>

On Mon, Aug 21, 2000 at 12:43:47PM +0200, Fredrik Lundh wrote:
> mal wrote:
> > How about a third variant:
> > 
> > #3:
> > __iter = <object>.iterator()
> > while __iter:
> >    <variable> = __iter.next()
> >    <block>

> how does that one terminate?

__iter should evaluate to false once it's "empty". 
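i.e. the iterator defines its own truth value, roughly like this sketch
(names illustrative; __nonzero__ was the truth hook at the time,
__bool__ in later Pythons):

```python
class TruthIterator:
    """Variant #3 sketch: the iterator tests false once exhausted,
    so `while it:` terminates by itself."""
    def __init__(self, seq):
        self.seq = seq
        self.ind = 0

    def __bool__(self):               # spelled __nonzero__ back then
        return self.ind < len(self.seq)
    __nonzero__ = __bool__

    def next(self):
        val = self.seq[self.ind]
        self.ind += 1
        return val

out = []
it = TruthIterator([10, 20])
while it:
    out.append(it.next())
```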

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fredrik at pythonware.com  Mon Aug 21 13:08:05 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 21 Aug 2000 13:08:05 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <020401c00b5c$b07f1870$0900a8c0@SPIFF>
Message-ID: <024301c00b60$15168fe0$0900a8c0@SPIFF>

I wrote:
> mal wrote:
> > How about a third variant:
> > 
> > #3:
> > __iter = <object>.iterator()
> > while __iter:
> >    <variable> = __iter.next()
> >    <block>
> 
> how does that one terminate?

brain disabled.  sorry.

</F>




From thomas at xs4all.net  Mon Aug 21 14:03:06 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 14:03:06 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: <200008210437.XAA22075@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Aug 20, 2000 at 11:37:46PM -0500
References: <200008202002.NAA13530@delerium.i.sourceforge.net> <20000820224918.D4797@xs4all.nl> <200008210437.XAA22075@cj20424-a.reston1.va.home.com>
Message-ID: <20000821140306.L4797@xs4all.nl>

On Sun, Aug 20, 2000 at 11:37:46PM -0500, Guido van Rossum wrote:

> > > Summary: Allow all assignment expressions after 'import something as'
> -1.  Hypergeneralization.

I don't think it's hypergeneralization. In fact, people might expect it[*],
if we claim that the 'import-as' syntax is a shortcut for the current
practice of

   import somemod
   sspam = somemod.somesubmod.spam

(or similar constructs.) However, I realize you're under a lot of pressure
to Pronounce a number of things now that you're back, and we can always
change this later (if you change your mind.) I dare to predict, though, that
we'll see questions about why this isn't generalized, on c.l.py.

(*] I know 'people might expect it' and 'hypergeneralization' aren't
mutually exclusive, but you know what I mean.)

> By the way, notice that
>   import foo.bar
> places 'foo' in the current namespace, after ensuring that 'foo.bar'
> is defined.

Oh, I noticed ;) We had a small thread about that, this weekend. The subject
was something like ''import as'' or so.

> What should
>   import foo.bar as spam
> assign to spam?  I hope foo.bar, not foo.

The original patch assigned foo to spam, not foo.bar. Why ? Well, all the
patch did was use a different name for the STORE operation that follows an
IMPORT_NAME. To elaborate, 'import foo.bar' does this:

    IMPORT_NAME "foo.bar"
    <resulting object, 'foo', is pushed on the stack>
    STORE_NAME "foo"

and all the patch did was replace the "foo" in STORE_NAME with the name
given in the "as" clause.
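With a modern dis module the pair is easy to see for yourself (the exact
opcode layout varies by Python version):

```python
import dis
import io

# Capture the disassembly of a bare 'import foo.bar' statement.
buf = io.StringIO()
dis.dis(compile("import foo.bar", "<example>", "exec"), file=buf)
listing = buf.getvalue()
# The listing shows IMPORT_NAME with the full dotted name 'foo.bar',
# followed by a STORE_NAME that binds only 'foo'.
```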

> I note that the CVS version doesn't support this latter syntax at all;
> it should be fixed, even though the same effect can be had with
>   from foo import bar as spam

Well, "general consensus" (where the general was a three-headed beast, see
the thread I mentioned) prompted me to make it illegal for now. At least
no one is going to rely on it just yet ;) Making it work as you suggest
requires a separate approach, though. I'll think about how to do it best.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Mon Aug 21 15:52:34 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 15:52:34 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <200008211335.GAA27170@slayer.i.sourceforge.net>; from bwarsaw@users.sourceforge.net on Mon, Aug 21, 2000 at 06:35:40AM -0700
References: <200008211335.GAA27170@slayer.i.sourceforge.net>
Message-ID: <20000821155234.M4797@xs4all.nl>

On Mon, Aug 21, 2000 at 06:35:40AM -0700, Barry Warsaw wrote:
> Update of /cvsroot/python/python/nondist/peps
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv27120

> Modified Files:
> 	pep-0000.txt 
> Log Message:
> PEP 202, change tim's email address to tpeters -- we really need a key
> for these.

>    I   200  pep-0200.txt  Python 2.0 Release Schedule           jhylton
>    SA  201  pep-0201.txt  Lockstep Iteration                    bwarsaw
> !  S   202  pep-0202.txt  List Comprehensions                   tim_one
>    S   203  pep-0203.txt  Augmented Assignments                 twouters
>    S   204  pep-0204.txt  Range Literals                        twouters


I thought the last column was the SourceForge username? I don't have an
email address that reads 'twouters', except the SF one, anyway, and I
thought tim had 'tim_one' there. Or did he change it?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From james at daa.com.au  Mon Aug 21 16:02:21 2000
From: james at daa.com.au (James Henstridge)
Date: Mon, 21 Aug 2000 22:02:21 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <39A01C0C.E6BA6CCB@lemburg.com>
Message-ID: <Pine.LNX.4.21.0008210948070.15515-100000@james.daa.com.au>

On Sun, 20 Aug 2000, M.-A. Lemburg wrote:

> James Henstridge wrote:
> > Well, it can do a little more than that.  It will also handle the case of
> > a number of locales listed in the LANG environment variable.  It also
> > doesn't look like it handles decomposition of a locale like
> > ll_CC.encoding@modifier into other matching encodings in the correct
> > precedence order.
> > 
> > Maybe something to do this sort of decomposition would fit better in
> > locale.py though.
> > 
> > This sort of thing is very useful for people who know more than one
> > language, and doesn't seem to be handled by plain setlocale()
> 
> I'm not sure I can follow you here: are you saying that your
> support in gettext.py does more or less than what's present
> in locale.py ?
> 
> If it's more, I think it would be a good idea to add those
> parts to locale.py.

It does a little more than the current locale.py.

I just checked the current locale module, and it gives a ValueError
exception when LANG is set to something like en_AU:fr_FR.  This sort of
thing should be handled by the python interface to gettext, as it is by
the C implementation (and I am sure that most programmers would not expect
such an error from the locale module).

The code in my gettext module handles that case.

James.

-- 
Email: james at daa.com.au
WWW:   http://www.daa.com.au/~james/





From MarkH at ActiveState.com  Mon Aug 21 16:48:09 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 22 Aug 2000 00:48:09 +1000
Subject: [Python-Dev] configure.in, C++ and Linux
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEMEDFAA.MarkH@ActiveState.com>

I'm pretty new to all of this, so please bear with me.

I create a Setup.in that references some .cpp or .cxx files.  When I create
the Makefile, the command line generated for building these C++ sources is
similar to:

  $(CCC) $(CCSHARE) ...

However, CCC is never set anywhere....

Looking at configure.in, there appears to be support for setting this CCC
variable.  However, it was commented out in revision 1.113 - a checkin by
Guido, December 1999, with the comment:
"""
Patch by Geoff Furnish to make compiling with C++ more gentle.
(The configure script is regenerated, not from his patch.)
"""

Digging a little deeper, I find that config/Makefile.pre.in and
config/makesetup both have references to CCC that account for the
references in my Makefile.  Unfortunately, my knowledge doesn't yet stretch
to knowing exactly where these files come from :-)

Surely all of this isn't correct.  Can anyone tell me what is going on, or
what I am doing wrong?

Thanks,

Mark.








From mal at lemburg.com  Mon Aug 21 16:59:36 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 16:59:36 +0200
Subject: [Python-Dev] Adding more LANG support to locale.py (Re: gettext in 
 the standard library)
References: <Pine.LNX.4.21.0008210948070.15515-100000@james.daa.com.au>
Message-ID: <39A143D8.5595B7C4@lemburg.com>

James Henstridge wrote:
> 
> On Sun, 20 Aug 2000, M.-A. Lemburg wrote:
> 
> > James Henstridge wrote:
> > > Well, it can do a little more than that.  It will also handle the case of
> > > a number of locales listed in the LANG environment variable.  It also
> > > doesn't look like it handles decomposition of a locale like
> > > ll_CC.encoding@modifier into other matching encodings in the correct
> > > precedence order.
> > >
> > > Maybe something to do this sort of decomposition would fit better in
> > > locale.py though.
> > >
> > > This sort of thing is very useful for people who know more than one
> > > language, and doesn't seem to be handled by plain setlocale()
> >
> > I'm not sure I can follow you here: are you saying that your
> > support in gettext.py does more or less than what's present
> > in locale.py ?
> >
> > If it's more, I think it would be a good idea to add those
> > parts to locale.py.
> 
> It does a little more than the current locale.py.
> 
> I just checked the current locale module, and it gives a ValueError
> exception when LANG is set to something like en_AU:fr_FR.  This sort of
> thing should be handled by the python interface to gettext, as it is by
> the C implementation (and I am sure that most programmers would not expect
> such an error from the locale module).

That usage of LANG is new to me... I wonder how well such
multiple-option settings fit the current API.
 
> The code in my gettext module handles that case.

Would you be willing to supply a patch to locale.py which
adds multiple LANG options to the interface ?

I guess we'd need a new API getdefaultlocales() [note the trailing
"s"] which will then return a list of locale tuples rather than
a single one. The standard getdefaultlocale() should then return
whatever is considered to be the standard locale when using the
multiple locale notation for LANG.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bwarsaw at beopen.com  Mon Aug 21 17:05:13 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 21 Aug 2000 11:05:13 -0400 (EDT)
Subject: [Python-Dev] OT: How to send CVS update mails?
References: <39A06FA1.C5EB34D1@nowonder.de>
	<20000821005706.D11327@lyra.org>
Message-ID: <14753.17705.775721.360133@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> Take a look at CVSROOT/loginfo and CVSROOT/syncmail in the
    GS> Python repository.

Just to complete the picture, add CVSROOT/checkoutlist.

-Barry



From akuchlin at mems-exchange.org  Mon Aug 21 17:06:16 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Mon, 21 Aug 2000 11:06:16 -0400
Subject: [Python-Dev] configure.in, C++ and Linux
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEMEDFAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Tue, Aug 22, 2000 at 12:48:09AM +1000
References: <ECEPKNMJLHAPFFJHDOJBEEMEDFAA.MarkH@ActiveState.com>
Message-ID: <20000821110616.A547@kronos.cnri.reston.va.us>

On Tue, Aug 22, 2000 at 12:48:09AM +1000, Mark Hammond wrote:
>Digging a little deeper, I find that config/Makefile.pre.in and
>config/makesetup both have references to CCC that account for the
>references in my Makefile.  Unfortunately, my knowledge doesn't yet stretch
>to knowing exactly where these files come from :-)

The Makefile.pre.in is probably from Misc/Makefile.pre.in, which has a
reference to $(CCC); Modules/Makefile.pre.in is more up to date and
uses $(CXX).  Modules/makesetup also refers to $(CCC), and probably
needs to be changed to use $(CXX), matching Modules/Makefile.pre.in.

Given that we want to encourage the use of the Distutils,
Misc/Makefile.pre.in should be deleted to avoid having people use it.

--amk




From fdrake at beopen.com  Mon Aug 21 18:01:38 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 12:01:38 -0400 (EDT)
Subject: [Python-Dev] OT: How to send CVS update mails?
In-Reply-To: <39A06FA1.C5EB34D1@nowonder.de>
References: <39A06FA1.C5EB34D1@nowonder.de>
Message-ID: <14753.21090.492033.754101@cj42289-a.reston1.va.home.com>

Peter Schneider-Kamp writes:
 > Sorry, but I cannot figure out how to make SourceForge send
 > updates whenever there is a CVS commit (checkins mailing
 > list).

  I wrote up some instructions at:

http://sfdocs.sourceforge.net/sfdocs/display_topic.php?topicid=52


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Mon Aug 21 18:49:00 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 12:49:00 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python import.c,2.146,2.147
In-Reply-To: <200008211635.JAA09187@slayer.i.sourceforge.net>
References: <200008211635.JAA09187@slayer.i.sourceforge.net>
Message-ID: <14753.23932.816392.92125@cj42289-a.reston1.va.home.com>

Barry Warsaw writes:
 > Thomas reminds me to bump the MAGIC number for the extended print
 > opcode additions.

  You also need to document the new opcodes in Doc/lib/libdis.tex.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Mon Aug 21 19:21:32 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 13:21:32 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <20000821155234.M4797@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>

[Thomas Wouters]
> I thought the last column was the SourceForge username ?
> I don't have an email address that reads 'twouters', except the
> SF one, anyway, and I thought tim had 'tim_one' there. Or did he
> change it ?

I don't know what the last column means.  What would you like it to mean?
Perhaps a complete email address, or (what a concept!) the author's *name*,
would be best.

BTW, my current employer assigned "tpeters at beopen.com" to me.  I was just
"tim" for the first 15 years of my career, and then "tim_one" when you kids
started using computers faster than me and took "tim" everywhere before I
got to it.  Now even "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one"
now.  I have given up all hope of retaining an online identity.

the-effbot-is-next-ly y'rs  - tim





From cgw at fnal.gov  Mon Aug 21 19:47:04 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Mon, 21 Aug 2000 12:47:04 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps
 pep-0000.txt,1.24,1.25
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
References: <20000821155234.M4797@xs4all.nl>
	<LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <14753.27416.760084.528198@buffalo.fnal.gov>

Tim Peters writes:

 >  I have given up all hope of retaining an online identity.

And have you seen http://www.timpeters.com ?

(I don't know how you find the time to take those stunning color
photographs!)




From bwarsaw at beopen.com  Mon Aug 21 20:29:27 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 21 Aug 2000 14:29:27 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
References: <20000821155234.M4797@xs4all.nl>
	<LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <14753.29959.135594.439438@anthem.concentric.net>

I don't know why I haven't seen Thomas's reply yet, but in any event...

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> [Thomas Wouters]
    >> I thought the last column was the SourceForge username ?  I
    >> don't have an email address that reads 'twouters', except the
    >> SF one, anyway, and I thought tim had 'tim_one' there. Or did
    >> he change it ?

    TP> I don't know what the last column means.  What would you like
    TP> it to mean?  Perhaps a complete email address, or (what a
    TP> concept!) the author's *name*, would be best.

    TP> BTW, my current employer assigned "tpeters at beopen.com" to me.
    TP> I was just "tim" for the first 15 years of my career, and then
    TP> "tim_one" when you kids started using computers faster than me
    TP> and took "tim" everywhere before I got to it.  Now even
    TP> "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one" now.
    TP> I have given up all hope of retaining an online identity.

I'm not sure what it should mean either, except as a shorthand way to
identify the owner of the PEP.  Most important is that each line fit
in 80 columns!

Perhaps we can do away with the filename column, since that's easily
calculated?

I had originally thought the owner should be the mailbox on
SourceForge, but then I thought maybe it ought to be the mailbox given
in the Author: field of the PEP.  Perhaps the Real Name is best after
all, if we can reclaim some horizontal space.

unsure-ly, y'rs,
-Barry



From thomas at xs4all.net  Mon Aug 21 21:02:58 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 21:02:58 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <14753.29959.135594.439438@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 21, 2000 at 02:29:27PM -0400
References: <20000821155234.M4797@xs4all.nl> <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com> <14753.29959.135594.439438@anthem.concentric.net>
Message-ID: <20000821210258.B4933@xs4all.nl>

On Mon, Aug 21, 2000 at 02:29:27PM -0400, Barry A. Warsaw wrote:

> I don't know why I haven't seen Thomas's reply yet, but in any event...

Strange, it went to python-dev not long after the checkin. Tim quoted about
the entire email, though, so you didn't miss much. The name-calling and
snide remarks weren't important anyway :)

> I'm not sure what it should mean either, except as a shorthand way to
> identify the owner of the PEP.  Most important is that each line fit
> in 80 columns!

> I had originally thought the owner should be the mailbox on
> SourceForge, but then I thought maybe it ought to be the mailbox given
> in the Author: field of the PEP.

Emails in the Author field are likely to be too long to fit in that list,
even if you remove the filename. I'd say go for the SF username, for three
reasons:

  1) it's a name developers know and love (or hate)
  2) more information on a user can be found through SourceForge
  3) that SF email address should work, too. It's where patch updates and
     stuff are sent, so most people are likely to have it forwarding to
     their PEP author address.

Also, it's the path of least resistance. All you need to change is that
'tpeters' into 'tim_one' :-)
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Mon Aug 21 21:11:37 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 21:11:37 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Aug 21, 2000 at 01:21:32PM -0400
References: <20000821155234.M4797@xs4all.nl> <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <20000821211137.C4933@xs4all.nl>

On Mon, Aug 21, 2000 at 01:21:32PM -0400, Tim Peters wrote:

> BTW, my current employer assigned "tpeters at beopen.com" to me.  I was just
> "tim" for the first 15 years of my career, and then "tim_one" when you kids
> started using computers faster than me and took "tim" everywhere before I
> got to it.  Now even "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one"
> now.  I have given up all hope of retaining an online identity.

For the first few years online, I was known as 'zonny'. Chosen because my
first online experience, The Digital City of Amsterdam (a local freenet),
was a free service, and I'd forgotten the password of 'thomas', 'sonny',
'twouters' and 'thomasw'. And back then you couldn't get the password
changed :-) So 'zonny' it was, even when I started working there and
could've changed it. And I was happy with it, because I could use 'zonny'
everywhere, no one had apparently ever thought of that name (no surprise
there, eh ? :)

And then two years after I started with that name, I ran into another
'zonny' in some American MUD or another. (I believe it was TinyTIM(*), for
those who know about such things.) And it was a girl, and she had been using
it for years as well! So to avoid confusion I started using 'thomas', and
have had the luck of not needing another name until Mailman moved to
SourceForge :-) But ever since then, I don't believe *any* name is not
already taken. You'll just have to live with the confusion.

*) This is really true. There was a MUD called TinyTIM (actually an offshoot
of TinyMUSH) and it had a shitload of bots, too. It was one of the most
amusing senseless MU*s out there, with a lot of Pythonic humour.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Mon Aug 21 22:30:41 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 21 Aug 2000 15:30:41 -0500
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: Your message of "Mon, 21 Aug 2000 14:03:06 +0200."
             <20000821140306.L4797@xs4all.nl> 
References: <200008202002.NAA13530@delerium.i.sourceforge.net> <20000820224918.D4797@xs4all.nl> <200008210437.XAA22075@cj20424-a.reston1.va.home.com>  
            <20000821140306.L4797@xs4all.nl> 
Message-ID: <200008212030.PAA26887@cj20424-a.reston1.va.home.com>

> > > > Summary: Allow all assignment expressions after 'import
> > > > something as'

[GvR]
> > -1.  Hypergeneralization.

[TW]
> I don't think it's hypergeneralization. In fact, people might expect it[*],
> if we claim that the 'import-as' syntax is a shortcut for the current
> practice of
> 
>    import somemod
>    sspam = somemod.somesubmod.spam
> 
> (or similar constructs.) However, I realize you're under a lot of pressure
> to Pronounce a number of things now that you're back, and we can always
> change this later (if you change your mind.) I dare to predict, though, that
> we'll see questions about why this isn't generalized, on c.l.py.

I kind of doubt it, because it doesn't look useful.

I do want "import foo.bar as spam" back, assigning foo.bar to spam.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From moshez at math.huji.ac.il  Mon Aug 21 21:42:50 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Mon, 21 Aug 2000 22:42:50 +0300 (IDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps
 pep-0000.txt,1.24,1.25
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008212241020.7563-100000@sundial>

On Mon, 21 Aug 2000, Tim Peters wrote:

> BTW, my current employer assigned "tpeters at beopen.com" to me.  I was just
> "tim" for the first 15 years of my career, and then "tim_one" when you kids
> started using computers faster than me and took "tim" everywhere before I
> got to it.  Now even "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one"
> now.  I have given up all hope of retaining an online identity.

Hah! That's all I have to say to you! 
Being the only moshez in the Free Software/Open Source community and 
the only zadka who cares about the internet has certainly made my life
easier (can you say zadka.com?) 

on-the-other-hand-people-use-moshe-as-a-generic-name-ly y'rs, Z.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From mal at lemburg.com  Mon Aug 21 22:22:10 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 22:22:10 +0200
Subject: [Python-Dev] Doc-strings for class attributes ?!
Message-ID: <39A18F72.A0EADEA7@lemburg.com>

Lately I was busy extracting documentation from a large
Python application. 

Everything worked just fine building on existing doc-strings and
the nice Python reflection features, but I came across one 
thing to which I didn't find a suitable Python-style solution:
inline documentation for class attributes.

We already have doc-strings for modules, classes, functions
and methods, but there is no support for documenting class
attributes in a way that:

1. is local to the attribute definition itself
2. doesn't affect the attribute object in any way (e.g. by
   adding wrappers of some sort)
3. behaves well under class inheritance
4. is available online

After some thinking and experimenting with different ways
of achieving the above I came up with the following solution
which looks very Pythonesque to me:

class C:
        " class C doc-string "

        a = 1
        " attribute C.a doc-string "

        b = 2
        " attribute C.b doc-string "

The compiler would handle these cases as follows:

" class C doc-string " -> C.__doc__
" attribute C.a doc-string " -> C.__doc__a__
" attribute C.b doc-string " -> C.__doc__b__

All of the above is perfectly valid Python syntax. Support
should be easy to add to the byte code compiler. The
name mangling assures that attribute doc-strings

a) participate in class inheritance and
b) are treated as special attributes (following the __xxx__
   convention)

Also, the look&feel of this convention is similar to that
of the other existing conventions: the doc string follows
the definition of the object.
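Since no compiler support for this exists yet, the inheritance behaviour
(point 3) can be checked today by assigning the mangled names by hand.
A sketch -- `__doc__a__` is the name the proposal would generate, not an
existing Python attribute:

```python
# Simulate the proposed name mangling by hand; __doc__a__ is the
# attribute the compiler would synthesize under this proposal.
class C:
    "class C doc-string"
    a = 1
    __doc__a__ = "attribute C.a doc-string"

class D(C):
    a = 3   # overrides the value, but with no new doc-string the
            # attribute documentation is inherited via normal lookup

print(C.__doc__a__)
print(D.__doc__a__)   # found through class inheritance
```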

What do you think about this idea ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Mon Aug 21 22:32:22 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 16:32:22 -0400 (EDT)
Subject: [Python-Dev] regression test question
Message-ID: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>

  I'm working on bringing the parser module up to date and introducing
a regression test for it.  (And if the grammar stops changing, it may
actually get finished!)
  I'm having a bit of a problem, though:  the test passes when run as
a script, but not when run via the regression test framework.  The
problem is *not* with the output file.  I'm getting an exception from
the module which is not expected, and is only raised when it runs
using the regression framework.
  Has anyone else encountered a similar problem?  I've checked to make
sure the right versions of parsermodule.so and test_parser.py are being
picked up.
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gvwilson at nevex.com  Mon Aug 21 22:48:11 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Mon, 21 Aug 2000 16:48:11 -0400 (EDT)
Subject: [Python-Dev] Doc-strings for class attributes ?!
In-Reply-To: <39A18F72.A0EADEA7@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008211644050.24709-100000@akbar.nevex.com>

> Marc-Andre Lemburg wrote:
> We already have doc-strings for modules, classes, functions and
> methods, but there is no support for documenting class attributes in a
> way that:
> 
> 1. is local to the attribute definition itself
> 2. doesn't affect the attribute object
> 3. behaves well under class inheritance
> 4. is available online
> 
> [proposal]
> class C:
>         " class C doc-string "
> 
>         a = 1
>         " attribute C.a doc-string "
> 
>         b = 2
>         " attribute C.b doc-string "
>
> What do you think about this idea ?

Greg Wilson:
I think it would be useful, but as we've discussed elsewhere, I think that
if the doc string mechanism is going to be extended, I would like it to
allow multiple chunks of information to be attached to objects (functions,
modules, class variables, etc.), so that different developers and tools
can annotate programs without colliding.
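One hedged way to get the non-colliding, multi-tool annotations Greg
describes is a dict of namespaced chunks instead of a single string; the
attribute name `__annotations_chunks__` and the `annotate` helper are
invented here for illustration:

```python
# Attach multiple documentation chunks to one object, keyed by tool name,
# so different tools can annotate without overwriting each other.
# __annotations_chunks__ is an invented name, not a Python convention.
def annotate(obj, tool, text):
    chunks = getattr(obj, "__annotations_chunks__", None)
    if chunks is None:
        chunks = {}
        obj.__annotations_chunks__ = chunks
    chunks[tool] = text

def spam():
    pass

annotate(spam, "doctool", "renders in HTML docs")
annotate(spam, "typechecker", "returns None")
print(sorted(spam.__annotations_chunks__))   # ['doctool', 'typechecker']
```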

Thanks,
Greg




From tim_one at email.msn.com  Tue Aug 22 00:01:54 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 18:01:54 -0400
Subject: [Python-Dev] Looking for an "import" expert
Message-ID: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com>

Fred Drake opened an irritating <wink> bug report (#112436).

Cut to the chase:

regrtest.py imports test_support.
test_support.verbose is 1 after that.
regrtest's main() reaches into test_support and
    stomps on test_support.verbose, usually setting to 0.

Now in my build directory, if I run

   python ..\lib\test\regrtest.py test_getopt

the test passes.  However, it *shouldn't* (and the crux of Fred's bug report
is that it does fail when he runs regrtest in an old & deprecated way).

What happens is that test_getopt.py has this near the top:

    from test.test_support import verbose

and this is causing *another* copy of the test_support module to get loaded,
and *its* verbose attr is 1.

So when we run test_getopt "normally" via regrtest, it incorrectly believes
that verbose is 1, and the "expected result" file (which I generated via
regrtest -g) is in fact verbose-mode output.

If I change the import at the top of test_getopt.py to

    from test import test_support
    from test_support import verbose

then test_getopt.py sees the 0 it's supposed to see.

The story is exactly the same, btw, if I run regrtest while *in* the test
directory (so this has nothing to do with that I usually run regrtest from
my build directory).

Now what *Fred* does is equivalent to getting into a Python shell and typing

>>> from test import regrtest
>>> regrtest.main()

and in *that* case (the original) test_getopt sees the 0 it's supposed to
see.

I confess I lost track how fancy Python imports work a long time ago.  Can
anyone make sense of these symptoms?  Why is a 2nd version of test_support
getting loaded, and why only sometimes?






From fdrake at beopen.com  Tue Aug 22 00:10:53 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 18:10:53 -0400 (EDT)
Subject: [Python-Dev] regression test question
In-Reply-To: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>
References: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>
Message-ID: <14753.43245.685276.857116@cj42289-a.reston1.va.home.com>

I wrote:
 >   I'm having a bit of a problem, though:  the test passes when run as
 > a script, but not when run via the regression test framework.  The
 > problem is *not* with the output file.  I'm getting an exception from
 > the module which is not expected, and is only raised when it runs
 > using the regression framework.

  Of course, I managed to track this down to a bug in my own code.  I
wasn't clearing an error that should have been cleared, and that was
affecting the result in strange ways.
  I'm not at all sure why it didn't affect the results in more cases,
but that may just mean I need more variation in my tests.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Tue Aug 22 00:13:32 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 22 Aug 2000 00:13:32 +0200
Subject: [Python-Dev] regression test question
In-Reply-To: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Mon, Aug 21, 2000 at 04:32:22PM -0400
References: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>
Message-ID: <20000822001331.D4933@xs4all.nl>

On Mon, Aug 21, 2000 at 04:32:22PM -0400, Fred L. Drake, Jr. wrote:

>   I'm working on bringing the parser module up to date and introducing
> a regression test for it.  (And if the grammar stops changing, it may
> actually get finished!)

Well, I have augmented assignment in the queue, but that's about it for
Grammar changing patches ;)

>   I'm having a bit of a problem, though:  the test passes when run as
> a script, but not when run via the regression test framework.  The
> problem is *not* with the output file.  I'm getting an exception from
> the module which is not expected, and is only raised when it runs
> using the regression framework.
>   Has anyone else encountered a similar problem?  I've checked to make
> sure the right version or parsermodule.so and test_parser.py are being
> picked up.

I've seen this kind of problem when writing the pty test suite: fork() can
do nasty things to the regression test suite. You have to make sure the
child process exits brutally, in all cases, and *does not output anything*,
etc. I'm not sure if that's your problem though. Another issue I've had to
deal with was with a signal/threads problem on BSDI: enabling threads
screwed up random tests *after* the signal or thread test (never could
figure out what triggered it ;)

(This kind of problem is generic: several test modules, like test_signal,
set 'global' attributes to test something, and don't always reset them. If
you type ^C at the right moment in the test process, test_signal doesn't
remove the SIGINT-handler, and subsequent ^C's don't do anything other than
saying 'HandlerBC called' and failing the test ;))

I'm guessing this is what your parser test is hitting: the regression tester
itself sets something differently from running it directly. Try importing
the test from a script rather than calling it directly ? Did you remember to
set PYTHONPATH and such, like 'make test' does ? Did you use '-tt' like
'make test' does ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Tue Aug 22 00:30:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 18:30:11 -0400
Subject: [Python-Dev] Looking for an "import" expert
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEDAHBAA.tim_one@email.msn.com>

> ...
> What happens is that test_getopt.py has this near the top:
>
>     from test.test_support import verbose
>
> and this is causing *another* copy of the test_support module to
> get loaded, and *its* verbose attr is 1.

Maybe adding these lines after that import will help clarify it for you
(note that you can't print to stdout without screwing up what regrtest
expects):

import sys
print >> sys.stderr, sys.modules["test_support"], \
                     sys.modules["test.test_support"]

(hot *damn* is that more pleasant than pasting stuff together by hand!).

When I run it, I get

<module 'test_support' from '..\lib\test\test_support.pyc'>
<module 'test.test_support' from
    'C:\CODE\PYTHON\DIST\SRC\lib\test\test_support.pyc'>

so they clearly are distinct modules.





From guido at beopen.com  Tue Aug 22 02:00:41 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 21 Aug 2000 19:00:41 -0500
Subject: [Python-Dev] Looking for an "import" expert
In-Reply-To: Your message of "Mon, 21 Aug 2000 18:01:54 -0400."
             <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com> 
Message-ID: <200008220000.TAA27482@cj20424-a.reston1.va.home.com>

If the tests are run "the modern way" (python ../Lib/test/regrtest.py)
then the test module is the script directory and it is on the path, so
"import test_support" sees and loads a toplevel module test_support.
Then "import test.test_support" sees a package test with a
test_support submodule which is assumed to be a different one, so it
is loaded again.

But if the tests are run via "import test.autotest" (or "import
test.regrtest; test.regrtest.main()") the "import test_support" knows
that the importing module is in the test package, so it first tries to
import the test_support submodule from that package, so
test.test_support and (plain) test_support are the same.

Conclusion: inside the test package, never refer explicitly to the
test package.  Always use "import test_support".  Never "import
test.test_support" or "from test.test_support import verbose" or "from
test import test_support".
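The dual-import effect can be reproduced with a throwaway package; the
names (`pkg`, `mod`) are invented for the sketch, and the two sys.path
entries mimic running regrtest.py as a script from inside the test
directory:

```python
# Reproduce the dual-import effect: the same file loaded once as a
# top-level module and once as a package submodule.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "pkg")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "mod.py"), "w") as f:
    f.write("verbose = 1\n")

# Both the package directory *and* its parent end up on sys.path, as
# happens when the test driver is run as a script from inside the package.
sys.path.insert(0, root)
sys.path.insert(0, pkg)

import mod        # loaded as a top-level module "mod"
import pkg.mod    # loaded *again*, as the submodule "pkg.mod"

print(mod is pkg.mod)    # False: two distinct module objects
pkg.mod.verbose = 0      # stomping one copy...
print(mod.verbose)       # ...leaves the other copy's value at 1
```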

This is one for the README!

I've fixed this by checking in a small patch to test_getopt.py and the
corresponding output file (because of the bug, the output file was
produced under verbose mode).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From MarkH at ActiveState.com  Tue Aug 22 03:58:15 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 22 Aug 2000 11:58:15 +1000
Subject: [Python-Dev] configure.in, C++ and Linux
In-Reply-To: <20000821110616.A547@kronos.cnri.reston.va.us>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGENODFAA.MarkH@ActiveState.com>

[Andrew]

> The Makefile.pre.in is probably from Misc/Makefile.pre.in, which has a
> reference to $(CCC); Modules/Makefile.pre.in is more up to date and
> uses $(CXX).  Modules/makesetup also refers to $(CCC), and probably
> needs to be changed to use $(CXX), matching Modules/Makefile.pre.in.

This is a bug in the install script then - it installs the CCC version into
/usr/local/lib/python2.0/config.

Also, the online extending-and-embedding instructions explicitly tell you
to use the Misc/ version
(http://www.python.org/doc/current/ext/building-on-unix.html)

> Given that we want to encourage the use of the Distutils,
> Misc/Makefile.pre.in should be deleted to avoid having people use it.

That may be a little drastic :-)

So, as far as I can tell, we have a problem in that using the most widely
available instructions, attempting to build a new C++ extension module on
Linux will fail.  Can someone confirm it is indeed a bug that I should
file?  (Or maybe a patch I should submit?)

Mark.




From guido at beopen.com  Tue Aug 22 05:32:18 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 21 Aug 2000 22:32:18 -0500
Subject: [Python-Dev] iterators
In-Reply-To: Your message of "Mon, 21 Aug 2000 12:04:04 +0200."
             <39A0FE94.1AF5FABF@lemburg.com> 
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com>  
            <39A0FE94.1AF5FABF@lemburg.com> 
Message-ID: <200008220332.WAA02661@cj20424-a.reston1.va.home.com>

[BDFL]
> > The statement
> > 
> >   for <variable> in <object>: <block>
> > 
> > should translate into this kind of pseudo-code:
> > 
> >   # variant 1
> >   __temp = <object>.newiterator()
> >   while 1:
> >       try: <variable> = __temp.next()
> >       except ExhaustedIterator: break
> >       <block>
> > 
> > or perhaps (to avoid the relatively expensive exception handling):
> > 
> >   # variant 2
> >   __temp = <object>.newiterator()
> >   while 1:
> >       __flag, <variable> = __temp.next()
> >       if not __flag: break
> >       <block>

[MAL]
> How about a third variant:
> 
> #3:
> __iter = <object>.iterator()
> while __iter:
>    <variable> = __iter.next()
>    <block>
> 
> This adds a slot call, but removes the malloc overhead introduced
> by returning a tuple for every iteration (which is likely to be
> a performance problem).

Are you sure the slot call doesn't cause some malloc overhead as well?

Anyway, the problem with this one is that it requires a dynamic
iterator (one that generates values on the fly, e.g. something reading
lines from a pipe) to hold on to the next value between "while __iter"
and "__iter.next()".

> Another possibility would be using an iterator attribute
> to get at the variable:
> 
> #4:
> __iter = <object>.iterator()
> while 1:
>    if not __iter.next():
>         break
>    <variable> = __iter.value
>    <block>

Uglier than any of the others.

> You might want to check out the counterobject.c approach I used
> to speed up the current for-loop in Python 1.5's ceval.c:
> it's basically a mutable C integer which is used instead of
> the current Python integer index.

> The details can be found in my old patch:
> 
>   http://starship.python.net/crew/lemburg/mxPython-1.5.patch.gz

Ah, yes, that's what I was thinking of.

> """ Generic object iterators.
[...]

Thanks -- yes, that's what I was thinking of.  Did you just whip this
up?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Tue Aug 22 09:58:12 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 09:58:12 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com>  
	            <39A0FE94.1AF5FABF@lemburg.com> <200008220332.WAA02661@cj20424-a.reston1.va.home.com>
Message-ID: <39A23294.B2DB3C77@lemburg.com>

Guido van Rossum wrote:
> 
> [BDFL]
> > > The statement
> > >
> > >   for <variable> in <object>: <block>
> > >
> > > should translate into this kind of pseudo-code:
> > >
> > >   # variant 1
> > >   __temp = <object>.newiterator()
> > >   while 1:
> > >       try: <variable> = __temp.next()
> > >       except ExhaustedIterator: break
> > >       <block>
> > >
> > > or perhaps (to avoid the relatively expensive exception handling):
> > >
> > >   # variant 2
> > >   __temp = <object>.newiterator()
> > >   while 1:
> > >       __flag, <variable> = __temp.next()
> > >       if not __flag: break
> > >       <block>
> 
> [MAL]
> > How about a third variant:
> >
> > #3:
> > __iter = <object>.iterator()
> > while __iter:
> >    <variable> = __iter.next()
> >    <block>
> >
> > This adds a slot call, but removes the malloc overhead introduced
> > by returning a tuple for every iteration (which is likely to be
> > a performance problem).
> 
> Are you sure the slot call doesn't cause some malloc overhead as well?

Quite likely not, since the slot in question doesn't generate
Python objects (nb_nonzero).
 
> Anyway, the problem with this one is that it requires a dynamic
> iterator (one that generates values on the fly, e.g. something reading
> lines from a pipe) to hold on to the next value between "while __iter"
> and "__iter.next()".

Hmm, that depends on how you look at it: I was thinking in terms
of reading from a file -- feof() is true as soon as the end of
file is reached. The same could be done for iterators.

We might also consider a mixed approach:

#5:
__iter = <object>.iterator()
while __iter:
   try:
       <variable> = __iter.next()
   except ExhaustedIterator:
       break
   <block>

Some iterators may want to signal the end of iteration using
an exception, others via the truth test prior to calling .next(),
e.g. a list iterator can easily implement the truth test
variant, while an iterator with complex .next() processing
might want to use the exception variant.
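The exception variant can be sketched in plain Python today;
`ExhaustedIterator` and the `.next()` method are names from this
proposal, not an existing protocol:

```python
# Sketch of the exception-signalled iteration variant: the iterator's
# next() raises ExhaustedIterator when the sequence is used up, and the
# expanded for-loop catches it to terminate (per variant 1).
class ExhaustedIterator(Exception):
    pass

class ListIterator:
    def __init__(self, seq):
        self.seq = seq
        self.pos = 0

    def next(self):
        if self.pos >= len(self.seq):
            raise ExhaustedIterator
        value = self.seq[self.pos]
        self.pos += 1
        return value

# What "for x in obj: <block>" would expand to:
result = []
it = ListIterator([1, 2, 3])
while 1:
    try:
        x = it.next()
    except ExhaustedIterator:
        break
    result.append(x)
print(result)   # [1, 2, 3]
```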

Another possibility would be using exception class objects
as singleton indicators of "end of iteration":

#6:
__iter = <object>.iterator()
while 1:
   try:
       rc = __iter.next()
   except ExhaustedIterator:
       break
   else:
       if rc is ExhaustedIterator:
           break
   <variable> = rc
   <block>

> > Another possibility would be using an iterator attribute
> > to get at the variable:
> >
> > #4:
> > __iter = <object>.iterator()
> > while 1:
> >    if not __iter.next():
> >         break
> >    <variable> = __iter.value
> >    <block>
> 
> Uglier than any of the others.
> 
> > You might want to check out the counterobject.c approach I used
> > to speed up the current for-loop in Python 1.5's ceval.c:
> > it's basically a mutable C integer which is used instead of
> > the current Python integer index.
> 
> > The details can be found in my old patch:
> >
> >   http://starship.python.net/crew/lemburg/mxPython-1.5.patch.gz
> 
> Ah, yes, that's what I was thinking of.
> 
> > """ Generic object iterators.
> [...]
> 
> Thanks -- yes, that's what I was thinking of.  Did you just whip this
> up?

The file says: Feb 2000... I don't remember what I wrote it for;
it's in my lib/ dir meaning that it qualified as general purpose
utility :-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Tue Aug 22 10:01:40 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 10:01:40 +0200
Subject: [Python-Dev] Looking for an "import" expert
References: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com> <200008220000.TAA27482@cj20424-a.reston1.va.home.com>
Message-ID: <39A23364.356E9EE4@lemburg.com>

Guido van Rossum wrote:
> 
> If the tests are run "the modern way" (python ../Lib/test/regrtest.py)
> then the test module is the script directory and it is on the path, so
> "import test_support" sees and loads a toplevel module test_support.
> Then "import test.test_support" sees a package test with a
> test_support submodule which is assumed to be a different one, so it
> is loaded again.
> 
> But if the tests are run via "import test.autotest" (or "import
> test.regrtest; test.regrtest.main()" the "import test_support" knows
> that the importing module is in the test package, so it first tries to
> import the test_support submodule from that package, so
> test.test_support and (plain) test_support are the same.
> 
> Conclusion: inside the test package, never refer explicitly to the
> test package.  Always use "import test_support".  Never "import
> test.test_support" or "from test.test_support import verbose" or "from
> test import test_support".

I'd rather suggest a different convention: *always* import
using the full path, i.e. "from test import test_support".

This scales much better and also avoids a nasty problem with
Python pickles, related to much the same issue Tim found here:
dual import of subpackage modules (note that pickle will always
do the full-path import).
 
> This is one for the README!
> 
> I've fixed this by checking in a small patch to test_getopt.py and the
> corresponding output file (because of the bug, the output file was
> produced under verbose mode).
> 
> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jack at oratrix.nl  Tue Aug 22 11:34:20 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 22 Aug 2000 11:34:20 +0200
Subject: [Python-Dev] New anal crusade 
In-Reply-To: Message by "Tim Peters" <tim_one@email.msn.com> ,
	     Sat, 19 Aug 2000 13:34:28 -0400 , <LNBBLJKPBEHFEDALKOLCGEMCHAAA.tim_one@email.msn.com> 
Message-ID: <20000822093420.AE00B303181@snelboot.oratrix.nl>

> Has anyone tried compiling Python under gcc with
> 
>     -Wmissing-prototypes -Wstrict-prototypes

I have a similar set of options (actually it's difficult to turn them off if
you're checking for prototypes:-) which make the CodeWarrior compiler for
the Mac strict about prototypes: it complains about external functions
being declared without a prototype in scope. I was initially baffled by this:
why would it want a prototype if the function declaration is ansi-style
anyway? But it turns out it's a really neat warning: usually if you declare an
external without a prototype in scope it means that it isn't declared in a .h
file, which means that either (a) it shouldn't have been an extern in the
first place or (b) you've duplicated the prototype in an external declaration
somewhere else, which means the prototypes aren't necessarily identical.

For Python this option produces warnings for all the init routines, which is 
to be expected, but also for various other things (I seem to remember there's 
a couple in the GC code). If anyone is interested I can print them out and 
send them here.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From jack at oratrix.nl  Tue Aug 22 11:45:41 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 22 Aug 2000 11:45:41 +0200
Subject: [Python-Dev] iterators 
In-Reply-To: Message by "Guido van Rossum" <guido@python.org> ,
	     Fri, 18 Aug 2000 16:57:15 -0400 , <011601c00a1f$9923d460$7aa41718@beopen.com> 
Message-ID: <20000822094542.71467303181@snelboot.oratrix.nl>

>   it = something.newiterator()
>   for x over it:
>       if time_to_start_second_loop(): break
>       do_something()
>   for x over it:
>       do_something_else()

Another, similar, paradigm I find myself often using is something like
    tmplist = []
    for x in origlist:
        if x.has_some_property():
           tmplist.append(x)
        else:
           do_something()
    for x in tmplist:
        do_something_else()

I think I'd like it if iterators could do something like
    it = origlist.iterator()
    for x in it:
        if x.has_some_property():
           it.pushback()
        else:
           do_something()
    for x in it:
        do_something_else()
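
Jack's hypothetical `.pushback()` API can be sketched with an ordinary Python iterator class; the pass-boundary behaviour (pushed-back items are only delivered by a *later* loop) is an assumption about the intended semantics:

```python
# Sketch of an iterator with pushback: an item pushed back during one
# pass is re-delivered in the next pass over the same iterator.
# The .pushback() name follows Jack's hypothetical API.

class PushbackIterator:
    def __init__(self, seq):
        self._seq = seq
        self._pos = 0
        self._pushed = []   # items deferred to a later pass

    def __iter__(self):
        return self

    def __next__(self):
        if self._pos < len(self._seq):
            value = self._seq[self._pos]
            self._pos += 1
            self._last = value
            return value
        if self._pushed:
            # End the current pass; the next pass iterates
            # over the pushed-back items.
            self._seq = self._pushed
            self._pushed = []
            self._pos = 0
        raise StopIteration

    def pushback(self):
        # Defer the most recently yielded item to a later pass.
        self._pushed.append(self._last)

it = PushbackIterator([1, 2, 3, 4])
first_pass, second_pass = [], []
for x in it:
    if x % 2:            # stand-in for x.has_some_property()
        it.pushback()    # handle later
    else:
        first_pass.append(x)
for x in it:
    second_pass.append(x)
```

This removes the explicit `tmplist` from the original pattern, at the cost of hiding a list of the same size inside the iterator.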

--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From guido at beopen.com  Tue Aug 22 15:03:28 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 22 Aug 2000 08:03:28 -0500
Subject: [Python-Dev] iterators
In-Reply-To: Your message of "Tue, 22 Aug 2000 09:58:12 +0200."
             <39A23294.B2DB3C77@lemburg.com> 
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <200008220332.WAA02661@cj20424-a.reston1.va.home.com>  
            <39A23294.B2DB3C77@lemburg.com> 
Message-ID: <200008221303.IAA09992@cj20424-a.reston1.va.home.com>

> > [MAL]
> > > How about a third variant:
> > >
> > > #3:
> > > __iter = <object>.iterator()
> > > while __iter:
> > >    <variable> = __iter.next()
> > >    <block>
> > >
> > > This adds a slot call, but removes the malloc overhead introduced
> > > by returning a tuple for every iteration (which is likely to be
> > > a performance problem).
> > 
> > Are you sure the slot call doesn't cause some malloc overhead as well?
> 
> Quite likely not, since the slot in question doesn't generate
> Python objects (nb_nonzero).

Agreed only for built-in objects like lists.  For class instances this
would be way more expensive, because of the two calls vs. one!

> > Anyway, the problem with this one is that it requires a dynamic
> > iterator (one that generates values on the fly, e.g. something reading
> > lines from a pipe) to hold on to the next value between "while __iter"
> > and "__iter.next()".
> 
> Hmm, that depends on how you look at it: I was thinking in terms
> of reading from a file -- feof() is true as soon as the end of
> file is reached. The same could be done for iterators.

But feof() needs to read an extra character into the buffer if the
buffer is empty -- so it needs buffering!  That's what I'm trying to
avoid.

> We might also consider a mixed approach:
> 
> #5:
> __iter = <object>.iterator()
> while __iter:
>    try:
>        <variable> = __iter.next()
>    except ExhaustedIterator:
>        break
>    <block>
> 
> Some iterators may want to signal the end of iteration using
> an exception, others via the truth test prior to calling .next(),
> e.g. a list iterator can easily implement the truth test
> variant, while an iterator with complex .next() processing
> might want to use the exception variant.

Belt and suspenders.  What does this buy you over "while 1"?

> Another possibility would be using exception class objects
> as singleton indicators of "end of iteration":
> 
> #6:
> __iter = <object>.iterator()
> while 1:
>    try:
>        rc = __iter.next()
>    except ExhaustedIterator:
>        break
>    else:
>        if rc is ExhaustedIterator:
>            break
>    <variable> = rc
>    <block>

Then I'd prefer to use a single protocol:

    #7:
    __iter = <object>.iterator()
    while 1:
       rc = __iter.next()
       if rc is ExhaustedIterator:
	   break
       <variable> = rc
       <block>

This means there's a special value that you can't store in lists
though, and that would bite some introspection code (e.g. code listing
all internal objects).
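
Guido's sentinel protocol #7, and the objection to it, can both be sketched directly (again using the proposal's names, not a real API):

```python
# Sketch of protocol #7: .next() returns a sentinel object instead
# of raising.  ExhaustedIterator is used here purely as a singleton
# marker, per the proposal.

class ExhaustedIterator(Exception):
    """Class object doubles as the end-of-iteration sentinel."""

class SentinelIterator:
    def __init__(self, seq):
        self._seq = seq
        self._pos = 0

    def next(self):
        if self._pos >= len(self._seq):
            return ExhaustedIterator      # returned, not raised
        value = self._seq[self._pos]
        self._pos += 1
        return value

result = []
__iter = SentinelIterator(["a", "b"])
while 1:
    rc = __iter.next()
    if rc is ExhaustedIterator:
        break
    result.append(rc)

# The objection: the sentinel cannot itself appear as a value.
# A list that happens to contain it gets silently truncated:
trap = SentinelIterator([1, ExhaustedIterator, 2])
seen = []
while 1:
    rc = trap.next()
    if rc is ExhaustedIterator:
        break
    seen.append(rc)
```

The truncated second loop is exactly the introspection hazard described above: any container holding the sentinel lies about its length under this protocol.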

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From fredrik at pythonware.com  Tue Aug 22 14:27:11 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 22 Aug 2000 14:27:11 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is...
Message-ID: <002201c00c34$4cbb9b50$0900a8c0@SPIFF>

well, see for yourself:
http://www.pythonlabs.com/logos.html






From thomas.heller at ion-tof.com  Tue Aug 22 15:13:20 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Tue, 22 Aug 2000 15:13:20 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
References: <200008221210.FAA25857@slayer.i.sourceforge.net>
Message-ID: <001501c00c3a$becdaac0$4500a8c0@thomasnb>

> Update of /cvsroot/python/python/dist/src/PCbuild
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv25776
> 
> Modified Files:
> python20.wse 
> Log Message:
> Thomas Heller noticed that the wrong registry entry was written for
> the DLL.  Replace
>  %_SYSDEST_%\Python20.dll
> with
>  %_DLLDEST_%\Python20.dll.
> 
Unfortunately, there was a bug in my bug-report.

%DLLDEST% would have been correct.
Sorry: Currently I don't have time to test the fix.

Thomas Heller




From MarkH at ActiveState.com  Tue Aug 22 15:32:25 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 22 Aug 2000 23:32:25 +1000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
In-Reply-To: <001501c00c3a$becdaac0$4500a8c0@thomasnb>
Message-ID: <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com>

> > Modified Files:
> > python20.wse
> > Log Message:
> > Thomas Heller noticed that the wrong registry entry was written for
> > the DLL.  Replace
> >  %_SYSDEST_%\Python20.dll
> > with
> >  %_DLLDEST_%\Python20.dll.
> >
> Unfortunately, there was a bug in my bug-report.

Actually, there is no need to write that entry at all!  It should be
removed.  I thought it was, ages ago.

Mark.




From thomas at xs4all.net  Tue Aug 22 15:35:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 22 Aug 2000 15:35:13 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is...
In-Reply-To: <002201c00c34$4cbb9b50$0900a8c0@SPIFF>; from fredrik@pythonware.com on Tue, Aug 22, 2000 at 02:27:11PM +0200
References: <002201c00c34$4cbb9b50$0900a8c0@SPIFF>
Message-ID: <20000822153512.H4933@xs4all.nl>

On Tue, Aug 22, 2000 at 02:27:11PM +0200, Fredrik Lundh wrote:

> well, see for yourself:
> http://www.pythonlabs.com/logos.html

Oh, that reminds me, the FAQ needs adjusting ;) It still says:
"""
1.2. Why is it called Python?

Apart from being a computer scientist, I'm also a fan of "Monty Python's
Flying Circus" (a BBC comedy series from the seventies, in the -- unlikely
-- case you didn't know). It occurred to me one day that I needed a name
that was short, unique, and slightly mysterious. And I happened to be
reading some scripts from the series at the time... So then I decided to
call my language Python. But Python is not a joke. And don't you associate
it with dangerous reptiles either! (If you need an icon, use an image of the
16-ton weight from the TV series or of a can of SPAM :-)
"""
 
And while I'm at it, I hope I can say without offending anyone that the
logo is open for criticism. Few (if any?) python species look that
green, making the logo look more like an adder. And I think the more
majestic python species are cooler on a logo, anyway ;) If whoever makes the
logos wants, I can go visit the small reptile-zoo around the corner from
where I live and snap some pictures of the Pythons they have there (they
have great Tiger-Pythons, including an albino one!)

I-still-like-the-shirt-though!-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas.heller at ion-tof.com  Tue Aug 22 15:52:16 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Tue, 22 Aug 2000 15:52:16 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
References: <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com>
Message-ID: <002b01c00c40$2e7b32c0$4500a8c0@thomasnb>

> > > Modified Files:
> > > python20.wse
> > > Log Message:
> > > Thomas Heller noticed that the wrong registry entry was written for
> > > the DLL.  Replace
> > >  %_SYSDEST_%\Python20.dll
> > > with
> > >  %_DLLDEST_%\Python20.dll.
> > >
> > Unfortunately, there was a bug in my bug-report.
> 
> Actually, there is no need to write that entry at all!  It should be
> removed.  I thought it was, ages ago.

I would like to use this entry to find the python-interpreter
belonging to a certain registry entry.

How would you do it if this entry is missing?
Guess the name python<major-version/minor-version>.dll???

Thomas Heller




From guido at beopen.com  Tue Aug 22 17:04:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 22 Aug 2000 10:04:33 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
In-Reply-To: Your message of "Tue, 22 Aug 2000 23:32:25 +1000."
             <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com> 
References: <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com> 
Message-ID: <200008221504.KAA01151@cj20424-a.reston1.va.home.com>

> From: "Mark Hammond" <MarkH at ActiveState.com>
> To: "Thomas Heller" <thomas.heller at ion-tof.com>, <python-dev at python.org>
> Date: Tue, 22 Aug 2000 23:32:25 +1000
> 
> > > Modified Files:
> > > python20.wse
> > > Log Message:
> > > Thomas Heller noticed that the wrong registry entry was written for
> > > the DLL.  Replace
> > >  %_SYSDEST_%\Python20.dll
> > > with
> > >  %_DLLDEST_%\Python20.dll.
> > >
> > Unfortunately, there was a bug in my bug-report.

Was that last line Thomas Heller speaking?  I didn't see that mail!
(Maybe because my machine crashed due to an unexpected power outage a
few minutes ago.)

> Actually, there is no need to write that entry at all!  It should be
> removed.  I thought it was, ages ago.

OK, will do, but for 2.0 only.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From MarkH at ActiveState.com  Tue Aug 22 16:04:08 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Wed, 23 Aug 2000 00:04:08 +1000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
In-Reply-To: <002b01c00c40$2e7b32c0$4500a8c0@thomasnb>
Message-ID: <ECEPKNMJLHAPFFJHDOJBOEOPDFAA.MarkH@ActiveState.com>

[Me, about removing the .DLL entry from the registry]

> > Actually, there is no need to write that entry at all!  It should be
> > removed.  I thought it was, ages ago.

[Thomas]
> I would like to use this entry to find the python-interpreter
> belonging to a certain registry entry.
>
> How would you do it if this entry is missing?
> Guess the name python<major-version/minor-version>.dll???

I think I am responsible for this registry entry in the first place.
Pythonwin/COM etc. went down the path of locating and loading the Python
DLL from the registry, but it has since all been long removed.

The basic problem is that there is only _one_ acceptable Python DLL for a
given version, regardless of what that particular registry says!  If the
registry points to the "wrong" DLL, things start to go wrong pretty quick,
and in not-so-obvious ways!

I think it is better to LoadLibrary("Python%d.dll") (or GetModuleHandle()
if you know Python is initialized) - this is what the system itself will
soon be doing to start Python up anyway!

Mark.




From skip at mojam.com  Tue Aug 22 16:45:27 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 22 Aug 2000 09:45:27 -0500 (CDT)
Subject: [Python-Dev] commonprefix - the beast just won't die...
Message-ID: <14754.37383.913840.582313@beluga.mojam.com>

I reverted the changes to {posix,nt,dos}path.commonprefix this morning,
updated the tests (still to be checked in) and was starting to work on
documentation changes, when I realized that something Guido said about using
dirname to trim to the common directory prefix is probably not correct.
Here's an example.  The common prefix of ["/usr/local", "/usr/local/bin"] is
"/usr/local".  If you blindly apply dirname to that (which is what I think
Guido suggested as the way to make commonprefix do what I wanted), you wind
up with "/usr", which isn't going to be correct on most Unix flavors.
Instead, you need to check that the prefix doesn't exist or isn't a
directory before applying dirname.  (And of course, that only works on the
machine containing the paths in question.  You should be able to import
posixpath on a Mac and feed it Unix-style paths, which you won't be able to
check for existence.)

Based on this problem, I'm against documenting using dirname to trim the
commonprefix output to a directory prefix.  I'm going to submit a patch with
the test case and minimal documentation changes and leave it at that for
now.
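
The pitfall is easy to demonstrate with os.path (commonprefix is purely textual, so the result need not be a valid path component):

```python
import os.path

# os.path.commonprefix compares character by character; it returns
# the longest common *string* prefix, not a path prefix.
paths = ["/usr/local", "/usr/local/bin"]
prefix = os.path.commonprefix(paths)
# Here the prefix happens to be a real directory name...
assert prefix == "/usr/local"
# ...so blindly applying dirname over-trims it:
assert os.path.dirname(prefix) == "/usr"

# The textual nature also shows up with sibling names:
assert os.path.commonprefix(["/home/swenson", "/home/swenniker"]) \
       == "/home/swen"
```

As the message says, deciding whether `dirname` should be applied requires a filesystem check, which only works on the machine the paths belong to.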

Skip



From mal at lemburg.com  Tue Aug 22 16:43:50 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 16:43:50 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <200008220332.WAA02661@cj20424-a.reston1.va.home.com>  
	            <39A23294.B2DB3C77@lemburg.com> <200008221303.IAA09992@cj20424-a.reston1.va.home.com>
Message-ID: <39A291A6.3DC7A4E3@lemburg.com>

Guido van Rossum wrote:
> 
> > > [MAL]
> > > > How about a third variant:
> > > >
> > > > #3:
> > > > __iter = <object>.iterator()
> > > > while __iter:
> > > >    <variable> = __iter.next()
> > > >    <block>
> > > >
> > > > This adds a slot call, but removes the malloc overhead introduced
> > > > by returning a tuple for every iteration (which is likely to be
> > > > a performance problem).
> > >
> > > Are you sure the slot call doesn't cause some malloc overhead as well?
> >
> > Quite likely not, since the slot in question doesn't generate
> > Python objects (nb_nonzero).
> 
> Agreed only for built-in objects like lists.  For class instances this
> would be way more expensive, because of the two calls vs. one!

True.
 
> > > Anyway, the problem with this one is that it requires a dynamic
> > > iterator (one that generates values on the fly, e.g. something reading
> > > lines from a pipe) to hold on to the next value between "while __iter"
> > > and "__iter.next()".
> >
> > Hmm, that depends on how you look at it: I was thinking in terms
> > of reading from a file -- feof() is true as soon as the end of
> > file is reached. The same could be done for iterators.
> 
> But feof() needs to read an extra character into the buffer if the
> buffer is empty -- so it needs buffering!  That's what I'm trying to
> avoid.

Ok.
 
> > We might also consider a mixed approach:
> >
> > #5:
> > __iter = <object>.iterator()
> > while __iter:
> >    try:
> >        <variable> = __iter.next()
> >    except ExhaustedIterator:
> >        break
> >    <block>
> >
> > Some iterators may want to signal the end of iteration using
> > an exception, others via the truth test prior to calling .next(),
> > e.g. a list iterator can easily implement the truth test
> > variant, while an iterator with complex .next() processing
> > might want to use the exception variant.
> 
> Belt and suspenders.  What does this buy you over "while 1"?

It gives you two possible ways to signal "end of iteration".
But your argument about Python iterators (as opposed to
builtin ones) applies here as well, so I withdraw this one :-)
 
> > Another possibility would be using exception class objects
> > as singleton indicators of "end of iteration":
> >
> > #6:
> > __iter = <object>.iterator()
> > while 1:
> >    try:
> >        rc = __iter.next()
> >    except ExhaustedIterator:
> >        break
> >    else:
> >        if rc is ExhaustedIterator:
> >            break
> >    <variable> = rc
> >    <block>
> 
> Then I'd prefer to use a single protocol:
> 
>     #7:
>     __iter = <object>.iterator()
>     while 1:
>        rc = __iter.next()
>        if rc is ExhaustedIterator:
>            break
>        <variable> = rc
>        <block>
> 
> This means there's a special value that you can't store in lists
> though, and that would bite some introspection code (e.g. code listing
> all internal objects).

Which brings us back to the good old "end of iteration" == raise
an exception logic :-)

Would this really hurt all that much in terms of performance?
I mean, today's for-loop code uses IndexError for much the same
thing...
 
    #8:
    __iter = <object>.iterator()
    while 1:
       try:
           <variable> = __iter.next()
       except ExhaustedIterator:
           break
       <block>

Since this will be written in C, we don't even have the costs
of setting up an exception block.

I would still suggest that the iterator provides the current
position and iteration value as attributes. This avoids some
caching of those values and also helps when debugging code
using introspection tools.

The positional attribute will probably have to be optional
since not all iterators can supply this information, but
the .value attribute is certainly within range (it would
cache the value returned by the last .next() or .prev()
call).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Tue Aug 22 17:16:38 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 22 Aug 2000 17:16:38 +0200
Subject: [Python-Dev] commonprefix - the beast just won't die...
In-Reply-To: <14754.37383.913840.582313@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 22, 2000 at 09:45:27AM -0500
References: <14754.37383.913840.582313@beluga.mojam.com>
Message-ID: <20000822171638.I4933@xs4all.nl>

On Tue, Aug 22, 2000 at 09:45:27AM -0500, Skip Montanaro wrote:

> I reverted the changes to {posix,nt,dos}path.commonprefix this morning,
> updated the tests (still to be checked in) and was starting to work on
> documentation changes, when I realized that something Guido said about using
> dirname to trim to the common directory prefix is probably not correct.
> Here's an example.  The common prefix of ["/usr/local", "/usr/local/bin"] is
> "/usr/local".  If you blindly apply dirname to that (which is what I think
> Guido suggested as the way to make commonprefix do what I wanted), you wind
> up with "/usr", which isn't going to be correct on most Unix flavors.
> Instead, you need to check that the prefix doesn't exist or isn't a
> directory before applying dirname.

And even that won't work, in a case like this:

/home/swenson/
/home/swen/

(common prefix would be /home/swen, which is a directory) or cases like
this:

/home/swenson/
/home/swenniker/

where another directory called
/home/swen

exists.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Tue Aug 22 20:14:56 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 20:14:56 +0200
Subject: [Python-Dev] Adding doc-strings to attributes [with patch]
Message-ID: <39A2C320.ADF5E80F@lemburg.com>

Here's a patch which roughly implements the proposed attribute
doc-string syntax and semantics:

class C:
        " class C doc-string "

        a = 1
        " attribute C.a doc-string "

        b = 2
        " attribute C.b doc-string "

The compiler would handle these cases as follows:

" class C doc-string " -> C.__doc__
" attribute C.a doc-string " -> C.__doc__a__
" attribute C.b doc-string " -> C.__doc__b__

The name mangling assures that attribute doc-strings

a) participate in class inheritance and
b) are treated as special attributes (following the __xxx__
   convention)

Also, the look&feel of this convention is similar to that
of the other existing conventions: the doc string follows
the definition of the object.
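
The effect of the proposed mangling can be simulated today by writing the mangled names out by hand (nothing below is an existing Python feature; the compiler patch would generate these assignments from the bare doc-strings):

```python
# Hand-written equivalent of what the proposed compiler patch would
# produce: each attribute doc-string becomes a __doc__<name>__ class
# attribute, so it follows normal class inheritance.

class C:
    "class C doc-string"
    a = 1
    __doc__a__ = "attribute C.a doc-string"
    b = 2
    __doc__b__ = "attribute C.b doc-string"

class D(C):
    "class D doc-string"
    b = 3
    __doc__b__ = "attribute D.b doc-string"

# D inherits the doc-string for .a but overrides the one for .b,
# exactly like the attributes themselves.
```

Note that the trailing double underscore matters: a name like `__doc__a__` ends in two underscores, so it escapes private name mangling inside class bodies.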

The patch is a little rough in the sense that binding the
doc-string to the attribute name is done using a helper
variable that is not reset by non-expressions during the
compile. Shouldn't be too hard to fix though... at least
not for one of you compiler wizards ;-)

What do you think?

[I will probably have to write a PEP for this...]

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/
-------------- next part --------------
--- CVS-Python/Python/compile.c	Tue Aug 22 10:31:06 2000
+++ Python+Unicode/Python/compile.c	Tue Aug 22 19:59:30 2000
@@ -293,10 +293,11 @@ struct compiling {
 	int c_last_addr, c_last_line, c_lnotab_next;
 #ifdef PRIVATE_NAME_MANGLING
 	char *c_private;	/* for private name mangling */
 #endif
 	int c_tmpname;		/* temporary local name counter */
+        char *c_last_name;       /* last assigned name */
 };
 
 
 /* Error message including line number */
 
@@ -435,12 +436,13 @@ com_init(struct compiling *c, char *file
 	c->c_stacklevel = 0;
 	c->c_maxstacklevel = 0;
 	c->c_firstlineno = 0;
 	c->c_last_addr = 0;
 	c->c_last_line = 0;
-	c-> c_lnotab_next = 0;
+	c->c_lnotab_next = 0;
 	c->c_tmpname = 0;
+	c->c_last_name = 0;
 	return 1;
 	
   fail:
 	com_free(c);
  	return 0;
@@ -1866,10 +1868,11 @@ com_assign_name(struct compiling *c, nod
 {
 	REQ(n, NAME);
 	com_addopname(c, assigning ? STORE_NAME : DELETE_NAME, n);
 	if (assigning)
 		com_pop(c, 1);
+	c->c_last_name = STR(n);
 }
 
 static void
 com_assign(struct compiling *c, node *n, int assigning)
 {
@@ -1974,18 +1977,40 @@ com_assign(struct compiling *c, node *n,
 		}
 	}
 }
 
 /* Forward */ static node *get_rawdocstring(node *);
+/* Forward */ static PyObject *get_docstring(node *);
 
 static void
 com_expr_stmt(struct compiling *c, node *n)
 {
 	REQ(n, expr_stmt); /* testlist ('=' testlist)* */
-	/* Forget it if we have just a doc string here */
-	if (!c->c_interactive && NCH(n) == 1 && get_rawdocstring(n) != NULL)
+	/* Handle attribute doc-strings here */
+	if (!c->c_interactive && NCH(n) == 1) {
+	    node *docnode = get_rawdocstring(n);
+	    if (docnode != NULL) {
+		if (c->c_last_name) {
+		    PyObject *doc = get_docstring(docnode);
+		    int i = com_addconst(c, doc);
+		    char docname[420];
+#if 0
+		    printf("found doc-string '%s' for '%s'\n", 
+			   PyString_AsString(doc),
+			   c->c_last_name);
+#endif
+		    sprintf(docname, "__doc__%.400s__", c->c_last_name);
+		    com_addoparg(c, LOAD_CONST, i);
+		    com_push(c, 1);
+		    com_addopnamestr(c, STORE_NAME, docname);
+		    com_pop(c, 1);
+		    c->c_last_name = NULL;
+		    Py_DECREF(doc);
+		}
 		return;
+	    }
+	}
 	com_node(c, CHILD(n, NCH(n)-1));
 	if (NCH(n) == 1) {
 		if (c->c_interactive)
 			com_addbyte(c, PRINT_EXPR);
 		else

From donb at init.com  Tue Aug 22 21:16:39 2000
From: donb at init.com (Donald Beaudry)
Date: Tue, 22 Aug 2000 15:16:39 -0400
Subject: [Python-Dev] Adding insint() function 
References: <20000818110037.C27419@kronos.cnri.reston.va.us>
Message-ID: <200008221916.PAA17130@zippy.init.com>

Andrew Kuchling <akuchlin at mems-exchange.org> wrote,
> This duplication bugs me.  Shall I submit a patch to add an API
> convenience function to do this, and change the modules to use it?
> Suggested prototype and name: PyDict_InsertInteger( dict *, string,
> long)

+0 but...

...why not:

   PyDict_SetValueString(PyObject* dict, char* key, char* fmt, ...)

and

   PyDict_SetValue(PyObject* dict, PyObject* key, char* fmt, ...)

where the fmt is Py_BuildValue() format string and the ... is, of
course, the argument list.

--
Donald Beaudry                                     Ab Initio Software Corp.
                                                   201 Spring Street
donb at init.com                                      Lexington, MA 02421
                      ...Will hack for sushi...



From ping at lfw.org  Tue Aug 22 22:02:31 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 22 Aug 2000 13:02:31 -0700 (PDT)
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import something as'
In-Reply-To: <200008212030.PAA26887@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008212144360.416-100000@skuld.lfw.org>

On Mon, 21 Aug 2000, Guido van Rossum wrote:
> > > > > Summary: Allow all assignment expressions after 'import
> > > > > something as'
[...]
> I kind of doubt it, because it doesn't look useful.

Looks potentially useful to me.  If nothing else, it's certainly
easier to explain than any other behaviour i could think of, since
assignment is already well-understood.

> I do want "import foo.bar as spam" back, assigning foo.bar to spam.

No no no no.  Or at least let's step back and look at the whole
situation first.

"import foo.bar as spam" makes me uncomfortable because:

    (a) It's not clear whether spam should get foo or foo.bar, as
        evidenced by the discussion between Gordon and Thomas.

    (b) There's a straightforward and unambiguous way to express
        this already: "from foo import bar as spam".

    (c) It's not clear whether this should work only for modules
        named bar, or any symbol named bar.


Before packages, the only two forms of the import statement were:

    import <module>
    from <module> import <symbol>

After packages, the permitted forms are now:

    import <module>
    import <package>
    import <pkgpath>.<module>
    import <pkgpath>.<package>
    from <module> import <symbol>
    from <package> import <module>
    from <pkgpath>.<module> import <symbol>
    from <pkgpath>.<package> import <module>

where a <pkgpath> is a dot-separated list of package names.

With "as" clauses, we could permit:

    import <module> as <localmodule>
    import <package> as <localpackage>
??  import <pkgpath>.<module> as <localmodule>
??  import <pkgpath>.<package> as <localpackage>
??  import <module>.<symbol> as <localsymbol>
??  import <pkgpath>.<module>.<symbol> as <localsymbol>
    from <module> import <symbol> as <localsymbol>
    from <package> import <symbol> as <localsymbol>
    from <pkgpath>.<module> import <symbol> as <localsymbol>
    from <pkgpath>.<package> import <module> as <localmodule>

It's not clear that we should allow "as" on the forms marked with
??, since the other six clearly identify the thing being renamed
and they do not.

Also note that all the other forms using "as" assign exactly one
thing: the name after the "as".  Would the forms marked with ??
assign just the name after the "as" (consistent with the other
"as" forms), or also the top-level package name as well (consistent
with the current behaviour of "import <pkgpath>.<module>")?

That is, would

    import foo.bar as spam

define just spam or both foo and spam?
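
The existing binding rules referred to above can be checked with a throwaway package, built here in a temp directory purely for illustration (the names `foo`, `bar`, and `MARKER` are invented for the demo):

```python
import os
import sys
import tempfile

# Build a throwaway package "foo" with a submodule "bar".
tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, "foo"))
open(os.path.join(tmp, "foo", "__init__.py"), "w").close()
with open(os.path.join(tmp, "foo", "bar.py"), "w") as f:
    f.write("MARKER = 42\n")
sys.path.insert(0, tmp)

# "import <pkgpath>.<module>" binds the *top-level* package name:
import foo.bar
assert foo.bar.MARKER == 42     # "foo" itself got bound

# "from <package> import <module> as <name>" binds only the new
# name, which is why it is the unambiguous spelling:
from foo import bar as spam
assert spam.MARKER == 42
```

The open question in the message is which of these two behaviours `import foo.bar as spam` should mimic; the sketch only shows what each existing form binds.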

All these questions make me uncertain...


-- ?!ng




From jack at oratrix.nl  Wed Aug 23 00:03:24 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 23 Aug 2000 00:03:24 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is... 
In-Reply-To: Message by Thomas Wouters <thomas@xs4all.net> ,
	     Tue, 22 Aug 2000 15:35:13 +0200 , <20000822153512.H4933@xs4all.nl> 
Message-ID: <20000822220329.19A91E266F@oratrix.oratrix.nl>

Ah, forget about the snake. It was an invention of
those-who-watch-blue-screens, and I guess Guido only jumped on the
bandwagon because those-who-gave-us-bluescreens offered him lots of
money or something.

On Real Machines Python still uses the One And Only True Python Icon,
and will continue to do so by popular demand (although
those-who-used-to-watch-bluescreens-but-saw-the-light occasionally
complain:-).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 



From tim_one at email.msn.com  Wed Aug 23 04:43:04 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 22 Aug 2000 22:43:04 -0400
Subject: Not commonprefix (was RE: [Python-Dev] commonprefix - the beast just won't die...)
In-Reply-To: <20000822171638.I4933@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEGDHBAA.tim_one@email.msn.com>

[Skip Montanaro]
> I reverted the changes to {posix,nt,dos}path.commonprefix this morning,
> updated the tests (still to be checked in) and was starting to work on
> documentation changes, when I realized that something Guido
> said about using dirname to trim to the common directory prefix is
> probably not correct.  Here's an example.  The common prefix of
>     ["/usr/local", "/usr/local/bin"]
> is
>     "/usr/local"
> If you blindly apply dirname to that (which is what I think Guido
> suggested as the way to make commonprefix do what I wanted), you wind
> up with
>     "/usr"
> which isn't going to be correct on most Unix flavors.  Instead, you
> need to check that the prefix doesn't exist or isn't a directory
> before applying dirname.

[Thomas Wouters]
> And even that won't work, in a case like this:
>
> /home/swenson/
> /home/swen/
>
> (common prefix would be /home/swen, which is a directory)

Note that Guido's suggestion does work for that, though.

> or cases like this:
>
> /home/swenson/
> /home/swenniker/
>
> where another directory called
> /home/swen
>
> exists.

Ditto.  This isn't coincidence:  Guido's suggestion works as-is *provided
that* each dirname in the original collection ends with a path separator.
Change Skip's example to

    ["/usr/local/", "/usr/local/bin/"]
                ^ stuck in slashes ^

and Guido's suggestion works fine too.  But these are purely
string-crunching functions, and "/usr/local" *screams* "directory" to people
thanks to its specific name.  Let's make the test case absurdly "simple":

    ["/x/y", "/x/y"]

What's the "common (directory) prefix" here?  Well, there's simply no way to
know at this level.  It's /x/y if y is a directory, or /x if y's just a
regular file.  Guido's suggestion returns /x in this case, or /x/y if you
add trailing slashes to both.  If you don't tell a string gimmick which
inputs are and aren't directories, you can't expect it to guess.
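Tim's point is easy to verify interactively. A quick sketch (using `posixpath` explicitly so the results don't depend on the host OS; in today's syntax rather than 2000-era Python):

```python
import posixpath  # the Unix flavor of os.path, for reproducible separators

# commonprefix is purely character-based; it knows nothing about directories.
print(posixpath.commonprefix(["/usr/local", "/usr/local/bin"]))  # "/usr/local"
print(posixpath.dirname("/usr/local"))                           # "/usr" -- Skip's surprise

# With trailing separators stuck in, Guido's dirname trick works:
print(posixpath.dirname(posixpath.commonprefix(
    ["/usr/local/", "/usr/local/bin/"])))                        # "/usr/local"

# The swenson/swen case is also handled, because the shared "swen" run
# stops before any separator and dirname trims it back to "/home":
print(posixpath.commonprefix(["/home/swenson/", "/home/swen/"])) # "/home/swen"
```

This is exactly the "string gimmick" behavior Tim describes: without trailing slashes telling it which inputs are directories, the function can only crunch characters.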

I'll say again, if you want a new function, press for one!  Just leave
commonprefix alone.






From tim_one at email.msn.com  Wed Aug 23 06:32:32 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 00:32:32 -0400
Subject: [Python-Dev] 2.0 Release Plans
Message-ID: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>

PythonLabs had a high-decibel meeting today (well, Monday), culminating in
Universal Harmony.  Jeremy will be updating PEP 200 accordingly.  Just three
highlights:

+ The release schedule has Officially Slipped by one week:  2.0b1 will ship
a week from this coming Monday.  There are too many Open and Accepted
patches backed up to meet the original date.  Also problems cropping up,
like new std tests failing to pass every day (you're not supposed to do
that, you know!  consider yourself clucked at), and patches having to be
redone because of other patches getting checked in.  We want to take the
extra week to do this right, and give you more space to do *your* part
right.

+ While only one beta release is scheduled at this time, we reserve the
right to make a second beta release if significant problems crop up during
the first beta period.  Of course that would cause additional slippage of
2.0 final, if it becomes necessary.  Note that "no features after 2.0b1 is
out!" still stands, regardless of how many beta releases there are.

+ I changed the Patch Manager guidelines at

     http://python.sourceforge.net/sf-faq.html#a1

to better reflect the way we're actually using the beast.  In a nutshell,
Rejected has been changed to mean "this patch is dead, period"; and Open
patches that are awaiting resolution of complaints should remain Open.

All right.  Time for inspiration.  From my point of view, you've all had
waaaaaay too much sleep in August!  Pull non-stop all-nighters until 2.0b1
is out the door, or go work on some project for sissies -- like Perl 6.0 or
the twelve thousandth implementation of Scheme <wink>.

no-guts-no-glory-slow-and-steady-wins-the-race-ly y'rs  - tim





From esr at thyrsus.com  Wed Aug 23 07:16:01 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Wed, 23 Aug 2000 01:16:01 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 23, 2000 at 12:32:32AM -0400
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
Message-ID: <20000823011601.D29063@thyrsus.com>

Tim Peters <tim_one at email.msn.com>:
> All right.  Time for inspiration.  From my point of view, you've all had
> waaaaaay too much sleep in August!  Pull non-stop all-nighters until 2.0b1
> is out the door, or go work on some project for sissies -- like Perl 6.0 or
> the twelve thousandth implementation of Scheme <wink>.
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Hey!  I *resemble* that remark!

I don't think I'm presently responsible for anything critical.  If I've
spaced something, somebody tell me now.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

"What, then, is law [government]? It is the collective organization of
the individual right to lawful defense."
	-- Frederic Bastiat, "The Law"



From tim_one at email.msn.com  Wed Aug 23 08:57:07 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 02:57:07 -0400
Subject: [Python-Dev] ...and the new name for our favourite little language is...
In-Reply-To: <20000822153512.H4933@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGLHBAA.tim_one@email.msn.com>

[F]
> well, see for yourself:
>     http://www.pythonlabs.com/logos.html

We should explain that.  I'll let Bob Weiner (BeOpen's CTO) do it instead,
though, because he explained it well to us:

<BOB>
From: weiner at beopen.com
Sent: Wednesday, August 23, 2000 1:23 AM

Just to clarify, the intent of these is for use by companies or individuals
who choose on their own to link back to the PythonLabs site and show their
support for BeOpen's work on Python.  Use of any such branding is wholly
voluntary, as you might expect.

To clarify even further, we recognize and work with many wonderful parties
who contribute to Python.  We expect to continue to put out source releases
called just `Python', and brand platform-specific releases which we produce
and quality-assure ourselves as `BeOpen Python' releases.  This is similar
to what other companies do in the Linux space and other open source arenas.
We know of another company branding their Python release; this helps
potential customers differentiate offerings in the largely undifferentiated
open source space.

We believe it is important and we meet with companies every week who tell us
they want one or more companies behind the development, productization and
support of Python (like Red Hat or SuSE behind Linux).  Connecting the
BeOpen name to Python is one way in which we can help them know that we
indeed do provide these services for Python.  The BeOpen name was chosen
very carefully to encourage people to take an open approach in their
technology deployments, so we think this is a good association for Python to
have and hope that many Python users will choose to help support these
efforts.

We're also very open to working with other Python-related firms to help
build broader use and acceptance of Python.  Mail
<pythonlabs-info at beopen.com> if you'd like to work on a partnership
together.

</BOB>

See?  It's not evil.  *All* American CTOs say "space" and "arena" too much,
so don't gripe about that either.  I can tell you that BeOpen isn't exactly
getting rich off their Python support so far, wrestling with CNRI is
exhausting in more ways than one, and Bob Weiner is a nice man.  Up to this
point, his support of PythonLabs has been purely philanthropic!  If you
appreciate that, you *might* even consider grabbing a link.

[Thomas Wouters]
> Oh, that reminds me, the FAQ needs adjusting ;) It still says:
> """
> 1.2. Why is it called Python?
>
> Apart from being a computer scientist, I'm also a fan of "Monty Python's
> Flying Circus" (a BBC comedy series from the seventies, in the -- unlikely
> -- case you didn't know). It occurred to me one day that I needed a name
> that was short, unique, and slightly mysterious. And I happened to be
> reading some scripts from the series at the time... So then I decided to
> call my language Python. But Python is not a joke. And don't you associate
> it with dangerous reptiles either! (If you need an icon, use an
> image of the
> 16-ton weight from the TV series or of a can of SPAM :-)
> """

Yes, that needs to be rewritten.  Here you go:

    Apart from being a computer scientist, I'm also a fan of
    "Monty BeOpen Python's Flying Circus" (a BBC comedy series from
    the seventies, in the -- unlikely -- case you didn't know). It
    occurred to me one day that I needed a name that was short, unique,
    and slightly mysterious. And I happened to be reading some scripts
    from the series at the time... So then I decided to call my language
    BeOpen Python. But BeOpen Python is not a joke. And don't you associate
    it with dangerous reptiles either! (If you need an icon, use an image
    of the decidedly *friendly* BeOpen reptiles at
    http://www.pythonlabs.com/logos.html).

> And while I'm at it, I hope I can say without offending anyone that I
> hope the logo is open for criticism.

You can hope all you like, and I doubt you're offending anyone, but the logo
is nevertheless not open for criticism:  the BDFL picked it Himself!  Quoth
Guido, "I think he's got a definite little smile going".  Besides, if you
don't like this logo, you're going to be sooooooooo disappointed when you
get a PythonLabs T-shirt.

> ...
> I-still-like-the-shirt-though!-ly y'rs,

Good!  In that case, I'm going to help you with your crusade after all:

Hi! I'm a .signature virus! copy me into your .signature file to
help me spread!





From mal at lemburg.com  Wed Aug 23 09:44:56 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 23 Aug 2000 09:44:56 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
Message-ID: <39A380F8.3D1C86F6@lemburg.com>

Tim Peters wrote:
> 
> PythonLabs had a high-decibel meeting today (well, Monday), culminating in
> Universal Harmony.  Jeremy will be updating PEP 200 accordingly.  Just three
> highlights:
> 
> + The release schedule has Officially Slipped by one week:  2.0b1 will ship
> a week from this coming Monday.  There are too many Open and Accepted
> patches backed up to meet the original date.  Also problems cropping up,
> like new std tests failing to pass every day (you're not supposed to do
> that, you know!  consider yourself clucked at), and patches having to be
> redone because of other patches getting checked in.  We want to take the
> extra week to do this right, and give you more space to do *your* part
> right.
> 
> + While only one beta release is scheduled at this time, we reserve the
> right to make a second beta release if significant problems crop up during
> the first beta period.  Of course that would cause additional slippage of
> 2.0 final, if it becomes necessary.  Note that "no features after 2.0b1 is
> out!" still stands, regardless of how many beta releases there are.

Does this mean I can still slip in that minor patch to allow
for attribute doc-strings in 2.0b1, provided I write up a short
PEP really fast ;-) ?

BTW, what's the new standard on releasing ideas to the dev public?
I know I'll have to write a PEP, but where should I put the
patch? Into the SF patch manager or on a separate page on the
Internet?

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Wed Aug 23 09:36:04 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 09:36:04 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0204.txt,1.3,1.4
In-Reply-To: <200008230542.WAA02168@slayer.i.sourceforge.net>; from bwarsaw@users.sourceforge.net on Tue, Aug 22, 2000 at 10:42:00PM -0700
References: <200008230542.WAA02168@slayer.i.sourceforge.net>
Message-ID: <20000823093604.M4933@xs4all.nl>

On Tue, Aug 22, 2000 at 10:42:00PM -0700, Barry Warsaw wrote:
> Update of /cvsroot/python/python/nondist/peps
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv2158
> 
> Modified Files:
> 	pep-0204.txt 
> Log Message:
> Editorial review, including:
> 
>     - Rearrange and standardize headers
>     - Removed ^L's
>     - Spellchecked
>     - Indentation and formatting
>     - Added reference to PEP 202

Damn, I'm glad I didn't rewrite it on my laptop yesterday. This looks much
better, Barry, thanx ! Want to co-author it ? :) (I really need to get
myself some proper (X)Emacs education so I can do cool things like
two-spaces-after-finished-sentences too)

> Thomas, if the open issues have been decided, they can be `closed' in
> this PEP, and then it should probably be marked as Accepted.

Well, that would require me to force the open issues, because they haven't
been decided. They have hardly been discussed ;) I'm not sure how to
properly close them, however. For instance: I would say "not now" to ranges
of something other than PyInt objects, and the same to the idea of
generators. But the issues remain open for debate in future versions. Should
there be a 'closed issues' section, or should I just not mention them and
have people start a new PEP and gather the ideas anew when the time comes ?

(And a Decision (either a consensus one or a BDFL one) would be nice on
whether the two new PyList_ functions should be part of the API or not. The
rest of the issues I can handle.)

Don't forget, I'm a newbie in standards texts. Be gentle ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Wed Aug 23 12:17:33 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Wed, 23 Aug 2000 10:17:33 +0000
Subject: [Python-Dev] ...and the new name for our favourite little language 
 is...
References: <LNBBLJKPBEHFEDALKOLCGEGLHBAA.tim_one@email.msn.com>
Message-ID: <39A3A4BD.C30E4729@nowonder.de>

[Tim]
> get a PythonLabs T-shirt.

[Thomas]
> I-still-like-the-shirt-though!-ly y'rs,

Okay, folks. What's the matter? I don't see any T-shirt
references on http://pythonlabs.com. Where? How?

help-me-with-my-crusade-too-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Wed Aug 23 11:01:23 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 11:01:23 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is...
In-Reply-To: <39A3A4BD.C30E4729@nowonder.de>; from nowonder@nowonder.de on Wed, Aug 23, 2000 at 10:17:33AM +0000
References: <LNBBLJKPBEHFEDALKOLCGEGLHBAA.tim_one@email.msn.com> <39A3A4BD.C30E4729@nowonder.de>
Message-ID: <20000823110122.N4933@xs4all.nl>

On Wed, Aug 23, 2000 at 10:17:33AM +0000, Peter Schneider-Kamp wrote:
> [Tim]
> > get a PythonLabs T-shirt.
> 
> [Thomas]
> > I-still-like-the-shirt-though!-ly y'rs,

> Okay, folks. What's the matter? I don't see any T-shirt
> references on http://pythonlabs.com. Where? How?

We were referring to the PythonLabs T-shirt that was given out (in limited
numbers, I do believe, since my perl-hugging colleague only got me one, and
couldn't get one for himself & the two python-learning colleagues *) at
O'Reilly's Open Source Conference. It has the PythonLabs logo on front (the
green snake, on a simple black background, in a white frame) with
'PYTHONLABS' underneath the logo, and on the back it says 'PYTHONLABS.COM'
and 'There Is Only One Way To Do It.'. 

I'm sure they'll have some more at the next IPC ;)

(* As a result, I can't wear this T-shirt to work, just like my X-Files
T-shirt, for fear of being forced to leave without it ;)
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Wed Aug 23 11:21:25 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 23 Aug 2000 12:21:25 +0300 (IDT)
Subject: [Python-Dev] ...and the new name for our favourite little language
 is...
In-Reply-To: <20000823110122.N4933@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008231219190.8650-100000@sundial>

On Wed, 23 Aug 2000, Thomas Wouters wrote:

> We were referring to the PythonLabs T-shirt that was given out (in limited
> numbers, I do believe, since my perl-hugging colleague only got me one, and
> couldn't get one for himself & the two python-learning colleagues *) at
> O'Reilly's Open Source Conference. It has the PythonLabs logo on front (the
> green snake, on a simple black background, in a white frame) with
> 'PYTHONLABS' underneath the logo, and on the back it says 'PYTHONLABS.COM'
> and 'There Is Only One Way To Do It.'. 
> 
> I'm sure they'll have some more at the next IPC ;)

Can't they sell them over the net (at copyleft or something)? I'd love
to buy one for me and my friends, and maybe one for everyone in the
first Python-IL meeting..

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From fdrake at beopen.com  Wed Aug 23 16:38:12 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 23 Aug 2000 10:38:12 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <39A380F8.3D1C86F6@lemburg.com>
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
	<39A380F8.3D1C86F6@lemburg.com>
Message-ID: <14755.57812.111681.750661@cj42289-a.reston1.va.home.com>

M.-A. Lemburg writes:
 > Does this mean I can still slip in that minor patch to allow
 > for attribute doc-strings in 2.0b1, provided I write up a short
 > PEP really fast ;-) ?

  Write a PEP if you like; I think I'd really like to look at this
before you change any code, and I've not had a chance to read your
messages about this yet.  This is *awfully* late to be making a
change that hasn't been substantially hashed out and reviewed, and I'm
under the impression that this is pretty new (the past week or so).

 > BTW, what's the new standard on releasing ideas to the dev public?
 > I know I'll have to write a PEP, but where should I put the
 > patch? Into the SF patch manager or on a separate page on the
 > Internet?

  Patches should still go to the SF patch manager.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Wed Aug 23 18:22:04 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 11:22:04 -0500
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import something as'
In-Reply-To: Your message of "Tue, 22 Aug 2000 13:02:31 MST."
             <Pine.LNX.4.10.10008212144360.416-100000@skuld.lfw.org> 
References: <Pine.LNX.4.10.10008212144360.416-100000@skuld.lfw.org> 
Message-ID: <200008231622.LAA02275@cj20424-a.reston1.va.home.com>

> On Mon, 21 Aug 2000, Guido van Rossum wrote:
> > > > > > Summary: Allow all assignment expressions after 'import
> > > > > > something as'
> [...]
> > I kind of doubt it, because it doesn't look useful.

[Ping]
> Looks potentially useful to me.  If nothing else, it's certainly
> easier to explain than any other behaviour i could think of, since
> assignment is already well-understood.

KISS suggests not to add it.  We had a brief discussion about this at
our 2.0 planning meeting and nobody there thought it would be worth
it, and several of us felt it would be asking for trouble.

> > I do want "import foo.bar as spam" back, assigning foo.bar to spam.
> 
> No no no no.  Or at least let's step back and look at the whole
> situation first.
> 
> "import foo.bar as spam" makes me uncomfortable because:
> 
>     (a) It's not clear whether spam should get foo or foo.bar, as
>         evidenced by the discussion between Gordon and Thomas.

As far as I recall that conversation, it's just that Thomas (more or
less accidentally) implemented what was easiest from the
implementation's point of view without thinking about what it should
mean.  *Of course* it should mean what I said if it's allowed.  Even
Thomas agrees to that now.

>     (b) There's a straightforward and unambiguous way to express
>         this already: "from foo import bar as spam".

Without syntax coloring that looks like word soup to me.

  import foo.bar as spam

uses fewer words to say the same thing more clearly.

>     (c) It's not clear whether this should work only for modules
>         named bar, or any symbol named bar.

Same as for import: bar must be a submodule (or subpackage) in package
foo.

> Before packages, the only two forms of the import statement were:
> 
>     import <module>
>     from <module> import <symbol>
> 
> After packages, the permitted forms are now:
> 
>     import <module>
>     import <package>
>     import <pkgpath>.<module>
>     import <pkgpath>.<package>
>     from <module> import <symbol>
>     from <package> import <module>
>     from <pkgpath>.<module> import <symbol>
>     from <pkgpath>.<package> import <module>

You're creating more cases than necessary to get a grip on this.  This
is enough, if you realize that a package is also a module and the
package path doesn't add any new cases:

  import <module>
  from <module> import <symbol>
  from <package> import <module>

> where a <pkgpath> is a dot-separated list of package names.
> 
> With "as" clauses, we could permit:
> 
>     import <module> as <localmodule>
>     import <package> as <localpackage>
> ??  import <pkgpath>.<module> as <localmodule>
> ??  import <pkgpath>.<package> as <localpackage>
> ??  import <module>.<symbol> as <localsymbol>
> ??  import <pkgpath>.<module>.<symbol> as <localsymbol>
>     from <module> import <symbol> as <localsymbol>
>     from <package> import <symbol> as <localsymbol>
>     from <pkgpath>.<module> import <symbol> as <localsymbol>
>     from <pkgpath>.<package> import <module> as <localmodule>

Let's simplify that to:

  import <module> as <localname>
  from <module> import <symbol> as <localname>
  from <package> import <module> as <localname>

> It's not clear that we should allow "as" on the forms marked with
> ??, since the other six clearly identify the thing being renamed
> and they do not.
> 
> Also note that all the other forms using "as" assign exactly one
> thing: the name after the "as".  Would the forms marked with ??
> assign just the name after the "as" (consistent with the other
> "as" forms), or also the top-level package name as well (consistent
> with the current behaviour of "import <pkgpath>.<module>")?
> 
> That is, would
> 
>     import foo.bar as spam
> 
> define just spam or both foo and spam?

Aargh!  Just spam, of course!
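The binding Guido describes can be checked directly once the form is allowed (as it later was, in Python 2.0 itself); a small sketch using a stock submodule:

```python
# "import <pkg>.<mod> as <name>" binds only <name> in the importing
# namespace; the top-level package name is NOT bound.
ns = {}
exec("import os.path as p", ns)

assert "p" in ns and "os" not in ns      # just spam, not foo and spam
assert ns["p"] is __import__("os").path  # and it's the submodule, not the package
```

Compare plain `import os.path`, which binds `os` (the top-level package) instead — that asymmetry is exactly what Ping's question was probing.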

> All these questions make me uncertain...

Not me.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Wed Aug 23 17:38:31 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 23 Aug 2000 17:38:31 +0200
Subject: [Python-Dev] Attribute Docstring PEP (2.0 Release Plans)
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
		<39A380F8.3D1C86F6@lemburg.com> <14755.57812.111681.750661@cj42289-a.reston1.va.home.com>
Message-ID: <39A3EFF7.A4D874EC@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
> M.-A. Lemburg writes:
>  > Does this mean I can still slip in that minor patch to allow
>  > for attribute doc-strings in 2.0b1, provided I write up a short
>  > PEP really fast ;-) ?
> 
>   Write a PEP if you like; I think I'd really like to look at this
> before you change any code, and I've not had a chance to read your
> messages about this yet.  This is *awfully* late to be making a
> change that hasn't been substantially hashed out and reviewed, and I'm
> under the impression that this is pretty new (the past week or so).

FYI, I've attached the pre-PEP below (I also sent it to Barry
for review).

This PEP is indeed very new, but AFAIK it doesn't harm any existing
code and also doesn't add much code complexity to achieve what it's
doing (see the patch).

>  > BTW, what's the new standard on releasing ideas to the dev public?
>  > I know I'll have to write a PEP, but where should I put the
>  > patch? Into the SF patch manager or on a separate page on the
>  > Internet?
> 
>   Patches should still go to the SF patch manager.

Here it is:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101264&group_id=5470

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/
-------------- next part --------------
PEP: 224
Title: Attribute Docstrings
Version: $Revision: 1.0 $
Owner: mal at lemburg.com (Marc-Andre Lemburg)
Python-Version: 2.0
Status: Draft
Created: 23-Aug-2000
Type: Standards Track


Introduction

    This PEP describes the "attribute docstring" proposal for Python
    2.0. This PEP tracks the status and ownership of this feature. It
    contains a description of the feature and outlines changes
    necessary to support the feature. The CVS revision history of this
    file contains the definitive historical record.


Rationale

    This PEP proposes a small addition to the way Python currently
    handles docstrings embedded in Python code. 

    Until now, Python only handles the case of docstrings which appear
    directly after a class definition, a function definition or as the
    first string literal in a module. These string literals are added to
    the objects in question under the __doc__ attribute and are from
    then on available to introspection tools, which can extract the
    contained information for help, debugging and documentation
    purposes.

    Docstrings appearing in locations other than the ones mentioned are
    simply ignored and don't result in any code generation.

    Here is an example:

    class C:
	    " class C doc-string "

	    a = 1
	    " attribute C.a doc-string (1)"

	    b = 2
	    " attribute C.b doc-string (2)"

    The docstrings (1) and (2) are currently being ignored by the
    Python byte code compiler, but could obviously be put to good use
    for documenting the named assignments that precede them.
    
    This PEP proposes to make use of these cases as well, by defining
    semantics for adding their content to the objects in which they
    appear, under newly generated attribute names.

    The original idea behind this approach, which also inspired the
    above example, was to enable inline documentation of class
    attributes, which can currently only be documented in the class'
    docstring or in comments, which are not available for
    introspection.


Implementation

    Docstrings are handled by the byte code compiler as expressions.
    The current implementation special cases the few locations
    mentioned above to make use of these expressions, but otherwise
    ignores the strings completely.

    To enable use of these docstrings for documenting named
    assignments (which is the natural way of defining e.g. class
    attributes), the compiler will have to keep track of the last
    assigned name and then use this name to assign the content of the
    docstring to an attribute of the containing object, by means of
    storing it as a constant which is then added to the object's
    namespace at object construction time.

    In order to preserve features like inheritance and hiding of
    Python's special attributes (ones with leading and trailing double
    underscores), a special name mangling has to be applied which
    uniquely identifies the docstring as belonging to the name
    assignment and allows finding the docstring later on by inspecting
    the namespace.

    The following name mangling scheme achieves all of the above:

		      __doc__<attributename>__

    To keep track of the last assigned name, the byte code compiler
    stores this name in a variable of the compiling structure. This
    variable defaults to NULL. When it sees a docstring, it then
    checks the variable and uses the name as basis for the above name
    mangling to produce an implicit assignment of the docstring to the
    mangled name. It then resets the variable to NULL to avoid
    duplicate assignments.

    If the variable does not point to a name (i.e. is NULL), no
    assignments are made.  These will continue to be ignored like
    before.  All classical docstrings fall under this case, so no
    duplicate assignments are done.

    In the above example this would result in the following new class
    attributes to be created:

    C.__doc__a__ == " attribute C.a doc-string (1)"
    C.__doc__b__ == " attribute C.b doc-string (2)"

    A patch to the current CVS version of Python 2.0 which implements
    the above is available on SourceForge at [1].
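    The proposed compiler change never needs to exist to see the scheme
    in action: the mangling can be simulated by hand.  A sketch (the
    `mangle` helper is hypothetical, for illustration only, and the
    setattr calls stand in for what the patched compiler would emit):

```python
def mangle(attrname):
    # The PEP's scheme: __doc__<attributename>__
    return "__doc__%s__" % attrname

class C:
    " class C doc-string "
    a = 1
    b = 2

# Simulate the implicit assignments the patched compiler would generate:
setattr(C, mangle("a"), " attribute C.a doc-string (1)")
setattr(C, mangle("b"), " attribute C.b doc-string (2)")

print(C.__doc__a__)  # prints " attribute C.a doc-string (1)"
print(C.__doc__b__)  # prints " attribute C.b doc-string (2)"
```

    Because the mangled names carry leading and trailing double
    underscores, they are class attributes like any other and are
    inherited by subclasses, as the PEP intends.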


Caveats of the Implementation
    
    Since the implementation does not reset the compiling structure
    variable when processing a non-expression, e.g. a function definition,
    the last assigned name remains active until either the next assignment
    or the next occurrence of a docstring.

    This can lead to cases where the docstring and assignment may be
    separated by other expressions:

    class C:
	"C doc string"

	b = 2

	def x(self):
	    "C.x doc string"
	    y = 3
	    return 1

	"b's doc string"

    Since the definition of method "x" currently does not reset the
    used assignment name variable, it is still valid when the compiler
    reaches the docstring "b's doc string" and thus assigns that
    string to __doc__b__, even though the assignment to "b" occurred
    several statements earlier.
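    The "finding the docstring later on by inspecting the namespace"
    step mentioned above is straightforward.  A sketch of such an
    introspection helper (hypothetical, not part of the patch):

```python
def attribute_docstrings(cls):
    # Collect entries stored under the __doc__<name>__ mangling scheme.
    prefix = "__doc__"
    docs = {}
    for key, value in vars(cls).items():
        # require a non-empty name between the prefix and trailing "__";
        # this also excludes the class's own __doc__ attribute
        if key.startswith(prefix) and key.endswith("__") and len(key) > len(prefix) + 2:
            docs[key[len(prefix):-2]] = value
    return docs

class C:
    " class C doc-string "
    a = 1

# stand-in for the compiler-generated assignment:
C.__doc__a__ = " attribute C.a doc-string (1)"

print(attribute_docstrings(C))  # prints {'a': ' attribute C.a doc-string (1)'}
```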

    
Copyright

    This document has been placed in the Public Domain.


References

    [1]
http://sourceforge.net/patch/?func=detailpatch&patch_id=101264&group_id=5470



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

From tim_one at email.msn.com  Wed Aug 23 17:40:46 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 11:40:46 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <39A380F8.3D1C86F6@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>

[MAL]
> Does this mean I can still slip in that minor patch to allow
> for attribute doc-strings in 2.0b1, provided I write up a short
> PEP really fast ;-) ?

2.0 went into feature freeze the Monday *before* this one!  So, no.  The "no
new features after 2.0b1" refers mostly to the patches currently in Open and
Accepted:  if *they're* not checked in before 2.0b1 goes out, they don't get
into 2.0 either.

Ideas that were accepted by Guido for 2.0 before last Monday aren't part of
the general "feature freeze".  Any new feature proposed *since* then has
been Postponed without second thought.  Guido accepted several ideas before
feature freeze that still haven't been checked in (in some cases, still not
coded!), and just dealing with them has already caused a slip in the
schedule.  We simply can't afford to entertain new ideas right now (indeed,
that's why "feature freeze" exists:  focus).

For you in particular <wink>, how about dealing with Open patch 100899?
It's been assigned to you for 5 weeks, and if you're not going to review it
or kick /F in the butt, assign it to someone else.

> BTW, what's the new standard for releasing ideas to the dev public?
> I know I'll have to write a PEP, but where should I put the
> patch ? Into the SF patch manager or on a separate page on the
> Internet ?

The PEP should be posted to both Python-Dev and comp.lang.python after its
first stab is done.  If you don't at least post a link to the patch in the
SF Patch Manager, the patch doesn't officially exist.  I personally prefer
one-stop shopping, and SF is the Python Developer's Mall; but there's no
rule about that yet (note that 100899's patch was apparently so big SF
wouldn't accept it, so /F *had* to post just a URL to the Patch Manager).





From bwarsaw at beopen.com  Wed Aug 23 18:01:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 23 Aug 2000 12:01:32 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
	<39A380F8.3D1C86F6@lemburg.com>
Message-ID: <14755.62812.185580.367242@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> Does this mean I can still slip in that minor patch to allow
    M> for attribute doc-strings in 2.0b1 provided I write up a short
    M> PEP really fast ;-) ?

Well, it's really the 2.0 release manager's job to disappoint you, so
I won't. :) But yes, a PEP would probably be required.  However, after
our group meeting yesterday, I'm changing the requirements for PEP
number assignment.  You need to send me a rough draft, not just an
abstract (there are too many incomplete PEPs already).

    M> BTW, what's the new standard for releasing ideas to the dev public?
    M> I know I'll have to write a PEP, but where should I put the
    M> patch ? Into the SF patch manager or on a separate page on the
    M> Internet ?

Better to put the patches on SF.

-Barry



From bwarsaw at beopen.com  Wed Aug 23 18:09:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 23 Aug 2000 12:09:32 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0204.txt,1.3,1.4
References: <200008230542.WAA02168@slayer.i.sourceforge.net>
	<20000823093604.M4933@xs4all.nl>
Message-ID: <14755.63292.825567.868362@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> Damn, I'm glad I didn't rewrite it on my laptop
    TW> yesterday. This looks much better, Barry, thanx ! Want to
    TW> co-author it ? :)

Naw, that's what an editor is for (actually, I thought an editor was
for completely covering your desktop like lox on a bagel).
    
    TW> (I really need to get myself some proper (X)Emacs education so
    TW> I can do cool things like two-spaces-after-finished-sentences
    TW> too)

Heh, that's just finger training, but I do it only because it works
well with XEmacs's paragraph filling.

    TW> Well, that would require me to force the open issues, because
    TW> they haven't been decided. They have hardly been discussed ;)
    TW> I'm not sure how to properly close them, however. For
    TW> instance: I would say "not now" to ranges of something other
    TW> than PyInt objects, and the same to the idea of
    TW> generators. But the issues remain open for debate in future
    TW> versions. Should there be a 'closed issues' section, or should
    TW> I just not mention them and have people start a new PEP and
    TW> gather the ideas anew when the time comes ?

    TW> (And a Decisions (either a consensus one or a BDFL one) would
    TW> be nice on whether the two new PyList_ functions should be
    TW> part of the API or not. The rest of the issues I can handle.)

The thing to do is to request BDFL pronouncement on those issues for
2.0, and write them up in a "BDFL Pronouncements" section at the end
of the PEP.  See PEP 201 for an example.  You should probably email
Guido directly and ask him to rule.  If he doesn't, then they'll get
vetoed by default once 2.0beta1 is out.

IMO, if some extension of range literals is proposed for a future
release of Python, then we'll issue a new PEP for those.

-Barry



From mal at lemburg.com  Wed Aug 23 17:56:17 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 23 Aug 2000 17:56:17 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
Message-ID: <39A3F421.415107E6@lemburg.com>

Tim Peters wrote:
> 
> [MAL]
> > Does this mean I can still slip in that minor patch to allow
> > for attribute doc-strings in 2.0b1 provided I write up a short
> > PEP really fast ;-) ?
> 
> 2.0 went into feature freeze the Monday *before* this one!  So, no.  The "no
> new features after 2.0b1" refers mostly to the patches currently in Open and
> Accepted:  if *they're* not checked in before 2.0b1 goes out, they don't get
> into 2.0 either.

Ah, ok. 

Pity I just started doing some heavy doc-string extraction
just last week... oh, well.
 
> Ideas that were accepted by Guido for 2.0 before last Monday aren't part of
> the general "feature freeze".  Any new feature proposed *since* then has
> been Postponed without second thought.  Guido accepted several ideas before
> feature freeze that still haven't been checked in (in some cases, still not
> coded!), and just dealing with them has already caused a slip in the
> schedule.  We simply can't afford to entertain new ideas right now (indeed,
> that's why "feature freeze" exists:  focus).
> 
> For you in particular <wink>, how about dealing with Open patch 100899?
> It's been assigned to you for 5 weeks, and if you're not going to review it
> or kick /F in the butt, assign it to someone else.

AFAIK, Fredrik hasn't continued work on that patch and some
important parts are still missing, e.g. the generator scripts
and a description of how the whole thing works.

It's not that important though, since the patch is a space
optimization of what is already in Python 2.0 (and has been
for quite a while now): the Unicode database.
 
Perhaps I should simply postpone the patch to 2.1?!

> > BTW, what's the new standard for releasing ideas to the dev public?
> > I know I'll have to write a PEP, but where should I put the
> > patch ? Into the SF patch manager or on a separate page on the
> > Internet ?
> 
> The PEP should be posted to both Python-Dev and comp.lang.python after its
> first stab is done.  If you don't at least post a link to the patch in the
> SF Patch Manager, the patch doesn't officially exist.  I personally prefer
> one-stop shopping, and SF is the Python Developer's Mall; but there's no
> rule about that yet (note that 100899's patch was apparently so big SF
> wouldn't accept it, so /F *had* to post just a URL to the Patch Manager).

I've just posted the PEP here, CCed it to Barry and uploaded the
patch to SF. I'll post it to c.l.p tomorrow (don't know what that's
good for though, since I don't read c.l.p anymore).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Wed Aug 23 19:49:28 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 23 Aug 2000 13:49:28 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
References: <39A380F8.3D1C86F6@lemburg.com>
	<LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
Message-ID: <14756.3752.23014.786587@bitdiddle.concentric.net>

I wanted to confirm: Tim is channeling the release manager just
fine.  We are in feature freeze for 2.0.

Jeremy



From jeremy at beopen.com  Wed Aug 23 19:55:34 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 23 Aug 2000 13:55:34 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <39A3F421.415107E6@lemburg.com>
References: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
	<39A3F421.415107E6@lemburg.com>
Message-ID: <14756.4118.865603.363166@bitdiddle.concentric.net>

>>>>> "MAL" == M-A Lemburg <mal at lemburg.com> writes:
>>>>> "TP" == Tim Peters <tpeters at beopen.com> writes:

  TP> For you in particular <wink>, how about dealing with Open patch
  TP> 100899?  It's been assigned to you for 5 weeks, and if you're not
  TP> going to review it or kick /F in the butt, assign it to someone
  TP> else.

  MAL> AFAIK, Fredrik hasn't continued work on that patch and some
  MAL> important parts are still missing, e.g. the generator scripts
  MAL> and a description of how the whole thing works.

  MAL> It's not that important though, since the patch is a space
  MAL> optimization of what is already in Python 2.0 (and has been for
  MAL> quite a while now): the Unicode database.
 
  MAL> Perhaps I should simply postpone the patch to 2.1?!

Thanks for clarifying the issue with this patch.  

I would like to see some compression in the release, but agree that it
is not an essential optimization.  People have talked about it for a
couple of months, and we haven't found someone to work on it because
at various times pirx and /F said they were working on it.

If we don't hear from /F by tomorrow promising he will finish it before
the beta release, let's postpone it.

Jeremy



From tim_one at email.msn.com  Wed Aug 23 20:32:20 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 14:32:20 -0400
Subject: Patch 100899 [Unicode compression] (was RE: [Python-Dev] 2.0 Release Plans)
In-Reply-To: <14756.4118.865603.363166@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com>

[Jeremy Hylton]
> I would like to see some compression in the release, but agree that it
> is not an essential optimization.  People have talked about it for a
> couple of months, and we haven't found someone to work on it because
> at various times pirx and /F said they were working on it.
>
> If we don't hear from /F by tomorrow promising he will finish it before
> the beta release, let's postpone it.

There was an *awful* lot of whining about the size increase without this
optimization, and the current situation violates the "no compiler warnings!"
rule too (at least under MSVC 6).  That means it's going to fail to compile
at all on *some* feebler system.  We said we'd put it in, so I'm afraid I
think it falls on PythonLabs to finish it if /F can't.





From thomas at xs4all.net  Wed Aug 23 20:59:20 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 20:59:20 +0200
Subject: Patch 100899 [Unicode compression] (was RE: [Python-Dev] 2.0 Release Plans)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 23, 2000 at 02:32:20PM -0400
References: <14756.4118.865603.363166@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com>
Message-ID: <20000823205920.A7566@xs4all.nl>

On Wed, Aug 23, 2000 at 02:32:20PM -0400, Tim Peters wrote:
> [Jeremy Hylton]
> > I would like to see some compression in the release, but agree that it
> > is not an essential optimization.  People have talked about it for a
> > couple of months, and we haven't found someone to work on it because
> > at various times pirx and /F said they were working on it.
> >
> > If we don't hear from /F by tomorrow promising he will finish it before
> > the beta release, let's postpone it.

> There was an *awful* lot of whining about the size increase without this
> optimization, and the current situation violates the "no compiler warnings!"
> rule too (at least under MSVC 6).

For the record, you can't compile unicodedatabase.c with g++ because of its
size: g++ complains that the switch is too large to compile. Under gcc it
compiles, but only by trying really really hard, and I don't know how it
performs under other versions of gcc (in particular more heavily optimizing
ones -- might run into other limits in those situations.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Wed Aug 23 21:00:33 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 15:00:33 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <003b01c00d1f$3ef70fe0$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIOHBAA.tim_one@email.msn.com>

[Fredrik Lundh]

[on patch 100899]
> mal has reviewed the patch, and is waiting for an update
> from me.

Thanks!  On that basis, I've reassigned the patch to you.

> PS. the best way to get me to do something is to add a
> task to the task manager.

Yikes!  I haven't looked at the thing since the day after I enabled it
<wink> -- thanks for the clue.

> I currently have three things on my slate:
>
>     17333 add os.popen2 support for Unix

Guido definitely wants this for 2.0, but there's no patch for it and no
entry in PEP 200.  Jeremy, please add it.

>     17334 add PyErr_Format to errors module
>     17335 add compressed unicode database

Those two are in Open patches, and both assigned to you.

> if I missed something, let me know.

In your email (to Guido and me) from Monday, 31-July-2000,

> so to summarize, Python 2.0 will support the following
> hex-escapes:
>
>    \xNN
>    \uNNNN
>    \UNNNNNNNN
>
> where the last two are only supported in Unicode and
> SRE strings.
>
> I'll provide patches later this week, once the next SRE
> release is wrapped up (later tonight, I hope).

This apparently fell through the cracks, and I finally remembered it last
Friday, and added them to PEP 200 recently.  Guido wants this in 2.0, and
accepted them long before feature-freeze.  I'm currently writing a PEP for
the \x change (because it has a surreal chance of breaking old code).  I
haven't written any code for it.  The new \U escape is too minor to need a
PEP (according to me).
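For reference, the three escape forms under discussion all denote the same
character when given the same code point; the snippet below illustrates the
notation as it behaves in today's Python (it does not attempt to show the 2.0
migration behaviour of \x that the PEP addresses):

```python
# The three hex-escape forms; each of these spells the single
# character 'A' (U+0041).
s_x = "\x41"         # \xNN       -- two hex digits
s_u = "\u0041"       # \uNNNN     -- four hex digits
s_U = "\U00000041"   # \UNNNNNNNN -- eight hex digits

assert s_x == s_u == s_U == "A"
```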





From effbot at telia.com  Wed Aug 23 18:28:58 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 18:28:58 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
Message-ID: <003b01c00d1f$3ef70fe0$f2a6b5d4@hagrid>

tim wrote:
> For you in particular <wink>, how about dealing with Open patch 100899?
> It's been assigned to you for 5 weeks, and if you're not going to review it
> or kick /F in the butt, assign it to someone else.

mal has reviewed the patch, and is waiting for an update
from me.

</F>

PS. the best way to get me to do something is to add a
task to the task manager.  I currently have three things
on my slate:

    17333 add os.popen2 support for Unix 
    17334 add PyErr_Format to errors module 
    17335 add compressed unicode database 

if I missed something, let me know.




From thomas at xs4all.net  Wed Aug 23 21:29:47 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 21:29:47 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src README,1.89,1.90
In-Reply-To: <200008231901.MAA31275@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Wed, Aug 23, 2000 at 12:01:47PM -0700
References: <200008231901.MAA31275@slayer.i.sourceforge.net>
Message-ID: <20000823212946.B7566@xs4all.nl>

On Wed, Aug 23, 2000 at 12:01:47PM -0700, Guido van Rossum wrote:
> Update of /cvsroot/python/python/dist/src
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv31228
> 
> Modified Files:
> 	README 
> Log Message:
> Updated some URLs; removed mention of copyright (we'll have to add
> something in later after that discussion is over); remove explanation
> of 2.0 version number.

I submit that this file needs some more editing for 2.0. For instance, it
mentions that 'some modules' will not compile on old SunOS compilers because
they are written in ANSI C. It also has a section on threads which needs to
be rewritten to reflect that threads are *on* by default, and explain how to
turn them off. I also think it should put some more emphasis on editing
Modules/Setup, which is commonly forgotten by newbies. Either that or make
some more things 'standard', like '*shared*'.

(It mentions '... editing a file, typing make, ...' in the overview, but
doesn't actually mention which file to edit until much later, in a sideways
kind of way in the machine-specific section, and even later in a separate
section.)

It also has some teensy small bugs: it says "uncomment" when it should say
"comment out" in the Cray T3E section, and it's "glibc2" or "libc6", not
"glibc6", in the Linux section. (it's glibc version 2, but the interface
number is 6.) I would personally suggest removing that entire section, it's
a bit outdated. But the same might go for other sections!

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From effbot at telia.com  Wed Aug 23 21:50:21 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 21:50:21 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCIEIOHBAA.tim_one@email.msn.com>
Message-ID: <006601c00d3b$610f5440$f2a6b5d4@hagrid>

tim wrote:
> > I currently have three things on my slate:
> >
> >     17333 add os.popen2 support for Unix
> 
> Guido definitely wants this for 2.0, but there's no patch for it and no
> entry in PEP 200.  Jeremy, please add it.

to reduce my load somewhat, maybe someone who does
Python 2.0 development on a Unix box could produce that
patch?

(all our unix boxes are at the office, but I cannot run CVS
over SSH from there -- and sorting that one out will take
more time than I have right now...)

:::

anyway, fixing this is pretty straightforward:

1) move the class (etc) from popen2.py to os.py

2) modify the "if hasattr" stuff; change

    # popen2.py
    if hasattr(os, "popen2"):
        def popen2(...):
            # compatibility code, using os.popen2
    else:
        def popen2(...):
            # unix implementation

to

    # popen2.py
    def popen2(...):
        # compatibility code

    # os.py
    def popen2(...)
        # unix implementation, with the order of
        # the return values changed to (child_stdin,
        # child_stdout, child_stderr)
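As a minimal sketch of step 2's conditional-definition pattern (hypothetical
illustration only; the fallback body is a placeholder, not the real Unix
implementation from the patch):

```python
import os

# popen2.py (sketch): use the os-level implementation where the
# platform provides one, otherwise define a fallback in its place.
if hasattr(os, "popen2"):
    def popen2(cmd):
        # compatibility wrapper; os.popen2 returns (child_stdin,
        # child_stdout), the reverse of the old popen2 module's order
        return os.popen2(cmd)
else:
    def popen2(cmd):
        # placeholder for the pure-Python Unix implementation
        raise NotImplementedError("no os.popen2 on this platform")
```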

:::

> > so to summarize, Python 2.0 will support the following
> > hex-escapes:
> >
> >    \xNN
> >    \uNNNN
> >    \UNNNNNNNN
> >
> > where the last two are only supported in Unicode and
> > SRE strings.
> 
> This apparently fell through the cracks, and I finally remembered it last
> Friday, and added them to PEP 200 recently.  Guido wants this in 2.0, and
> accepted them long before feature-freeze.  I'm currently writing a PEP for
> the \x change (because it has a surreal chance of breaking old code).  I
> haven't written any code for it.  The new \U escape is too minor to need a
> PEP (according to me).

if someone else can do the popen2 stuff, I'll take care
of this one!

</F>




From effbot at telia.com  Wed Aug 23 23:47:01 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 23:47:01 +0200
Subject: [Python-Dev] anyone tried Python 2.0 with Tk 8.3.2?
Message-ID: <002001c00d4b$acf32ca0$f2a6b5d4@hagrid>

doesn't work too well for me -- Tkinter._test() tends to hang
when I press quit (not every time, though).  the only way to
shut down the process is to reboot.

any ideas?

(msvc 5, win95).

</F>




From tim_one at email.msn.com  Wed Aug 23 23:30:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 17:30:23 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <006601c00d3b$610f5440$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEJIHBAA.tim_one@email.msn.com>

[/F, on "add os.popen2 support for Unix"]
> to reduce my load somewhat, maybe someone who does
> Python 2.0 development on a Unix box could produce that
> patch?

Sounds like a more than reasonable idea to me; heck, AFAIK, until you
mentioned you thought it was on your plate, we didn't think it was on
*anyone's* plate.  It simply "came up" on its own at the PythonLabs mtg
yesterday (which I misidentified as "Monday" in an earlier post).

Can we get a volunteer here?  Here's /F's explanation:

> anyway, fixing this is pretty straightforward:
>
> 1) move the class (etc) from popen2.py to os.py
>
> 2) modify the "if hasattr" stuff; change
>
>     # popen2.py
>     if hasattr(os, "popen2"):
>         def popen2(...):
>             # compatibility code, using os.popen2
>     else:
>         def popen2(...):
>             # unix implementation
>
> to
>
>     # popen2.py
>     def popen2(...):
>         # compatibility code
>
>     # os.py
>     def popen2(...)
>         # unix implementation, with the order of
>         # the return values changed to (child_stdin,
>         # child_stdout, child_stderr)

[on \x, \u and \U]
> if someone else can do the popen2 stuff, I'll take care
> of this one!

It's a deal as far as I'm concerned.  Thanks!  I'll finish the \x PEP
anyway, though, as it's already in progress.

Jeremy, please update PEP 200 accordingly (after you volunteer to do the
os.popen2 etc bit for Unix(tm) <wink>).





From effbot at telia.com  Wed Aug 23 23:59:50 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 23:59:50 +0200
Subject: [Python-Dev] Re: anyone tried Python 2.0 with Tk 8.3.2?
Message-ID: <000d01c00d4d$78a81c60$f2a6b5d4@hagrid>

I wrote:


> doesn't work too well for me -- Tkinter._test() tends to hang
> when I press quit (not every time, though).  the only way to
> shut down the process is to reboot.

hmm.  it looks like it's more likely to hang if the program
uses unicode strings.

    Tkinter._test() hangs about 2 times out of three

    same goes for a simple test program that passes a
    unicode string constant (containing Latin-1 chars)
    to a Label

    the same test program using a Latin-1 string (which,
    I suppose, is converted to Unicode inside Tk) hangs
    in about 1/3 of the runs.

    the same test program with a pure ASCII string
    never hangs...

confusing.

</F>




From thomas at xs4all.net  Wed Aug 23 23:53:45 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 23:53:45 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
Message-ID: <20000823235345.C7566@xs4all.nl>

While re-writing the PyNumber_InPlace*() functions in augmented assignment
to something Guido and I agree on should be the Right Way, I found something
that *might* be a bug. But I'm not sure.

The PyNumber_*() methods for binary operations (found in abstract.c) have
the following construct:

        if (v->ob_type->tp_as_number != NULL) {
                PyObject *x = NULL;
                PyObject * (*f)(PyObject *, PyObject *);
                if (PyNumber_Coerce(&v, &w) != 0)
                        return NULL;
                if ((f = v->ob_type->tp_as_number->nb_xor) != NULL)
                        x = (*f)(v, w);
                Py_DECREF(v);
                Py_DECREF(w);
                if (f != NULL)
                        return x;
        }

(This is after a check if either argument is an instance object, so both are
C objects here.) Now, I'm not sure how coercion is supposed to work, but I
see one problem here: 'v' can be changed by PyNumber_Coerce(), and the new
object's tp_as_number pointer could be NULL. I bet it's pretty unlikely that
(numeric) coercion of a numeric object and an unspecified object turns up a
non-numeric object, but I don't see anything guaranteeing it won't, either.

Is this a non-issue, or should I bother with adding the extra check in the
current binary operations (and the new inplace ones)?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Wed Aug 23 23:58:30 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 23 Aug 2000 17:58:30 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEJIHBAA.tim_one@email.msn.com>
References: <006601c00d3b$610f5440$f2a6b5d4@hagrid>
	<LNBBLJKPBEHFEDALKOLCAEJIHBAA.tim_one@email.msn.com>
Message-ID: <14756.18694.812840.428389@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Sounds like a more than reasonable idea to me; heck, AFAIK, until you
 > mentioned you thought it was on your plate, we didn't think it was on
 > *anyone's* plate.  It simply "came up" on its own at the PythonLabs mtg
 > yesterday (which I misidentified as "Monday" in an earlier post).
...
 > Jeremy, please update PEP 200 accordingly (after you volunteer to do the
 > os.popen2 etc bit for Unix(tm) <wink>).

  Note that Guido asked me to do this, and I've updated the SF Task
Manager with the appropriate information.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Thu Aug 24 01:08:13 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 18:08:13 -0500
Subject: [Python-Dev] The Python 1.6 License Explained
Message-ID: <200008232308.SAA02986@cj20424-a.reston1.va.home.com>

[Also posted to c.l.py]

With BeOpen's help, CNRI has prepared a FAQ about the new license
which should answer those questions.  The official URL for the Python
1.6 license FAQ is http://www.python.org/1.6/license_faq.html (soon on
a mirror site near you), but I'm also appending it here.

We expect that we will be able to issue the final 1.6 release very
soon.  We're also working hard on the first beta release of Python
2.0, slated for September 4; the final 2.0 release should be ready in
October.  See http://www.pythonlabs.com/tech/python2.html for
up-to-date 2.0 information.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

Python 1.6 License FAQ

This FAQ addresses questions concerning the CNRI Open Source License
and its impact on past and future Python releases. The text below has
been approved for posting on the Python website and newsgroup by
CNRI's president, Dr. Robert E. Kahn.

    1.The old Python license from CWI worked well for almost 10
    years. Why a new license for Python 1.6?

      CNRI claims copyright in Python code and documentation from
      releases 1.3 through 1.6 inclusive.  However, for a number of
      technical reasons, CNRI never formally licensed this work for
      Internet download, although it did permit Guido to share the
      results with the Python community. As none of this work was
      published either, there were no CNRI copyright notices placed on
      these Python releases prior to 1.6. A CNRI copyright notice will
      appear on the official release of Python 1.6. The CNRI license
      was created to clarify for all users that CNRI's intent is to
      enable Python extensions to be developed in an extremely open
      form, in the best interests of the Python community.

    2.Why isn't the new CNRI license as short and simple as the CWI
    license? Are there any issues with it?

      A license is a legally binding document, and the CNRI Open
      Source License is -- according to CNRI -- as simple as they were
      able to make it at this time while still maintaining a balance
      between the need for access and other use of Python with CNRI's
      rights.

    3.Are you saying that the CWI license did not protect our rights?

      CNRI has held copyright and other rights to the code but never
      codified them into a CNRI-blessed license prior to 1.6. The CNRI
      Open Source License is a binding contract between CNRI and
      Python 1.6's users and, unlike the CWI statement, cannot be
      revoked except for a material breach of its terms.  So this
      provides a licensing certainty to Python users that never really
      existed before.

    4.What is CNRI's position on prior Python releases, e.g. Python
    1.5.2?

      Releases of Python prior to 1.6 were shared with the community
      without a formal license from CNRI.  The CWI Copyright Notice
      and Permissions statement (which was included with Python
      releases prior to 1.6b1), as well as the combined CWI-CNRI
      disclaimer, were required to be included as a condition for
      using the prior Python software. CNRI does not intend to require
      users of prior versions of Python to upgrade to 1.6 unless they
      voluntarily choose to do so.

    5.OK, on to the new license. Is it an Open Source license?

      Yes. The board of the Open Source Initiative certified the CNRI
      Open Source License as being fully Open Source compliant.

    6.Has it been approved by the Python Consortium?

      Yes, the Python Consortium members approved the new CNRI Open
      Source License at a meeting of the Python Consortium on Friday,
      July 21, 2000 in Monterey, California.

    7.Is it compatible with the GNU Public License (GPL)?

      Legal counsel for both CNRI and BeOpen.com believe that it is
      fully compatible with the GPL. However, the Free Software
      Foundation attorney and Richard Stallman believe there may be
      one incompatibility, i.e., the CNRI License specifies a legal
      venue to interpret its License while the GPL is silent on the
      issue of jurisdiction. Resolution of this issue is being
      pursued.

    8.So that means it has a GPL-like "copyleft" provision?

      No. "GPL-compatible" means that code licensed under the terms of
      the CNRI Open Source License may be combined with GPLed
      code. The CNRI license imposes fewer restrictions than does the
      GPL.  There is no "copyleft" provision in the CNRI Open Source
      License.

    9.So it supports proprietary ("closed source") use of Python too?

      Yes, provided you abide by the terms of the CNRI license and
      also include the CWI Copyright Notice and Permissions Statement.

   10.I have some questions about those! First, about the "click to
   accept" business. What if I have a derivative work that has no GUI?

      As the text says, "COPYING, INSTALLING OR OTHERWISE USING THE
      SOFTWARE" also constitutes agreement to the terms of the
      license, so there is no requirement to use the click to accept
      button if that is not appropriate. CNRI prefers to offer the
      software via the Internet by first presenting the License and
      having a prospective user click an Accept button. Others may
      offer it in different forms (e.g.  CD-ROM) and thus clicking the
      Accept button is one means but not the only one.

   11.Virginia is one of the few states to have adopted the Uniform
   Computer Information Transactions Act, and paragraph 7 requires
   that the license be interpreted under Virginia law.  Is the "click
   clause" a way to invoke UCITA?

      CNRI needs a body of law to define what its License means, and,
      since its headquarters are in Virginia, Virginia law is a
      logical choice. The adoption of UCITA in Virginia was not a
      motivating factor. If CNRI didn't require that its License be
      interpreted under Virginia law, then anyone could interpret the
      license under very different laws than the ones under which it is
      intended to be interpreted, in particular in a jurisdiction that
      does not recognize general disclaimers of liability (such as
      those in paragraphs 4 and 5 of the CNRI license).

   12.Suppose I embed Python in an application such that the user
   neither knows nor cares about the existence of Python. Does the
   install process have to inform my app's users about the CNRI
   license anyway?

      No, the license does not specify this. For example, in addition
      to including the License text in the License file of a program
      (or in the installer as well), you could just include a
      reference to it in the Readme file.  There is also no need to
      include the full License text in the program (the License
      provides for an alternative reference using the specified handle
      citation). Usage of the software amounts to license acceptance.

   13.In paragraph 2, does "provided, however, that CNRI's License
   Agreement is retained in Python 1.6 beta 1, alone or in any
   derivative version prepared by Licensee" mean that I can make and
   retain a derivative version of the license instead?

      The above statement applies to derivative versions of Python 1.6
      beta 1. You cannot revise the CNRI License. You must retain the
      CNRI License (or their defined reference to it)
      verbatim. However, you can make derivative works and license
      them as a whole under a different but compatible license.

   14.Since I have to retain the CNRI license in my derivative work,
   doesn't that mean my work must be released under exactly the same
   terms as Python?

      No. Paragraph 1 explicitly names Python 1.6 beta 1 as the only
      software covered by the CNRI license.  Since it doesn't name
      your derivative work, your derivative work is not bound by the
      license (except to the extent that it binds you to meet the
      requirements with respect to your use of Python 1.6). You are,
      of course, free to add your own license distributing your
      derivative work under terms similar to the CNRI Open Source
      License, but you are not required to do so.

      In other words, you cannot change the terms under which CNRI
      licenses Python 1.6, and must retain the CNRI License Agreement
      to make that clear, but you can (via adding your own license)
      set your own terms for your derivative works. Note that there is
      no requirement to distribute the Python source code either, if
      this does not make sense for your application.

   15.Does that include, for example, releasing my derivative work
   under the GPL?

      Yes, but you must retain the CNRI License Agreement in your
      work, and it will continue to apply to the Python 1.6 beta 1
      portion of your work (as is made explicit in paragraph 1 of the
      CNRI License).

   16.With regard to paragraph 3, what does "make available to the
   public" mean? If I embed Python in an application and make it
   available for download on the Internet, does that fit the meaning
   of this clause?

      Making the application generally available for download on the
      Internet would be making it available to the public.

   17.In paragraph 3, what does "indicate in any such work the nature
   of the modifications made to Python 1.6 beta 1" mean? Do you mean I
   must publish a patch? A textual description? If a description, how
   detailed must it be? For example, is "Assorted speedups"
   sufficient? Or "Ported to new architecture"? What if I merely add a
   new Python module, or C extension module? Does that constitute "a
   modification" too? What if I just use the freeze tool to change the
   way the distribution is packaged? Or change the layout of files and
   directories from the way CNRI ships them? Or change some file names
   to match my operating system's restrictions?  What if I merely use
   the documentation, as a basis for a brand new implementation of
   Python?

      This license clause is in discussion right now. CNRI has stated
      that the intent is just to have people provide a very high-level
      summary of changes, e.g. "includes new features X, Y and Z". There
      is no requirement for a specific level of detail. Work is in
      progress to clarify the intent of this clause and make the
      standard clearer. CNRI has already indicated that whatever has
      been done in the past to indicate changes in Python releases
      would be sufficient.

   18.In paragraph 6, is automatic termination of the license upon
   material breach immediate?

      Yes. CNRI preferred to give the users a 60 day period to cure
      any deficiencies, but this was deemed incompatible with the GPL
      and CNRI reluctantly agreed to use the automatic termination
      language instead.

   19.Many licenses allow a 30 to 60 day period during which breaches
   can be corrected.

      Immediate termination is actually required for GPL
      compatibility, as the GPL terminates immediately upon a material
      breach. However, there is little you can do to breach the
      license based on usage of the code, since almost any usage is
      allowed by the license. You can breach it by not including the
      appropriate License information or by misusing CNRI's name and
      logo - to give two examples. As indicated above, CNRI actually
      preferred a 60 day cure period but GPL-compatibility required
      otherwise. In practice, the immediate termination clause is
      likely to have no substantive effect. Since breaches are simple
      to cure, most will have no substantive liability associated with
      them. CNRI can take legal steps to prevent egregious and
      persistent offenders from relicensing the code, but this is a
      step they will not take cavalierly.

   20.What if people already downloaded a million copies of my
   derivative work before CNRI informs me my license has been
   terminated? What am I supposed to do then? Contact every one of
   them and tell them to download a new copy? I won't even know who
   they are!

      This is really up to the party that chooses to enforce such
      licensing. With the cure period removed for compliance with the
      GPL, CNRI is under no obligation to inform you of a
      termination. If you repair any such breach then you are in
      conformance with the License. Enforcement of the CNRI License is
      up to CNRI. Again, there are very few ways to violate the
      license.

   21.Well, I'm not even sure what "material breach" means. What's an
   example?

      This is a well-defined legal term. Very few examples of breaches
      can be given, because the CNRI license imposes very few
      requirements on you. A clear example is if you violate the
      requirement in paragraph 2 to retain CNRI's License Agreement
      (or their defined reference to it) in derivative works.  So
      simply retain the agreement, and you'll have no problem with
      that. Also, if you don't misuse CNRI's name and logo you'll be
      fine.

   22.OK, I'll retain the License Agreement in my derivative works,
   Does that mean my users and I then enter into this license
   agreement too?

      Yes, with CNRI but not with each other. As explained in
      paragraph 1, the license is between CNRI and whoever is using
      Python 1.6 beta 1.

   23.So you mean that everyone who uses my derivative work is
   entering into a contract with CNRI?

      With respect to the Python 1.6 beta 1 portion of your work,
      yes. This is what assures their right to use the Python 1.6 beta
      1 portion of your work -- which is licensed by CNRI, not by you --
      regardless of whatever other restrictions you may impose in
      your license.

   24.In paragraph 7, is the name "Python" a "CNRI trademark or trade
   name"?

      CNRI has certain trademark rights based on its use of the name
      Python. CNRI has begun discussing an orderly transition of the
      www.python.org site with Guido and the trademark matters will be
      addressed in that context.

   25.Will the license change for Python 2.0?

      BeOpen.com, who is leading future Python development, will make
      that determination at the appropriate time. Throughout the
      licensing process, BeOpen.com will be working to keep things as
      simple and as compatible with existing licenses as
      possible. BeOpen.com will add its copyright notice to Python but
      understands the complexities of licensing and so will work to
      avoid adding any further confusion on any of these issues. This
      is why BeOpen.com and CNRI are working together now to finalize
      a license.

   26.What about the copyrights? Will CNRI assign its copyright on
   Python to BeOpen.com or to Guido? If you say you want to clarify
   the legal status of the code, establishing a single copyright
   holder would go a long way toward achieving that!

      There is no need for a single copyright holder. Most composite
      works involve licensing of rights from parties that hold the
      rights to others that wish to make use of them. CNRI will retain
      copyright to its work on Python. CNRI has also worked to get wet
      signatures for major contributions to Python which assign rights
      to it, and email agreements to use minor contributions, so that
      it can license the bulk of the Python system for the public
      good. CNRI also worked with Guido van Rossum and CWI to clarify
      the legal status with respect to permissions for Python 1.2 and
      earlier versions.

August 23, 2000



From guido at beopen.com  Thu Aug 24 01:25:57 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 18:25:57 -0500
Subject: [Python-Dev] Re: anyone tried Python 2.0 with Tk 8.3.2?
In-Reply-To: Your message of "Wed, 23 Aug 2000 23:59:50 +0200."
             <000d01c00d4d$78a81c60$f2a6b5d4@hagrid> 
References: <000d01c00d4d$78a81c60$f2a6b5d4@hagrid> 
Message-ID: <200008232325.SAA03130@cj20424-a.reston1.va.home.com>

> > doesn't work too well for me -- Tkinter._test() tends to hang
> > when I press quit (not every time, though).  the only way to
> > shut down the process is to reboot.
> 
> hmm.  it looks like it's more likely to hang if the program
> uses unicode strings.
> 
>     Tkinter._test() hangs about 2 times out of three
> 
>     same goes for a simple test program that passes a
>     unicode string constant (containing Latin-1 chars)
>     to a Label
> 
>     the same test program using a Latin-1 string (which,
>     I suppose, is converted to Unicode inside Tk) hangs
>     in about 1/3 of the runs.
> 
>     the same test program with a pure ASCII string
>     never hangs...
> 
> confusing.

Try going back to Tk 8.2.

We had this problem with Tk 8.3.1 in Python 1.6a1; for a2, I went back
to 8.2.x (the latest).  Then for 1.6b1 I noticed that 8.3.2 was out
and after a light test it appeared to be fine, so I switched to
8.3.2.  But I've seen this too, and maybe 8.3 still isn't stable
enough.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug 24 01:28:03 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 18:28:03 -0500
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: Your message of "Wed, 23 Aug 2000 23:53:45 +0200."
             <20000823235345.C7566@xs4all.nl> 
References: <20000823235345.C7566@xs4all.nl> 
Message-ID: <200008232328.SAA03141@cj20424-a.reston1.va.home.com>

> While re-writing the PyNumber_InPlace*() functions in augmented assignment
> to something Guido and I agree on should be the Right Way, I found something
> that *might* be a bug. But I'm not sure.
> 
> The PyNumber_*() methods for binary operations (found in abstract.c) have
> the following construct:
> 
>         if (v->ob_type->tp_as_number != NULL) {
>                 PyObject *x = NULL;
>                 PyObject * (*f)(PyObject *, PyObject *);
>                 if (PyNumber_Coerce(&v, &w) != 0)
>                         return NULL;
>                 if ((f = v->ob_type->tp_as_number->nb_xor) != NULL)
>                         x = (*f)(v, w);
>                 Py_DECREF(v);
>                 Py_DECREF(w);
>                 if (f != NULL)
>                         return x;
>         }
> 
> (This is after a check if either argument is an instance object, so both are
> C objects here.) Now, I'm not sure how coercion is supposed to work, but I
> see one problem here: 'v' can be changed by PyNumber_Coerce(), and the new
> object's tp_as_number pointer could be NULL. I bet it's pretty unlikely that
> (numeric) coercion of a numeric object and an unspecified object turns up a
> non-numeric object, but I don't see anything guaranteeing it won't, either.
> 
> Is this a non-issue, or should I bother with adding the extra check in the
> current binary operations (and the new inplace ones) ?

I think this currently can't happen because coercions never return
non-numeric objects, but it sounds like a good sanity check to add.

Please check this in as a separate patch (not as part of the huge
augmented assignment patch).
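The shape of that sanity check can be sketched in a toy Python model of the slot-based dispatch. The names below (`as_number`, `number_xor`, `bad_coerce`) are illustrative stand-ins for the C structs and functions, not the real CPython API:

```python
# Toy model of a PyNumber_*-style binary operation with coercion.
# 'as_number' stands in for tp_as_number; all names are hypothetical.

class Num:
    """A 'numeric' object: it carries a slot table with an 'xor' entry."""
    def __init__(self, value):
        self.value = value
        self.as_number = {"xor": lambda v, w: Num(v.value ^ w.value)}

class Slotless:
    """What a pathological coercion might return: no slot table at all."""
    as_number = None

def bad_coerce(v, w):
    # Coercion is free to return *new* objects; here it hands back
    # an object without numeric slots -- exactly the feared case.
    return Slotless(), w

def number_xor(v, w):
    if v.as_number is not None:
        v, w = bad_coerce(v, w)
        # The extra sanity check: the coerced 'v' may have lost its
        # numeric slots, so re-test before dereferencing them.
        if v.as_number is None:
            raise TypeError("coercion returned a non-numeric object")
        f = v.as_number.get("xor")
        if f is not None:
            return f(v, w)
    raise TypeError("unsupported operand types")
```

Without the re-test, dereferencing `v.as_number` after the coercion would crash -- the Python-level analogue of the C-level NULL-pointer dereference being discussed.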

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Thu Aug 24 01:09:41 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 24 Aug 2000 01:09:41 +0200
Subject: [Python-Dev] Re: anyone tried Python 2.0 with Tk 8.3.2?
Message-ID: <200008232309.BAA01070@loewis.home.cs.tu-berlin.de>

> hmm.  it looks like it's more likely to hang if the program
> uses unicode strings.

Are you sure it hangs? It may just take a lot of time to determine
which font is best to display the strings.

Of course, if it is not done after an hour or so, it probably hangs...
Alternatively, a debugger could tell what it is actually doing.

Regards,
Martin



From thomas at xs4all.net  Thu Aug 24 01:15:20 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 01:15:20 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: <200008232328.SAA03141@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 23, 2000 at 06:28:03PM -0500
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com>
Message-ID: <20000824011519.D7566@xs4all.nl>

On Wed, Aug 23, 2000 at 06:28:03PM -0500, Guido van Rossum wrote:

> > Now, I'm not sure how coercion is supposed to work, but I see one
> > problem here: 'v' can be changed by PyNumber_Coerce(), and the new
> > object's tp_as_number pointer could be NULL. I bet it's pretty unlikely
> > that (numeric) coercion of a numeric object and an unspecified object
> > turns up a non-numeric object, but I don't see anything guaranteeing it
> > won't, either.

> I think this currently can't happen because coercions never return
> non-numeric objects, but it sounds like a good sanity check to add.

> Please check this in as a separate patch (not as part of the huge
> augmented assignment patch).

Alright, checking it in after 'make test' finishes. I'm also removing some
redundant PyInstance_Check() calls in PyNumber_Multiply: the first thing in
that function is a BINOP call, which expands to

        if (PyInstance_Check(v) || PyInstance_Check(w)) \
                return PyInstance_DoBinOp(v, w, opname, ropname, thisfunc)

So after the BINOP call, neither argument can be an instance, anyway.


Also, I'll take this opportunity to explain what I'm doing with the
PyNumber_InPlace* functions, for those that are interested. The comment I'm
placing in the code should be enough information:

/* The in-place operators are defined to fall back to the 'normal',
   non in-place operations, if the in-place methods are not in place, and to
   take class instances into account. This is how it is supposed to work:

   - If the left-hand-side object (the first argument) is an
     instance object, let PyInstance_DoInPlaceOp() handle it.  Pass the
     non in-place variant of the function as callback, because it will only
     be used if any kind of coercion has been done, and if an object has
     been coerced, it's a new object and shouldn't be modified in-place.

   - Otherwise, if the object has the appropriate struct members, and they
     are filled, call that function and return the result. No coercion is
     done on the arguments; the left-hand object is the one the operation is
     performed on, and it's up to the function to deal with the right-hand
     object.

   - Otherwise, if the second argument is an Instance, let
     PyInstance_DoBinOp() handle it, but not in-place. Again, pass the
     non in-place function as callback.

   - Otherwise, both arguments are C objects. Try to coerce them and call
     the ordinary (not in-place) function-pointer from the type struct.
     
   - Otherwise, we are out of options: raise a type error.

   */

If anyone sees room for unexpected behaviour under these rules, let me know
and you'll get an XS4ALL shirt! (Sorry, only ones I can offer ;)
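The first fallback rule (no in-place slot means use the ordinary operation) is observable from pure Python once augmented assignment is in place; a minimal sketch:

```python
class Plain:
    """Defines only the ordinary add: augmented assignment falls back."""
    def __init__(self, v):
        self.v = v
    def __add__(self, other):
        return Plain(self.v + other.v)   # returns a *new* object

class Mutating(Plain):
    """Adds an in-place method: '+=' now mutates the object itself."""
    def __iadd__(self, other):
        self.v += other.v
        return self

a = Plain(1)
before = a
a += Plain(2)           # no __iadd__: falls back to __add__
assert a.v == 3 and a is not before

b = Mutating(1)
before = b
b += Mutating(2)        # __iadd__ present: modified in place
assert b.v == 3 and b is before
```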

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From DavidA at ActiveState.com  Thu Aug 24 02:25:55 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Wed, 23 Aug 2000 17:25:55 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] [Announce] ActivePython 1.6 beta release (fwd)
Message-ID: <Pine.WNT.4.21.0008231725340.272-100000@loom>

It is my pleasure to announce the availability of the beta release of
ActivePython 1.6, build 100.

This binary distribution, based on Python 1.6b1, is available from
ActiveState's website at:

    http://www.ActiveState.com/Products/ActivePython/

ActiveState is committed to making Python easy to install and use on all
major platforms. ActivePython contains the convenience of swift
installation, coupled with commonly used modules, providing you with a
total package to meet your Python needs. Additionally, for Windows users,
ActivePython provides a suite of Windows tools, developed by Mark Hammond.

ActivePython is provided in convenient binary form for Windows, Linux and
Solaris under a variety of installation packages, available at:

    http://www.ActiveState.com/Products/ActivePython/Download.html

For support information, mailing list subscriptions and archives, a bug
reporting system, and fee-based technical support, please go to

    http://www.ActiveState.com/Products/ActivePython/

Please send us feedback regarding this release, either through the mailing
list or through direct email to ActivePython-feedback at ActiveState.com.

ActivePython is free, and redistribution of ActivePython within your
organization is allowed.  The ActivePython license is available at
http://www.activestate.com/Products/ActivePython/License_Agreement.html
and in the software packages.

We look forward to your comments and to making ActivePython suit your
Python needs in future releases.

Thank you,

-- David Ascher & the ActivePython team
   ActiveState Tool Corporation



From tim_one at email.msn.com  Thu Aug 24 05:39:43 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 23:39:43 -0400
Subject: [Python-Dev] [PEP 223]  Change the Meaning of \x Escapes
Message-ID: <LNBBLJKPBEHFEDALKOLCOEKFHBAA.tim_one@email.msn.com>

An HTML version of the attached can be viewed at

    http://python.sourceforge.net/peps/pep-0223.html

This will be adopted for 2.0 unless there's an uproar.  Note that it *does*
have potential for breaking existing code -- although no real-life instance
of incompatibility has yet been reported.  This is explained in detail in
the PEP; check your code now.

although-if-i-were-you-i-wouldn't-bother<0.5-wink>-ly y'rs  - tim


PEP: 223
Title: Change the Meaning of \x Escapes
Version: $Revision: 1.4 $
Author: tpeters at beopen.com (Tim Peters)
Status: Active
Type: Standards Track
Python-Version: 2.0
Created: 20-Aug-2000
Post-History: 23-Aug-2000


Abstract

    Change \x escapes, in both 8-bit and Unicode strings, to consume
    exactly the two hex digits following.  The proposal views this as
    correcting an original design flaw, leading to clearer expression
    in all flavors of string, a cleaner Unicode story, better
    compatibility with Perl regular expressions, and with minimal risk
    to existing code.


Syntax

    The syntax of \x escapes, in all flavors of non-raw strings, becomes

        \xhh

    where h is a hex digit (0-9, a-f, A-F).  The exact syntax in 1.5.2 is
    not clearly specified in the Reference Manual; it says

        \xhh...

    implying "two or more" hex digits, but one-digit forms are also
    accepted by the 1.5.2 compiler, and a plain \x is "expanded" to
    itself (i.e., a backslash followed by the letter x).  It's unclear
    whether the Reference Manual intended either of the 1-digit or
    0-digit behaviors.


Semantics

    In an 8-bit non-raw string,
        \xij
    expands to the character
        chr(int(ij, 16))
    Note that this is the same as in 1.6 and before.

    In a Unicode string,
        \xij
    acts the same as
        \u00ij
    i.e. it expands to the obvious Latin-1 character from the initial
    segment of the Unicode space.

    An \x not followed by at least two hex digits is a compile-time error,
    specifically ValueError in 8-bit strings, and UnicodeError (a subclass
    of ValueError) in Unicode strings.  Note that if an \x is followed by
    more than two hex digits, only the first two are "consumed".  In 1.6
    and before all but the *last* two were silently ignored.


Example

    In 1.5.2:

        >>> "\x123465"  # same as "\x65"
        'e'
        >>> "\x65"
        'e'
        >>> "\x1"
        '\001'
        >>> "\x\x"
        '\\x\\x'
        >>>

    In 2.0:

        >>> "\x123465" # \x12 -> \022, "3456" left alone
        '\0223456'
        >>> "\x65"
        'e'
        >>> "\x1"
        [ValueError is raised]
        >>> "\x\x"
        [ValueError is raised]
        >>>


History and Rationale

    \x escapes were introduced in C as a way to specify variable-width
    character encodings.  Exactly which encodings those were, and how many
    hex digits they required, was left up to each implementation.  The
    language simply stated that \x "consumed" *all* hex digits following,
    and left the meaning up to each implementation.  So, in effect, \x in C
    is a standard hook to supply platform-defined behavior.

    Because Python explicitly aims at platform independence, the \x escape
    in Python (up to and including 1.6) has been treated the same way
    across all platforms:  all *except* the last two hex digits were
    silently ignored.  So the only actual use for \x escapes in Python was
    to specify a single byte using hex notation.

    Larry Wall appears to have realized that this was the only real use for
    \x escapes in a platform-independent language, as the proposed rule for
    Python 2.0 is in fact what Perl has done from the start (although you
    need to run in Perl -w mode to get warned about \x escapes with fewer
    than 2 hex digits following -- it's clearly more Pythonic to insist on
    2 all the time).

    When Unicode strings were introduced to Python, \x was generalized so
    as to ignore all but the last *four* hex digits in Unicode strings.
    This caused a technical difficulty for the new regular expression
    engine:  SRE tries very hard to allow mixing 8-bit and Unicode
    patterns and strings in intuitive ways, and it no longer had any way
    to guess what, for example, r"\x123456" should mean as a pattern:  is
    it asking to match the 8-bit character \x56 or the Unicode character
    \u3456?

    There are hacky ways to guess, but it doesn't end there.  The ISO C99
    standard also introduces 8-digit \U12345678 escapes to cover the entire
    ISO 10646 character space, and it's also desired that Python 2 support
    that from the start.  But then what are \x escapes supposed to mean?
    Do they ignore all but the last *eight* hex digits then?  And if less
    than 8 following in a Unicode string, all but the last 4?  And if less
    than 4, all but the last 2?

    This was getting messier by the minute, and the proposal cuts the
    Gordian knot by making \x simpler instead of more complicated.  Note
    that the 4-digit generalization to \xijkl in Unicode strings was also
    redundant, because it meant exactly the same thing as \uijkl in Unicode
    strings.  It's more Pythonic to have just one obvious way to specify a
    Unicode character via hex notation.
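    The redundancy is easy to confirm in any Python that implements the
    proposal (shown here with Python 3 syntax, where every string
    literal is a Unicode string):

```python
# \xij and \u00ij denote the same code point from the Latin-1 range.
assert "\xe9" == "\u00e9"        # both are U+00E9, e-acute
assert ord("\xe9") == 0xE9
assert "\x41" == "A"             # exactly two hex digits are consumed
```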


Development and Discussion

    The proposal was worked out among Guido van Rossum, Fredrik Lundh and
    Tim Peters in email.  It was subsequently explained and discussed on
    Python-Dev under the subject "Go \x yourself", starting 2000-08-03.
    Response was overwhelmingly positive; no objections were raised.


Backward Compatibility

    Changing the meaning of \x escapes does carry risk of breaking existing
    code, although no instances of incompatibility have yet been discovered.
    The risk is believed to be minimal.

    Tim Peters verified that, except for pieces of the standard test suite
    deliberately provoking end cases, there are no instances of \xabcdef...
    with fewer or more than 2 hex digits following, in either the Python
    CVS development tree, or in assorted Python packages sitting on his
    machine.

    It's unlikely there are any with fewer than 2, because the Reference
    Manual implied they weren't legal (although this is debatable!).  If
    there are any with more than 2, Guido is ready to argue they were buggy
    anyway <0.9 wink>.

    Guido reported that the O'Reilly Python books *already* document that
    Python works the proposed way, likely due to their Perl editing
    heritage (as above, Perl worked (very close to) the proposed way from
    its start).

    Finn Bock reported that what JPython does with \x escapes is
    unpredictable today.  This proposal gives a clear meaning that can be
    consistently and easily implemented across all Python implementations.


Effects on Other Tools

    Believed to be none.  The candidates for breakage would mostly be
    parsing tools, but the author knows of none that worry about the
    internal structure of Python strings beyond the approximation "when
    there's a backslash, swallow the next character".  Tim Peters checked
    python-mode.el, the std tokenize.py and pyclbr.py, and the IDLE syntax
    coloring subsystem, and believes there's no need to change any of
    them.  Tools like tabnanny.py and checkappend.py inherit their immunity
    from tokenize.py.


Reference Implementation

    The code changes are so simple that a separate patch will not be
    produced.
    Fredrik Lundh is writing the code, is an expert in the area, and will
    simply check the changes in before 2.0b1 is released.


BDFL Pronouncements

    Yes, ValueError, not SyntaxError.  "Problems with literal
    interpretations traditionally raise 'runtime' exceptions rather
    than syntax errors."


Copyright

    This document has been placed in the public domain.





From guido at beopen.com  Thu Aug 24 07:34:15 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 00:34:15 -0500
Subject: [Python-Dev] [PEP 223] Change the Meaning of \x Escapes
In-Reply-To: Your message of "Wed, 23 Aug 2000 23:39:43 -0400."
             <LNBBLJKPBEHFEDALKOLCOEKFHBAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCOEKFHBAA.tim_one@email.msn.com> 
Message-ID: <200008240534.AAA00885@cj20424-a.reston1.va.home.com>

> An HTML version of the attached can be viewed at
> 
>     http://python.sourceforge.net/peps/pep-0223.html

Nice PEP!

> Effects on Other Tools
> 
>     Believed to be none.  [...]

I believe that Fredrik also needs to fix SRE's interpretation of \xhh.
Unless he's already done that.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Thu Aug 24 07:31:04 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 24 Aug 2000 01:31:04 -0400
Subject: [Python-Dev] [PEP 223] Change the Meaning of \x Escapes
In-Reply-To: <200008240534.AAA00885@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEKNHBAA.tim_one@email.msn.com>

[Guido]
> Nice PEP!

Thanks!  I thought the kids could stand a simple example of what you'd like
to read <wink>.

> I believe that Fredrik also needs to fix SRE's interpretation of \xhh.
> Unless he's already done that.

I'm sure he's acutely aware of that, since that's how this started!  And
he's implementing \x in strings too.  I knew you wouldn't read it to the end
<0.9 wink>.

put-the-refman-stuff-briefly-at-the-front-and-save-the-blather-for-
    the-end-ly y'rs  - tim





From ping at lfw.org  Thu Aug 24 11:14:12 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 24 Aug 2000 04:14:12 -0500 (CDT)
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import
 something as'
In-Reply-To: <200008231622.LAA02275@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008240353290.10936-100000@server1.lfw.org>

On Wed, 23 Aug 2000, Guido van Rossum wrote:
> [Ping]
> > Looks potentially useful to me.  If nothing else, it's certainly
> > easier to explain than any other behaviour i could think of, since
> > assignment is already well-understood.
> 
> KISS suggests not to add it.  We had a brief discussion about this at
> our 2.0 planning meeting and nobody there thought it would be worth
> it, and several of us felt it would be asking for trouble.

What i'm trying to say is that it's *easier* to explain "import as"
with Thomas' enhancement than without it.

The current explanation of "import <x> as <y>" is something like

    Find and import the module named <x> and assign it to <y>
    in the normal way you do assignment, except <y> has to be
    a pure name.

Thomas' suggestion lifts the restriction and makes the explanation
simpler than it would have been:

    Find and import the module named <x> and assign it to <y>
    in the normal way you do assignment.

"The normal way you do assignment" is shorthand for "decide
whether to assign to the local or global namespace depending on
whether <y> has been assigned to in the current scope, unless
<y> has been declared global with a 'global' statement" -- and
that applies in any case.  Luckily, it's a concept that has
been explained before and which Python programmers already
need to understand anyway.

The net effect is essentially a direct translation to

    <y> = __import__("<x>")

> > "import foo.bar as spam" makes me uncomfortable because:
> > 
> >     (a) It's not clear whether spam should get foo or foo.bar, as
> >         evidenced by the discussion between Gordon and Thomas.
> 
> As far as I recall that conversation, it's just that Thomas (more or
> less accidentally) implemented what was easiest from the
> implementation's point of view without thinking about what it should
> mean.  *Of course* it should mean what I said if it's allowed.  Even
> Thomas agrees to that now.

Careful:

    import foo.bar          "import the package named foo and its submodule bar,
                             then put *foo* into the current namespace"
    import foo.bar as spam  "import the package named foo and its submodule bar,
                             then put *bar* into the current namespace, as spam"

Only this case causes import to import a *different* object just because
you used "as".

    import foo              "import the module named foo, then put foo into
                             the current namespace"
    import foo as spam      "import the module named foo, then put foo into
                             the current namespace, as spam"

The above, and all the other forms of "import ... as", put the *same*
object into the current namespace as they would have done, without the
"as" clause.
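The two cases above can be checked directly in an interpreter that implements Guido's reading, using os and os.path as stand-ins for the package and submodule:

```python
import os
import os.path                 # "import foo.bar" binds *os* here

import os.path as spam        # "import foo.bar as spam" binds *os.path*
assert spam is os.path
assert spam is not os

import os as plain            # plain "import x as y": same object, new name
assert plain is os
```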

> >     (b) There's a straightforward and unambiguous way to express
> >         this already: "from foo import bar as spam".
> 
> Without syntax coloring that looks like word soup to me.
> 
>   import foo.bar as spam
> 
> uses fewer words to say the same thing more clearly.

But then:

        from foo import bar as spam    # give me bar, but name it spam
        import foo.bar as spam         # give me bar, but name it spam

are two ways to say the same thing -- but only if bar is a module.
If bar happens to be some other kind of symbol, the first works but
the second doesn't!

Not so without "as spam":

        from foo import bar            # give me bar
        import foo.bar                 # give me foo
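The name-binding difference Ping tabulates can be checked directly in a modern Python, using the stdlib `os.path` package purely as an example (fresh namespaces via `exec` so the bindings are easy to inspect):

```python
# Which name does each import form bind into the importing namespace?
ns = {}
exec("import os.path", ns)
assert "os" in ns and "path" not in ns      # dotted import binds the top-level package

ns = {}
exec("from os import path", ns)
assert "path" in ns and "os" not in ns      # from-import binds the attribute itself

ns = {}
exec("import os.path as p", ns)
assert ns["p"] is __import__("os").path     # "as" binds the *submodule*, as Guido says it should
```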

> > That is, would
> > 
> >     import foo.bar as spam
> > 
> > define just spam or both foo and spam?
> 
> Aargh!  Just spam, of course!

I apologize if this is annoying you.  I hope you see the inconsistency
that I'm trying to point out, though.  If you see it and decide that
it's okay to live with the inconsistency, that's okay with me.


-- ?!ng




From thomas at xs4all.net  Thu Aug 24 12:18:58 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 12:18:58 +0200
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import something as'
In-Reply-To: <Pine.LNX.4.10.10008240353290.10936-100000@server1.lfw.org>; from ping@lfw.org on Thu, Aug 24, 2000 at 04:14:12AM -0500
References: <200008231622.LAA02275@cj20424-a.reston1.va.home.com> <Pine.LNX.4.10.10008240353290.10936-100000@server1.lfw.org>
Message-ID: <20000824121858.E7566@xs4all.nl>

On Thu, Aug 24, 2000 at 04:14:12AM -0500, Ka-Ping Yee wrote:

> The current explanation of "import <x> as <y>" is something like

>     Find and import the module named <x> and assign it to <y>
>     in the normal way you do assignment, except <y> has to be
>     a pure name.

> Thomas' suggestion lifts the restriction and makes the explanation
> simpler than it would have been:

>     Find and import the module named <x> and assign it to <y>
>     in the normal way you do assignment.

> "The normal way you do assignment" is shorthand for "decide
> whether to assign to the local or global namespace depending on
> whether <y> has been assigned to in the current scope, unless
> <y> has been declared global with a 'global' statement" -- and
> that applies in any case.  Luckily, it's a concept that has
> been explained before and which Python programmers already
> need to understand anyway.

This is not true. The *current* situation already does the local/global
namespace trick, except that 'import ..' *is* a local assignment, so the
resulting name is always local (unless there is a "global" statement.)

My patch wouldn't change that one bit. It would only expand the allowable
expressions in the 'as' clause: is it a normal name-binding assignment (like
now), or a subscription-assignment, or a slice-assignment, or an
attribute-assignment. In other words, all types of assignment.
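(Thomas's extension never made it into the released language, so a form like `import sys as cfg["mod"]` remains hypothetical. The binding it would have produced can already be spelled today through `__import__`, which is exactly the translation Ping's explanation relies on; the dict key `"mod"` here is purely illustrative:)

```python
# Hypothetical under the proposed extension:   import sys as cfg["mod"]
# Spelled manually, as a subscription-assignment target:
cfg = {}
cfg["mod"] = __import__("sys")
assert cfg["mod"].version       # the sys module, bound into a dict slot
```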

> The net effect is essentially a direct translation to

>     <y> = __import__("<x>")

Exactly :)

> Careful:

>     import foo.bar          "import the package named foo and its
>                              submodule bar, then put *foo* into the
>                              current namespace"

Wrong. What it does is: import the package named foo and its submodule bar,
and make it so you can access foo.bar via the name 'foo.bar'. That this has
to put 'foo' in the local namespace is a side issue :-) And when seen like
that,

>     import foo.bar as spam  "import the package named foo and its
>                              submodule bar, then put *bar* into the
>                              current namespace, as spam"

Becomes obvious as well.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Thu Aug 24 13:22:32 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 24 Aug 2000 13:22:32 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com> <20000824011519.D7566@xs4all.nl>
Message-ID: <39A50578.C08B9F14@lemburg.com>

Thomas Wouters wrote:
> 
> On Wed, Aug 23, 2000 at 06:28:03PM -0500, Guido van Rossum wrote:
> 
> > > Now, I'm not sure how coercion is supposed to work, but I see one
> > > problem here: 'v' can be changed by PyNumber_Coerce(), and the new
> > > object's tp_as_number pointer could be NULL. I bet it's pretty unlikely
> > > that (numeric) coercion of a numeric object and an unspecified object
> > > turns up a non-numeric object, but I don't see anything guaranteeing it
> > > won't, either.
> 
> > I think this currently can't happen because coercions never return
> > non-numeric objects, but it sounds like a good sanity check to add.
> 
> > Please check this in as a separate patch (not as part of the huge
> > augmented assignment patch).
> 
> Alright, checking it in after 'make test' finishes. I'm also removing some
> redundant PyInstance_Check() calls in PyNumber_Multiply: the first thing in
> that function is a BINOP call, which expands to
> 
>         if (PyInstance_Check(v) || PyInstance_Check(w)) \
>                 return PyInstance_DoBinOp(v, w, opname, ropname, thisfunc)
> 
> So after the BINOP call, neither argument can be an instance, anyway.
> 
> Also, I'll take this opportunity to explain what I'm doing with the
> PyNumber_InPlace* functions, for those that are interested. The comment I'm
> placing in the code should be enough information:
> 
> /* The in-place operators are defined to fall back to the 'normal',
>    non in-place operations, if the in-place methods are not in place, and to
>    take class instances into account. This is how it is supposed to work:
> 
>    - If the left-hand-side object (the first argument) is an
>      instance object, let PyInstance_DoInPlaceOp() handle it.  Pass the
>      non in-place variant of the function as callback, because it will only
>      be used if any kind of coercion has been done, and if an object has
>      been coerced, it's a new object and shouldn't be modified in-place.
> 
>    - Otherwise, if the object has the appropriate struct members, and they
>      are filled, call that function and return the result. No coercion is
>      done on the arguments; the left-hand object is the one the operation is
>      performed on, and it's up to the function to deal with the right-hand
>      object.
> 
>    - Otherwise, if the second argument is an Instance, let
>      PyInstance_DoBinOp() handle it, but not in-place. Again, pass the
>      non in-place function as callback.
> 
>    - Otherwise, both arguments are C objects. Try to coerce them and call
>      the ordinary (not in-place) function-pointer from the type struct.
> 
>    - Otherwise, we are out of options: raise a type error.
> 
>    */
> 
> If anyone sees room for unexpected behaviour under these rules, let me know
> and you'll get an XS4ALL shirt! (Sorry, only ones I can offer ;)
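The fall-back behaviour those rules describe survives in today's Python, where `__iadd__` plays the role of the in-place slot. A minimal sketch of the first rule (no in-place method defined, so the ordinary operation runs and the name is rebound to a new object):

```python
class Counter:
    def __init__(self, n):
        self.n = n
    def __add__(self, other):          # ordinary, non in-place addition
        return Counter(self.n + other)

c = Counter(1)
old = c
c += 2                                 # no __iadd__: falls back to __add__
assert c.n == 3 and c is not old       # a new object was bound; nothing mutated in place
```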

I just hope that with all these new operators you haven't
closed the door for switching to argument based handling of
coercion.

One of these days (probably for 2.1), I would like to write up the
proposal I made on my Python Pages about a new coercion mechanism
as PEP. The idea behind it is to only use centralized coercion
as fall-back solution in case the arguments can't handle the
operation with the given type combination.

To implement this, all builtin types will have to be changed
to support mixed type argument slot functions (this ability will
be signalled to the interpreter using a type flag).

More info on the proposal page at:

  http://starship.python.net/crew/lemburg/CoercionProposal.html

Is this still possible under the new code you've added ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Thu Aug 24 13:37:28 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 24 Aug 2000 13:37:28 +0200
Subject: Patch 100899 [Unicode compression] (was RE: [Python-Dev] 2.0 Release 
 Plans)
References: <14756.4118.865603.363166@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com> <20000823205920.A7566@xs4all.nl>
Message-ID: <39A508F8.44C921D4@lemburg.com>

Thomas Wouters wrote:
> 
> On Wed, Aug 23, 2000 at 02:32:20PM -0400, Tim Peters wrote:
> > [Jeremy Hylton]
> > > I would like to see some compression in the release, but agree that it
> > > is not an essential optimization.  People have talked about it for a
> > > couple of months, and we haven't found someone to work on it because
> > > at various times pirx and /F said they were working on it.
> > >
> > > If we don't hear from /F by tomorrow promising he will finish it before
> > > the beta release, let's postpone it.
> 
> > There was an *awful* lot of whining about the size increase without this
> > optimization, and the current situation violates the "no compiler warnings!"
> > rule too (at least under MSVC 6).
> 
> For the record, you can't compile unicodedatabase.c with g++ because of its
> size: g++ complains that the switch is too large to compile. Under gcc it
> compiles, but only by trying really really hard, and I don't know how it
> performs under other versions of gcc (in particular more heavily optimizing
> ones -- might run into other limits in those situations.)

Are you sure this is still true with the latest CVS tree version ?

I split the unicodedatabase.c static array into chunks of
4096 entries each -- that should really be manageable by all
compilers.

But perhaps you are talking about the switch in unicodectype.c 
(there are no large switches in unicodedatabase.c) ? In that
case, Jack Jansen has added a macro switch which breaks that
switch into multiple parts too (see the top of that file).

It should be no problem adding a few more platforms to the list
of platforms which have this switch defined per default (currently
Macs and MS Win64).

I see no problem taking the load off of Fredrik and postponing
the patch to 2.1.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Thu Aug 24 16:00:56 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 09:00:56 -0500
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: Your message of "Thu, 24 Aug 2000 13:22:32 +0200."
             <39A50578.C08B9F14@lemburg.com> 
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com> <20000824011519.D7566@xs4all.nl>  
            <39A50578.C08B9F14@lemburg.com> 
Message-ID: <200008241400.JAA01806@cj20424-a.reston1.va.home.com>

> I just hope that with all these new operators you haven't
> closed the door for switching to argument based handling of
> coercion.

Far from it!  Actually, the inplace operators won't do any coercions
when the left argument supports the inplace version, and otherwise
exactly the same rules apply as for the non-inplace version.  (I
believe this isn't in the patch yet, but it will be when Thomas checks
it in.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Thu Aug 24 15:14:55 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 15:14:55 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: <200008241400.JAA01806@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 24, 2000 at 09:00:56AM -0500
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com> <20000824011519.D7566@xs4all.nl> <39A50578.C08B9F14@lemburg.com> <200008241400.JAA01806@cj20424-a.reston1.va.home.com>
Message-ID: <20000824151455.F7566@xs4all.nl>

On Thu, Aug 24, 2000 at 09:00:56AM -0500, Guido van Rossum wrote:
> > I just hope that with all these new operators you haven't
> > closed the door for switching to argument based handling of
> > coercion.

> Far from it!  Actually, the inplace operators won't do any coercions
> when the left argument supports the inplace version, and otherwise
> exactly the same rules apply as for the non-inplace version.  (I
> believe this isn't in the patch yet, but it will be when Thomas checks
> it in.)

Exactly. (Actually, I'm again re-working the patch: If I do it the way I
intended to, you'd sometimes get the 'non in-place' error messages, instead
of the in-place ones. But the result will be the same.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Thu Aug 24 17:52:35 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 24 Aug 2000 11:52:35 -0400 (EDT)
Subject: [Python-Dev] Need help with SF bug #112558
Message-ID: <14757.17603.237768.174359@cj42289-a.reston1.va.home.com>

  I'd like some help with fixing a bug in dictobject.c.  The bug is on
SourceForge as #112558, and my attempted fix is SourceForge patch
#101277.
  The original bug is that exceptions raised by an object's __cmp__()
during dictionary lookup are not cleared, and can be propagated during
a subsequent lookup attempt.  I've made more detailed comments at
SourceForge at the patch:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470

  Thanks for any suggestions!
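  (For anyone reading along: the failure mode is easy to reproduce with a key
whose comparison raises. In current Pythons the exception propagates cleanly
out of the *first* lookup -- the bug being fixed here was that it leaked into
a later, unrelated lookup instead. A sketch, with `__eq__` standing in for
the era's `__cmp__`:)

```python
class Loud:
    """Key whose comparison always raises, forcing the error path."""
    def __hash__(self):
        return 1                        # constant hash: collisions force comparisons
    def __eq__(self, other):
        raise ValueError("comparison blew up")

d = {Loud(): "v"}                       # inserting into an empty dict needs no comparison
try:
    d[Loud()]                           # lookup collides with the stored key; __eq__ raises
except ValueError:
    pass                                # the comparison error surfaces at the lookup itself
else:
    raise AssertionError("expected the comparison error to propagate")
```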


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From mal at lemburg.com  Thu Aug 24 18:53:35 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 24 Aug 2000 18:53:35 +0200
Subject: [Python-Dev] Need help with SF bug #112558
References: <14757.17603.237768.174359@cj42289-a.reston1.va.home.com>
Message-ID: <39A5530F.FF2DA5C4@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
>   I'd like some help with fixing a bug in dictobject.c.  The bug is on
> SourceForge as #112558, and my attempted fix is SourceForge patch
> #101277.
>   The original bug is that exceptions raised by an object's __cmp__()
> during dictionary lookup are not cleared, and can be propagated during
> a subsequent lookup attempt.  I've made more detailed comments at
> SourceForge at the patch:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470
> 
>   Thanks for any suggestions!

Here are some:

* Please be very careful when patching this area of the interpreter:
  it is *very* performance sensitive.

* I'd remove the cmp variable and do a PyErr_Occurred() directly
  in all cases where PyObject_Compare() returns != 0.

* Exceptions during dict lookups are rare. I'm not sure about
  failing lookups... Vladimir ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From trentm at ActiveState.com  Thu Aug 24 19:46:27 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 24 Aug 2000 10:46:27 -0700
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
Message-ID: <20000824104627.C15992@ActiveState.com>

Hey all,

I recently checked in the Monterey stuff (patch
http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101249&group_id=5470
) but the checkin did not show up on python-checkins and the comment and
status change to "Closed" did not show up on python-patches. My checkin was
about a full day ago.

Is this a potential SourceForge bug? The delay can't be *that* long.

Regards,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From fdrake at beopen.com  Thu Aug 24 20:39:01 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 24 Aug 2000 14:39:01 -0400 (EDT)
Subject: [Python-Dev] CVS patch fixer?
Message-ID: <14757.27589.366614.231055@cj42289-a.reston1.va.home.com>

  Someone (don't remember who) posted a Perl script to either this
list or the patches list, perhaps a month or so ago(?), which could
massage a CVS-generated patch to make it easier to apply.
  Can anyone provide a copy of this, or a link to it?
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Thu Aug 24 21:50:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 21:50:53 +0200
Subject: [Python-Dev] Augmented assignment
Message-ID: <20000824215053.G7566@xs4all.nl>

I've finished rewriting the PyNumber_InPlace*() calls in the augmented
assignment patch and am about to check the entire thing in. I'll be checking
it in in parts, with the grammar/compile/ceval things last, but you might
get some weird errors in the next hour or so, depending on my link to
sourceforge. (I'm doing some last minute checks before checking it in ;)

Part of it will be docs, but not terribly much yet. I'm still working on
those, though, and I have a bit over a week before I leave on vacation, so I
think I can finish them for the most part. I'm also checking in a test
case, and some modifications to the std library: support for += in UserList,
UserDict, UserString, and rfc822.AddressList. Reviewers are more than
welcome, though I realize how large a patch it is. (Boy, do I realize that!)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Thu Aug 24 23:45:53 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 16:45:53 -0500
Subject: [Python-Dev] Augmented assignment
In-Reply-To: Your message of "Thu, 24 Aug 2000 21:50:53 +0200."
             <20000824215053.G7566@xs4all.nl> 
References: <20000824215053.G7566@xs4all.nl> 
Message-ID: <200008242145.QAA01306@cj20424-a.reston1.va.home.com>

Congratulations, Thomas!  Megathanks for carrying this proposal to a
happy ending.  I'm looking forward to using the new feature!

Nits: Lib/symbol.py and Lib/token.py need to be regenerated and
checked in; (see the comments at the top of the file).

Also, tokenize.py probably needs to have the new tokens += etc. added
manually.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Thu Aug 24 23:09:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 23:09:49 +0200
Subject: [Python-Dev] Augmented assignment
In-Reply-To: <200008242145.QAA01306@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 24, 2000 at 04:45:53PM -0500
References: <20000824215053.G7566@xs4all.nl> <200008242145.QAA01306@cj20424-a.reston1.va.home.com>
Message-ID: <20000824230949.O4798@xs4all.nl>

On Thu, Aug 24, 2000 at 04:45:53PM -0500, Guido van Rossum wrote:

> Nits: Lib/symbol.py and Lib/token.py need to be regenerated and
> checked in; (see the comments at the top of the file).

Checking them in now.

> Also, tokenize.py probably needs to have the new tokens += etc. added
> manually.

Okay. I'm not entirely sure how to do this, but I *think* this does it:
replace

Operator = group('\+', '\-', '\*\*', '\*', '\^', '~', '/', '%', '&', '\|',
                 '<<', '>>', '==', '<=', '<>', '!=', '>=', '=', '<', '>')

with

Operator = group('\+=', '\-=', '\*=', '%=', '/=', '\*\*=', '&=', '\|=',
                 '\^=', '>>=', '<<=', '\+', '\-', '\*\*', '\*', '\^', '~',
                 '/', '%', '&', '\|', '<<', '>>', '==', '<=', '<>', '!=',
                 '>=', '=', '<', '>')

Placing the augmented-assignment operators at the end doesn't work, but this
seems to do the trick. However, I can't really test this module, just check
its output. It seems okay, but I would appreciate either an 'okay' or a
more extensive test before checking it in. No, I can't start IDLE right now,
I'm working over a 33k6 leased line and my home machine doesn't have an
augmented Python yet :-)
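The ordering constraint Thomas ran into is a general property of Python's regex alternation: it is leftmost-first, not longest-match, so a plain `+` listed before `+=` always wins and `+=` can never match. A minimal demonstration:

```python
import re

bad  = re.compile(r"\+|\+=")    # '+' listed first: '+=' can never win
good = re.compile(r"\+=|\+")    # longest alternative first

assert bad.match("+=").group() == "+"     # alternation stops at the first branch that matches
assert good.match("+=").group() == "+="   # reordering lets the longer operator match
```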

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jack at oratrix.nl  Thu Aug 24 23:35:38 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Thu, 24 Aug 2000 23:35:38 +0200
Subject: [Python-Dev] sre and regexp behave badly under low-memory conditions
Message-ID: <20000824213543.5D902D71F9@oratrix.oratrix.nl>

Both regexp and sre don't behave well under low-memory conditions.

I noticed this because test_longexp basically ate all my memory (sigh, 
I think I'll finally have to give up my private memory allocator and
take the 15% performance hit, until I find the time to dig into
Vladimir's stuff) so the rest of the regressions tests ran under very
tight memory conditions.

test_re wasn't so bad, the only problem was that it crashed with a
"NULL return without an exception". test_regexp was worse, it crashed
my machine.

If someone feels the urge maybe they could run the testsuite on unix
with a sufficiently low memory-limit.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From jeremy at beopen.com  Fri Aug 25 00:17:56 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 24 Aug 2000 18:17:56 -0400 (EDT)
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
In-Reply-To: <20000824104627.C15992@ActiveState.com>
References: <20000824104627.C15992@ActiveState.com>
Message-ID: <14757.40724.86552.609923@bitdiddle.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

  TM> Hey all,

  TM> I recently checked in the Monterey stuff (patch
  TM> http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101249&group_id=5470
  TM> ) but the checkin did not show up on python-checkins and the
  TM> comment and status change to "Closed" did not show up on
  TM> python-patches. My checkin was about a full day ago.

  TM> Is this a potential SourceForge bug? The delay can't be *that*
  TM> long.

Weird.  I haven't even received the message quoted above.  There's
something very weird going on.

I have not seen a checkin message for a while, though I have made a
few checkins myself.  It looks like the problem I'm seeing here is
somewhere between python.org and beopen.com, because the messages are in
the archive.

The problem you are seeing is different.  The most recent checkin
message from you is dated Aug. 16.  Could it be a problem with your
local mail?  The message would be sent from your account.  Perhaps
there is more info in your system's mail log.

Jeremy




From skip at mojam.com  Fri Aug 25 00:05:20 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 24 Aug 2000 17:05:20 -0500 (CDT)
Subject: [Python-Dev] Check your "Accepted" patches
Message-ID: <14757.39968.498536.643301@beluga.mojam.com>

There are 8 patches with status "Accepted".  They are assigned to akuchling,
bwarsaw, jhylton, fdrake, ping and prescod.  I had not been paying attention
to that category and then saw this in the Open Items of PEP 0200:

    Get all patches out of Accepted.

I checked and found one of mine there.

Skip




From trentm at ActiveState.com  Fri Aug 25 00:32:55 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 24 Aug 2000 15:32:55 -0700
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
In-Reply-To: <14757.40724.86552.609923@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 24, 2000 at 06:17:56PM -0400
References: <20000824104627.C15992@ActiveState.com> <14757.40724.86552.609923@bitdiddle.concentric.net>
Message-ID: <20000824153255.B27016@ActiveState.com>

On Thu, Aug 24, 2000 at 06:17:56PM -0400, Jeremy Hylton wrote:
> >>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:
>   TM> I recently checked in the Monterey stuff (patch
>   TM> http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101249&group_id=5470
>   TM> ) but the checkin did not show up on python-checkins and the
>   TM> comment and status change to "Closed" did not show up on
>   TM> python-patches. My checkin was about a full day ago.
> 
> I have not seen a checkin message for a while, though I have made a
> few checkins myself.  It looks like the problem I'm seeing here is
> with between python.org and beopen.com, because the messages are in
> the archive.
> 
> The problem you are seeing is different.  The most recent checkin
> message from you is dated Aug. 16.  Could it be a problem with your
> local mail?  The message would be sent from you account.  Perhaps

The cvs checkin message is made from my local machine?! Really? I thought
that would be on the server side. Our email *is* a little backed up here but
I don't think *that* backed up.

In any case, that does not explain why patches at python.org did not get a mail
regarding my update of the patch on SourceForge. *Two* emails have gone
astray here.

I am really not so curious that I want to hunt it down. Just a heads up for
people.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From jeremy at beopen.com  Fri Aug 25 00:44:27 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 24 Aug 2000 18:44:27 -0400 (EDT)
Subject: [Python-Dev] two tests fail
Message-ID: <14757.42315.528801.142803@bitdiddle.concentric.net>

After the augmented assignment checkin (yay!), I see two failing
tests: test_augassign and test_parser.  Do you see the same problem?

Jeremy



From thomas at xs4all.net  Fri Aug 25 00:50:35 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 00:50:35 +0200
Subject: [Python-Dev] Re: two tests fail
In-Reply-To: <14757.42315.528801.142803@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 24, 2000 at 06:44:27PM -0400
References: <14757.42315.528801.142803@bitdiddle.concentric.net>
Message-ID: <20000825005035.P4798@xs4all.nl>

On Thu, Aug 24, 2000 at 06:44:27PM -0400, Jeremy Hylton wrote:
> After the augmented assignment checkin (yay!), I see two failing
> tests: test_augassign and test_parser.  Do you see the same problem?

Hm, neither is failing, for me, in a tree that has no differences with the
CVS tree according to CVS itself. I'll see if I can reproduce it by
using a different tree, just to be sure.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jeremy at beopen.com  Fri Aug 25 00:56:15 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 24 Aug 2000 18:56:15 -0400 (EDT)
Subject: [Python-Dev] Re: two tests fail
In-Reply-To: <20000825005035.P4798@xs4all.nl>
References: <14757.42315.528801.142803@bitdiddle.concentric.net>
	<20000825005035.P4798@xs4all.nl>
Message-ID: <14757.43023.497909.568824@bitdiddle.concentric.net>

Oops.  My mistake.  I hadn't rebuilt the parser.

Jeremy



From thomas at xs4all.net  Fri Aug 25 00:53:18 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 00:53:18 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib UserString.py,1.5,1.6
In-Reply-To: <200008242147.OAA05606@slayer.i.sourceforge.net>; from nowonder@users.sourceforge.net on Thu, Aug 24, 2000 at 02:47:36PM -0700
References: <200008242147.OAA05606@slayer.i.sourceforge.net>
Message-ID: <20000825005318.H7566@xs4all.nl>

On Thu, Aug 24, 2000 at 02:47:36PM -0700, Peter Schneider-Kamp wrote:
> Update of /cvsroot/python/python/dist/src/Lib
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv5582
> 
> Modified Files:
> 	UserString.py 
> Log Message:
> 
> simple typo that makes regression test test_userstring fail

WTF ? Hmm. I was pretty damned sure I'd fixed that one. I saw it two
times, fixed it in two trees at least, but apparently not the one I committed
:P I'll get some sleep, soon :P

> ***************
> *** 56,60 ****
>           elif isinstance(other, StringType) or isinstance(other, UnicodeType):
>               self.data += other
> !         else
>               self.data += str(other)
>           return self
> --- 56,60 ----
>           elif isinstance(other, StringType) or isinstance(other, UnicodeType):
>               self.data += other
> !         else:
>               self.data += str(other)
>           return self


-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug 25 01:03:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 01:03:49 +0200
Subject: [Python-Dev] Re: two tests fail
In-Reply-To: <14757.43023.497909.568824@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 24, 2000 at 06:56:15PM -0400
References: <14757.42315.528801.142803@bitdiddle.concentric.net> <20000825005035.P4798@xs4all.nl> <14757.43023.497909.568824@bitdiddle.concentric.net>
Message-ID: <20000825010349.Q4798@xs4all.nl>

On Thu, Aug 24, 2000 at 06:56:15PM -0400, Jeremy Hylton wrote:

> Oops.  My mistake.  I hadn't rebuilt the parser.

Well, you were on to something, of course. The parsermodule will have to be
modified to accept augmented assignment as well. (Or at least, so I assume.)
The test just doesn't test that part yet ;-) Fred, do you want me to do
that? I'm not sure on the parsermodule internals, but maybe if you can give
me some pointers I can work it out.

(The same goes for Tools/compiler/compiler, by the way, which I think also
needs to be taught list comprehensions.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From ping at lfw.org  Fri Aug 25 01:38:02 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 24 Aug 2000 19:38:02 -0400 (EDT)
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import
 something as'
In-Reply-To: <20000824121858.E7566@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10008241935250.1061-100000@skuld.lfw.org>

On Thu, 24 Aug 2000, Thomas Wouters wrote:
> >     import foo.bar          "import the package named foo and its
> >                              submodule bar, then put *foo* into the
> >                              current namespace"
> 
> Wrong. What it does is: import the package named foo and its submodule bar,
> and make it so you can access foo.bar via the name 'foo.bar'. That this has
> to put 'foo' in the local namespace is a side issue

I understand now.  Sorry for my thickheadedness.  Yes, when I look
at it as "please give this to me as foo.bar", it makes much more sense.

Apologies, Guido.  That's two brain-farts in a day or so.  :(


-- ?!ng




From fdrake at beopen.com  Fri Aug 25 01:36:54 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 24 Aug 2000 19:36:54 -0400 (EDT)
Subject: [Python-Dev] two tests fail
In-Reply-To: <14757.42315.528801.142803@bitdiddle.concentric.net>
References: <14757.42315.528801.142803@bitdiddle.concentric.net>
Message-ID: <14757.45462.717663.782865@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > After the augmented assignment checkin (yay!), I see two failing
 > tests: test_augassign and test_parser.  Do you see the same problem?

  I'll be taking care of the parser module update tonight (late) or
tomorrow morning.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From akuchlin at cnri.reston.va.us  Fri Aug 25 03:32:47 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 24 Aug 2000 21:32:47 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules pyexpat.c,2.12,2.13
In-Reply-To: <200008242157.OAA06909@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Thu, Aug 24, 2000 at 02:57:46PM -0700
References: <200008242157.OAA06909@slayer.i.sourceforge.net>
Message-ID: <20000824213247.A2318@newcnri.cnri.reston.va.us>

On Thu, Aug 24, 2000 at 02:57:46PM -0700, Fred L. Drake wrote:
>Remove the Py_FatalError() from initpyexpat(); the Guido has decreed
>that this is not appropriate.

So what is going to catch errors while initializing a module?  Or is
PyErr_Occurred() called after a module's init*() function?

--amk



From MarkH at ActiveState.com  Fri Aug 25 03:56:10 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 25 Aug 2000 11:56:10 +1000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules pyexpat.c,2.12,2.13
In-Reply-To: <20000824213247.A2318@newcnri.cnri.reston.va.us>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCEHADGAA.MarkH@ActiveState.com>

Andrew writes:

> On Thu, Aug 24, 2000 at 02:57:46PM -0700, Fred L. Drake wrote:
> >Remove the Py_FatalError() from initpyexpat(); the Guido has decreed
> >that this is not appropriate.
>
> So what is going to catch errors while initializing a module?  Or is
> PyErr_Occurred() called after a module's init*() function?

Yes!  All errors are handled correctly (as of somewhere in the 1.5 family,
I believe)

Note that Py_FatalError() is _evil_ - it can make your program die without
a chance to see any error message or other diagnostic.  It should be
avoided if at all possible.

Mark.




From guido at beopen.com  Fri Aug 25 06:11:54 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 23:11:54 -0500
Subject: [Python-Dev] sre and regexp behave badly under low-memory conditions
In-Reply-To: Your message of "Thu, 24 Aug 2000 23:35:38 +0200."
             <20000824213543.5D902D71F9@oratrix.oratrix.nl> 
References: <20000824213543.5D902D71F9@oratrix.oratrix.nl> 
Message-ID: <200008250411.XAA08797@cj20424-a.reston1.va.home.com>

> test_re wasn't so bad, the only problem was that it crashed with a
> "NULL return without an exception". test_regexp was worse, it crashed
> my machine.

That's regex, right?  regexp was the *really* old regular expression
module we once had.

Anyway, I don't care about regex, it's old.

The sre code needs to be robustified, but it's not a high priority for
me.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Aug 25 06:19:39 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 23:19:39 -0500
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
In-Reply-To: Your message of "Thu, 24 Aug 2000 15:32:55 MST."
             <20000824153255.B27016@ActiveState.com> 
References: <20000824104627.C15992@ActiveState.com> <14757.40724.86552.609923@bitdiddle.concentric.net>  
            <20000824153255.B27016@ActiveState.com> 
Message-ID: <200008250419.XAA08826@cj20424-a.reston1.va.home.com>

> In any case, that does not explain why patches at python.org did not get a mail
> regarding my update of the patch on SourceForge. *Two* emails have gone
> astray here.

This is compensated for, though, by the patch and bug managers, which
often send me two or three copies of the email for each change to an
entry.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From guido at beopen.com  Fri Aug 25 07:58:15 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 00:58:15 -0500
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: Your message of "Thu, 24 Aug 2000 14:41:55 EST."
Message-ID: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>

Here's a patch that Tim & I believe should solve the thread+fork
problem properly.  I'll try to explain it briefly.

I'm not checking this in yet because I need more eyeballs, and because
I don't actually have a test to prove that I've fixed the problem.
However, our theory is very hopeful.

(1) BACKGROUND: A Python lock may be released by a different thread
than the one that acquired it, and it may be acquired by the same thread
multiple times.  A pthread mutex must always be unlocked by the same
thread that locked it, and can't be locked more than once.  So, a
Python lock can't be built out of a simple pthread mutex; instead, a
Python lock is built out of a "locked" flag and a <condition variable,
mutex> pair.  The mutex is locked for at most a few cycles, to protect
the flag.  This design is Tim's (while still at KSR).

(2) PROBLEM: If you fork while another thread holds a mutex, that
mutex will never be released, because only the forking thread survives
in the child.  The LinuxThreads manual recommends using
pthread_atfork() to acquire all locks in locking order before the
fork and release them afterwards.  A problem with Tim's design here
is that even if the forking thread has Python's global interpreter
lock, another thread trying to acquire the lock may still hold the
mutex at the time of the fork, causing it to be held forever in the
child.  Charles has posted an effective hack that allocates a new
global interpreter lock in the child, but this doesn't solve the
problem for other locks.

(3) BRAINWAVE: If we use a single mutex shared by all locks, instead
of a mutex per lock, we can lock this mutex around the fork and thus
prevent any other thread from locking it.  This is okay because, while
a condition variable always needs a mutex to go with it, there's no
rule that the same mutex can't be shared by many condition variables.
The code below implements this.

(4) MORE WORK: (a) The PyThread API also defines semaphores, which may
have a similar problem.  But I'm not aware of any use of these (I'm
not quite sure why semaphore support was added), so I haven't patched
these.  (b) The thread_pth.h file defines locks in the same way; there
may be others too.  I haven't touched these.

(5) TESTING: Charles Waldman posted this code to reproduce the
problem.  Unfortunately I haven't had much success with it; it seems
to hang even when I apply Charles' patch.

    import thread
    import os, sys
    import time

    def doit(name):
	while 1:
	    if os.fork()==0:
		print name, 'forked', os.getpid()
		os._exit(0)
	    r = os.wait()

    for x in range(50):
	name = 't%s'%x
	print 'starting', name
	thread.start_new_thread(doit, (name,))

    time.sleep(300)

Here's the patch:

*** Python/thread_pthread.h	2000/08/23 21:33:05	2.29
--- Python/thread_pthread.h	2000/08/25 04:29:43
***************
*** 84,101 ****
   * and a <condition, mutex> pair.  In general, if the bit can be acquired
   * instantly, it is, else the pair is used to block the thread until the
   * bit is cleared.     9 May 1994 tim at ksr.com
   */
  
  typedef struct {
  	char             locked; /* 0=unlocked, 1=locked */
  	/* a <cond, mutex> pair to handle an acquire of a locked lock */
  	pthread_cond_t   lock_released;
- 	pthread_mutex_t  mut;
  } pthread_lock;
  
  #define CHECK_STATUS(name)  if (status != 0) { perror(name); error = 1; }
  
  /*
   * Initialization.
   */
  
--- 84,125 ----
   * and a <condition, mutex> pair.  In general, if the bit can be acquired
   * instantly, it is, else the pair is used to block the thread until the
   * bit is cleared.     9 May 1994 tim at ksr.com
+  *
+  * MODIFICATION: use a single mutex shared by all locks.
+  * This should make it easier to cope with fork() while threads exist.
+  * 24 Aug 2000 {guido,tpeters}@beopen.com
   */
  
  typedef struct {
  	char             locked; /* 0=unlocked, 1=locked */
  	/* a <cond, mutex> pair to handle an acquire of a locked lock */
  	pthread_cond_t   lock_released;
  } pthread_lock;
  
+ static pthread_mutex_t locking_mutex = PTHREAD_MUTEX_INITIALIZER;
+ 
  #define CHECK_STATUS(name)  if (status != 0) { perror(name); error = 1; }
  
  /*
+  * Callbacks for pthread_atfork().
+  */
+ 
+ static void prefork_callback()
+ {
+ 	pthread_mutex_lock(&locking_mutex);
+ }
+ 
+ static void parent_callback()
+ {
+ 	pthread_mutex_unlock(&locking_mutex);
+ }
+ 
+ static void child_callback()
+ {
+ 	pthread_mutex_unlock(&locking_mutex);
+ }
+ 
+ /*
   * Initialization.
   */
  
***************
*** 113,118 ****
--- 137,144 ----
  	pthread_t thread1;
  	pthread_create(&thread1, NULL, (void *) _noop, &dummy);
  	pthread_join(thread1, NULL);
+ 	/* XXX Is the following supported here? */
+ 	pthread_atfork(&prefork_callback, &parent_callback, &child_callback);
  }
  
  #else /* !_HAVE_BSDI */
***************
*** 123,128 ****
--- 149,156 ----
  #if defined(_AIX) && defined(__GNUC__)
  	pthread_init();
  #endif
+ 	/* XXX Is the following supported everywhere? */
+ 	pthread_atfork(&prefork_callback, &parent_callback, &child_callback);
  }
  
  #endif /* !_HAVE_BSDI */
***************
*** 260,269 ****
  	if (lock) {
  		lock->locked = 0;
  
- 		status = pthread_mutex_init(&lock->mut,
- 					    pthread_mutexattr_default);
- 		CHECK_STATUS("pthread_mutex_init");
- 
  		status = pthread_cond_init(&lock->lock_released,
  					   pthread_condattr_default);
  		CHECK_STATUS("pthread_cond_init");
--- 288,293 ----
***************
*** 286,294 ****
  
  	dprintf(("PyThread_free_lock(%p) called\n", lock));
  
- 	status = pthread_mutex_destroy( &thelock->mut );
- 	CHECK_STATUS("pthread_mutex_destroy");
- 
  	status = pthread_cond_destroy( &thelock->lock_released );
  	CHECK_STATUS("pthread_cond_destroy");
  
--- 310,315 ----
***************
*** 304,314 ****
  
  	dprintf(("PyThread_acquire_lock(%p, %d) called\n", lock, waitflag));
  
! 	status = pthread_mutex_lock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_lock[1]");
  	success = thelock->locked == 0;
  	if (success) thelock->locked = 1;
! 	status = pthread_mutex_unlock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_unlock[1]");
  
  	if ( !success && waitflag ) {
--- 325,335 ----
  
  	dprintf(("PyThread_acquire_lock(%p, %d) called\n", lock, waitflag));
  
! 	status = pthread_mutex_lock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_lock[1]");
  	success = thelock->locked == 0;
  	if (success) thelock->locked = 1;
! 	status = pthread_mutex_unlock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_unlock[1]");
  
  	if ( !success && waitflag ) {
***************
*** 316,330 ****
  
  		/* mut must be locked by me -- part of the condition
  		 * protocol */
! 		status = pthread_mutex_lock( &thelock->mut );
  		CHECK_STATUS("pthread_mutex_lock[2]");
  		while ( thelock->locked ) {
  			status = pthread_cond_wait(&thelock->lock_released,
! 						   &thelock->mut);
  			CHECK_STATUS("pthread_cond_wait");
  		}
  		thelock->locked = 1;
! 		status = pthread_mutex_unlock( &thelock->mut );
  		CHECK_STATUS("pthread_mutex_unlock[2]");
  		success = 1;
  	}
--- 337,351 ----
  
  		/* mut must be locked by me -- part of the condition
  		 * protocol */
! 		status = pthread_mutex_lock( &locking_mutex );
  		CHECK_STATUS("pthread_mutex_lock[2]");
  		while ( thelock->locked ) {
  			status = pthread_cond_wait(&thelock->lock_released,
! 						   &locking_mutex);
  			CHECK_STATUS("pthread_cond_wait");
  		}
  		thelock->locked = 1;
! 		status = pthread_mutex_unlock( &locking_mutex );
  		CHECK_STATUS("pthread_mutex_unlock[2]");
  		success = 1;
  	}
***************
*** 341,352 ****
  
  	dprintf(("PyThread_release_lock(%p) called\n", lock));
  
! 	status = pthread_mutex_lock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_lock[3]");
  
  	thelock->locked = 0;
  
! 	status = pthread_mutex_unlock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_unlock[3]");
  
  	/* wake up someone (anyone, if any) waiting on the lock */
--- 362,373 ----
  
  	dprintf(("PyThread_release_lock(%p) called\n", lock));
  
! 	status = pthread_mutex_lock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_lock[3]");
  
  	thelock->locked = 0;
  
! 	status = pthread_mutex_unlock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_unlock[3]");
  
  	/* wake up someone (anyone, if any) waiting on the lock */

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From DavidA at ActiveState.com  Fri Aug 25 07:07:02 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Thu, 24 Aug 2000 22:07:02 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.WNT.4.21.0008242203060.1060-100000@cr469175-a>

On Fri, 25 Aug 2000, Guido van Rossum wrote:

> (4) MORE WORK: (a) The PyThread API also defines semaphores, which may
> have a similar problem.  But I'm not aware of any use of these (I'm
> not quite sure why semaphore support was added), so I haven't patched
> these. 

IIRC, we had a discussion a while back about semaphore support in the
PyThread API and agreed that they were not implemented on enough platforms
to be a useful part of the PyThread API.  I can't find it right now, alas.

--david




From MarkH at ActiveState.com  Fri Aug 25 07:16:56 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 25 Aug 2000 15:16:56 +1000
Subject: [Python-Dev] Strange compiler crash in debug builds.
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>

Something strange is happening in my Windows Debug builds (fresh CVS tree)

If you remove "urllib.pyc", and execute 'python_d -c "import urllib"',
Python dies after printing the message:

FATAL: node type 305, required 311

It also happens for a number of other files (compileall.py will show you
:-)

Further analysis shows this deep in the compiler, and triggered by this
macro in node.h:

---
/* Assert that the type of a node is what we expect */
#ifndef Py_DEBUG
#define REQ(n, type) { /*pass*/ ; }
#else
#define REQ(n, type) \
	{ if (TYPE(n) != (type)) { \
	    fprintf(stderr, "FATAL: node type %d, required %d\n", \
		    TYPE(n), type); \
	    abort(); \
	} }
#endif
---

Is this pointing to a deeper problem, or is the assertion incorrect?

Does the Linux community ever run with Py_DEBUG defined?  I couldn't even
find a simple way to turn it on to confirm it also exists on Linux...

Any ideas?

Mark.




From thomas at xs4all.net  Fri Aug 25 07:23:52 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 07:23:52 +0200
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 25, 2000 at 12:58:15AM -0500
References: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>
Message-ID: <20000825072351.I7566@xs4all.nl>

On Fri, Aug 25, 2000 at 12:58:15AM -0500, Guido van Rossum wrote:

> + 	/* XXX Is the following supported here? */
> + 	pthread_atfork(&prefork_callback, &parent_callback, &child_callback);
>   }
>   
>   #else /* !_HAVE_BSDI */

To answer that question: yes. BSDI from 3.0 onward has pthread_atfork(),
though threads remain unusable until BSDI 4.1 (because of a bug in libc
where pause() stops listening to signals when compiling for threads.) I
haven't actually tested this patch yet, just gave it a once-over ;) I will
test it on all types of machines we have, though.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug 25 07:24:12 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 01:24:12 -0400 (EDT)
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
Message-ID: <14758.764.705006.937500@cj42289-a.reston1.va.home.com>

Mark Hammond writes:
 > Is this pointing to a deeper problem, or is the assertion incorrect?

  I expect that there's an incorrect assertion that was fine until one
of the recent grammar changes; the augmented assignment patch is
highly suspect given that it's the most recent.  Look for problems
handling expr_stmt nodes.

 > Does the Linux community ever run with Py_DEBUG defined?  I couldn't even
 > find a simple way to turn it on to confirm it also exists on Linux...

  I don't think I've ever used it, either on Linux or any other Unix.
We should definitely have an easy way to turn it on!  Probably at
configure time would be good.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Fri Aug 25 07:29:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 07:29:53 +0200
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 25, 2000 at 03:16:56PM +1000
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
Message-ID: <20000825072953.J7566@xs4all.nl>

On Fri, Aug 25, 2000 at 03:16:56PM +1000, Mark Hammond wrote:

> Something strange is happening in my Windows Debug builds (fresh CVS tree)

> If you remove "urllib.pyc", and execute 'python_d -c "import urllib"',
> Python dies after printing the message:
> 
> FATAL: node type 305, required 311
> 
> It also happens for a number of other files (compileall.py will show you
> :-)

> Further analysis shows this deep in the compiler, and triggered by this
> macro in node.h:

> #define REQ(n, type) \
> 	{ if (TYPE(n) != (type)) { \
> 	    fprintf(stderr, "FATAL: node type %d, required %d\n", \
> 		    TYPE(n), type); \
> 	    abort(); \
> 	} }

> Is this pointing to a deeper problem, or is the assertion incorrect?

At first sight, I would say "yes, the assertion is wrong". That doesn't mean
it shouldn't be fixed ! It's probably caused by augmented assignment or list
comprehensions, though I have used both with Py_DEBUG enabled a few times,
so I don't know for sure. I'm compiling with debug right now, to inspect
this, though.

Another thing that might cause it is an out-of-date graminit.h file
somewhere. The one in the CVS tree is up to date, but maybe you have a copy
stashed somewhere ?

> Does the Linux community ever run with Py_DEBUG defined?  I couldn't even
> find a simple way to turn it on to confirm it also exists on Linux...

There's undoubtedly a good way, but I usually just chicken out and add
'#define Py_DEBUG 1' at the bottom of config.h ;) That also makes sure I
don't keep it around too long, as config.h gets regenerated often enough :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug 25 07:44:41 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 07:44:41 +0200
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <20000825072953.J7566@xs4all.nl>; from thomas@xs4all.net on Fri, Aug 25, 2000 at 07:29:53AM +0200
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com> <20000825072953.J7566@xs4all.nl>
Message-ID: <20000825074440.K7566@xs4all.nl>

On Fri, Aug 25, 2000 at 07:29:53AM +0200, Thomas Wouters wrote:
> On Fri, Aug 25, 2000 at 03:16:56PM +1000, Mark Hammond wrote:

> > FATAL: node type 305, required 311

> > Is this pointing to a deeper problem, or is the assertion incorrect?
> 
> At first sight, I would say "yes, the assertion is wrong". That doesn't mean
> it shouldn't be fixed ! It's probably caused by augmented assignment or list
> comprehensions, 

Actually, it was a combination of removing UNPACK_LIST and adding
list comprehensions. I just checked in a fix for this. Can you confirm that
this fixes it for the windows build, too ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Fri Aug 25 07:44:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 01:44:11 -0400
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMENNHBAA.tim_one@email.msn.com>

[Guido]
> ...
> (1) BACKGROUND: A Python lock may be released by a different thread
> than who aqcuired it, and it may be acquired by the same thread
> multiple times.  A pthread mutex must always be unlocked by the same
> thread that locked it, and can't be locked more than once.

The business about "multiple times" may be misleading, as it makes Windows
geeks think of reentrant locks.  The Python lock is not reentrant.  Instead,
it's perfectly OK for a thread that has acquired a Python lock to *try* to
acquire it again (but is not OK for a thread that has locked a pthread mutex
to try to lock it again):  the acquire attempt simply blocks until *another*
thread releases the Python lock.  By "Python lock" here I mean at the Python
C API level, and as exposed by the thread module; the threading module
exposes fancier locks (including reentrant locks).
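
The distinction can be seen directly from Python itself (using the
modern threading module for illustration; a non-blocking acquire
stands in for the "try again" case, since a blocking one would wait
forever here):

```python
import threading

lock = threading.Lock()  # the non-reentrant kind described above
lock.acquire()

# The same thread may *try* to acquire again; a non-blocking attempt
# simply fails, it does not crash as re-locking a pthread mutex might.
assert lock.acquire(blocking=False) is False

# Unlike a pthread mutex, any other thread may release it.
releaser = threading.Thread(target=lock.release)
releaser.start()
releaser.join()

assert lock.acquire(blocking=False) is True
lock.release()
```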

> So, a Python lock can't be built out of a simple pthread mutex; instead,
> a Python lock is built out of a "locked" flag and a <condition variable,
> mutex> pair.  The mutex is locked for at most a few cycles, to protect
> the flag.  This design is Tim's (while still at KSR).

At that time, a pthread mutex was generally implemented as a pure spin lock,
so it was important to hold a pthread mutex for as short a span as possible
(and, indeed, the code never holds a pthread mutex for longer than across 2
simple C stmts).

> ...
> (3) BRAINWAVE: If we use a single mutex shared by all locks, instead
> of a mutex per lock, we can lock this mutex around the fork and thus
> prevent any other thread from locking it.  This is okay because, while
> a condition variable always needs a mutex to go with it, there's no
> rule that the same mutex can't be shared by many condition variables.
> The code below implements this.

Before people panic <wink>, note that this is "an issue" only for those
thread_xxx.h implementations such that fork() is supported *and* the child
process nukes threads in the child, leaving its mutexes and the data they
protect in an insane state.  They're the ones creating problems, so they're
the ones that pay.

> (4) MORE WORK: (a) The PyThread API also defines semaphores, which may
> have a similar problem.  But I'm not aware of any use of these (I'm
> not quite sure why semaphore support was added), so I haven't patched
> these.

I'm almost certain we all agreed (spurred by David Ascher) to get rid of the
semaphore implementations a while back.

> (b) The thread_pth.h file define locks in the same way; there
> may be others too.  I haven't touched these.

(c) While the scheme protects mutexes from going nuts in the child, that
doesn't necessarily imply that the data mutexes *protect* won't go nuts.
For example, this *may* not be enough to prevent insanity in import.c:  if
another thread is doing imports at the time a fork() occurs,
import_lock_level could be left at an arbitrarily high value in import.c.
But the thread doing the import has gone away in the child, so can't restore
import_lock_level to a sane value there.  I'm not convinced that matters in
this specific case, just saying we've got some tedious headwork to review
all the cases.
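
The import.c worry, in miniature: even with sane mutexes, a counter
that a lock protects can be left stranded in the child.  (Hypothetical
names, just to show the shape of the hazard, not the real import.c
code.)

```python
import threading

import_lock_level = 0  # stand-in for import.c's recursion counter
import_lock = threading.Lock()

def nested_import():
    global import_lock_level
    with import_lock:
        import_lock_level += 1
        # If another thread calls fork() right here, the child sees
        # import_lock_level == 1, and the thread that would have
        # decremented it no longer exists in the child.
        import_lock_level -= 1
```

Resetting the mutex in the child fixes the lock, but nothing resets
the counter.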

> (5) TESTING: Charles Waldman posted this code to reproduce the
> problem.  Unfortunately I haven't had much success with it; it seems
> to hang even when I apply Charles' patch.

What about when you apply *your* patch?

>     import thread
>     import os, sys
>     import time
>
>     def doit(name):
> 	while 1:
> 	    if os.fork()==0:
> 		print name, 'forked', os.getpid()
> 		os._exit(0)
> 	    r = os.wait()
>
>     for x in range(50):
> 	name = 't%s'%x
> 	print 'starting', name
> 	thread.start_new_thread(doit, (name,))
>
>     time.sleep(300)
>
> Here's the patch:

> ...
> + static pthread_mutex_t locking_mutex = PTHREAD_MUTEX_INITIALIZER;

Anyone know whether this gimmick is supported by all pthreads
implementations?

> ...
> + 	/* XXX Is the following supported here? */
> + 	pthread_atfork(&prefork_callback, &parent_callback,
> &child_callback);

I expect we need some autoconf stuff for that, right?

Thanks for writing this up!  Even more thanks for thinking of it <wink>.





From MarkH at ActiveState.com  Fri Aug 25 07:55:42 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 25 Aug 2000 15:55:42 +1000
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <20000825074440.K7566@xs4all.nl>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEHJDGAA.MarkH@ActiveState.com>

> Actually, it was a combination of removing UNPACK_LIST and adding
> list comprehensions. I just checked in a fix for this. Can you 
> confirm that
> this fixes it for the windows build, too ?

It does - thank you!

Mark.




From tim_one at email.msn.com  Fri Aug 25 10:08:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 04:08:23 -0400
Subject: [Python-Dev] RE: Passwords after CVS commands
In-Reply-To: <PGECLPOBGNBNKHNAGIJHAEAECEAA.andy@reportlab.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEOBHBAA.tim_one@email.msn.com>

The latest version of Andy Robinson's excellent instructions for setting up
a cmdline CVS using SSH under Windows are now available:

    http://python.sourceforge.net/winssh.txt

This is also linked to from the Python-at-SourceForge FAQ:

    http://python.sourceforge.net/sf-faq.html

where it replaces the former "let's try to pretend Windows is Unix(tm)"
mish-mash.  Riaan Booysen cracked the secret of how to get the Windows
ssh-keygen to actually generate keys (ha!  don't think I can't hear you
Unix(tm) weenies laughing <wink>), and that's the main change from the last
version of these instructions I posted here.  I added a lot of words to
Riaan's, admonishing you not to leave the passphrase empty, but so
unconvincingly I bet you won't heed my professional advice.

and-not-revealing-whether-i-did-ly y'rs  - tim





From thomas at xs4all.net  Fri Aug 25 13:16:20 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 13:16:20 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0203.txt,1.11,1.12
In-Reply-To: <200008251111.EAA13270@slayer.i.sourceforge.net>; from twouters@users.sourceforge.net on Fri, Aug 25, 2000 at 04:11:31AM -0700
References: <200008251111.EAA13270@slayer.i.sourceforge.net>
Message-ID: <20000825131620.B16377@xs4all.nl>

On Fri, Aug 25, 2000 at 04:11:31AM -0700, Thomas Wouters wrote:

> !     [XXX so I am accepting this, but I'm a bit worried about the
> !     argument coercion.  For x+=y, if x supports augmented assignment,
> !     y should only be cast to x's type, not the other way around!]

Oh, note that I chose not to do *any* coercion, if x supports the in-place
operation. I'm not sure how valuable coercion would be, here, at least not
in its current form. (Isn't coercion mostly used by integer types ? And
aren't they immutable ? If an in-place method wants to have its argument
coerced, it should do so itself, just like with direct method calls.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Fri Aug 25 14:04:27 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 14:04:27 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
Message-ID: <39A660CB.7661E20E@lemburg.com>

I've asked this question before: when are we going to see
comp.lang.python.announce back online ?

I know that everyone is busy with getting the betas ready,
but looking at www.python.org I find that the "latest"
special announcement is dated 22-Mar-2000. People will get
the false idea that Python isn't moving anywhere... at least
not in the spirit of OSS' "release early and often".

Could someone please summarize what needs to be done to
post a message to comp.lang.python.announce without taking
the path via the official (currently defunct) moderator ?

I've had a look at the c.l.p.a postings and the only special
header they include is the "Approved: fleck at informatik.uni-bonn.de"
header.

If this is all it takes to post to a moderated newsgroup,
fixing Mailman to do the trick should be really simple.

I'm willing to help here to get this done *before* the Python
2.0beta1 announcement.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Fri Aug 25 14:14:20 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 14:14:20 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: <39A660CB.7661E20E@lemburg.com>; from mal@lemburg.com on Fri, Aug 25, 2000 at 02:04:27PM +0200
References: <39A660CB.7661E20E@lemburg.com>
Message-ID: <20000825141420.C16377@xs4all.nl>

On Fri, Aug 25, 2000 at 02:04:27PM +0200, M.-A. Lemburg wrote:

> I've asked this question before: when are we going to see
> comp.lang.python.announce back online ?

Barry is working on this, by modifying Mailman to play moderator (via the
normal list-admin's post-approval mechanism.) As I'm sure he'll tell you
himself, when he wakes up ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From just at letterror.com  Fri Aug 25 15:25:02 2000
From: just at letterror.com (Just van Rossum)
Date: Fri, 25 Aug 2000 14:25:02 +0100
Subject: [Python-Dev] (214)
Message-ID: <l03102805b5cc22d9c375@[193.78.237.177]>

(Just to make sure you guys know; there's currently a thread in c.l.py
about the new 2.0 features. Not a *single* person stood up to defend PEP
214: no one seems to like it.)

Just





From mal at lemburg.com  Fri Aug 25 14:17:41 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 14:17:41 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com> <20000825141420.C16377@xs4all.nl>
Message-ID: <39A663E5.A1E85044@lemburg.com>

Thomas Wouters wrote:
> 
> On Fri, Aug 25, 2000 at 02:04:27PM +0200, M.-A. Lemburg wrote:
> 
> > I've asked this question before: when are we going to see
> > comp.lang.python.announce back online ?
> 
> Barry is working on this, by modifying Mailman to play moderator (via the
> normal list-admin's post-approval mechanism.) As I'm sure he'll tell you
> himself, when he wakes up ;)

This sounds like an awful lot of work... wouldn't a quick hack
as an intermediate solution suffice for the moment? (It needn't
even go into any public Mailman release -- just the Mailman
installation at python.org which handles the announcement
list.)

Ok, I'll wait for Barry to wake up ;-) ... <ringring>
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Fri Aug 25 15:30:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 08:30:40 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0203.txt,1.11,1.12
In-Reply-To: Your message of "Fri, 25 Aug 2000 13:16:20 +0200."
             <20000825131620.B16377@xs4all.nl> 
References: <200008251111.EAA13270@slayer.i.sourceforge.net>  
            <20000825131620.B16377@xs4all.nl> 
Message-ID: <200008251330.IAA19481@cj20424-a.reston1.va.home.com>

> Oh, note that I chose not to do *any* coercion, if x supports the in-place
> operation. I'm not sure how valuable coercion would be, here, at least not
> in its current form. (Isn't coercion mostly used by integer types ? And
> aren't they immutable ? If an in-place method wants to have its argument
> coerced, it should do so itself, just like with direct method calls.)

All agreed!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Aug 25 15:34:44 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 08:34:44 -0500
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: Your message of "Fri, 25 Aug 2000 14:04:27 +0200."
             <39A660CB.7661E20E@lemburg.com> 
References: <39A660CB.7661E20E@lemburg.com> 
Message-ID: <200008251334.IAA19600@cj20424-a.reston1.va.home.com>

> I've asked this question before: when are we going to see
> comp.lang.python.announce back online ?
> 
> I know that everyone is busy with getting the betas ready,
> but looking at www.python.org I find that the "latest"
> special announcement is dated 22-Mar-2000. People will get
> the false idea that Python isn't moving anywhere... at least
> not in the spirit of OSS' "release early and often".
> 
> Could someone please summarize what needs to be done to
> post a message to comp.lang.python.announce without taking
> the path via the official (currently defunct) moderator ?
> 
> I've had a look at the c.l.p.a postings and the only special
> header they include is the "Approved: fleck at informatik.uni-bonn.de"
> header.
> 
> If this is all it takes to post to a moderated newsgroup,
> fixing Mailman to do the trick should be really simple.
> 
> I'm willing to help here to get this done *before* the Python
> 2.0beta1 announcement.

Coincidence!  Barry just wrote the necessary hacks that allow a
Mailman list to be used to moderate a newsgroup, and installed them in
python.org.  He's testing the setup today and I expect that we'll be
able to solicit for moderators tonight!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Fri Aug 25 14:47:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 14:47:06 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com> <200008251334.IAA19600@cj20424-a.reston1.va.home.com>
Message-ID: <39A66ACA.F638215A@lemburg.com>

Guido van Rossum wrote:
> 
> > I've asked this question before: when are we going to see
> > comp.lang.python.announce back online ?
> >
> > I know that everyone is busy with getting the betas ready,
> > but looking at www.python.org I find that the "latest"
> > special announcement is dated 22-Mar-2000. People will get
> > the false idea that Python isn't moving anywhere... at least
> > not in the spirit of OSS' "release early and often".
> >
> > Could someone please summarize what needs to be done to
> > post a message to comp.lang.python.announce without taking
> > the path via the official (currently defunct) moderator ?
> >
> > I've had a look at the c.l.p.a postings and the only special
> > header they include is the "Approved: fleck at informatik.uni-bonn.de"
> > header.
> >
> > If this is all it takes to post to a moderated newsgroup,
> > fixing Mailman to do the trick should be really simple.
> >
> > I'm willing to help here to get this done *before* the Python
> > 2.0beta1 announcement.
> 
> Coincidence!  Barry just wrote the necessary hacks that allow a
> Mailman list to be used to moderate a newsgroup, and installed them in
> python.org.  He's testing the setup today and I expect that we'll be
> able to solicit for moderators tonight!

Way cool :-) Thanks.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Fri Aug 25 15:17:17 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 09:17:17 -0400 (EDT)
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
Message-ID: <14758.29149.992343.502526@bitdiddle.concentric.net>

>>>>> "MH" == Mark Hammond <MarkH at ActiveState.com> writes:

  MH> Does the Linux community ever run with Py_DEBUG defined?  I
  MH> couldn't even find a simple way to turn it on to confirm it also
  MH> exists on Linux...

I build a separate version of Python using make OPT="-Wall -DPy_DEBUG"

On Linux, the sre test fails.  Do you see the same problem on Windows?

Jeremy



From tim_one at email.msn.com  Fri Aug 25 15:24:40 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 09:24:40 -0400
Subject: [Python-Dev] (214)
In-Reply-To: <l03102805b5cc22d9c375@[193.78.237.177]>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>

[Just van Rossum]
> (Just to make sure you guys know; there's currently a thread in c.l.py
> about the new 2.0 features. Not a *single* person stood up to defend
> PEP 214: no one seems to like it.)

But that's not true!  I defended it <wink>.  Alas (or "thank God!",
depending on how you look at it), I sent my "In praise of" post to the
mailing list and apparently the list->news gateway dropped it on the floor.

It most reminds me of the introduction of class.__private names.  Except I
don't think *anyone* was a fan of those besides your brother (I was neutral,
but we had a long & quite fun Devil's Advocate debate anyway), and the
opposition was far more strident than it's yet gotten on PEP 214.  I liked
__private names a lot after I used them, and, as I said in my unseen post,
having used the new print gimmick several times "for real" now I don't ever
want to go back.

The people most opposed seem to be those who worked hard to learn about
sys.__stdout__ and exactly why they need a try/finally block <0.9 wink>.
Some of the Python-Dev'ers have objected too, but much more quietly --
principled objections always get lost in the noise.

doubting-that-python's-future-hangs-in-the-balance-ly y'rs  - tim





From moshez at math.huji.ac.il  Fri Aug 25 15:48:26 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 25 Aug 2000 16:48:26 +0300 (IDT)
Subject: [Python-Dev] Tasks
Message-ID: <Pine.GSO.4.10.10008251642490.12206-100000@sundial>

This is a summary of problems I found with the task page:

Tasks which I was sure were complete
------------------------------------
17336 -- Add augmented assignments -- marked 80%. Thomas?
17346 -- Add poll() to selectmodule -- marked 50%. Andrew?

Duplicate tasks
---------------
17923 seems to be a duplicate of 17922

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From fdrake at beopen.com  Fri Aug 25 15:51:14 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 09:51:14 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: <200008251344.GAA16623@slayer.i.sourceforge.net>
References: <200008251344.GAA16623@slayer.i.sourceforge.net>
Message-ID: <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > + Accepted and in progress
...
 > +     * Support for opcode arguments > 2**16 - Charles Waldman
 > +       SF Patch 100893

  I checked this in 23 Aug.

 > +     * Range literals - Thomas Wouters
 > +       SF Patch 100902

  I thought this was done as well.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Fri Aug 25 15:53:34 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 15:53:34 +0200
Subject: [Python-Dev] Tasks
In-Reply-To: <Pine.GSO.4.10.10008251642490.12206-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 25, 2000 at 04:48:26PM +0300
References: <Pine.GSO.4.10.10008251642490.12206-100000@sundial>
Message-ID: <20000825155334.D16377@xs4all.nl>

On Fri, Aug 25, 2000 at 04:48:26PM +0300, Moshe Zadka wrote:

> Tasks which I was sure were complete
> ------------------------------------
> 17336 -- Add augmented assignments -- marked 80%. Thomas?

It isn't complete. It's missing documentation. I'm done with meetings today
(*yay!*) so I'm in the process of updating all that, as well as working on
it :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug 25 15:57:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 15:57:53 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Fri, Aug 25, 2000 at 09:51:14AM -0400
References: <200008251344.GAA16623@slayer.i.sourceforge.net> <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>
Message-ID: <20000825155752.E16377@xs4all.nl>

On Fri, Aug 25, 2000 at 09:51:14AM -0400, Fred L. Drake, Jr. wrote:
>  > +     * Range literals - Thomas Wouters
>  > +       SF Patch 100902

>   I thought this was done as well.

No, it just hasn't been touched in a while :) I need to finish up the PEP
(move the Open Issues to "BDFL Pronouncements", and include said
pronouncements) and sync the patch with the CVS tree. Oh, and it needs to be
accepted, too ;) Tim claims he's going to review it.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jeremy at beopen.com  Fri Aug 25 16:03:16 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 10:03:16 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>
References: <200008251344.GAA16623@slayer.i.sourceforge.net>
	<14758.31186.670323.159875@cj42289-a.reston1.va.home.com>
Message-ID: <14758.31908.552647.739111@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake at beopen.com> writes:

  FLD> Jeremy Hylton writes:
  >> + Accepted and in progress
  FLD> ...
  >> + * Support for opcode arguments > 2**16 - Charles Waldman
  >> + SF Patch 100893

  FLD>   I checked this in 23 Aug.

Ok.

  >> + * Range literals - Thomas Wouters
  >> + SF Patch 100902

  FLD>   I thought this was done as well.

There's still an open patch for it.

Jeremy



From mal at lemburg.com  Fri Aug 25 16:06:57 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 16:06:57 +0200
Subject: [Python-Dev] (214)
References: <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>
Message-ID: <39A67D81.FD56F2C7@lemburg.com>

Tim Peters wrote:
> 
> [Just van Rossum]
> > (Just to make sure you guys know; there's currently a thread in c.l.py
> > about the new 2.0 features. Not a *single* person stood up to defend
> > PEP 214: no one seems to like it.)
> 
> But that's not true!  I defended it <wink>. 

Count me in on that one too... it's just great for adding a few
quick debugging prints into the program.

The only thing I find non-Pythonesque is that an operator
is used. I would have opted for something like:

	print on <stream> x,y,z

instead of

	print >> <stream> x,y,z

But I really don't mind since I don't use "print" in production
code for anything other than debugging anyway :-)
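The idiom under discussion is aiming a quick debugging print at an
arbitrary stream. A minimal sketch of that idea in function form (the
`>>` syntax itself exists only in Python 2; `debug` here is a
hypothetical helper, not part of any PEP):

```python
import io
import sys

def debug(*args, stream=None):
    # Send a quick debugging print to an arbitrary stream,
    # defaulting to sys.stderr -- the use case PEP 214 targets.
    print(*args, file=stream if stream is not None else sys.stderr)

# Redirect the debugging output to an in-memory stream:
buf = io.StringIO()
debug("x =", 42, stream=buf)
```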

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Fri Aug 25 16:26:15 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 10:26:15 -0400 (EDT)
Subject: [Python-Dev] compiling with SSL support on Windows
Message-ID: <14758.33287.507315.396536@bitdiddle.concentric.net>

https://sourceforge.net/bugs/?func=detailbug&bug_id=110683&group_id=5470

We have a bug report about compilation problems in the socketmodule on
Windows when using SSL support.  Is there any Windows user with
OpenSSL who can look into this problem?

Jeremy



From guido at beopen.com  Fri Aug 25 17:24:03 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 10:24:03 -0500
Subject: [Python-Dev] (214)
In-Reply-To: Your message of "Fri, 25 Aug 2000 09:24:40 -0400."
             <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> 
Message-ID: <200008251524.KAA19935@cj20424-a.reston1.va.home.com>

I've just posted a long response to the whole thread in c.l.py, and
added the essence (a long new section titled "More Justification by
the BDFL") of it to the PEP.  See
http://python.sourceforge.net/peps/pep-0214.html

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
	



From guido at beopen.com  Fri Aug 25 17:32:57 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 10:32:57 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: Your message of "Fri, 25 Aug 2000 09:51:14 -0400."
             <14758.31186.670323.159875@cj42289-a.reston1.va.home.com> 
References: <200008251344.GAA16623@slayer.i.sourceforge.net>  
            <14758.31186.670323.159875@cj42289-a.reston1.va.home.com> 
Message-ID: <200008251532.KAA20007@cj20424-a.reston1.va.home.com>

>  > +     * Range literals - Thomas Wouters
>  > +       SF Patch 100902
> 
>   I thought this was done as well.

No:

$ ./python
Python 2.0b1 (#79, Aug 25 2000, 08:31:47)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> [1:10]
  File "<stdin>", line 1
    [1:10]
      ^
SyntaxError: invalid syntax
>>> 

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jack at oratrix.nl  Fri Aug 25 16:48:24 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Fri, 25 Aug 2000 16:48:24 +0200
Subject: [Python-Dev] sre and regexp behave badly under low-memory conditions 
In-Reply-To: Message by Guido van Rossum <guido@beopen.com> ,
	     Thu, 24 Aug 2000 23:11:54 -0500 , <200008250411.XAA08797@cj20424-a.reston1.va.home.com> 
Message-ID: <20000825144829.CB29FD71F9@oratrix.oratrix.nl>

Recently, Guido van Rossum <guido at beopen.com> said:
> > test_re wasn't so bad, the only problem was that it crashed with a
> > "NULL return without an exception". test_regexp was worse, it crashed
> > my machine.
> 
> That's regex, right?  regexp was the *really* old regular expression
> module we once had.
> 
> Anyway, I don't care about regex, it's old.
> 
> The sre code needs to be robustified, but it's not a high priority for
> me.

Ok, fine with me.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 



From mal at lemburg.com  Fri Aug 25 17:05:38 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 17:05:38 +0200
Subject: [Python-Dev] [PEP 224] Attribute Docstrings
Message-ID: <39A68B42.4E3F8A3D@lemburg.com>

An HTML version of the attached can be viewed at

    http://python.sourceforge.net/peps/pep-0224.html

Even though the implementation won't go into Python 2.0, it
is worthwhile discussing this now, since adding these attribute
docstrings to existing code already works: Python simply ignores
them. What remains is figuring out a way to make use of them and
this is what the proposal is all about...

--

PEP: 224
Title: Attribute Docstrings
Version: $Revision: 1.2 $
Author: mal at lemburg.com (Marc-Andre Lemburg)
Status: Draft
Type: Standards Track
Python-Version: 2.1
Created: 23-Aug-2000
Post-History:


Introduction

    This PEP describes the "attribute docstring" proposal for Python
    2.0.  This PEP tracks the status and ownership of this feature.
    It contains a description of the feature and outlines changes
    necessary to support the feature.  The CVS revision history of
    this file contains the definitive historical record.


Rationale

    This PEP proposes a small addition to the way Python currently
    handles docstrings embedded in Python code.

    Python currently only handles the case of docstrings which appear
    directly after a class definition, a function definition or as
    first string literal in a module.  The string literals are added
    to the objects in question under the __doc__ attribute and are
    from then on available for introspection tools which can extract
    the contained information for help, debugging and documentation
    purposes.

    Docstrings appearing in locations other than the ones mentioned
    are simply ignored and don't result in any code generation.

    Here is an example:

        class C:
            "class C doc-string"

            a = 1
            "attribute C.a doc-string (1)"

            b = 2
            "attribute C.b doc-string (2)"

    The docstrings (1) and (2) are currently being ignored by the
    Python byte code compiler, but could obviously be put to good use
    for documenting the named assignments that precede them.
    
    This PEP proposes to make use of these cases as well by defining
    semantics for adding their content to the objects in which they
    appear under newly generated attribute names.

    The original idea behind this approach, which also inspired the
    above example, was to enable inline documentation of class
    attributes, which can currently only be documented in the class's
    docstring or using comments which are not available for
    introspection.


Implementation

    Docstrings are handled by the byte code compiler as expressions.
    The current implementation special cases the few locations
    mentioned above to make use of these expressions, but otherwise
    ignores the strings completely.

    To enable use of these docstrings for documenting named
    assignments (which is the natural way of defining e.g. class
    attributes), the compiler will have to keep track of the last
    assigned name and then use this name to assign the content of the
    docstring to an attribute of the containing object by means of
    storing it as a constant which is then added to the object's
    namespace at object construction time.

    In order to preserve features like inheritance and hiding of
    Python's special attributes (ones with leading and trailing double
    underscores), a special name mangling has to be applied which
    uniquely identifies the docstring as belonging to the name
    assignment and allows finding the docstring later on by inspecting
    the namespace.

    The following name mangling scheme achieves all of the above:

        __doc__<attributename>__

    To keep track of the last assigned name, the byte code compiler
    stores this name in a variable of the compiling structure.  This
    variable defaults to NULL.  When it sees a docstring, it then
    checks the variable and uses the name as basis for the above name
    mangling to produce an implicit assignment of the docstring to the
    mangled name.  It then resets the variable to NULL to avoid
    duplicate assignments.

    If the variable does not point to a name (i.e. is NULL), no
    assignments are made.  These will continue to be ignored like
    before.  All classical docstrings fall under this case, so no
    duplicate assignments are done.

    In the above example this would result in the following new class
    attributes being created:

        C.__doc__a__ == "attribute C.a doc-string (1)"
        C.__doc__b__ == "attribute C.b doc-string (2)"

    A patch to the current CVS version of Python 2.0 which implements
    the above is available on SourceForge at [1].
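    To illustrate how an introspection tool could look up these
    docstrings, here is a small sketch.  The mangled attributes are
    assigned by hand, since no released compiler implements the
    proposal, and attr_doc is a hypothetical helper:

```python
# Sketch of PEP 224 semantics: the __doc__<name>__ attributes below
# are written out explicitly, since no compiler generates them.
class C:
    "class C doc-string"

    a = 1
    __doc__a__ = "attribute C.a doc-string (1)"

    b = 2
    __doc__b__ = "attribute C.b doc-string (2)"

def attr_doc(cls, name):
    # Hypothetical helper: recover an attribute docstring (or None)
    # by inspecting the namespace for the mangled name.
    return getattr(cls, "__doc__%s__" % name, None)
```

    Note that because the mangled names end in a double underscore,
    private-name mangling does not apply to them, and inheritance
    falls out naturally since getattr searches base classes too.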


Caveats of the Implementation
    
    Since the implementation does not reset the compiling structure
    variable when processing a non-expression, e.g. a function
    definition, the last assigned name remains active until either the
    next assignment or the next occurrence of a docstring.

    This can lead to cases where the docstring and assignment may be
    separated by other expressions:

        class C:
            "C doc string"

            b = 2

            def x(self):
                "C.x doc string"
                y = 3
                return 1

            "b's doc string"

    Since the definition of method "x" currently does not reset the
    used assignment name variable, it is still valid when the compiler
    reaches the docstring "b's doc string" and thus assigns the string
    to __doc__b__.

    A possible solution to this problem would be resetting the name
    variable for all non-expression nodes.

    
Copyright

    This document has been placed in the Public Domain.


References

    [1] http://sourceforge.net/patch/?func=detailpatch&patch_id=101264&group_id=5470

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/




From bwarsaw at beopen.com  Fri Aug 25 17:12:34 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 11:12:34 -0400 (EDT)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
Message-ID: <14758.36066.49304.190172@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> I've asked this question before: when are we going to see
    M> comp.lang.python.announce back online ?

    M> If this is all it takes to post to a moderated newsgroup,
    M> fixing Mailman to do the trick should be really simple.

    M> I'm willing to help here to get this done *before* the Python
    M> 2.0beta1 announcement.

MAL, you must be reading my mind!

I've actually been working on some unofficial patches to Mailman that
will let list admins moderate a moderated newsgroup.  The technical
details are described in a recent post to mailman-developers[1].

I'm testing it out right now.  I first installed this on starship, but
there's no nntp server that starship can post to, so I've since moved
the list to python.org.  However, I'm still having some problems with
the upstream feed, or at least I haven't seen approved messages
appearing on deja or my ISP's server.  I'm not exactly sure why; could
just be propagation delays.

Anyway, if anybody does see my test messages show up in the newsgroup
(not the gatewayed mailing list -- sorry David), please let me know.

-Barry

[1] http://www.python.org/pipermail/mailman-developers/2000-August/005388.html



From bwarsaw at beopen.com  Fri Aug 25 17:16:30 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 11:16:30 -0400 (EDT)
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
	<20000825141420.C16377@xs4all.nl>
	<39A663E5.A1E85044@lemburg.com>
Message-ID: <14758.36302.521877.833943@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> This sounds like an awful lot of work... wouldn't a quick hack
    M> as intermediate solution suffice for the moment (it needn't
    M> even go into any public Mailman release -- just the Mailman
    M> installation at python.org which handles the announcement
    M> list).

Naw, it's actually the least amount of work, since all the mechanism
is already there.  You just need to add a flag and another hold
criterion.  It's unofficial because I'm in feature freeze.

    M> Ok, I'll wait for Barry to wake up ;-) ... <ringring>

Who says I'm awake?  Don't you know I'm a very effective sleep hacker?
I'm also an effective sleep gardener and sometimes the urge to snore
and plant takes over.  You should see my cucumbers!

the-only-time-in-the-last-year-i've-been-truly-awake-was-when-i
jammed-with-eric-at-ipc8-ly y'rs,
-Barry



From bwarsaw at beopen.com  Fri Aug 25 17:21:43 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 11:21:43 -0400 (EDT)
Subject: [Python-Dev] (214)
References: <l03102805b5cc22d9c375@[193.78.237.177]>
	<LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>
Message-ID: <14758.36615.589212.75065@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> But that's not true!  I defended it <wink>.  Alas (or "thank
    TP> God!", depending on how you look at it), I sent my "In praise
    TP> of" post to the mailing list and apparently the list->news
    TP> gateway dropped it on the floor.

Can other people confirm that list->news is broken?  If so, then that
would explain my c.l.py.a moderation problems.  I know that my
approved test message showed up on CNRI's internal news server because
at least one list member of the c.l.py.a gateway got it, but I haven't
seen it upstream of CNRI.  I'll contact their admins and let them know
the upstream feed could be broken.

-Barry



From tim_one at email.msn.com  Fri Aug 25 17:34:47 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 11:34:47 -0400
Subject: [Python-Dev] (214)
In-Reply-To: <14758.36615.589212.75065@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEPFHBAA.tim_one@email.msn.com>

[Barry]
> Can other people confirm that list->news is broken?

I don't believe that it is (e.g., several of my c.l.py list mailings today
have already shown up on my ISP's news server).

The post in question was mailed

    Thu 8/24/00 3:15 AM (EDT)

Aahz (a fellow mailing-list devotee) noted on c.l.py that it had never shown
up on the newsgroup, and after poking around I couldn't find it anywhere
either.

> ...
> I'll contact their admins and let them know the upstream feed could
> be broken.

Well, you can *always* let them know that <wink>.





From thomas at xs4all.net  Fri Aug 25 17:36:50 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 17:36:50 +0200
Subject: [Python-Dev] (214)
In-Reply-To: <14758.36615.589212.75065@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 25, 2000 at 11:21:43AM -0400
References: <l03102805b5cc22d9c375@[193.78.237.177]> <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> <14758.36615.589212.75065@anthem.concentric.net>
Message-ID: <20000825173650.G16377@xs4all.nl>

On Fri, Aug 25, 2000 at 11:21:43AM -0400, Barry A. Warsaw wrote:

> Can other people confirm that list->news is broken? 

No, not really. I can confirm that not all messages make it to the
newsgroup: I can't find Tim's posting on PEP 214 anywhere on comp.lang.py.
(and our new super-newsserver definitely keeps the postings around long
enough, so I should be able to see it, and I did get it through
python-list!)

However, I *can* find some of my python-list submissions from earlier today,
so it hasn't completely gone to meet its maker, either.

I can also confirm that python-dev itself seems to be missing some messages.
I occasionally see messages quoted which I haven't seen myself, and I've
seen others complain that they haven't seen my messages, as quoted in other
mailings. Not more than a handful in the last week or two, though, and they
*could* be attributed to dementia.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Fri Aug 25 17:39:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 17:39:06 +0200
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com> <14758.36066.49304.190172@anthem.concentric.net>
Message-ID: <39A6931A.5B396D26@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     M> I've asked this question before: when are we going to see
>     M> comp.lang.python.announce back online ?
> 
>     M> If this is all it takes to post to a moderated newsgroup,
>     M> fixing Mailman to do the trick should be really simple.
> 
>     M> I'm willing to help here to get this done *before* the Python
>     M> 2.0beta1 announcement.
> 
> MAL, you must be reading my mind!
> 
> I've actually been working on some unofficial patches to Mailman that
> will let list admins moderate a moderated newsgroup.  The technical
> details are described in a recent post to mailman-developers[1].

Cool... :-)
 
> I'm testing it out right now.  I first installed this on starship, but
> there's no nntp server that starship can post to, so I've since moved
> the list to python.org.  However, I'm still having some problems with
> the upstream feed, or at least I haven't seen approved messages
> appearing on deja or my ISP's server.  I'm not exactly sure why; could
> just be propagation delays.
> 
> Anyway, if anybody does see my test messages show up in the newsgroup
> (not the gatewayed mailing list -- sorry David), please let me know.

Nothing has appeared at my ISP yet. Looking at the mailing list
archives, the postings don't have the Approved: header (but
perhaps it's just the archive which doesn't include it).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bwarsaw at beopen.com  Fri Aug 25 18:20:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 12:20:59 -0400 (EDT)
Subject: [Python-Dev] (214)
References: <l03102805b5cc22d9c375@[193.78.237.177]>
	<LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>
	<14758.36615.589212.75065@anthem.concentric.net>
	<20000825173650.G16377@xs4all.nl>
Message-ID: <14758.40171.159233.521885@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    >> Can other people confirm that list->news is broken?

    TW> No, not really. I can confirm that not all messages make it to
    TW> the newsgroup: I can't find Tim's posting on PEP 214 anywhere
    TW> on comp.lang.py.  (and our new super-newsserver definitely
    TW> keeps the postings around long enough, so I should be able to
    TW> see it, and I did get it through python-list!)

    TW> However, I *can* find some of my python-list submissions from
    TW> earlier today, so it hasn't completely gone to meet its maker,
    TW> either.

    TW> I can also confirm that python-dev itself seems to be missing
    TW> some messages.  I occasionally see messages quoted which I
    TW> haven't seen myself, and I've seen others complain that they
    TW> haven't seen my messages, as quoted in other mailings. Not
    TW> more than a handful in the last week or two, though, and they
    TW> *could* be attributed to dementia.

I found Tim's message in the archives, so I'm curious whether those
missing python-dev messages are also in the archives?  If so, that's a
good indication that Mailman is working, so the problem is upstream
from there.  I'm also not seeing any errors in the log files that
would indicate a Mailman problem.

I have seen some weird behavior from Postfix on that machine:
occasionally messages to my python.org addr, which should be forwarded
to my beopen.com addr just don't get forwarded.  They get dropped in
my spool file.  I have no idea why, and the mail logs don't give a
clue.  I don't know if any of that is related, although I did just
upgrade Postfix to the latest revision.  And there are about 3k
messages sitting in Postfix's queue waiting to go out though.

Sigh.  I really don't want to spend the next week debugging this
stuff. ;/

-Barry



From bwarsaw at beopen.com  Fri Aug 25 18:22:05 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 12:22:05 -0400 (EDT)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
	<14758.36066.49304.190172@anthem.concentric.net>
	<39A6931A.5B396D26@lemburg.com>
Message-ID: <14758.40237.49311.811744@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> Nothing has appeared at my ISP yet. Looking at the mailing list
    M> archives, the postings don't have the Approved: header (but
    M> perhaps it's just the archive which doesn't include it).

Correct.  They're stripped out of the archives.  My re-homed nntpd
test worked though all the way through, so one more test and we're
home free.

-Barry



From thomas at xs4all.net  Fri Aug 25 18:32:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 18:32:24 +0200
Subject: [Python-Dev] (214)
In-Reply-To: <14758.40171.159233.521885@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 25, 2000 at 12:20:59PM -0400
References: <l03102805b5cc22d9c375@[193.78.237.177]> <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> <14758.36615.589212.75065@anthem.concentric.net> <20000825173650.G16377@xs4all.nl> <14758.40171.159233.521885@anthem.concentric.net>
Message-ID: <20000825183224.N15110@xs4all.nl>

On Fri, Aug 25, 2000 at 12:20:59PM -0400, Barry A. Warsaw wrote:

> I found Tim's message in the archives, so I'm curious whether those
> missing python-dev messages are also in the archives?  If so, that's a
> good indication that Mailman is working, so the problem is upstream
> from there.  I'm also not seeing any errors in the log files that
> would indicate a Mailman problem.

Well, I saw one message from Guido, where he was replying to someone who was
replying to Mark. Guido claimed he hadn't seen that original message
(Mark's), though I am certain I did see it. The recollections on missing
messages on my part are much more vague, though, so it *still* could be
attributed to dementia (of people, MUA's or MTA's ;)

I'll keep a closer eye on it, though.

> I have seen some weird behavior from Postfix on that machine:
> occasionally messages to my python.org addr, which should be forwarded
> to my beopen.com addr just don't get forwarded.  They get dropped in
> my spool file.  I have no idea why, and the mail logs don't give a
> clue.  I don't know if any of that is related, although I did just
> upgrade Postfix to the latest revision.  And there are about 3k
> messages sitting in Postfix's queue waiting to go out though.

Sendmail, baby! <duck> We're currently running postfix on a single machine
(www.hal2001.org, which also does the Mailman for it) mostly because our
current Sendmail setup has one huge advantage: it works. And it works fine.
We just don't want to change the sendmail rules or fiddle with our
mailertable setup, but it works! :-) 

> Sigh.  I really don't want to spend the next week debugging this
> stuff. ;/

So don't. Do what any proper developer would do: proclaim there isn't enough
info (there isn't, unless you can find the thread I'm talking about, above.
I'll see if I can locate it for you, since I think I saved the entire thread
with 'must check this' in the back of my head) and don't fix it until it
happens again. I do not think this is Mailman related, though it might be
python.org-mailman related (as in, the postfix or the link on that machine,
or something.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Fri Aug 25 19:39:41 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 12:39:41 -0500
Subject: [Python-Dev] (214)
In-Reply-To: Your message of "Fri, 25 Aug 2000 12:20:59 -0400."
             <14758.40171.159233.521885@anthem.concentric.net> 
References: <l03102805b5cc22d9c375@[193.78.237.177]> <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> <14758.36615.589212.75065@anthem.concentric.net> <20000825173650.G16377@xs4all.nl>  
            <14758.40171.159233.521885@anthem.concentric.net> 
Message-ID: <200008251739.MAA20815@cj20424-a.reston1.va.home.com>

> Sigh.  I really don't want to spend the next week debugging this
> stuff. ;/

Please don't.  This happened to me before, and eventually everything
came through -- sometimes with days delay.  So it's just slowness.

There's a new machine waiting for us at VA Linux.  I'll ask Kahn again
to speed up the transition.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From DavidA at ActiveState.com  Fri Aug 25 18:50:47 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Fri, 25 Aug 2000 09:50:47 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: <14758.36302.521877.833943@anthem.concentric.net>
Message-ID: <Pine.WNT.4.21.0008250949150.816-100000@loom>

> the-only-time-in-the-last-year-i've-been-truly-awake-was-when-i
> jammed-with-eric-at-ipc8-ly y'rs,

And that was really good!  You should do it more often!

Let's make sure we organize a jam session in advance for IPC9 -- that way
we can get more folks to bring instruments, berries, sugar, bread, butter,
etc.

i-don't-jam-i-listen-ly y'rs,

--david





From bwarsaw at beopen.com  Fri Aug 25 18:56:12 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 12:56:12 -0400 (EDT)
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <14758.36302.521877.833943@anthem.concentric.net>
	<Pine.WNT.4.21.0008250949150.816-100000@loom>
Message-ID: <14758.42284.829235.406950@anthem.concentric.net>

>>>>> "DA" == David Ascher <DavidA at ActiveState.com> writes:

    DA> And that was really good!  You should do it more often!

Thanks!

    DA> Let's make sure we organize a jam session in advance for IPC9
    DA> -- that way we can get more folks to bring instruments,
    DA> berries, sugar, bread, butter, etc.

    DA> i-don't-jam-i-listen-ly y'rs,

Okay, so who's gonna webcast IPC9? :)

-B



From bwarsaw at beopen.com  Fri Aug 25 19:05:22 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 13:05:22 -0400 (EDT)
Subject: [Python-Dev] The resurrection of comp.lang.python.announce
Message-ID: <14758.42834.289193.548978@anthem.concentric.net>

Well, after nearly 6 months of inactivity, I'm very happy to say that
comp.lang.python.announce is being revived.  It will now be moderated
by a team of volunteers (see below) using a Mailman mailing list.
Details about comp.lang.python.announce, and its mailing list gateway
python-announce-list at python.org can be found at

   http://www.python.org/psa/MailingLists.html#clpa

Posting guidelines can be found at

   ftp://rtfm.mit.edu/pub/usenet/comp.lang.python.announce/python-newsgroup-faq

This message also serves as a call for moderators.  I am looking for 5
experienced Python folks who would like to team-moderate the
newsgroup.  It is a big plus if you've moderated newsgroups before.

If you are interested in volunteering, please email me directly.  Once
I've chosen the current crop of moderators, I'll give you instructions
on how to do it.  Don't worry if you don't get chosen this time
around; I'm sure we'll have some rotation in the moderators' ranks as
time goes on.

Cheers,
-Barry



From guido at beopen.com  Fri Aug 25 20:12:28 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 13:12:28 -0500
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: Your message of "Fri, 25 Aug 2000 09:50:47 MST."
             <Pine.WNT.4.21.0008250949150.816-100000@loom> 
References: <Pine.WNT.4.21.0008250949150.816-100000@loom> 
Message-ID: <200008251812.NAA21141@cj20424-a.reston1.va.home.com>

> And that was really good!  You should do it more often!

Agreed!

> Let's make sure we organize a jam session in advance for IPC9 -- that way
> we can get more folks to bring instruments, berries, sugar, bread, butter,
> etc.

This sounds much more fun (and more Pythonic) than a geeks-with-guns
event! :-)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Fri Aug 25 19:25:13 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 13:25:13 -0400 (EDT)
Subject: [Python-Dev] warning in initpyexpat
Message-ID: <14758.44025.333241.758233@bitdiddle.concentric.net>

gcc -Wall is complaining about possible use of errors_module without
initialization in the initpyexpat function.  Here's the offending code:

    sys_modules = PySys_GetObject("modules");
    {
        PyObject *errmod_name = PyString_FromString("pyexpat.errors");

        if (errmod_name != NULL) {
            errors_module = PyDict_GetItem(d, errmod_name);
            if (errors_module == NULL) {
                errors_module = PyModule_New("pyexpat.errors");
                if (errors_module != NULL) {
                    PyDict_SetItemString(d, "errors", errors_module);
                    PyDict_SetItem(sys_modules, errmod_name, errors_module);
                }
            }
            Py_DECREF(errmod_name);
            if (errors_module == NULL)
                /* Don't core dump later! */
                return;
        }
    }
    errors_dict = PyModule_GetDict(errors_module);

It is indeed the case that errors_module can be used without
initialization.  If PyString_FromString("pyexpat.errors") fails, you
ignore the error and will immediately call PyModule_GetDict with an
uninitialized variable.

You ought to check for the error condition and bail cleanly, rather
than ignoring it and failing somewhere else.

I also wonder why the code that does this check is in its own set of
curly braces; thus, the post to python-dev to discuss the style issue.
Why did you do this?  Is it approved Python style?  It looks cluttered
to me.

Jeremy




From fdrake at beopen.com  Fri Aug 25 19:36:53 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 13:36:53 -0400 (EDT)
Subject: [Python-Dev] Re: warning in initpyexpat
In-Reply-To: <14758.44025.333241.758233@bitdiddle.concentric.net>
References: <14758.44025.333241.758233@bitdiddle.concentric.net>
Message-ID: <14758.44725.345785.430141@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > It is indeed the case that errors_module can be used without
 > initialization.  If PyString_FromString("pyexpat.errors") fails, you
 > ignore the error and will immediately call PyModule_GetDict with an
 > uninitialized variable.

  I'll fix that.

 > I also wonder why the code that does this check is in its own set of
 > curly braces; thus, the post to python-dev to discuss the style issue.
 > Why did you do this?  Is it approved Python style?  It looks cluttered
 > to me.

  I don't like it either.  ;)  I just wanted a temporary variable, but
I can declare that at the top of initpyexpat().  This will be
corrected as well.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gward at mems-exchange.org  Fri Aug 25 20:16:24 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Fri, 25 Aug 2000 14:16:24 -0400
Subject: [Python-Dev] If you thought there were too many PEPs...
Message-ID: <20000825141623.G17277@ludwig.cnri.reston.va.us>

...yow: the Perl community is really going overboard in proposing
enhancements:

[from the Perl "daily" news]
>   [3] Perl 6 RFCs Top 150 Mark; New Perl 6 Lists Added [Links]
> 
>         The number of [4]Perl 6 RFCs hit 161 today. The 100th RFC was
>         [5]Embed full URI support into Perl by Nathan Wiger, allowing
>         URIs like "file:///local/etc/script.conf" to be passed to builtin
>         file functions and operators. The 150th was [6]Extend regex
>         syntax to provide for return of a hash of matched subpatterns by
>         Kevin Walker, and the latest, 161, is [7]OO Integration/Migration
>         Path by Matt Youell.
> 
>         New [8]Perl 6 mailing lists include perl6-language- sublists
>         objects, datetime, errors, data, and regex. perl6-bootstrap is
>         being closed, and perl6-meta is taking its place (the subscriber
>         list will not be transferred).
[...]
>    3. http://www.news.perl.org/perl-news.cgi?item=967225716%7C10542
>    4. http://dev.perl.org/rfc/
>    5. http://dev.perl.org/rfc/100.pod
>    6. http://dev.perl.org/rfc/150.pod
>    7. http://dev.perl.org/rfc/161.pod
>    8. http://dev.perl.org/lists.shtml

-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From gvwilson at nevex.com  Fri Aug 25 20:30:53 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Fri, 25 Aug 2000 14:30:53 -0400 (EDT)
Subject: [Python-Dev] Re: If you thought there were too many PEPs...
In-Reply-To: <20000825141623.G17277@ludwig.cnri.reston.va.us>
Message-ID: <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>

> On Fri, 25 Aug 2000, Greg Ward wrote:
> >         The number of [4]Perl 6 RFCs hit 161 today...
> >         New [8]Perl 6 mailing lists include perl6-language- sublists
> >         objects, datetime, errors, data, and regex. perl6-bootstrap is
> >         being closed, and perl6-meta is taking its place (the subscriber
> >         list will not be transferred).

I've heard from several different sources that when Guy Steele Jr was
hired by Sun to help define the Java language standard, his first proposal
was that the length of the standard be fixed --- anyone who wanted to add
a new feature had to identify an existing feature that would be removed
from the language to make room.  Everyone said, "That's so cool --- but of
course we can't do it..."

Think how much simpler Java would be today if...

;-)

Greg




From effbot at telia.com  Fri Aug 25 21:11:16 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 25 Aug 2000 21:11:16 +0200
Subject: [Python-Dev] Re: If you thought there were too many PEPs...
References: <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>
Message-ID: <01b701c00ec8$3f47ebe0$f2a6b5d4@hagrid>

greg wrote:
> I've heard from several different sources that when Guy Steele Jr was
> hired by Sun to help define the Java language standard, his first proposal
> was that the length of the standard be fixed.

    "C. A. R. Hoare has suggested that as a rule of
    thumb a language is too complicated if it can't
    be described precisely and readably in fifty
    pages. The Modula-3 committee elevated this to a
    design principle: we gave ourselves a
    "complexity budget" of fifty pages, and chose
    the most useful features that we could
    accommodate within this budget. In the end, we
    were over budget by six lines plus the syntax
    equations. This policy is a bit arbitrary, but
    there are so many good ideas in programming
    language design that some kind of arbitrary
    budget seems necessary to keep a language from
    getting too complicated."

    from "Modula-3: Language definition"
    http://research.compaq.com/SRC/m3defn/html/complete.html

</F>




From akuchlin at mems-exchange.org  Fri Aug 25 21:05:10 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 25 Aug 2000 15:05:10 -0400
Subject: [Python-Dev] Re: If you thought there were too many PEPs...
In-Reply-To: <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>; from gvwilson@nevex.com on Fri, Aug 25, 2000 at 02:30:53PM -0400
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>
Message-ID: <20000825150510.A22028@kronos.cnri.reston.va.us>

On Fri, Aug 25, 2000 at 02:30:53PM -0400, Greg Wilson wrote:
>was that the length of the standard be fixed --- anyone who wanted to add
>a new feature had to identify an existing feature that would be removed
>from the language to make room.  Everyone said, "That's so cool --- but of

Something similar was done with Modula-3, as GvR is probably well
aware; one of the goals was to keep the language spec less than 50
pages.  In the end I think it winds up being a bit larger, but it was
good discipline anyway.

--amk



From jeremy at beopen.com  Fri Aug 25 22:44:44 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 16:44:44 -0400 (EDT)
Subject: [Python-Dev] Python 1.6 bug fix strategy
Message-ID: <14758.55996.11900.114220@bitdiddle.concentric.net>

We have gotten several bug reports recently based on 1.6b1.  What
plans, if any, are there to fix these bugs before the 1.6 final
release?  We clearly need to fix them for 2.0b1, but I don't know
about 1.6 final.

Among the bugs are 111403 and 11860, which cause core dumps.  The
former is an obvious bug and has a fairly clear fix.

Jeremy

PS Will 1.6 final be released before 2.0b1?





From tim_one at email.msn.com  Sat Aug 26 01:16:00 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 19:16:00 -0400
Subject: [Python-Dev] Python 1.6 bug fix strategy
In-Reply-To: <14758.55996.11900.114220@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEBBHCAA.tim_one@email.msn.com>

[Jeremy Hylton]
> We have gotten several bug reports recently based on 1.6b1.  What
> plans, if any, are there to fix these bugs before the 1.6 final
> release?

My understanding is that 1.6final is done, except for plugging in a license;
i.e., too late even for bugfixes.  If true, "Fixed in 2.0" will soon be a
popular response to all sorts of things -- unless CNRI intends to do its own
work on 1.6.





From MarkH at ActiveState.com  Sat Aug 26 01:57:48 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Sat, 26 Aug 2000 09:57:48 +1000
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <14758.29149.992343.502526@bitdiddle.concentric.net>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEKADGAA.MarkH@ActiveState.com>

[Jeremy]

> On Linux, the sre test fails.  Do you see the same problem on Windows?

Not with either debug or release builds.

Mark.




From skip at mojam.com  Sat Aug 26 02:08:52 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 25 Aug 2000 19:08:52 -0500 (CDT)
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEKADGAA.MarkH@ActiveState.com>
References: <14758.29149.992343.502526@bitdiddle.concentric.net>
	<ECEPKNMJLHAPFFJHDOJBEEKADGAA.MarkH@ActiveState.com>
Message-ID: <14759.2708.62485.72631@beluga.mojam.com>

    Mark> [Jeremy]
    >> On Linux, the sre test fails.  Do you see the same problem on Windows?

    Mark> Not with either debug or release builds.

Nor I on Mandrake Linux.

Skip




From cgw at fnal.gov  Sat Aug 26 02:34:23 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 25 Aug 2000 19:34:23 -0500 (CDT)
Subject: [Python-Dev] Compilation failure, current CVS
Message-ID: <14759.4239.276417.473973@buffalo.fnal.gov>

Just a heads-up - I suspect this is a trivial problem, but I don't
have time to investigate right now ("real life").

Linux buffalo.fnal.gov 2.2.16 #31 SMP
gcc version 2.95.2 19991024 (release)

After cvs update and make distclean, I get this error:

make[1]: Entering directory `/usr/local/src/Python-CVS/python/dist/src/Python'
gcc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c errors.c -o errors.o
errors.c:368: arguments given to macro `PyErr_BadInternalCall'
make[1]: *** [errors.o] Error 1




From cgw at fnal.gov  Sat Aug 26 03:23:08 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 25 Aug 2000 20:23:08 -0500 (CDT)
Subject: [Python-Dev] CVS weirdness (was:  Compilation failure, current CVS)
In-Reply-To: <14759.4239.276417.473973@buffalo.fnal.gov>
References: <14759.4239.276417.473973@buffalo.fnal.gov>
Message-ID: <14759.7164.55022.134730@buffalo.fnal.gov>

I blurted out:

 > After cvs update and make distclean, I get this error:
 > 
 > make[1]: Entering directory `/usr/local/src/Python-CVS/python/dist/src/Python'
 > gcc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c errors.c -o errors.o
 > errors.c:368: arguments given to macro `PyErr_BadInternalCall'
 > make[1]: *** [errors.o] Error 1

There is (no surprise) no problem with Python; but there *is* some
problem with me or my setup or some tool I use or the CVS server.  cvs
update -dAP fixed my problems.  This is the second time I've gotten
these sticky CVS date tags which I never meant to set.

Sorry-for-the-false-alarm-ly yr's,
			     -C




From tim_one at email.msn.com  Sat Aug 26 04:12:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 22:12:11 -0400
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
Message-ID: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>

Somebody recently added DL_IMPORT macros to two module init functions that
already used their names in DL_EXPORT macros (pyexpat.c and parsermodule.c).
On Windows, that yields the result I (naively?) expected:  compiler warnings
about inconsistent linkage declarations.

This is your basic Undocumented X-Platform Macro Hell, and I suppose the
Windows build should be #define'ing USE_DL_EXPORT for these subprojects
anyway (?), but if I don't hear a good reason for *why* both macros are used
on the same name in the same file, I'll be irresistibly tempted to just
delete the new DL_IMPORT lines.  That is, why would we *ever* use DL_IMPORT
on the name of a module init function?  They only exist to be exported.

baffled-in-reston-ly y'rs  - tim





From fdrake at beopen.com  Sat Aug 26 04:49:30 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 22:49:30 -0400 (EDT)
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>
Message-ID: <14759.12346.778540.252012@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Somebody recently added DL_IMPORT macros to two module init functions that
 > already used their names in DL_EXPORT macros (pyexpat.c and parsermodule.c).

  That was me.

 > On Windows, that yields the result I (naively?) expected:  compiler warnings
 > about inconsistent linkage declarations.

  Ouch.

 > This is your basic Undocumented X-Platform Macro Hell, and I suppose the
 > Windows build should be #define'ing USE_DL_EXPORT for these subprojects
 > anyway (?), but if I don't hear a good reason for *why* both macros are used
 > on the same name in the same file, I'll be irresistibly tempted to just
 > delete the new DL_IMPORT lines.  That is, why would we *ever* use DL_IMPORT
 > on the name of a module init function?  They only exist to be exported.

  Here's how I arrived at it, but apparently this doesn't make sense,
because Windows has too many linkage options.  ;)
  Compiling with gcc using the -Wmissing-prototypes option causes a
warning to be printed if there isn't a prototype at all:

cj42289-a(.../linux-beowolf/Modules); gcc -fpic  -g -ansi -Wall -Wmissing-prototypes  -O2 -I../../Include -I.. -DHAVE_CONFIG_H -c ../../Modules/parsermodule.c
../../Modules/parsermodule.c:2852: warning: no previous prototype for `initparser'

  I used the DL_IMPORT since that's how all the prototypes in the
Python headers are set up.  I can either change these to "normal"
prototypes (no DL_xxPORT macros), DL_EXPORT prototypes, or remove the
prototypes completely, and we'll just have to ignore the warning.
  If you can write a few sentences explaining each of these macros and
when they should be used, I'll make sure they land in the
documentation.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From MarkH at ActiveState.com  Sat Aug 26 06:06:40 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Sat, 26 Aug 2000 14:06:40 +1000
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBOEKHDGAA.MarkH@ActiveState.com>

> This is your basic Undocumented X-Platform Macro Hell, and I suppose the
> Windows build should be #define'ing USE_DL_EXPORT for these subprojects
> anyway (?), but if I don't hear a good reason for *why* both
> macros are used

This is a mess that should be cleaned up.

I take some blame for DL_IMPORT :-(  Originally (and still, as far as I can
tell), DL_IMPORT really means "Python symbol visible outside the core" -
ie, any symbol a dynamic module or embedded application may ever need
(documented, or not :-)

The "import" part of DL_IMPORT is supposed to be from the _clients_ POV.
These apps/extensions are importing these definitions.

This is clearly a poor choice of names, IMO, as the macro USE_DL_EXPORT
changes the meaning from import to export, which is clearly confusing.


DL_EXPORT, on the other hand, seems to have grown while I wasn't looking :-)
As far as I can tell:
* It is used in ways where the implication is clearly "export this symbol
always".
* It is used for extension modules, whether they are builtin or not (eg,
"array" etc use it.
* It behaves differently under Windows than under BeOS, at least.  BeOS
unconditionally defines it as an exported symbol.  Windows only defines it
when building the core.  Extension modules attempting to use this macro to
export them do not work - eg, "winsound.c" uses DL_EXPORT, but is still
forced to add "export:initwinsound" to the linker to get the symbol public.

The ironic thing is that, in Windows at least, DL_EXPORT is working the
exact opposite of how we want it - when it is used for functions built into
the core (eg, builtin modules), these symbols do _not_ need to be
exported, but where it is used on extension modules, it fails to make them
public.

So, as you guessed, we have the situation of 2 macros that, given
their names, are completely misleading :-(

I think that we should make the following change (carefully, of course :-)

* DL_IMPORT -> PYTHON_API
* DL_EXPORT -> PYTHON_MODULE_INIT.

Obviously, the names are up for grabs, but we should change the macros to
what they really _mean_, and getting the correct behaviour shouldn't be a
problem.  I don't see any real cross-platform issues, as long as the macro
reflects what it actually means!

Shall I check in the large number of files affected now?

Over-the-release-manager's-dead-body<wink> ly,

Mark.




From fdrake at beopen.com  Sat Aug 26 07:40:01 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Sat, 26 Aug 2000 01:40:01 -0400 (EDT)
Subject: [Python-Dev] New dictionaries patch on SF
Message-ID: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>

  I've been playing with dictionaries lately trying to stamp out a
bug:

http://sourceforge.net/bugs/?func=detailbug&bug_id=112558&group_id=5470

  It looks like any fix that really works risks a fair bit of
performance, and that's not good.  My best-effort fix so far is on
SourceForge:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470

but doesn't quite work, according to Guido (I've not yet received
instructions from him about how to reproduce the observed failure).
  None the less, performance is an issue for dictionaries, so I came
up with the idea to use a specialized version for string keys.  When I
saw how few of the dictionaries created by the regression test ever
had anything else, I tried to simply make all dictionaries the
specialized variety (they can degrade themselves as needed).  What I
found was that just over 2% of the dictionaries created by running the
regression test ever held any non-string keys; this may be very
different for "real" programs, but I'm curious about how different.
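  As a rough illustration of the idea (a hypothetical, modern-Python
model, not the actual C patch), the dictionary assumes string-only
keys until the first non-string key arrives, and then permanently
degrades to the general scheme:

```python
class StringKeyDict:
    """Toy model of the proposed specialization: stay on a cheap
    string-only lookup path until a non-string key shows up, then
    fall back for good (the real C version would degrade to the
    generic PyObject hash/compare machinery at that point)."""

    def __init__(self):
        self._data = {}
        self._specialized = True  # no non-string key seen yet

    def __setitem__(self, key, value):
        if self._specialized and not isinstance(key, str):
            self._specialized = False  # degrade, once, forever
        self._data[key] = value

    def __getitem__(self, key):
        # A C implementation could branch here: fast string-only
        # comparison while specialized, full protocol otherwise.
        return self._data[key]
```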
  I've also done *no* performance testing on my patch for this yet,
and don't expect it to be a big boost without something like the bug
fix I mentioned above, but I could be wrong.  If anyone would like to
play with the idea, I've posted my current patch at:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101309&group_id=5470

  Enjoy!  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fleck at triton.informatik.uni-bonn.de  Sat Aug 26 10:14:11 2000
From: fleck at triton.informatik.uni-bonn.de (Markus Fleck)
Date: Sat, 26 Aug 2000 10:14:11 +0200 (MET DST)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
In-Reply-To: <39A660CB.7661E20E@lemburg.com> from "M.-A. Lemburg" at Aug 25, 2000 02:04:27 PM
Message-ID: <200008260814.KAA06267@hera.informatik.uni-bonn.de>

M.-A. Lemburg:
> Could someone please summarize what needs to be done to
> post a message to comp.lang.python.announce without taking
> the path via the official (currently defunct) moderator ?

I'm not really defunct, I'm just not posting any announcements
because I'm not receiving them any more. ;-)))

> I've had a look at the c.l.p.a postings and the only special
> header they include is the "Approved: fleck at informatik.uni-bonn.de"
> header.

Basically, that's all it takes to post to a "moderated" newsgroup.
(Talking about a case of "security by obscurity" here... :-/)
Actually, the string following the "Approved: " may even be random...
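To make the mechanics concrete, here is a small sketch (modern Python,
purely illustrative; the address is just the one from the header
quoted above) of building an article for a moderated group -- since
the mere presence of the Approved: header is all most servers check,
constructing the message is the whole trick:

```python
from email.message import Message

def make_approved_post(subject, body, approved_by):
    # Build a Usenet article for a moderated group; servers generally
    # only check that an Approved: header is present, not its value.
    msg = Message()
    msg["Newsgroups"] = "comp.lang.python.announce"
    msg["Subject"] = subject
    msg["Approved"] = approved_by
    msg.set_payload(body)
    return msg

article = make_approved_post("Test announcement", "Body text.",
                             "fleck@informatik.uni-bonn.de")
```

Any NNTP client could then hand the rendered article to the news server.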

In principle, I do have the time again to do daily moderation of incoming
postings for c.l.py.a. Unfortunately, I currently lack the infrastructure
(i.e. the moderation program), which went down together with the old
starship. I was basically waiting for a version of Mailman that could be
used to post to moderated newsgroups. (I should probably have been more
vocal about that, or even should have started hacking Mailman myself... I
*did* start to write something that would grab new announcements daily from
Parnassus and post them to c.l.py.a, and I may even come to finish this in
September, but that doesn't substitute for a "real" moderation tool for
user-supplied postings. Also, it would probably be a lot easier for
Parnassus postings to be built directly from the Parnassus database, instead
of from its [generated] HTML pages - the Parnassus author intended to supply
such functionality, but I didn't hear from him yet, either.)

So what's needed now? Primarily, a Mailman installation that can post to
moderated newsgroups (and maybe also do the mail2list gatewaying for
c.l.py.a), and a mail alias that forwards mail for
python-announce at python.org to that Mailman address. Some "daily digest"
generator for Parnassus announcements would be nice to have, too, but
that can only come once the other two things work.

Anyway, thanks for bringing this up again - it puts c.l.py.a at the
top of my to-do list again (where it should be, of course ;-).

Yours,
Markus.



From tim_one at email.msn.com  Sat Aug 26 10:14:48 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 04:14:48 -0400
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
In-Reply-To: <14759.12346.778540.252012@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOECBHCAA.tim_one@email.msn.com>

[Tim, gripes about someone putting module init function names in
 both DL_IMPORT and DL_EXPORT macros]

[Fred Drake]
> That was me.

My IRC chat buddy Fred?  Well, can't get mad at *you*!

>> On Windows, that yields the result I (naively?) expected:
>> compiler warnings about inconsistent linkage declarations.

> Ouch.

Despite that-- as MarkH said later --these macros are as damnably confusing
as original sin, that one says "IMPORT" and the other "EXPORT" *may* have
been cause to guess they might not play well together when applied to a
single name.

> ...
>   Compiling with gcc using the -Wmissing-prototypes option causes a
> warning to be printed if there isn't a prototype at all:

Understood, and your goal is laudable.  I have a question, though:  *all*
module init functions use DL_EXPORT today, and just a few days ago *none* of
them used DL_IMPORT inside the file too.  So how come gcc only warned about
two modules?  Or does it actually warn about all of them, and you snuck this
change into pyexpat and parsermodule while primarily doing other things to
them?

> I can either change these to "normal" prototypes (no DL_xxPORT macros),
> DL_EXPORT prototypes,

I already checked that one in.

> or remove the prototypes completely, and we'll just have to ignore
> the warning.

No way.  "No warnings" is non-negotiable with me -- but since I no longer
get any warnings, I can pretend not to know that you get them under gcc
<wink>.

>   If you can write a few sentences explaining each of these macros and
> when they should be used, I'll make sure they land in the
> documentation.  ;)

I can't -- that's why I posted for help.  The design is currently
incomprehensible; e.g., from the PC config.h:

#ifdef USE_DL_IMPORT
#define DL_IMPORT(RTYPE) __declspec(dllimport) RTYPE
#endif
#ifdef USE_DL_EXPORT
#define DL_IMPORT(RTYPE) __declspec(dllexport) RTYPE
#define DL_EXPORT(RTYPE) __declspec(dllexport) RTYPE
#endif

So if you say "use import", the import macro does set up an import, but the
export macro is left undefined (turns out it's later set to an identity
expansion in Python.h, in that case).  But if you say "use export", both
import(!) and export macros are set up to do an export.  It's apparently
illegal to say "use both", but that has to be deduced from the compiler
error that *would* result from redefining the import macro in an
incompatible way.  And if you say neither, the trail snakes back to an
earlier blob of code, where "use import" is magically defined whenever "use
export" *isn't* -- but only if MS_NO_COREDLL is *not* defined.  And the test
of MS_NO_COREDLL is immediately preceded by the comment

    ... MS_NO_COREDLL (do not test this macro)

That covered one of the (I think) four sections in the now 750-line PC
config file that defines these things.  By the time I look at another config
file, my brain is gone.

MarkH is right:  we have to figure out what these things are actually trying
*accomplish*, then gut the code and spell whatever that is in a clear way.
Or, failing that, at least a documented way <wink>.





From tim_one at email.msn.com  Sat Aug 26 10:25:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 04:25:11 -0400
Subject: [Python-Dev] Fixing test_poll.py for me just broke it for you
Message-ID: <LNBBLJKPBEHFEDALKOLCKECCHCAA.tim_one@email.msn.com>

Here's the checkin comment.  See test/README for an expanded explanation if
the following isn't clear:


Another new test using "from test.test_support import ...", causing
subtle breakage on Windows (the test is skipped here, but the TestSkipped
exception wasn't recognized as such because duplicate copies of
test_support got loaded; so the test looks like a failure under Windows
instead of a skip).
Repaired the import, but

        THIS TEST *WILL* FAIL ON OTHER SYSTEMS NOW!

Again due to the duplicate copies of test_support, the checked-in
"expected output" file actually contains verbose-mode output.  I can't
generate the *correct* non-verbose output on my system.  So, somebody
please do that.
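The failure mode is easy to reproduce in miniature: load the same file under
two different module names and you get two distinct copies of every class it
defines, so an except clause naming one copy's TestSkipped silently misses
the other copy's.  A minimal sketch (file contents and module names made up):

```python
# Demonstration (hypothetical names): the same source file loaded twice
# yields two distinct exception classes, so a skip raised by one copy
# is not caught by an "except" clause naming the other copy's class.
import importlib.util
import os
import tempfile

src = "class TestSkipped(Exception):\n    pass\n"

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "test_support.py")
    with open(path, "w") as f:
        f.write(src)

    def load(name):
        # Load the file under an explicit module name, bypassing sys.modules.
        spec = importlib.util.spec_from_file_location(name, path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        return mod

    a = load("test_support")        # imported one way...
    b = load("test.test_support")   # ...and again under another name

    # Same source, two distinct class objects:
    assert a.TestSkipped is not b.TestSkipped

    try:
        raise b.TestSkipped("skipped on this platform")
    except a.TestSkipped:
        caught = True
    except Exception:
        caught = False              # the skip now looks like a failure
    assert caught is False
```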





From mal at lemburg.com  Sat Aug 26 10:31:05 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 26 Aug 2000 10:31:05 +0200
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <200008260814.KAA06267@hera.informatik.uni-bonn.de>
Message-ID: <39A78048.DA793307@lemburg.com>

Markus Fleck wrote:
> 
> M.-A. Lemburg:
> > I've had a look at the c.l.p.a postings and the only special
> > header they include is the "Approved: fleck at informatik.uni-bonn.de"
> > header.
> 
> Basically, that's all it takes to post to a "moderated" newsgroup.
> (Talking about a case of "security by obscurity" here... :-/)
> Actually, the string following the "Approved: " may even be random...

Wow, so much for spam protection.
 
> In principle, I do have the time again to do daily moderation of incoming
> postings for c.l.py.a. Unfortunately, I currently lack the infrastructure
> (i.e. the moderation program), which went down together with the old
> starship. I was basically waiting for a version of Mailman that could be
> used to post to moderated newsgroups. (I should probably have been more
> vocal about that, or even should have started hacking Mailman myself... I
> *did* start to write something that would grab new announcements daily from
> Parnassus and post them to c.l.py.a, and I may even come to finish this in
> September, but that doesn't substitute for a "real" moderation tool for
> user-supplied postings. Also, it would probably be a lot easier for
> Parnassus postings to be built directly from the Parnassus database, instead
> from its [generated] HTML pages - the Parnassus author intended to supply
> such functionality, but I didn't hear from him yet, either.)
> 
> So what's needed now? Primarily, a Mailman installation that can post to
> moderated newsgroups (and maybe also do the mail2list gatewaying for
> c.l.py.a), and a mail alias that forwards mail for
> python-announce at python.org to that Mailman address. Some "daily digest"
> generator for Parnassus announcements would be nice to have, too, but
> that can only come once the other two things work.
> 
> Anyway, thanks for bringing this up again - it puts c.l.py.a at the
> top of my to-do list again (where it should be, of course ;-).

Barry has just installed a Mailman patch that allows gatewaying
to a moderated newsgroup.

He's also looking for volunteers to do the moderation. I guess
you should apply by sending Barry a private mail (see the
announcement on c.l.p.a ;-).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Sat Aug 26 11:56:20 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 26 Aug 2000 11:56:20 +0200
Subject: [Python-Dev] New dictionaries patch on SF
References: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>
Message-ID: <39A79444.D701EF84@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
>   I've been playing with dictionaries lately trying to stamp out a
> bug:
> 
> http://sourceforge.net/bugs/?func=detailbug&bug_id=112558&group_id=5470
> 
>   It looks like any fix that really works risks a fair bit of
> performance, and that's not good.  My best-effort fix so far is on
> SourceForge:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470
> 
> but doesn't quite work, according to Guido (I've not yet received
> instructions from him about how to reproduce the observed failure).

The solution to all this is not easy, since dictionaries can
effectively also be used *after* interpreter finalization (no
thread state). The current PyErr_* APIs all rely on having the
thread state available, so the dictionary implementation would
have to add an extra check for the thread state.

All this will considerably slow down the interpreter and then
only to solve a rare problem... perhaps we should reenable
passing back exceptions via PyDict_GetItem() instead ?!
This will slow down the interpreter too, but it'll at least
not cause the troubles with hacking the dictionary implementation
to handle exceptions during compares.

>   None the less, performance is an issue for dictionaries, so I came
> up with the idea to use a specialized version for string keys.  When I
> saw how few of the dictionaries created by the regression test ever
> had anything else, I tried to simply make all dictionaries the
> specialized variety (they can degrade themselves as needed).  What I
> found was that just over 2% of the dictionaries created by running the
> regression test ever held any non-string keys; this may be very
> different for "real" programs, but I'm curious about how different.
>   I've also done *no* performance testing on my patch for this yet,
> and don't expect it to be a big boost without something like the bug
> fix I mentioned above, but I could be wrong.  If anyone would like to
> play with the idea, I've posted my current patch at:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101309&group_id=5470

I very much like the idea of having a customizable lookup
method for builtin dicts.

This would allow using more specific lookup function for
different tasks (it would even be possible switching the
lookup functions at run-time via a new dict method), e.g.
one could think of optimizing string lookups using a
predefined set of slots or by assuring that the stored
keys map 1-1 by using an additional hash value modifier
which is automatically tuned to assure this feature. This
would probably greatly speed up lookups for both successful and
failing searches.

We could also add special lookup functions for keys
which are known not to raise exceptions during compares
(which is probably what motivated your patch, right ?)
and then fall back to a complicated and slow variant
for the general case.
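Fred's degrade-as-needed idea can be sketched in pure Python (an
illustrative model only, not the actual C patch; class and attribute
names are made up):

```python
# Sketch of a dict that assumes string-only keys until proven wrong,
# then degrades permanently to the general lookup path.
class SpecializingDict:
    def __init__(self):
        self._data = {}
        self._strings_only = True    # fast-path flag

    def __setitem__(self, key, value):
        if self._strings_only and not isinstance(key, str):
            self._strings_only = False   # degrade: general path from now on
        self._data[key] = value

    def __getitem__(self, key):
        if self._strings_only:
            # fast path: string hashing/compares cannot run user code,
            # so no exception handling is needed here
            return self._data[key]
        # slow path: arbitrary keys; comparisons may raise
        return self._data[key]

d = SpecializingDict()
d["spam"] = 1
assert d._strings_only           # still on the fast path
d[(1, 2)] = 2
assert not d._strings_only       # degraded after the first non-string key
```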

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From moshez at math.huji.ac.il  Sat Aug 26 12:01:40 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sat, 26 Aug 2000 13:01:40 +0300 (IDT)
Subject: [Python-Dev] Fixing test_poll.py for me just broke it for you
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKECCHCAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008261301090.20214-100000@sundial>

On Sat, 26 Aug 2000, Tim Peters wrote:

> Again due to the duplicate copies of test_support, the checked-in
> "expected output" file actually contains verbose-mode output.  I can't
> generate the *correct* non-verbose output on my system.  So, somebody
> please do that.

Done.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Sat Aug 26 12:27:48 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 26 Aug 2000 12:27:48 +0200
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
In-Reply-To: <39A78048.DA793307@lemburg.com>; from mal@lemburg.com on Sat, Aug 26, 2000 at 10:31:05AM +0200
References: <200008260814.KAA06267@hera.informatik.uni-bonn.de> <39A78048.DA793307@lemburg.com>
Message-ID: <20000826122748.M16377@xs4all.nl>

On Sat, Aug 26, 2000 at 10:31:05AM +0200, M.-A. Lemburg wrote:
> Markus Fleck wrote:
> > > I've had a look at the c.l.p.a postings and the only special
> > > header they include is the "Approved: fleck at informatik.uni-bonn.de"
> > > header.

> > Basically, that's all it takes to post to a "moderated" newsgroup.
> > (Talking about a case of "security by obscurity" here... :-/)
> > Actually, the string following the "Approved: " may even be random...

Yes, it can be completely random. We're talking about USENET here, it wasn't
designed for complicated procedures :-)

> Wow, so much for spam protection.

Well, we have a couple of 'moderated' lists locally, and I haven't, in 5
years, seen anyone fake an Approved: header. Of course, the penalty of doing
so would be severe, but we haven't even had to warn anyone, either, so how
could they know that ? :)

I also think most news-administrators are quite uhm, strict, in that kind of
thing. If any of our clients were found faking Approved: headers, they'd get
a not-very-friendly warning. If they do it a second time, they lose their
account. The news administrators I talked with at SANE2000 (sysadmin
conference) definitely shared the same attitude. This isn't email, with
arbitrary headers and open relays and such, this is usenet, where you have
to have a fair bit of clue to keep your newsserver up and running :)

And up to now, spammers have been either too dumb or too smart to figure out
how to post to moderated newsgroups... I hope that if anyone ever does, the
punishment will be severe enough to scare away the rest ;P
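For the curious, the entire "moderation" protocol really is one header.
A hypothetical sketch of the message a moderator would inject (all
addresses and subjects made up):

```python
# Constructing a post for a moderated newsgroup: the Approved: header
# is the whole mechanism.  Addresses here are placeholders.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "announcer@example.org"
msg["Newsgroups"] = "comp.lang.python.announce"
msg["Subject"] = "New module released"
msg["Approved"] = "moderator@example.org"   # this is all it takes
msg.set_content("Announcement body goes here.")

# Without that one header, a conforming news server rejects the post.
assert msg["Approved"] == "moderator@example.org"
```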

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Sat Aug 26 13:48:59 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 26 Aug 2000 06:48:59 -0500
Subject: [Python-Dev] Python 1.6 bug fix strategy
In-Reply-To: Your message of "Fri, 25 Aug 2000 19:16:00 -0400."
             <LNBBLJKPBEHFEDALKOLCOEBBHCAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCOEBBHCAA.tim_one@email.msn.com> 
Message-ID: <200008261148.GAA07398@cj20424-a.reston1.va.home.com>

> [Jeremy Hylton]
> > We have gotten several bug reports recently based on 1.6b1.  What
> > plans, if any, are there to fix these bugs before the 1.6 final
> > release?
> 
> My understanding is that 1.6final is done, except for plugging in a license;
> i.e., too late even for bugfixes.  If true, "Fixed in 2.0" will soon be a
> popular response to all sorts of things -- unless CNRI intends to do its own
> work on 1.6.

Applying the fix for writelines is easy, and I'll take care of it.

The other patch that jeremy mentioned
(http://sourceforge.net/bugs/?group_id=5470&func=detailbug&bug_id=111403)
has no fix that I know of, is not easily reproduced, and was only
spotted in embedded code, so it might be the submitter's fault.
Without a reproducible test case it's unlikely to get fixed, so I'll
let that one go.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Sat Aug 26 17:11:12 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 26 Aug 2000 10:11:12 -0500 (CDT)
Subject: [Python-Dev] Is Python moving too fast? (was Re: Is python commercializationazing? ...)
In-Reply-To: <8o8101020mk@news1.newsguy.com>
References: <Pine.GSO.4.10.10008251845380.13902-100000@sundial>
	<8o66m9$cmn$1@slb3.atl.mindspring.net>
	<slrn8qdfq2.2ko.thor@localhost.localdomain>
	<39A6B447.3AFC880E@seebelow.org>
	<8o8101020mk@news1.newsguy.com>
Message-ID: <14759.56848.238001.346327@beluga.mojam.com>

    Alex> When I told people that the 1.5.2 release I was using, the latest
    Alex> one, had been 100% stable for over a year, I saw lights of wistful
    Alex> desire lighting in their eyes (at least as soon as they understood
    Alex> that here, for once, 'stable' did NOT mean 'dead':-)....  Oh well,
    Alex> it was nice while it lasted; now, the perception of Python will
    Alex> switch back from "magically stable and sound beyond ordinary
    Alex> mortals' parameters" to "quite ready to change core language for
    Alex> the sake of a marginal and debatable minor gain", i.e., "just
    Alex> another neat thing off the net".

I began using Python in early 1994, probably around version 1.0.1.  In the
intervening 6+ years, Python has had what I consider to be five significant
releases: 1.1 (10/11/94), 1.2 (4/10/95), 1.3 (10/8/95), 1.4 (10/25/96) and
1.5 (12/31/97).  (1.5.1 was released 4/13/98 and 1.5.2 was released
4/13/99).  So, while it's been a bit over a year since 1.5.2 was released,
Python really hasn't changed much in over 2.5 years. Guido and his core team
have been very good at maintaining backward compatibility while improving
language features and performance and keeping the language accessible to new
users.

We are now in the midst of several significant changes to the Python
development environment.  From my perspective as a friendly outsider, here's
what I see:

    1.  For the first time in its 10+ year history, the language actually
        has a team of programmers led by Guido whose full-time job is to
        work on the language.  To the best of my knowledge, Guido's work at
        CNRI and CWI focused on other stuff, to which Python was applied as
        one of the tools.  The same observation can be made about the rest
        of the core PythonLabs team: Tim, Barry, Fred & Jeremy.  All had
        other duties at their previous positions.  Python was an important
        tool in what they did, but it wasn't what they got measured by in
        yearly performance reviews.

    2.  For the first time in its history, a secondary development team has
        surfaced in a highly visible and productive way, thanks to the
        migration to the SourceForge CVS repository.  Many of those people
        have been adding new ideas and code to the language all along, but
        the channel between their ideas and the core distribution was a very
        narrow one.  In the past, only the people at CNRI (and before that,
        CWI) could make direct changes to the source code repository.  In
        fact, I believe Guido used to be the sole filter of every new
        contribution to the tree.  Everything had to pass his eyeballs at
        some point.  That was a natural rate limiter on the pace of change,
        but I believe it probably also filtered out some very good ideas.

	While the SourceForge tools aren't perfect, their patch manager and
	bug tracking system, coupled with the externally accessible CVS
	repository, make it much easier for people to submit changes and for
	developers to manage those changes.  At the moment, browsing the
	patch manager with all options set to "any" shows 22 patches,
	submitted by 11 different people, which have been assigned to 9
	different people (there is a lot of overlap between the gang of 9 and
	the gang of 11).  That amount of parallelism in the development just
	wasn't possible before.

    3.  Python is now housed in a company formed to foster open source
        software development.  I won't pretend I understand all the
        implications of that move beyond the obvious reasons stated in item
        one, but there is bound to be some desire by BeOpen to put their
        stamp on the language.  I believe that there are key changes to the
        language that would not have made it into 2.0 had the license
        wrangling between CNRI and BeOpen not dragged out as long as it did.
        Those of us involved as active developers took advantage of that
        lull.  (I say "we", because I was a part of that.  I pushed Greg
        Ewing's original list comprehensions prototype along when the
        opportunity arose.)

    4.  Python's user and programmer base has grown dramatically in the past
        several years.  While it's not possible to actually measure the size
        of the user community, you can get an idea of its growth by looking
        at the increase in list traffic.  Taking a peek at the posting
        numbers at

            http://www.egroups.com/group/python-list

        is instructive.  In January of 1994 there were 76 posts to the list.
        In January of 2000 that number grew to 2678.  (That's with much less
        relative participation today by the core developers than in 1994.)

        In January of 1994 I believe the python-list at cwi.nl (with a possible
        Usenet gateway) was the only available discussion forum about
        Python.  Egroups lists 45 Python-related lists today (I took their
        word for it - they may stretch things a bit).  There are at least
        three (maybe four) distinct dialects of the language as well, not to
        mention the significant growth in supported platforms in the past
        six years.

All this adds up to a system that is due for some significant change.  Those
of us currently involved are still getting used to the new system, so
perhaps things are moving a bit faster than if we were completely familiar
with this environment.  Many of the things that are new in 2.0 have been
proposed on the list off and on for a long time.  Unicode support, list
comprehensions, augmented assignment and extensions to the print statement
come to mind.  They are not new ideas tossed in with a beer chaser (like
"<blink>").  From the traffic on python-dev about Unicode support, I believe
it was the most challenging thing to add to the language.  By comparison,
the other three items I mentioned above were relatively simple concepts to
grasp and implement.

All these ideas were proposed to the community in the past, but have only
recently gained their own voice (so to speak) with the restructuring of the
development environment and growth in the base of active developers.

This broadening of the channel between the development community and the CVS
repository will obviously take some getting used to.  Once 2.0 is out, I
don't expect this (relatively) furious pace to continue.

-- 
Skip Montanaro (skip at mojam.com)
http://www.mojam.com/
http://www.musi-cal.com/

[Completely unrelated aside: I've never voiced an opinion - pro or con -
about the new print syntax, either on python-list or python-dev.  This will
be my only observation.

I have used the following print statement format for several years when I
wanted to insert some temporary debugging statements that I knew I would
later remove or comment out:

    print ">>", this, that, and, the, other, stuff

because it would make it easier to locate them with a text editor.  (Right
shift, while a very useful construct, is hardly common in my programming.)
Now, I'm happy to say, I will no longer have to quote the ">>" and it will
be easier to get the output to go to sys.stderr...]



From effbot at telia.com  Sat Aug 26 17:31:54 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 26 Aug 2000 17:31:54 +0200
Subject: [Python-Dev] Bug #112265: Tkinter seems to treat everything as Latin 1
Message-ID: <001801c00f72$c72d5860$f2a6b5d4@hagrid>

summary: Tkinter passes 8-bit strings to Tk without any
preprocessing.  Tk itself expects UTF-8, but passes bogus
UTF-8 data right through...  or in other words, Tkinter
treats any 8-bit string that doesn't contain valid UTF-8
as an ISO Latin 1 string...

:::

maybe Tkinter should raise a UnicodeError instead (just
like string comparisons etc).  example:

    w = Label(text="<cp1250 string>")
    UnicodeError: ASCII decoding error: ordinal not in range(128)

this will break existing code, but I think that's better than
confusing the hell out of anyone working on a non-Latin-1
platform...

+0 from myself -- there's no way we can get a +1 solution
(source encoding) into 2.0 without delaying the release...

:::

for some more background, see the bug report below, and
my followup.

</F>

---

Summary: Impossible to get Win32 default font
encoding in widgets

Details: I did not manage to obtain correct font
encoding in widgets on Win32 (NT Workstation,
Polish version, default encoding cp1250). All cp1250
Polish characters were displayed incorrectly. I think,
all characters that do not belong to Latin-1 will be
displayed incorrectly. Regarding Python1.6b1, I
checked the Tcl/Tk installation (8.3.2). The pure
Tcl/Tk programs DO display characters in cp1250
correctly.

As far as I know, the Tcl interpreter works with
UTF-8 encoded strings. Does Python1.6b1 really
know about it?

---

Follow-Ups:

Date: 2000-Aug-26 08:04
By: effbot

Comment:
this is really a "how do I", rather than a bug
report ;-)

:::

In 1.6 and beyond, Python's default 8-bit
encoding is plain ASCII.  this encoding is only
used when you're using 8-bit strings in "unicode
contexts" -- for example, if you compare an
8-bit string to a unicode string, or pass it to
a subsystem designed to use unicode strings.

If you pass an 8-bit string containing
characters outside the ASCII range to a function
expecting a unicode string, the result is
undefined (it usually results in an exception,
but some subsystems may have other ideas).

Finally, Tkinter now supports Unicode.  In fact,
it assumes that all strings passed to it are
Unicode.  When using 8-bit strings, it's only
safe to use plain ASCII.

Tkinter currently doesn't raise exceptions for
8-bit strings with non-ASCII characters, but it
probably should.  Otherwise, Tk will attempt to
parse the string as a UTF-8 string, and if that
fails, it assumes ISO-8859-1.

:::

Anyway, to write portable code using characters
outside the ASCII character set, you should use
unicode strings.

in your case, you can use:

   s = unicode("<a cp1250 string>", "cp1250")

to get the platform's default encoding, you can do:

   import locale
   language, encoding = locale.getdefaultlocale()

where encoding should be "cp1250" on your box.

:::

The reason this works under Tcl/Tk is that Tcl
assumes that your source code uses the
platform's default encoding, and converts things
to Unicode (not necessarily UTF-8) for you under
the hood.  Python 2.1 will hopefully support
*explicit* source encodings, but 1.6/2.0
doesn't.
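effbot's advice, in today's spelling (assumption: the two bytes below are
just sample Polish characters in code page 1250, and the bytes/str split
is the modern one):

```python
# Decode 8-bit data explicitly instead of letting Tk guess the encoding.
data = b"\xb9\xea"               # "a-ogonek", "e-ogonek" as cp1250 bytes
text = data.decode("cp1250")     # the explicit, portable conversion
assert text == "\u0105\u0119"    # LATIN SMALL LETTER A/E WITH OGONEK

# Asking the platform for its preferred encoding, the modern
# equivalent of locale.getdefaultlocale():
import locale
encoding = locale.getpreferredencoding(False)
assert isinstance(encoding, str)
```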

-------------------------------------------------------

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=112265&group_id=5470




From effbot at telia.com  Sat Aug 26 17:43:38 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 26 Aug 2000 17:43:38 +0200
Subject: [Python-Dev] Bug #112265: Tkinter seems to treat everything as Latin 1
References: <001801c00f72$c72d5860$f2a6b5d4@hagrid>
Message-ID: <002401c00f74$6896a520$f2a6b5d4@hagrid>

>     UnicodeError: ASCII decoding error: ordinal not in range(128)

btw, what the heck is an "ordinal"?

(let's see: it's probably not "a book of rites for the ordination of
deacons, priests, and bishops".  how about an "ordinal number"?
that is, "a number designating the place (as first, second, or third)
occupied by an item in an ordered sequence".  hmm.  does this
mean that I cannot use strings longer than 128 characters?  but
this string was only 12 characters long.  wait, there's another
definition here: "a number assigned to an ordered set that
designates both the order of its elements and its cardinal number".
hmm.  what's a "cardinal"?  "a high ecclesiastical official of the
Roman Catholic Church who ranks next below the pope and is
appointed by him to assist him as a member of the college of
cardinals"?  ... oh, here it is: "a number (as 1, 5, 15) that is
used in simple counting and that indicates how many elements
there are in an assemblage".  "assemblage"?)

:::

wouldn't "character" be easier to grok for mere mortals?

...and isn't "range(128)" overly cute?

:::

how about:

UnicodeError: ASCII decoding error: character not in range 0-127

</F>




From tim_one at email.msn.com  Sat Aug 26 22:45:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 16:45:27 -0400
Subject: [Python-Dev] test_gettext fails on Windows
Message-ID: <LNBBLJKPBEHFEDALKOLCOEDCHCAA.tim_one@email.msn.com>

Don't know whether this is unique to Win98.

test test_gettext failed -- Writing: 'mullusk', expected: 'bacon\012T'

Here's -v output:

test_gettext
installing gettext
calling bindtextdomain with localedir .
.
None
gettext
gettext
mullusk
.py 1.1
 Throatwobble
nudge nudge
mullusk
.py 1.1
 Throatwobble
nudge nudge
mullusk
.py 1.1
 Throatwobble
nudge nudge
mullusk
.py 1.1
 Throatwobble
nudge nudge
This module provides internationalization and localization
support for your Python programs by providing an interface to the GNU
gettext message catalog library.
nudge nudge
1
nudge nudge

Has almost nothing in common with the expected output!





From tim_one at email.msn.com  Sat Aug 26 22:59:42 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 16:59:42 -0400
Subject: [Python-Dev] test_gettext fails on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEDCHCAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEDFHCAA.tim_one@email.msn.com>

> ...
> Has almost nothing in common with the expected output!

OK, I understand this now:  the setup function opens a binary file for
writing but neglected to *say* it was binary in the "open".  Huge no-no for
portability.  About to check in the fix.
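The bug in one line: on Windows, text-mode writes translate "\n" to
"\r\n", which corrupts binary data such as a .mo catalog.  A minimal
sketch of the fix (file name made up):

```python
# Binary data must be written in binary mode, or Windows newline
# translation silently corrupts it on the way to disk.
import os
import tempfile

payload = b"GNU mo data\nwith \x00 bytes"
path = os.path.join(tempfile.mkdtemp(), "messages.mo")

with open(path, "wb") as f:      # "wb", not "w": no newline translation
    f.write(payload)

with open(path, "rb") as f:
    assert f.read() == payload   # round-trips exactly on every platform
```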






From thomas at xs4all.net  Sat Aug 26 23:12:31 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 26 Aug 2000 23:12:31 +0200
Subject: [Python-Dev] cPickle
Message-ID: <20000826231231.P16377@xs4all.nl>

I just noticed that test_cpickle makes Python crash (with a segmentation
fault) when there is no copy_reg. The funny bit is this:

centurion:~ > ./python Lib/test/regrtest.py test_cpickle
test_cpickle
test test_cpickle skipped --  No module named copy_reg
1 test skipped: test_cpickle

centurion:~ > ./python Lib/test/regrtest.py test_cookie test_cpickle
test_cookie
test test_cookie skipped --  No module named copy_reg
test_cpickle
Segmentation fault (core dumped)

I suspect there is a bug in the import code, in the case of failed imports. 
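The suspected failure mode, sketched with a toy importer (illustrative
only, not the actual 2.0 code path): the import machinery registers a
module in sys.modules *before* executing it, so a module that raises
during import must be unregistered again, or a later import finds the
stale, half-initialized entry.

```python
# Toy stand-in for the import machinery, showing why cleanup after a
# failed import matters.  Names here are made up for the demo.
import sys
import types

def fake_import(name, body):
    mod = sys.modules[name] = types.ModuleType(name)  # registered first...
    try:
        exec(body, mod.__dict__)                      # ...then executed
    except Exception:
        del sys.modules[name]   # without this, a broken entry lingers
        raise
    return mod

try:
    fake_import("copy_reg_demo", "raise ImportError('no module')")
except ImportError:
    pass
assert "copy_reg_demo" not in sys.modules   # no stale entry left behind
```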

Holmes-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Sat Aug 26 23:14:37 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 17:14:37 -0400
Subject: [Python-Dev] Bug #112265: Tkinter seems to treat everything as Latin 1
In-Reply-To: <002401c00f74$6896a520$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEDFHCAA.tim_one@email.msn.com>

>>     UnicodeError: ASCII decoding error: ordinal not in range(128)

> btw, what the heck is an "ordinal"?

It's a technical term <wink>.  But it's used consistently in Python, e.g.,
that's where the name of the builtin ord function comes from!

>>> print ord.__doc__
ord(c) -> integer

Return the integer ordinal of a one character string.
>>>

> ...
> how about an "ordinal number"?  that is, "a number designating the
> place (as first, second, or third) occupied by an item in an
> ordered sequence".

Exactly.  Each character has an arbitrary but fixed position in an arbitrary
but ordered sequence of all characters.  This isn't hard.
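In other words, the "ordinal" in the message is exactly what ord()
returns, and (in today's spelling) the error text still uses the same
phrase:

```python
# "ordinal" is just ord() of the offending character:
assert ord("A") == 65
assert ord("\xe9") == 233        # not in range(128), hence the complaint

# the 1.6-era failure, reproduced with an explicit ASCII decode:
try:
    b"\xe9".decode("ascii")
except UnicodeDecodeError as exc:
    message = str(exc)
assert "ordinal not in range(128)" in message
```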

> wouldn't "character" be easier to grok for mere mortals?

Doubt it -- they're already confused about the need to distinguish between a
character and its encoding, and the *character* is most certainly not "in"
or "out" of any range of integers.

> ...and isn't "range(128)" overly cute?

Yes.

> UnicodeError: ASCII decoding error: character not in range 0-127

As above, it makes no sense.  How about compromising on

> UnicodeError: ASCII decoding error: ord(character) > 127

?





From tim_one at email.msn.com  Sun Aug 27 11:57:42 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 27 Aug 2000 05:57:42 -0400
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: <20000825141623.G17277@ludwig.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com>

[Greg Ward]
> ...yow: the Perl community is really going overboard in proposing
> enhancements:
> ...
>    4. http://dev.perl.org/rfc/

Following that URL is highly recommended!  There's a real burst of
creativity blooming there, and everyone weary of repeated Python debates
should find it refreshing to discover exactly the same arguments going on
over there (lazy lists, curried functions, less syntax, more syntax, less
explicit, more explicit, go away this isn't stinking LISP, ya but maybe it
oughta be, yadda yadda yadda).  Except the *terms* of the debate are
inverted in so many ways!  For example, this is my favorite Killer Appeal to
Principle so far:

    Perl is really hard for a machine to parse.  *Deliberately*.  If
    you think it shouldn't be, you're missing something.

Certainly a good antidote to Python inbreeding <wink>.

Compared to our PEPs, the Perl RFCs are more a collection of wishlists --
implementation details are often sketchy, or even ignored.  But they're in a
brainstorming mode, so I believe that's both expected & encouraged now.

I was surprised by how often Python gets mentioned, and sometimes by how
confusedly.  For example, in the Perl Coroutines RFC:

    Unlike coroutines as defined by Knuth, and implemented in languages
    such as Simula or Python, perl does not have an explicit "resume"
    call for invoking coroutines.

Mistake -- or Guido's time machine <wink>?

Those who hate Python PEP 214 should check out Perl RFC 39, which proposes
to introduce

    ">" LIST "<"

as a synonym for

    "print" LIST

My favorite example:

    perl -e '><><' # cat(1)

while, of course

    ><;

prints the current value of $_.

I happen to like Perl enough that I enjoy this stuff.  You may wish to take
a different lesson from it <wink>.

whichever-it's-a-mistake-to-ignore-people-having-fun-ly y'rs  - tim





From tim_one at email.msn.com  Sun Aug 27 13:13:35 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 27 Aug 2000 07:13:35 -0400
Subject: [Python-Dev] Is Python moving too fast? (was Re: Is python commercializationazing? ...)
In-Reply-To: <14759.56848.238001.346327@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEFHCAA.tim_one@email.msn.com>

[Skip Montanaro]
> I began using Python in early 1994, probably around version 1.0.1.

And it's always good to hear a newcomer's perspective <wink>.  Seriously, it
was a wonderful sane sketch of what's been happening lately.  Some glosses:

> ...
> From my perspective as a friendly outsider, ...

Nobody fall for that ingratiating ploy:  Skip's a Python Developer at
SourceForge too.  And glad to have him there!

> ...
>     3.  Python is now housed in a company formed to foster open source
>         software development.  I won't pretend I understand all the
>         implications of that move ... but there is bound to be some
>         desire by BeOpen to put their stamp on the language.

There is more desire on BeOpen's part-- at least at first --to just pay our
salaries.  Many people have asked for language or library changes or
enhancements in the past based on demanding real-world needs, but until very
recently the only possible response was "huh -- send in a patch, and maybe
Guido will check it in".  Despite the high approachability of Python's
implementation, often that's just too much of a task for the people seeking
solutions.  But if they want it enough to pay for it, or aren't even sure
exactly what they need, they can hire us to do it now (shameless plug:
mailto:pythonlabs-info at beopen.com).  I doubt there's any team better
qualified, and while I've been a paid prostitute my whole career, you can
still trust Guido to keep us honest <wink>.  For example, that's how
Python's Unicode features got developed (although at CNRI).

> I believe that there are key changes to the language that would not
> have made it into 2.0 had the license wrangling between CNRI and
> BeOpen not dragged out as long as it did.

Absolutely.  You may <snort> have missed some of the endless posts on this
topic:  we were *going* to release 2.0b1 on July 1st.  I was at Guido's
house late the night before, everything was cooking, and we were mere hours
away from uploading the 2.0b1 tarball for release.  Then CNRI pulled the
plug in an email, and we've been trying to get it back into the outlet ever
since.  When it became clear that things weren't going to settle at once,
and that we needed to produce a 1.6 release too with *only* the stuff
developed under CNRI's tenure, that left us twiddling our thumbs.  There
were a pile of cool (but, as you said later, old!) ideas Guido wanted to get
in anyway, so he opened the door.  Had things turned out as we *hoped*, they
would have gone into 2.1 instead, and that's all there was to that.

> ...
> All this adds up to a system that is due for some significant change.

Sure does.  But it's working great so far, so don't jinx it by questioning
*anything* <wink>.

> ...
> Once 2.0 is out, I don't expect this (relatively) furious pace to
> continue.

I suspect it will continue-- maybe even accelerate --but *shift*.  We're
fast running out of *any* feasible (before P3K) "core language" idea that
Guido has ever had a liking for, so I expect the core language changes to
slow waaaaay down again.  The libraries may be a different story, though.
For example, there are lots of GUIs out there, and Tk isn't everyone's
favorite yet remains especially favored in the std distribution; Python is
being used in new areas where it's currently harder to use than it should be
(e.g., deeply embedded systems); some of the web-related modules could
certainly stand a major boost in consistency, functionality and ease-of-use;
and fill in the blank _________.  There are infrastructure issues too, like
what to do on top of Distutils to make it at least as useful as CPAN.  Etc
etc etc ... there's a *ton* of stuff to be done beyond fiddling with the
language per se.  I won't be happy until there's a Python in every toaster
<wink>.

although-*perhaps*-light-bulbs-don't-really-need-it-ly y'rs  - tim





From thomas at xs4all.net  Sun Aug 27 13:42:28 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 27 Aug 2000 13:42:28 +0200
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sun, Aug 27, 2000 at 05:57:42AM -0400
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com>
Message-ID: <20000827134228.A500@xs4all.nl>

On Sun, Aug 27, 2000 at 05:57:42AM -0400, Tim Peters wrote:
> [Greg Ward]
> > ...yow: the Perl community is really going overboard in proposing
> > enhancements:
> > ...
> >    4. http://dev.perl.org/rfc/

> Following that URL is highly recommended!

Indeed. Thanx for pointing it out again (and Greg, too), I've had a barrel
of laughs (and good impressions, both) already :)

> I was surprised by how often Python gets mentioned, and sometimes by how
> confusedly.

Well, 'python' is mentioned explicitly 12 times, in 7 different RFCs.
There'll be some implicit ones, of course, but it's not as much as I would
have expected, based on how many times I hear my perl-hugging colleague
comment on how cool a particular Python feature is ;)

> For example, in the Perl Coroutines RFC:
> 
>     Unlike coroutines as defined by Knuth, and implemented in languages
>     such as Simula or Python, perl does not have an explicit "resume"
>     call for invoking coroutines.
> 
> Mistake -- or Guido's time machine <wink>?

Neither. Someone else's time machine, as the URL given in the references
section shows: they're not talking about coroutines in the core, but as
'addon'. And not necessarily as stackless, either, there are a couple of
implementations.

(Other than that, I don't like the Perl coroutine proposal: I think
single-process coroutines make a lot more sense, though I can see why they
are arguing for such an 'i/o-based' model.)

My personal favorite, up to now, is RFC 28: Perl should stay Perl. Anyone
upset by the new print statement should definitely read it ;) The other RFCs
going "don't change *that*" are good too, showing that not everyone is
losing themselves in wishes ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From effbot at telia.com  Sun Aug 27 17:20:08 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sun, 27 Aug 2000 17:20:08 +0200
Subject: [Python-Dev] If you thought there were too many PEPs...
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com> <20000827134228.A500@xs4all.nl>
Message-ID: <000901c0103a$4a48b380$f2a6b5d4@hagrid>

thomas wrote:
> My personal favorite, up to now, is RFC 28: Perl should stay Perl.

number 29 is also a good one: don't ever add an alias
for "unlink" (written by someone who has never ever
read the POSIX or ANSI C standards ;-)

:::

btw, Python's remove/unlink implementation is slightly
broken -- they both map to unlink, but that's not the
right way to do it:

from SUSv2:

    int remove(const char *path);

    If path does not name a directory, remove(path)
    is equivalent to unlink(path). 

    If path names a directory, remove(path) is equi-
    valent to rmdir(path). 

should I fix this?
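For reference, the SUSv2 behaviour quoted above can be sketched as a user-level shim (the helper name is made up here; the real fix would of course go into posixmodule.c):

```python
import os

def remove_susv2(path):
    """remove() per SUSv2: rmdir() for directories, unlink() for everything else."""
    if os.path.isdir(path):
        os.rmdir(path)   # path names a directory: equivalent to rmdir(path)
    else:
        os.unlink(path)  # path does not name a directory: equivalent to unlink(path)
```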

</F>




From guido at beopen.com  Sun Aug 27 20:28:46 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 27 Aug 2000 13:28:46 -0500
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: Your message of "Sun, 27 Aug 2000 17:20:08 +0200."
             <000901c0103a$4a48b380$f2a6b5d4@hagrid> 
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com> <20000827134228.A500@xs4all.nl>  
            <000901c0103a$4a48b380$f2a6b5d4@hagrid> 
Message-ID: <200008271828.NAA14847@cj20424-a.reston1.va.home.com>

> btw, Python's remove/unlink implementation is slightly
> broken -- they both map to unlink, but that's not the
> right way to do it:
> 
> from SUSv2:
> 
>     int remove(const char *path);
> 
>     If path does not name a directory, remove(path)
>     is equivalent to unlink(path). 
> 
>     If path names a directory, remove(path) is equi-
>     valent to rmdir(path). 
> 
> should I fix this?

That's a new one -- didn't exist when I learned Unix.

I guess we can fix this in 2.1.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From dgoodger at bigfoot.com  Sun Aug 27 21:27:22 2000
From: dgoodger at bigfoot.com (David Goodger)
Date: Sun, 27 Aug 2000 15:27:22 -0400
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <39A68B42.4E3F8A3D@lemburg.com>
References: <39A68B42.4E3F8A3D@lemburg.com>
Message-ID: <B5CEE3D9.81F2%dgoodger@bigfoot.com>

Some comments:

1. I think the idea of attribute docstrings is a great one. It would assist
in the auto-documenting of code immeasurably.

2. I second Frank Niessink (frankn=nuws at cs.vu.nl), who wrote:

> wouldn't the naming
> scheme <attributename>.__doc__ be a better one?
> 
> So if:
> 
> class C:
>   a = 1
>   """Description of a."""
> 
> then:
> 
> C.a.__doc__ == "Description of a."

'C.a.__doc__' is far more natural and Pythonic than 'C.__doc__a__'. The
latter would also require ugly tricks to access.

3. However, what would happen to C.a.__doc__ (or C.__doc__a__ for that
matter) when attribute 'a' is reassigned? For example:

    class C:
        a = 1  # class attribute, default value for instance attribute
        """Description of a."""

        def __init__(self, arg=None):
            if arg is not None:
                self.a = arg  # instance attribute
            self.b = []
            """Description of b."""

    instance = C(2)

What would instance.a.__doc__ (instance.__doc__a__) be? Would the __doc__ be
wiped out by the reassignment, or magically remain unless overridden?

4. How about instance attributes that are never class attributes? Like
'instance.b' in the example above?

5. Since docstrings "belong" to the attribute preceding them, wouldn't it
be more Pythonic to write:

    class C:
        a = 1
            """Description of a."""

? (In case of mail viewer problems, each line above is indented relative to
the one before.) This emphasizes the relationship between the docstring and
the attribute. Of course, such an approach may entail a more complicated
modification to the Python source, but also more complete IMHO.

6. Instead of mangling names, how about an alternative approach? Each class,
instance, module, and function gets a single special name (call it
'__docs__' for now), a dictionary of attribute-name to docstring mappings.
__docs__ would be the docstring equivalent to __dict__. These dictionary
entries would not be affected by reassignment unless a new docstring is
specified. So, in the example from (3) above, we would have:

    >>> instance.__docs__
    {'b': 'Description of b.'}
    >>> C.__docs__
    {'a': 'Description of a.'}

Just as there is a built-in function 'dir' to apply inheritance rules to
instance.__dict__, there would have to be a function 'docs' to apply
inheritance to instance.__docs__:

    >>> docs(instance)
    {'a': 'Description of a.', 'b': 'Description of b.'}

There are repercussions here. A module containing the example from (3) above
would have a __docs__ dictionary containing mappings for docstrings for each
top-level class and function defined, in addition to docstrings for each
global variable.
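A sketch of what the proposed 'docs' function could look like, reusing the proposal's '__docs__' name (the implementation is guesswork; the proposal does not specify one):

```python
def docs(obj):
    """Collect __docs__ entries along the inheritance chain,
    with more-derived classes overriding their bases."""
    klass = obj if isinstance(obj, type) else type(obj)
    result = {}
    # walk bases from most generic to most specific so derived entries win
    for base in reversed(klass.__mro__):
        result.update(base.__dict__.get('__docs__', {}))
    if not isinstance(obj, type):
        # instance-level docstrings override class-level ones
        result.update(getattr(obj, '__dict__', {}).get('__docs__', {}))
    return result

class C:
    __docs__ = {'a': 'Description of a.'}

instance = C()
instance.__docs__ = {'b': 'Description of b.'}
```

With these definitions, docs(C) yields only the class entry, while docs(instance) merges in the instance entry, matching the example above.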


In conclusion, although this proposal has great promise, it still needs
work. If it is to be done at all, better to do it right.

This could be the first true test of the PEP system; getting input from the
Python user community as well as the core PythonLabs and Python-Dev groups.
Other PEPs have been either after-the-fact or, in the case of those features
approved for inclusion in Python 2.0, too rushed for a significant
discussion.

-- 
David Goodger    dgoodger at bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)




From thomas at xs4all.net  Mon Aug 28 01:16:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 01:16:24 +0200
Subject: [Python-Dev] Python keywords
Message-ID: <20000828011624.E500@xs4all.nl>


Mark, (and the rest of python-dev)

There was a thread here a few weeks ago (or so, I seem to have misplaced
that particular thread :P) about using Python keywords as identifiers in
some cases. You needed that ability for .NET-Python, where the specs say any
identifier should be possible as methods and attributes, and there were some
comments on the list on how to do that (by Guido, for one.)

Well, the attached patch sort-of does that. I tried making it a bit nicer,
but that involved editing all places that currently use the NAME-type node,
and most of those don't advertise that they're doing that :-S The attached
patch is in no way nice, but it does work:

>>> class X:
...     def print(self, x):
...             print "printing", x
... 
>>> x = X()
>>> x.print(1)
printing 1
>>> x.print
<method X.print of X instance at 0x8207fc4>
>>> x.assert = 1
>>>

However, it also allows this at the top level, currently:
>>> def print(x):
...     print "printing", x
... 

which results in some unexpected behaviour:
>>> print(1)
1
>>> globals()['print'](1)
printing 1

But when combining it with modules, it does work as expected, of course:

# printer.py:
def print(x, y):
        print "printing", x, "and", y
#

>>> import printer
>>> printer.print
<function print at 0x824120c>
>>> printer.print(1, 2)
printing 1 and 2

Another plus-side of this particular method is that it's simple and
straightforward, if a bit maintenance-intensive :-) But the big question is:
is this enough for what you need ? Or do you need the ability to use
keywords in *all* identifiers, including variable names and such ? Because
that is quite a bit harder ;-P
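For comparison, the only workaround available without any grammar change is to go through getattr/setattr, which is roughly what this patch is meant to make unnecessary (a minimal illustration):

```python
class X:
    pass

x = X()
# 'assert' is a keyword, so "x.assert = 1" is a SyntaxError today;
# string-based attribute access sidesteps the parser entirely.
setattr(x, 'assert', 1)
assert getattr(x, 'assert') == 1
```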

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
-------------- next part --------------
Index: Grammar/Grammar
===================================================================
RCS file: /cvsroot/python/python/dist/src/Grammar/Grammar,v
retrieving revision 1.41
diff -c -r1.41 Grammar
*** Grammar/Grammar	2000/08/24 20:11:30	1.41
--- Grammar/Grammar	2000/08/27 23:15:53
***************
*** 19,24 ****
--- 19,28 ----
  #diagram:output\textwidth 20.04cm\oddsidemargin  0.0cm\evensidemargin 0.0cm
  #diagram:rules
  
+ # for reference: everything allowed in a 'def' or trailer expression.
+ # (I might have missed one or two ;)
+ # ( NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | 'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | 'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | 'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally')
+ 
  # Start symbols for the grammar:
  #	single_input is a single interactive statement;
  #	file_input is a module or sequence of commands read from an input file;
***************
*** 28,34 ****
  file_input: (NEWLINE | stmt)* ENDMARKER
  eval_input: testlist NEWLINE* ENDMARKER
  
! funcdef: 'def' NAME parameters ':' suite
  parameters: '(' [varargslist] ')'
  varargslist: (fpdef ['=' test] ',')* ('*' NAME [',' '**' NAME] | '**' NAME) | fpdef ['=' test] (',' fpdef ['=' test])* [',']
  fpdef: NAME | '(' fplist ')'
--- 32,38 ----
  file_input: (NEWLINE | stmt)* ENDMARKER
  eval_input: testlist NEWLINE* ENDMARKER
  
! funcdef: 'def' ( NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | 'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | 'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | 'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally') parameters ':' suite
  parameters: '(' [varargslist] ')'
  varargslist: (fpdef ['=' test] ',')* ('*' NAME [',' '**' NAME] | '**' NAME) | fpdef ['=' test] (',' fpdef ['=' test])* [',']
  fpdef: NAME | '(' fplist ')'
***************
*** 87,93 ****
  atom: '(' [testlist] ')' | '[' [listmaker] ']' | '{' [dictmaker] '}' | '`' testlist '`' | NAME | NUMBER | STRING+
  listmaker: test ( list_for | (',' test)* [','] )
  lambdef: 'lambda' [varargslist] ':' test
! trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
  subscriptlist: subscript (',' subscript)* [',']
  subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
  sliceop: ':' [test]
--- 91,97 ----
  atom: '(' [testlist] ')' | '[' [listmaker] ']' | '{' [dictmaker] '}' | '`' testlist '`' | NAME | NUMBER | STRING+
  listmaker: test ( list_for | (',' test)* [','] )
  lambdef: 'lambda' [varargslist] ':' test
! trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' ( NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | 'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | 'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | 'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally')
  subscriptlist: subscript (',' subscript)* [',']
  subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
  sliceop: ':' [test]

From greg at cosc.canterbury.ac.nz  Mon Aug 28 05:16:35 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 28 Aug 2000 15:16:35 +1200 (NZST)
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <B5CEE3D9.81F2%dgoodger@bigfoot.com>
Message-ID: <200008280316.PAA16831@s454.cosc.canterbury.ac.nz>

David Goodger <dgoodger at bigfoot.com>:

> 6. Instead of mangling names, how about an alternative approach? Each class,
> instance, module, and function gets a single special name (call it
> '__docs__' for now), a dictionary of attribute-name to docstring
> mappings.

Good idea!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From moshez at math.huji.ac.il  Mon Aug 28 08:30:23 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Mon, 28 Aug 2000 09:30:23 +0300 (IDT)
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: <000901c0103a$4a48b380$f2a6b5d4@hagrid>
Message-ID: <Pine.GSO.4.10.10008280930000.5796-100000@sundial>

On Sun, 27 Aug 2000, Fredrik Lundh wrote:

> btw, Python's remove/unlink implementation is slightly
> broken -- they both map to unlink, but that's not the
> right way to do it:
> 
> from SUSv2:
> 
>     int remove(const char *path);
> 
>     If path does not name a directory, remove(path)
>     is equivalent to unlink(path). 
> 
>     If path names a directory, remove(path) is equi-
>     valent to rmdir(path). 
> 
> should I fix this?

1. Yes.
2. After the feature freeze.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tanzer at swing.co.at  Mon Aug 28 08:32:17 2000
From: tanzer at swing.co.at (Christian Tanzer)
Date: Mon, 28 Aug 2000 08:32:17 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: Your message of "Fri, 25 Aug 2000 17:05:38 +0200."
             <39A68B42.4E3F8A3D@lemburg.com> 
Message-ID: <m13TISv-000wcEC@swing.co.at>

"M.-A. Lemburg" <mal at lemburg.com> wrote:

>     This PEP proposes a small addition to the way Python currently
>     handles docstrings embedded in Python code.
(snip)
>     Here is an example:
> 
>         class C:
>             "class C doc-string"
> 
>             a = 1
>             "attribute C.a doc-string (1)"
> 
>             b = 2
>             "attribute C.b doc-string (2)"
> 
>     The docstrings (1) and (2) are currently being ignored by the
>     Python byte code compiler, but could obviously be put to good use
>     for documenting the named assignments that precede them.
>     
>     This PEP proposes to also make use of these cases by proposing
>     semantics for adding their content to the objects in which they
>     appear under new generated attribute names.

Great proposal. This would make docstrings even more useful.

>     In order to preserve features like inheritance and hiding of
>     Python's special attributes (ones with leading and trailing double
>     underscores), a special name mangling has to be applied which
>     uniquely identifies the docstring as belonging to the name
>     assignment and allows finding the docstring later on by inspecting
>     the namespace.
> 
>     The following name mangling scheme achieves all of the above:
> 
>         __doc__<attributename>__

IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
__docs__ dictionary is a better solution:

- It provides all docstrings for the attributes of an object in a
  single place.

  * Handy in interactive mode.
  * This simplifies the generation of documentation considerably.

- It is easier to explain in the documentation

>     To keep track of the last assigned name, the byte code compiler
>     stores this name in a variable of the compiling structure.  This
>     variable defaults to NULL.  When it sees a docstring, it then
>     checks the variable and uses the name as basis for the above name
>     mangling to produce an implicit assignment of the docstring to the
>     mangled name.  It then resets the variable to NULL to avoid
>     duplicate assignments.

Normally, Python concatenates adjacent strings. It doesn't do this
with docstrings. I think Python's behavior would be more consistent
if docstrings were concatenated like any other strings.

>     Since the implementation does not reset the compiling structure
>     variable when processing a non-expression, e.g. a function
>     definition, the last assigned name remains active until either the
>     next assignment or the next occurrence of a docstring.
> 
>     This can lead to cases where the docstring and assignment may be
>     separated by other expressions:
> 
>         class C:
>             "C doc string"
> 
>             b = 2
> 
>             def x(self):
>                 "C.x doc string"
>                 y = 3
>                 return 1
> 
>             "b's doc string"
> 
>     Since the definition of method "x" currently does not reset the
>     used assignment name variable, it is still valid when the compiler
>     reaches the docstring "b's doc string" and thus assigns the string
>     to __doc__b__.

This is rather surprising behavior. Does this mean that a string in
the middle of a function definition would be interpreted as the
docstring of the function?

For instance,

    def spam():
        a = 3
        "Is this spam's docstring? (not in 1.5.2)"
        return 1

Anyway, the behavior of Python should be the same for all kinds of
docstrings. 

>     A possible solution to this problem would be resetting the name
>     variable for all non-expression nodes.

IMHO, David Goodger's proposal of indenting the docstring relative to the
attribute it refers to is a better solution.

If that requires too many changes to the parser, the name variable
should be reset for all statement nodes.

Hoping-to-use-attribute-docstrings-soon ly,
Christian

-- 
Christian Tanzer                                         tanzer at swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92




From mal at lemburg.com  Mon Aug 28 10:28:16 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 10:28:16 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <39A68B42.4E3F8A3D@lemburg.com> <B5CEE3D9.81F2%dgoodger@bigfoot.com>
Message-ID: <39AA22A0.D533598A@lemburg.com>

[Note: Please CC: all messages on this thread to me directly as I
 am the PEP maintainer. If you don't, then I might not read your
 comments.]

David Goodger wrote:
> 
> Some comments:
> 
> 1. I think the idea of attribute docstrings is a great one. It would assist
> in the auto-documenting of code immeasurably.

Agreed ;-)
 
> 2. I second Frank Niessink (frankn=nuws at cs.vu.nl), who wrote:
> 
> > wouldn't the naming
> > scheme <attributename>.__doc__ be a better one?
> >
> > So if:
> >
> > class C:
> >   a = 1
> >   """Description of a."""
> >
> > then:
> >
> > C.a.__doc__ == "Description of a."
> 
> 'C.a.__doc__' is far more natural and Pythonic than 'C.__doc__a__'. The
> latter would also require ugly tricks to access.

This doesn't work, since not all Python objects (e.g. integers) can
carry arbitrary attributes. Also, I wouldn't want to modify attribute
objects indirectly from the outside as the above implies.

I don't really see the argument of __doc__a__ being hard to
access: these attributes are meant for tools to use, not
humans ;-), and these tools can easily construct the right
lookup names by scanning the dir(obj) and then testing for
the various __doc__xxx__ strings.
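The lookup such a tool would do under the PEP's mangling scheme could look like this (the sample class below fakes the compiler-generated attribute by hand, purely for illustration):

```python
def attribute_docstrings(obj):
    """Find PEP 224 style mangled docstrings: __doc__<attributename>__."""
    prefix, suffix = '__doc__', '__'
    found = {}
    for name in dir(obj):
        if (name.startswith(prefix) and name.endswith(suffix)
                and len(name) > len(prefix) + len(suffix)):
            found[name[len(prefix):-len(suffix)]] = getattr(obj, name)
    return found

class C:
    """class C doc-string"""
    a = 1
    __doc__a__ = "attribute C.a doc-string (1)"  # what the compiler would generate
```

The length check keeps plain __doc__ itself out of the result.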
 
> 3. However, what would happen to C.a.__doc__ (or C.__doc__a__ for that
> matter) when attribute 'a' is reassigned? For example:
> 
>     class C:
>         a = 1  # class attribute, default value for instance attribute
>         """Description of a."""
> 
>         def __init__(self, arg=None):
>             if arg is not None:
>                 self.a = arg  # instance attribute
>             self.b = []
>             """Description of b."""
> 
>     instance = C(2)
> 
> What would instance.a.__doc__ (instance.__doc__a__) be? Would the __doc__ be
> wiped out by the reassignment, or magically remain unless overridden?

See above. This won't work.
 
> 4. How about instance attributes that are never class attributes? Like
> 'instance.b' in the example above?

I don't get the point... doc strings should always be considered
constant and thus be defined in the class/module definition.
 
> 5. Since docstrings "belong" to the attribute preceding them, wouldn't it
> be more Pythonic to write:
> 
>     class C:
>         a = 1
>             """Description of a."""
> 
> ? (In case of mail viewer problems, each line above is indented relative to
> the one before.) This emphasizes the relationship between the docstring and
> the attribute. Of course, such an approach may entail a more complicated
> modification to the Python source, but also more complete IMHO.

Note that Python's blocks are indented and always preceded
by a line ending in a colon. The above idea would break this.

> 6. Instead of mangling names, how about an alternative approach? Each class,
> instance, module, and function gets a single special name (call it
> '__docs__' for now), a dictionary of attribute-name to docstring mappings.
> __docs__ would be the docstring equivalent to __dict__. These dictionary
> entries would not be affected by reassignment unless a new docstring is
> specified. So, in the example from (3) above, we would have:
> 
>     >>> instance.__docs__
>     {'b': 'Description of b.'}
>     >>> C.__docs__
>     {'a': 'Description of a.'}
> 
> Just as there is a built-in function 'dir' to apply Inheritance rules to
> instance.__dict__, there would have to be a function 'docs' to apply
> inheritance to instance.__docs__:
> 
>     >>> docs(instance)
>     {'a': 'Description of a.', 'b': 'Description of b.'}
> 
> There are repercussions here. A module containing the example from (3) above
> would have a __docs__ dictionary containing mappings for docstrings for each
> top-level class and function defined, in addition to docstrings for each
> global variable.

This would not work well together with class inheritance.
 
> In conclusion, although this proposal has great promise, it still needs
> work. If it's is to be done at all, better to do it right.
> 
> This could be the first true test of the PEP system; getting input from the
> Python user community as well as the core PythonLabs and Python-Dev groups.
> Other PEPs have been either after-the-fact or, in the case of those features
> approved for inclusion in Python 2.0, too rushed for a significant
> discussion.

We'll see whether this "global" approach is a good one ;-)
In any case, I think it'll give more awareness of the PEP
system.

Thanks for the comments,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug 28 10:55:15 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 10:55:15 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13TISv-000wcEC@swing.co.at>
Message-ID: <39AA28F3.1968E27@lemburg.com>

Christian Tanzer wrote:
> 
> "M.-A. Lemburg" <mal at lemburg.com> wrote:
> 
> >     This PEP proposes a small addition to the way Python currently
> >     handles docstrings embedded in Python code.
> (snip)
> >     Here is an example:
> >
> >         class C:
> >             "class C doc-string"
> >
> >             a = 1
> >             "attribute C.a doc-string (1)"
> >
> >             b = 2
> >             "attribute C.b doc-string (2)"
> >
> >     The docstrings (1) and (2) are currently being ignored by the
> >     Python byte code compiler, but could obviously be put to good use
> >     for documenting the named assignments that precede them.
> >
> >     This PEP proposes to also make use of these cases by proposing
> >     semantics for adding their content to the objects in which they
> >     appear under new generated attribute names.
> 
> Great proposal. This would make docstrings even more useful.

Right :-)
 
> >     In order to preserve features like inheritance and hiding of
> >     Python's special attributes (ones with leading and trailing double
> >     underscores), a special name mangling has to be applied which
> >     uniquely identifies the docstring as belonging to the name
> >     assignment and allows finding the docstring later on by inspecting
> >     the namespace.
> >
> >     The following name mangling scheme achieves all of the above:
> >
> >         __doc__<attributename>__
> 
> IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
> __docs__ dictionary is a better solution:
> 
> - It provides all docstrings for the attributes of an object in a
>   single place.
> 
>   * Handy in interactive mode.
>   * This simplifies the generation of documentation considerably.
> 
> - It is easier to explain in the documentation

The downside is that it doesn't work well together with
class inheritance: docstrings of the above form can
be overridden or inherited just like any other class
attribute.
 
> >     To keep track of the last assigned name, the byte code compiler
> >     stores this name in a variable of the compiling structure.  This
> >     variable defaults to NULL.  When it sees a docstring, it then
> >     checks the variable and uses the name as basis for the above name
> >     mangling to produce an implicit assignment of the docstring to the
> >     mangled name.  It then resets the variable to NULL to avoid
> >     duplicate assignments.
> 
> Normally, Python concatenates adjacent strings. It doesn't do this
> with docstrings. I think Python's behavior would be more consistent
> if docstrings were concatenated like any other strings.

Huh? It does...

>>> class C:
...     "first line"\
...     "second line"
... 
>>> C.__doc__
'first linesecond line'

And the same works for the attribute doc strings too.

> >     Since the implementation does not reset the compiling structure
> >     variable when processing a non-expression, e.g. a function
> >     definition, the last assigned name remains active until either the
> >     next assignment or the next occurrence of a docstring.
> >
> >     This can lead to cases where the docstring and assignment may be
> >     separated by other expressions:
> >
> >         class C:
> >             "C doc string"
> >
> >             b = 2
> >
> >             def x(self):
> >                 "C.x doc string"
> >                 y = 3
> >                 return 1
> >
> >             "b's doc string"
> >
> >     Since the definition of method "x" currently does not reset the
> >     used assignment name variable, it is still valid when the compiler
> >     reaches the docstring "b's doc string" and thus assigns the string
> >     to __doc__b__.
> 
> This is rather surprising behavior. Does this mean that a string in
> the middle of a function definition would be interpreted as the
> docstring of the function?

No, since at the beginning of the function the name variable
is set to NULL.
 
> For instance,
> 
>     def spam():
>         a = 3
>         "Is this spam's docstring? (not in 1.5.2)"
>         return 1
> 
> Anyway, the behavior of Python should be the same for all kinds of
> docstrings.
> 
> >     A possible solution to this problem would be resetting the name
> >     variable for all non-expression nodes.
> 
> IMHO, David Goodger's proposal of indenting the docstring relative to the
> attribute it refers to is a better solution.
> 
> If that requires too many changes to the parser, the name variable
> should be reset for all statement nodes.

See my other mail: indenting is only allowed for blocks of
code, and these are usually started with a colon -- it doesn't
work here.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug 28 10:58:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 10:58:34 +0200
Subject: [Python-Dev] Python keywords
References: <20000828011624.E500@xs4all.nl>
Message-ID: <39AA29BA.73EA9FB3@lemburg.com>

Thomas Wouters wrote:
> 
> Mark, (and the rest of python-dev)
> 
> There was a thread here a few weeks ago (or so, I seem to have misplaced
> that particular thread :P) about using Python keywords as identifiers in
> some cases. You needed that ability for .NET-Python, where the specs say any
> identifier should be possible as methods and attributes, and there were some
> comments on the list on how to do that (by Guido, for one.)

Are you sure you want to confuse Python source code readers by
making keywords usable as identifiers ?

What would happen to Python's simple-to-parse grammar -- would
syntax highlighting still be as simple as it is now ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Mon Aug 28 12:54:13 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 05:54:13 -0500
Subject: [Python-Dev] Python keywords
In-Reply-To: Your message of "Mon, 28 Aug 2000 01:16:24 +0200."
             <20000828011624.E500@xs4all.nl> 
References: <20000828011624.E500@xs4all.nl> 
Message-ID: <200008281054.FAA22728@cj20424-a.reston1.va.home.com>

[Thomas Wouters]
> There was a thread here a few weeks ago (or so, I seem to have misplaced
> that particular thread :P) about using Python keywords as identifiers in
> some cases. You needed that ability for .NET-Python, where the specs say any
> identifier should be possible as methods and attributes, and there were some
> comments on the list on how to do that (by Guido, for one.)
> 
> Well, the attached patch sort-of does that. I tried making it a bit nicer,
> but that involved editing all places that currently use the NAME-type node,
> and most of those don't advertise that they're doing that :-S The attached
> patch is in no way nice, but it does work:
> 
> >>> class X:
> ...     def print(self, x):
> ...             print "printing", x
> ... 
> >>> x = X()
> >>> x.print(1)
> printing 1
> >>> x.print
> <method X.print of X instance at 0x8207fc4>
> >>> x.assert = 1
> >>>
> 
> However, it also allows this at the top level, currently:
> >>> def print(x):
> ...     print "printing", x
> ... 

Initially I thought this would be fine, but on second thought I'm not
so sure.  To a newbie who doesn't know all the keywords, this would be
confusing:

  >>> def try(): # my first function
  ...     print "hello"
  ...
  >>> try()
    File "<stdin>", line 1
      try()
	 ^
  SyntaxError: invalid syntax
  >>>

I don't know how best to fix this -- using different syntax for 'def'
inside a class than outside would require a complete rewrite of the
grammar, which is not a good idea.  Perhaps a 2nd pass compile-time
check would be sufficient.

> which results in some unexpected behaviour:
> >>> print(1)
> 1
> >>> globals()['print'](1)
> printing 1
> 
> But when combining it with modules, it does work as expected, of course:
> 
> # printer.py:
> def print(x, y):
>         print "printing", x, "and", y
> #
> 
> >>> import printer
> >>> printer.print
> <function print at 0x824120c>
> >>> printer.print(1, 2)
> printing 1 and 2
> 
> Another plus-side of this particular method is that it's simple and
> straightforward, if a bit maintenance-intensive :-) But the big question is:
> is this enough for what you need ? Or do you need the ability to use
> keywords in *all* identifiers, including variable names and such ? Because
> that is quite a bit harder ;-P

I believe that one other thing is needed: keyword parameters (only in
calls, not in definitions).  Also, I think you missed a few reserved
words, e.g. 'and', 'or'.  See Lib/keyword.py!

A comment on the patch: wouldn't it be *much* better to change the
grammar to introduce a new nonterminal, e.g. unres_name, as follows:

unres_name: NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | \
  'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | \
  'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | \
  'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally'

and use this elsewhere in the rules:

funcdef: 'def' unres_name parameters ':' suite
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' unres_name

Then you'd have to fix compile.c of course, but only in two places (I
think?).
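[Editorial note: even without the grammar change, keyword-named attributes are reachable from Python today through getattr/setattr, since string-based attribute access bypasses the parser's keyword check entirely; a small illustrative sketch, names invented:]

```python
# The attribute name is just a string at runtime, so it never
# passes through the tokenizer's reserved-word handling.
class X:
    pass

x = X()
setattr(x, "print", lambda v: "printing %s" % v)
assert getattr(x, "print")(1) == "printing 1"
```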

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Mon Aug 28 13:16:18 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 06:16:18 -0500
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: Your message of "Mon, 28 Aug 2000 09:30:23 +0300."
             <Pine.GSO.4.10.10008280930000.5796-100000@sundial> 
References: <Pine.GSO.4.10.10008280930000.5796-100000@sundial> 
Message-ID: <200008281116.GAA22841@cj20424-a.reston1.va.home.com>

> > from SUSv2:
> > 
> >     int remove(const char *path);
> > 
> >     If path does not name a directory, remove(path)
> >     is equivalent to unlink(path). 
> > 
> >     If path names a directory, remove(path) is equi-
> >     valent to rmdir(path). 
> > 
> > should I fix this?
> 
> 1. Yes.
> 2. After the feature freeze.

Agreed.  Note that the correct fix is to use remove() if it exists and
emulate it if it doesn't.

On Windows, I believe remove() exists but probably not with the above
semantics so it should be emulated.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Mon Aug 28 14:33:59 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 14:33:59 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
Message-ID: <39AA5C37.2F1846B3@lemburg.com>

I've been tossing some ideas around w/r to adding pragma style
declarations to Python and would like to hear what you think
about these:

1. Embed pragma declarations in comments:

	#pragma: name = value

   Problem: comments are removed by the tokenizer, yet the compiler
   will have to make use of them, so some logic would be needed
   to carry them along.

2. Reusing a Python keyword to build a new form of statement:

	def name = value

   Problem: not sure whether the compiler and grammar could handle
   this.

   The nice thing about this kind of declaration is that it would
   generate a node which the compiler could actively use. Furthermore,
   scoping would come for free. This one is my favourite.

3. Add a new keyword:

	decl name = value

   Problem: possible code breakage.

This is only a question regarding the syntax of these meta-
information declarations. The semantics remain to be solved
in a different discussion.

Comments ?

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Mon Aug 28 14:38:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 14:38:13 +0200
Subject: [Python-Dev] Python keywords
In-Reply-To: <200008281054.FAA22728@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Mon, Aug 28, 2000 at 05:54:13AM -0500
References: <20000828011624.E500@xs4all.nl> <200008281054.FAA22728@cj20424-a.reston1.va.home.com>
Message-ID: <20000828143813.F500@xs4all.nl>

On Mon, Aug 28, 2000 at 05:54:13AM -0500, Guido van Rossum wrote:

> > However, it also allows this at the top level, currently:
> > >>> def print(x):
> > ...     print "printing", x
> > ... 

> Initially I thought this would be fine, but on second thought I'm not
> so sure.  To a newbie who doesn't know all the keywords, this would be
> confusing:
> 
>   >>> def try(): # my first function
>   ...     print "hello"
>   ...
>   >>> try()
>     File "<stdin>", line 1
>       try()
> 	 ^
>   SyntaxError: invalid syntax
>   >>>
> 
> I don't know how best to fix this -- using different syntax for 'def'
> inside a class than outside would require a complete rewrite of the
> grammar, which is not a good idea.  Perhaps a 2nd pass compile-time
> check would be sufficient.

Hmm. I'm not really sure. I think it's nice to be able to use
'object.print', and it would be, well, inconsistent, not to allow
'module.print' (or module.exec, for that matter), but I realize how
confusing it can be.

Perhaps generate a warning ? :-P

> I believe that one other thing is needed: keyword parameters (only in
> calls, not in definitions).  Also, I think you missed a few reserved
> words, e.g. 'and', 'or'.  See Lib/keyword.py!

Ahh, yes. I knew there had to be a list of keywords, but I was too tired to
go hunt for it last night ;) 

> A comment on the patch: wouldn't it be *much* better to change the
> grammar to introduce a new nonterminal, e.g. unres_name, as follows:

> unres_name: NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | \
>   'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | \
>   'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | \
>   'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally'

> and use this elsewhere in the rules:

> funcdef: 'def' unres_name parameters ':' suite
> trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' unres_name

> Then you'd have to fix compile.c of course, but only in two places (I
> think?).

I tried this before, a week or two ago, but it was too much of a pain. The
nodes get tossed around no end, and tracking down where they are STR()'d and
TYPE()'d is, well, annoying ;P I tried to hack around it by making STR() and
CHILD() do some magic, but it didn't quite work. I kind of gave up and
decided it had to be done in the metagrammar, which drove me insane last
night ;-) and then decided to 'prototype' it first.

Then again, maybe I missed something. I might try it again. It would
definitely be the better solution ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From DavidA at ActiveState.com  Thu Aug 24 02:25:55 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Wed, 23 Aug 2000 17:25:55 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] [Announce] ActivePython 1.6 beta release (fwd)
Message-ID: <Pine.WNT.4.21.0008231725340.272-100000@loom>

It is my pleasure to announce the availability of the beta release of
ActivePython 1.6, build 100.

This binary distribution, based on Python 1.6b1, is available from
ActiveState's website at:

    http://www.ActiveState.com/Products/ActivePython/

ActiveState is committed to making Python easy to install and use on all
major platforms. ActivePython contains the convenience of swift
installation, coupled with commonly used modules, providing you with a
total package to meet your Python needs. Additionally, for Windows users,
ActivePython provides a suite of Windows tools, developed by Mark Hammond.

ActivePython is provided in convenient binary form for Windows, Linux and
Solaris under a variety of installation packages, available at:

    http://www.ActiveState.com/Products/ActivePython/Download.html

For support information, mailing list subscriptions and archives, a bug
reporting system, and fee-based technical support, please go to

    http://www.ActiveState.com/Products/ActivePython/

Please send us feedback regarding this release, either through the mailing
list or through direct email to ActivePython-feedback at ActiveState.com.

ActivePython is free, and redistribution of ActivePython within your
organization is allowed.  The ActivePython license is available at
http://www.activestate.com/Products/ActivePython/License_Agreement.html
and in the software packages.

We look forward to your comments and to making ActivePython suit your
Python needs in future releases.

Thank you,

-- David Ascher & the ActivePython team
   ActiveState Tool Corporation



From nhodgson at bigpond.net.au  Mon Aug 28 16:22:50 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Tue, 29 Aug 2000 00:22:50 +1000
Subject: [Python-Dev] Python identifiers - was: Python keywords
References: <20000828011624.E500@xs4all.nl>
Message-ID: <019601c010fb$731007c0$8119fea9@neil>

   As well as .NET requiring a mechanism for accessing externally defined
identifiers which clash with Python keywords, it would be good to allow
access to identifiers containing non-ASCII characters. This is allowed in
.NET. C# copies the Java convention of allowing \u escapes in identifiers as
well as character/string literals.

   Has there been any thought to allowing this in Python? The benefit of
this convention over encoding the file in UTF-8 or an 8 bit character set is
that it is ASCII safe and can be manipulated correctly by common tools. My
interest in this is in the possibility of extending Scintilla and PythonWin
to directly understand this sequence, showing the correct glyph rather than
the \u sequence.
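[Editorial note: the escape convention described here keeps source files pure ASCII; a tool could expand the escapes with a few lines. An illustrative sketch (not any existing tool's API):]

```python
import re

def decode_identifier(name):
    # Expand Java/C#-style \uXXXX escapes into the characters they name.
    return re.sub(r"\\u([0-9a-fA-F]{4})",
                  lambda m: chr(int(m.group(1), 16)), name)

assert decode_identifier(r"caf\u00e9") == "caf\u00e9"  # 'café'
```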

   Neil




From bwarsaw at beopen.com  Mon Aug 28 15:44:45 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 28 Aug 2000 09:44:45 -0400 (EDT)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
	<200008260814.KAA06267@hera.informatik.uni-bonn.de>
Message-ID: <14762.27853.159285.488297@anthem.concentric.net>

>>>>> "MF" == Markus Fleck <fleck at triton.informatik.uni-bonn.de> writes:

    MF> In principle, I do have the time again to do daily moderation
    MF> of incoming postings for c.l.py.a. Unfortunately, I currently
    MF> lack the infrastructure (i.e. the moderation program), which
    MF> went down together with the old starship. I was basically
    MF> waiting for a version of Mailman that could be used to post to
    MF> moderated newsgroups. (I should probably have been more vocal
    MF> about that, or even should have started hacking Mailman
    MF> myself...

All this is in place now.
    
    MF> I *did* start to write something that would grab new
    MF> announcements daily from Parnassus and post them to c.l.py.a,
    MF> and I may even come to finish this in September, but that
    MF> doesn't substitute for a "real" moderation tool for
    MF> user-supplied postings. Also, it would probably be a lot
    MF> easier for Parnassus postings to be built directly from the
    MF> Parnassus database, instead from its [generated] HTML pages -
    MF> the Parnassus author intended to supply such functionality,
    MF> but I didn't hear from him yet, either.)

I think that would be a cool thing to work on.  As I mentioned to
Markus in private email, it would be great if the Parnassus->news tool
added the special c.l.py.a footer so that automated scripts on the
/other/ end could pull the messages off the newsgroup, search for the
footer, and post them to web pages, etc.

    MF> So what's needed now? Primarily, a Mailman installation that
    MF> can post to moderated newsgroups (and maybe also do the
    MF> mail2list gatewaying for c.l.py.a), and a mail alias that
    MF> forwards mail for python-announce at python.org to that Mailman
    MF> address. Some "daily digest" generator for Parnassus
    MF> announcements would be nice to have, too, but that can only
    MF> come once the other two things work.

All this is in place, as MAL said.  Markus, if you'd like to be a
moderator, email me and I'd be happy to add you.

And let's start encouraging people to post to c.l.py.a and
python-announce at Python.org again!

-Barry




From bwarsaw at beopen.com  Mon Aug 28 17:01:24 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 28 Aug 2000 11:01:24 -0400 (EDT)
Subject: [Python-Dev] New dictionaries patch on SF
References: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>
Message-ID: <14762.32452.579356.483473@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake at beopen.com> writes:

    Fred> None the less, performance is an issue for dictionaries, so
    Fred> I came up with the idea to use a specialized version for
    Fred> string keys.

Note that JPython does something similar for dictionaries that are
used for namespaces.  See PyStringMap.java.

-Barry



From fdrake at beopen.com  Mon Aug 28 17:19:44 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 28 Aug 2000 11:19:44 -0400 (EDT)
Subject: [Python-Dev] New dictionaries patch on SF
In-Reply-To: <14762.32452.579356.483473@anthem.concentric.net>
References: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>
	<14762.32452.579356.483473@anthem.concentric.net>
Message-ID: <14762.33552.622374.428515@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > Note that JPython does something similar for dictionaries that are
 > used for namespaces.  See PyStringMap.java.

  The difference is that there are no code changes outside
dictobject.c to make this useful for my proposal -- there isn't a new
object type involved.  The PyStringMap class is actually a different
implementation (which I did dig into a bit at one point, to create
versions that weren't bound to JPython).
  My modified dictionary objects are just dictionary objects that
auto-degrade themselves as soon as a non-string key is looked up
(including while setting values).  But the approach and rationale are
very similar.
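[Editorial note: Fred's auto-degrading dictionaries live in C inside dictobject.c; the idea can be sketched in Python. Class and attribute names below are invented for illustration:]

```python
class AutoDegradingDict(dict):
    """Tracks whether every key seen so far is a string.  A real
    implementation would use this flag to take a faster string-only
    lookup path until the first non-string key appears (on lookup
    or on assignment), then fall back to the generic path for good."""

    def __init__(self):
        super().__init__()
        self.strings_only = True

    def __setitem__(self, key, value):
        if type(key) is not str:
            self.strings_only = False  # degrade once, permanently
        super().__setitem__(key, value)

d = AutoDegradingDict()
d["a"] = 1
assert d.strings_only
d[42] = 2
assert not d.strings_only
```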


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Mon Aug 28 19:09:30 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 12:09:30 -0500
Subject: [Python-Dev] Pragma-style declaration syntax
In-Reply-To: Your message of "Mon, 28 Aug 2000 14:33:59 +0200."
             <39AA5C37.2F1846B3@lemburg.com> 
References: <39AA5C37.2F1846B3@lemburg.com> 
Message-ID: <200008281709.MAA24142@cj20424-a.reston1.va.home.com>

> I've been tossing some ideas around w/r to adding pragma style
> declarations to Python and would like to hear what you think
> about these:
> 
> 1. Embed pragma declarations in comments:
> 
> 	#pragma: name = value
> 
>    Problem: comments are removed by the tokenizer, yet the compiler
>    will have to make use of them, so some logic would be needed
>    to carry them along.
> 
> 2. Reusing a Python keyword to build a new form of statement:
> 
> 	def name = value
> 
>    Problem: not sure whether the compiler and grammar could handle
>    this.
> 
>    The nice thing about this kind of declaration is that it would
>    generate a node which the compiler could actively use. Furthermore,
>    scoping would come for free. This one is my favourite.
> 
> 3. Add a new keyword:
> 
> 	decl name = value
> 
>    Problem: possible code breakage.
> 
> This is only a question regarding the syntax of these meta-
> information declarations. The semantics remain to be solved
> in a different discussion.

I say add a new reserved word pragma and accept the consequences.  The
other solutions are just too ugly.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Mon Aug 28 18:36:33 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 28 Aug 2000 12:36:33 -0400 (EDT)
Subject: [Python-Dev] need help with build on HP-UX
Message-ID: <14762.38161.405971.414152@bitdiddle.concentric.net>

We have a bug report for Python 1.5.2 that says building with threads
enabled causes a core dump when the interpreter is started.

#110650:
http://sourceforge.net/bugs/?func=detailbug&bug_id=110650&group_id=5470

I don't have access to an HP-UX box on which to test this problem.  If
anyone does, could they verify whether the problem exists with the
current code?

Jeremy



From nathan at islanddata.com  Mon Aug 28 18:51:24 2000
From: nathan at islanddata.com (Nathan Clegg)
Date: Mon, 28 Aug 2000 09:51:24 -0700 (PDT)
Subject: [Python-Dev] RE: need help with build on HP-UX
In-Reply-To: <14762.38161.405971.414152@bitdiddle.concentric.net>
Message-ID: <XFMail.20000828095124.nathan@islanddata.com>

I can't say for current code, but I ran into this problem with 1.5.2.  I
resolved it by installing pthreads instead of HP's native.  Is/should this be a
prerequisite?



On 28-Aug-2000 Jeremy Hylton wrote:
> We have a bug report for Python 1.5.2 that says building with threads
> enabled causes a core dump when the interpreter is started.
> 
>#110650:
> http://sourceforge.net/bugs/?func=detailbug&bug_id=110650&group_id=5470
> 
> I don't have access to an HP-UX box on which to test this problem.  If
> anyone does, could they verify whether the problem exists with the
> current code?
> 
> Jeremy
> 
> -- 
> http://www.python.org/mailman/listinfo/python-list



----------------------------------
Nathan Clegg
 nathan at islanddata.com





From guido at beopen.com  Mon Aug 28 20:34:55 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 13:34:55 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: Your message of "Mon, 28 Aug 2000 10:20:08 MST."
             <200008281720.KAA09138@slayer.i.sourceforge.net> 
References: <200008281720.KAA09138@slayer.i.sourceforge.net> 
Message-ID: <200008281834.NAA24777@cj20424-a.reston1.va.home.com>

How about popen4?  Or is that Windows specific?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Mon Aug 28 19:36:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 19:36:06 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <39A68B42.4E3F8A3D@lemburg.com> <B5CEE3D9.81F2%dgoodger@bigfoot.com> <39AA22A0.D533598A@lemburg.com> <200008281515.IAA27799@netcom.com>
Message-ID: <39AAA306.2CBD5383@lemburg.com>

Aahz Maruch wrote:
> 
> [p&e]
> 
> In article <39AA22A0.D533598A at lemburg.com>,
> M.-A. Lemburg <mal at lemburg.com> wrote:
> >
> >>     >>> docs(instance)
> >>     {'a': 'Description of a.', 'b': 'Description of b.'}
> >>
> >> There are repercussions here. A module containing the example from (3) above
> >> would have a __docs__ dictionary containing mappings for docstrings for each
> >> top-level class and function defined, in addition to docstrings for each
> >> global variable.
> >
> >This would not work well together with class inheritance.
> 
> Could you provide an example explaining this?  Using a dict *seems* like
> a good idea to me, too.

class A:
    " Base class for database "

    x = "???"
    " name of the database; override in subclasses ! "

    y = 1
    " run in auto-commit ? "

class D(A):

    x = "mydb"
    """ name of the attached database; note that this must support
        transactions 
    """

This will give you:

A.__doc__x__ == " name of the database; override in subclasses ! "
A.__doc__y__ == " run in auto-commit ? "
D.__doc__x__ == """ name of the attached database; note that this must support
        transactions 
    """
D.__doc__y__ == " run in auto-commit ? "

There's no way you are going to achieve this using dictionaries.

Note: You can always build dictionaries of docstring by using
the existing Python introspection features. This PEP is
meant to provide the data -- not the extraction tools.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From gvwilson at nevex.com  Mon Aug 28 19:43:29 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Mon, 28 Aug 2000 13:43:29 -0400 (EDT)
Subject: [Python-Dev] Pragma-style declaration syntax
In-Reply-To: <200008281709.MAA24142@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>

> > Marc-Andre Lemburg:
> > 1. Embed pragma declarations in comments:
> > 	#pragma: name = value
> > 
> > 2. Reusing a Python keyword to build a new form of statement:
> > 	def name = value
> > 
> > 3. Add a new keyword:
> > 	decl name = value

> Guido van Rossum:
> I say add a new reserved word pragma and accept the consequences.  
> The other solutions are just too ugly.

Greg Wilson:
Will pragma values be available at run-time, e.g. in a special
module-level dictionary variable '__pragma__', so that:

    pragma "encoding" = "UTF8"
    pragma "division" = "fractional"

has the same effect as:

    __pragma__["encoding"] = "UTF8"
    __pragma__["division"] = "fractional"

If that's the case, would it be better to use the dictionary syntax?  Or
does the special form simplify pragma detection so much as to justify
adding new syntax?

Also, what's the effect of putting a pragma in the middle of a file,
rather than at the top?  Does 'import' respect pragmas, or are they
per-file?  I've seen Fortran files that start with 20 lines of:

    C$VENDOR PROPERTY DEFAULT

to disable any settings that might be in effect when the file is included
in another, just so that the author of the include'd file could be sure of
the semantics of the code he was writing.

Thanks,

Greg




From fdrake at beopen.com  Mon Aug 28 20:00:43 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 28 Aug 2000 14:00:43 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: <200008281834.NAA24777@cj20424-a.reston1.va.home.com>
References: <200008281720.KAA09138@slayer.i.sourceforge.net>
	<200008281834.NAA24777@cj20424-a.reston1.va.home.com>
Message-ID: <14762.43211.814471.424886@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > How about popen4?  Or is that Windows specific?

  Haven't written it yet.  It's a little different from just wrappers
around popen2 module functions.  The Popen3 class doesn't support it
yet.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From skip at mojam.com  Mon Aug 28 20:06:49 2000
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 28 Aug 2000 13:06:49 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: <200008281834.NAA24777@cj20424-a.reston1.va.home.com>
References: <200008281720.KAA09138@slayer.i.sourceforge.net>
	<200008281834.NAA24777@cj20424-a.reston1.va.home.com>
Message-ID: <14762.43577.780248.889686@beluga.mojam.com>

    Guido> How about popen4?  Or is that Windows specific?

This is going to sound really dumb, but for all N where N >= 2, how many
popenN routines are there?  Do they represent a subclass of rabbits?  Until
the thread about Windows and os.popen2 started, I, living in a dream world
where my view of libc approximated 4.2BSD, wasn't even aware any popenN
routines existed.  In fact, on my Mandrake box that seems to still be the
case:

    % man -k popen
    popen, pclose (3)    - process I/O
    % nm -a /usr/lib/libc.a | egrep popen
    iopopen.o:
    00000188 T _IO_new_popen
    00000188 W _IO_popen
    00000000 a iopopen.c
    00000188 T popen

In fact, the os module documentation only describes popen, not popenN.

Where'd all these other popen variants come from?  Where can I find them
documented online?

Skip



From fdrake at beopen.com  Mon Aug 28 20:22:27 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 28 Aug 2000 14:22:27 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: <14762.43577.780248.889686@beluga.mojam.com>
References: <200008281720.KAA09138@slayer.i.sourceforge.net>
	<200008281834.NAA24777@cj20424-a.reston1.va.home.com>
	<14762.43577.780248.889686@beluga.mojam.com>
Message-ID: <14762.44515.597067.695634@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:
 > In fact, the os module documentation only describes popen, not popenN.

  This will be fixed.

 > Where'd all these other popen variants come from?  Where can I find them
 > documented online?

  In the popen2 module docs, there are descriptions for popen2() and
popen3().  popen4() is new from the Windows world.
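[Editorial note: the family differs in which streams it captures -- popen2() returns (child_stdout, child_stdin), popen3() adds child_stderr, and popen4() merges stdout and stderr into one stream. The merge can be sketched with the subprocess module, a later stdlib API shown here only to illustrate the stream plumbing:]

```python
import subprocess

# popen4-style capture: the child's stderr is merged into its stdout,
# so the caller reads a single combined stream.
proc = subprocess.Popen(
    ["sh", "-c", "echo out; echo err 1>&2"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,   # the merge that distinguishes popen4
    text=True,
)
combined, _ = proc.communicate()
assert sorted(combined.split()) == ["err", "out"]
```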


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From mal at lemburg.com  Mon Aug 28 20:57:26 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 20:57:26 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
References: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>
Message-ID: <39AAB616.460FA0A8@lemburg.com>

Greg Wilson wrote:
> 
> > > Marc-Andre Lemburg:
> > > 1. Embed pragma declarations in comments:
> > >     #pragma: name = value
> > >
> > > 2. Reusing a Python keyword to build a new form of statement:
> > >     def name = value
> > >
> > > 3. Add a new keyword:
> > >     decl name = value
> 
> > Guido van Rossum:
> > I say add a new reserved word pragma and accept the consequences.
> > The other solutions are just too ugly.
> 
> Greg Wilson:
> Will pragma values be available at run-time, e.g. in a special
> module-level dictionary variable '__pragma__', so that:
> 
>     pragma "encoding" = "UTF8"
>     pragma "division" = "fractional"
> 
> has the same effect as:
> 
>     __pragma__["encoding"] = "UTF8"
>     __pragma__["division"] = "fractional"
> 
> If that's the case, would it be better to use the dictionary syntax?  Or
> does the special form simplify pragma detection so much as to justify
> adding new syntax?

Pragmas tell the compiler to make certain assumptions about the
scope they appear in. It may be useful to have their values available
in a __pragma__ dict too, but only for introspection purposes, and
then only for objects which support the attribute.

If we were to use a convention such as your proposed dictionary
assignment for these purposes, the compiler would have to treat
these assignments in special ways. Adding a new reserved word is
much cleaner.

> Also, what's the effect of putting a pragma in the middle of a file,
> rather than at the top?  Does 'import' respect pragmas, or are they
> per-file?  I've seen Fortran files that start with 20 lines of:
> 
>     C$VENDOR PROPERTY DEFAULT
> 
> to disable any settings that might be in effect when the file is included
> in another, just so that the author of the include'd file could be sure of
> the semantics of the code he was writing.

The compiler will see the pragma definition as soon as it reaches
it during compilation. All subsequent compilation (up to where
the compilation block ends, i.e. up to module, function and class
boundaries) will be influenced by the setting.

This is in line with all other declarations in Python, e.g. those
of global variables, functions and classes.

Imports do not affect pragmas since pragmas are a compile
time thing.

Here are some possible applications of pragmas (just to toss in
a few ideas):

# Cause global lookups to be cached in function's locals for future
# reuse.
pragma globals = 'constant'

# Cause all Unicode literals in the current scope to be
# interpreted as UTF-8.
pragma encoding = 'utf-8'

# Use -OO style optimizations
pragma optimization = 2

# Default division mode
pragma division = 'float'

The basic syntax in the above examples is:

	"pragma" NAME "=" (NUMBER | STRING+)

It has to be that simple to allow the compiler to use the information
at compilation time.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From skip at mojam.com  Mon Aug 28 21:17:47 2000
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 28 Aug 2000 14:17:47 -0500 (CDT)
Subject: [Python-Dev] Pragma-style declaration syntax
In-Reply-To: <39AAB616.460FA0A8@lemburg.com>
References: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>
	<39AAB616.460FA0A8@lemburg.com>
Message-ID: <14762.47835.129388.512169@beluga.mojam.com>

    MAL> Here are some possible applications of pragmas (just to toss in
    MAL> a few ideas):

    MAL> # Cause global lookups to be cached in function's locals for future
    MAL> # reuse.
    MAL> pragma globals = 'constant'

    MAL> # Cause all Unicode literals in the current scope to be
    MAL> # interpreted as UTF-8.
    MAL> pragma encoding = 'utf-8'

    MAL> # Use -OO style optimizations
    MAL> pragma optimization = 2

    MAL> # Default division mode
    MAL> pragma division = 'float'

Marc-Andre,

My interpretation of the word "pragma" (and I think a probably common
interpretation) is that it is a "hint to the compiler" which the compiler
can ignore if it chooses.  See

    http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?query=pragma

Your use of the word suggests that you propose to implement something more
akin to a "directive", that is, something the compiler is not free to
ignore.  Ignoring the pragma in the first and third examples will likely
only make the program run slower.  Ignoring the second or fourth pragmas
would likely result in incorrect compilation of the source.

Whatever you come up with, I think the distinction between hint and
directive will have to be made clear in the documentation.

Skip




From tanzer at swing.co.at  Mon Aug 28 18:27:44 2000
From: tanzer at swing.co.at (Christian Tanzer)
Date: Mon, 28 Aug 2000 18:27:44 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings 
In-Reply-To: Your message of "Mon, 28 Aug 2000 10:55:15 +0200."
             <39AA28F3.1968E27@lemburg.com> 
Message-ID: <m13TRlA-000wcEC@swing.co.at>

"M.-A. Lemburg" <mal at lemburg.com> wrote:

> > IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
> > __docs__ dictionary is a better solution:
> > 
> > - It provides all docstrings for the attributes of an object in a
> >   single place.
> > 
> >   * Handy in interactive mode.
> >   * This simplifies the generation of documentation considerably.
> > 
> > - It is easier to explain in the documentation
> 
> The downside is that it doesn't work well together with
> class inheritance: docstrings of the above form can
> be overridden or inherited just like any other class
> attribute.

Yep. That's why David also proposed a `doc' function combining the
`__docs__' of a class with all its ancestors' __docs__.

> > Normally, Python concatenates adjacent strings. It doesn't do this
> > with docstrings. I think Python's behavior would be more consistent
> > if docstrings were concatenated like any other strings.
> 
> Huh ? It does...
> 
> >>> class C:
> ...     "first line"\
> ...     "second line"
> ... 
> >>> C.__doc__
> 'first linesecond line'
> 
> And the same works for the attribute doc strings too.

Surprise. I tried it this morning. Didn't use a backslash, though. And almost 
overlooked it now.

> > >             b = 2
> > >
> > >             def x(self):
> > >                 "C.x doc string"
> > >                 y = 3
> > >                 return 1
> > >
> > >             "b's doc string"
> > >
> > >     Since the definition of method "x" currently does not reset the
> > >     used assignment name variable, it is still valid when the compiler
> > >     reaches the docstring "b's doc string" and thus assigns the string
> > >     to __doc__b__.
> > 
> > This is rather surprising behavior. Does this mean that a string in
> > the middle of a function definition would be interpreted as the
> > docstring of the function?
> 
> No, since at the beginning of the function the name variable
> is set to NULL.

Fine. Could the attribute docstrings follow the same pattern, then?

> > >     A possible solution to this problem would be resetting the name
> > >     variable for all non-expression nodes.
> > 
> > IMHO, David Goodger's proposal of indenting the docstring relative to the
> > attribute it refers to is a better solution.
> > 
> > If that requires too many changes to the parser, the name variable
> > should be reset for all statement nodes.
> 
> See my other mail: indenting is only allowed for blocks of
> code and these are usually started with a colon -- doesn't
> work here.

Too bad.

It's-still-a-great-addition-to-Python ly, 
Christian

-- 
Christian Tanzer                                         tanzer at swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92




From mal at lemburg.com  Mon Aug 28 21:29:04 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 21:29:04 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
References: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>
		<39AAB616.460FA0A8@lemburg.com> <14762.47835.129388.512169@beluga.mojam.com>
Message-ID: <39AABD80.5089AEAF@lemburg.com>

Skip Montanaro wrote:
> 
>     MAL> Here are some possible applications of pragmas (just to toss in
>     MAL> a few ideas):
> 
>     MAL> # Cause global lookups to be cached in function's locals for future
>     MAL> # reuse.
>     MAL> pragma globals = 'constant'
> 
>     MAL> # Cause all Unicode literals in the current scope to be
>     MAL> # interpreted as UTF-8.
>     MAL> pragma encoding = 'utf-8'
> 
>     MAL> # Use -OO style optimizations
>     MAL> pragma optimization = 2
> 
>     MAL> # Default division mode
>     MAL> pragma division = 'float'
> 
> Marc-Andre,
> 
> My interpretation of the word "pragma" (and I think a probably common
> interpretation) is that it is a "hint to the compiler" which the compiler
> can ignore if it chooses.  See
> 
>     http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?query=pragma
> 
> Your use of the word suggests that you propose to implement something more
> akin to a "directive", that is, something the compiler is not free to
> ignore.  Ignoring the pragma in the first and third examples will likely
> only make the program run slower.  Ignoring the second or fourth pragmas
> would likely result in incorrect compilation of the source.
> 
> Whatever you come up with, I think the distinction between hint and
> directive will have to be made clear in the documentation.

True, I see the pragma statement as a directive. Perhaps it's not
the best name after all -- but then it is unlikely to be in use
as an identifier in existing Python programs, so perhaps we
just need to make it clear in the documentation that some
pragma statements will carry important information, not only hints.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug 28 21:35:58 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 21:35:58 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13TRlA-000wcEC@swing.co.at>
Message-ID: <39AABF1E.171BFD00@lemburg.com>

Christian Tanzer wrote:
> 
> "M.-A. Lemburg" <mal at lemburg.com> wrote:
> 
> > > IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
> > > __docs__ dictionary is a better solution:
> > >
> > > - It provides all docstrings for the attributes of an object in a
> > >   single place.
> > >
> > >   * Handy in interactive mode.
> > >   * This simplifies the generation of documentation considerably.
> > >
> > > - It is easier to explain in the documentation
> >
> > The downside is that it doesn't work well together with
> > class inheritance: docstrings of the above form can
> > be overridden or inherited just like any other class
> > attribute.
> 
> Yep. That's why David also proposed a `doc' function combining the
> `__docs__' of a class with all its ancestors' __docs__.

The same can be done for __doc__<attrname>__ style attributes:
a helper function would just need to look at dir(Class) and then
extract the attribute doc strings it finds. It could also do
a DFS search to find a complete API description of the class
by emulating attribute lookup and combine method and attribute
docstrings to produce some nice online documentation output.
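
A minimal sketch of such a helper, assuming the __doc__<attrname>__
naming scheme discussed in this thread; the function name
`attribute_docs` and the details are illustrative, not an existing API:

```python
def attribute_docs(klass):
    """Collect __doc__<attrname>__ style docstrings from a class,
    searching base classes depth-first so that docstrings defined
    closer to `klass` override inherited ones."""
    docs = {}
    # Recurse into bases first; updates from `klass` itself then win.
    for base in klass.__bases__:
        docs.update(attribute_docs(base))
    for name in dir(klass):
        if (name.startswith('__doc__') and name.endswith('__')
                and name != '__doc__'):
            # '__doc__b__' -> attribute name 'b'
            docs[name[len('__doc__'):-2]] = getattr(klass, name)
    return docs
```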
 
> > > Normally, Python concatenates adjacent strings. It doesn't do this
> > > with docstrings. I think Python's behavior would be more consistent
> > > if docstrings were concatenated like any other strings.
> >
> > Huh ? It does...
> >
> > >>> class C:
> > ...     "first line"\
> > ...     "second line"
> > ...
> > >>> C.__doc__
> > 'first linesecond line'
> >
> > And the same works for the attribute doc strings too.
> 
> Surprise. I tried it this morning. Didn't use a backslash, though. And almost
> overlooked it now.

You could also wrap the doc string in parentheses or use a triple-quoted
string.
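
Concretely, both alternatives give a single docstring:

```python
# Adjacent string literals inside parentheses are concatenated at
# compile time, so the result still counts as the docstring; a
# triple-quoted string works as well.
class C:
    ("first line "
     "second line")

class D:
    """first line
    second line"""

print(C.__doc__)   # first line second line
```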
 
> > > >             b = 2
> > > >
> > > >             def x(self):
> > > >                 "C.x doc string"
> > > >                 y = 3
> > > >                 return 1
> > > >
> > > >             "b's doc string"
> > > >
> > > >     Since the definition of method "x" currently does not reset the
> > > >     used assignment name variable, it is still valid when the compiler
> > > >     reaches the docstring "b's doc string" and thus assigns the string
> > > >     to __doc__b__.
> > >
> > > This is rather surprising behavior. Does this mean that a string in
> > > the middle of a function definition would be interpreted as the
> > > docstring of the function?
> >
> > No, since at the beginning of the function the name variable
> > is set to NULL.
> 
> Fine. Could the attribute docstrings follow the same pattern, then?

They could, and probably should, by resetting the variable
after all constructs which do not assign attributes.
 
> > > >     A possible solution to this problem would be resetting the name
> > > >     variable for all non-expression nodes.
> > >
> > > IMHO, David Goodger's proposal of indenting the docstring relative to the
> > > attribute it refers to is a better solution.
> > >
> > > If that requires too many changes to the parser, the name variable
> > > should be reset for all statement nodes.
> >
> > See my other mail: indenting is only allowed for blocks of
> > code and these are usually started with a colon -- doesn't
> > work here.
> 
> Too bad.
> 
> It's-still-a-great-addition-to-Python ly,
> Christian

Me thinks so too ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Mon Aug 28 23:59:36 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 16:59:36 -0500
Subject: [Python-Dev] Lukewarm about range literals
Message-ID: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>

I chatted with some PythonLabs folks this morning and nobody had any
real enthusiasm for range literals.  I notice that:

  for i in [:100]: print i

looks a bit too much like line noise.  I remember that Randy Pausch
once mentioned that a typical newbie will read this as:

  for i in 100 print i

and they will have a heck of a time to reconstruct the punctuation,
with all sorts of errors lurking, e.g.:

  for i in [100]: print i
  for i in [100:]: print i
  for i in :[100]: print i

Is there anyone who wants to champion this?

Sorry, Thomas!  I'm not doing this to waste your time!  It honestly
only occurred to me this morning, after Tim mentioned he was at most
lukewarm about it...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Mon Aug 28 23:06:31 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 23:06:31 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Mon, Aug 28, 2000 at 04:59:36PM -0500
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
Message-ID: <20000828230630.I500@xs4all.nl>

On Mon, Aug 28, 2000 at 04:59:36PM -0500, Guido van Rossum wrote:

> Sorry, Thomas!  I'm not doing this to waste your time!  It honestly
> only occurred to me this morning, after Tim mentioned he was at most
> lukewarm about it...

Heh, no problem. It was good practice, and if you remember (or search your
mail archive) I was only lukewarm about it, too, back when you asked me to
write it! And I've been modulating between 'lukewarm' and 'stone cold', in
between generators, tuple-ranges that look like hardware addresses,
non-int ranges and what not.

Less-docs-to-write-if-noone-champions-this-then-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Tue Aug 29 00:30:10 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 17:30:10 -0500
Subject: [Python-Dev] Python 2.0 License Discussion Mailing List Created
Message-ID: <200008282230.RAA30148@cj20424-a.reston1.va.home.com>

Now that the CNRI license issues are nearly settled, BeOpen.com needs
to put its own license on Python 2.0 (as a derivative work of CNRI's
Python 1.6) too.  We want an open discussion about the new license
with the Python community, and have established a mailing list for
this purpose.  To participate subscribe, go to

   http://mailman.beopen.com/mailman/listinfo/license-py20

and follow the instructions for subscribing.  The mailing list is
unmoderated, open to all, and archived
(at http://mailman.beopen.com/pipermail/license-py20/).

Your questions, concerns and suggestions are welcome!

Our initial thoughts are to use a slight adaptation of the CNRI
license for Python 1.6, adding an "or GPL" clause, meaning that Python
2.0 can be redistributed under the Python 2.0 license or under the GPL
(like Perl can be redistributed under the Artistic license or under
the GPL).

Note that I don't want this list to degenerate into flaming about the
CNRI license (except as it pertains directly to the 2.0 license) --
there's little we can do about the CNRI license, and it has been
beaten to death on comp.lang.python.

In case you're in the dark about the CNRI license, please refer to
http://www.python.org/1.6/download.html for the license text and to
http://www.python.org/1.6/license_faq.html for a list of frequently
asked questions about the license and CNRI's answers.

Note that we're planning to release the first beta release of Python
2.0 on September 4 -- however we can change the license for the final
release.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Mon Aug 28 23:46:19 2000
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 28 Aug 2000 16:46:19 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
Message-ID: <14762.56747.826063.269390@beluga.mojam.com>

    Guido> I notice that:

    Guido>   for i in [:100]: print i

    Guido> looks a bit too much like line noise.  I remember that Randy
    Guido> Pausch once mentioned that a typical newbie will read this as:

    Guido>   for i in 100 print i

Just tossing out a couple ideas here.  I don't see either mentioned in the
current version of the PEP.

    1. Would it help readability if there were no optional elements in range
       literals?  That way you'd have to write

	for i in [0:100]: print i

    2. Would it be more visually obvious to use ellipsis notation to
       separate the start and end indices?

        >>> for i in [0...100]: print i
	0
	1
	...
	99

	>>> for i in [0...100:2]: print i
	0
	2
	...
	98

I don't know if either are possible syntactically.

Skip



From thomas at xs4all.net  Mon Aug 28 23:55:36 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 23:55:36 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14762.56747.826063.269390@beluga.mojam.com>; from skip@mojam.com on Mon, Aug 28, 2000 at 04:46:19PM -0500
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com> <14762.56747.826063.269390@beluga.mojam.com>
Message-ID: <20000828235536.J500@xs4all.nl>

On Mon, Aug 28, 2000 at 04:46:19PM -0500, Skip Montanaro wrote:

> I don't know if either are possible syntactically.

They are perfectly possible (in fact, more easily so than the current
solution, if it hadn't already been written.) I like the ellipsis syntax
myself, but mostly because I have *no* use for ellipses, currently. It's also
reminiscent of the range-creating '..' syntax I learned in MOO, a long time
ago ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gvwilson at nevex.com  Tue Aug 29 00:04:41 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Mon, 28 Aug 2000 18:04:41 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000828235536.J500@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>

> Thomas Wouters wrote:
> They are perfectly possible (in fact, more easily so than the current
> solution, if it hadn't already been written.) I like the ellipsis
> syntax myself, but mostly because I have *no* use for ellipses,
> currently. It's also reminiscent of the range-creating '..' syntax I
> learned in MOO, a long time ago ;)

I would vote -1 on [0...100:10] --- even range(0, 100, 10) reads better,
IMHO.  I understand Guido et al's objections to:

    for i in [:100]:

but in my experience, students coming to Python from other languages seem
to expect to be able to say "do this N times" very simply.  Even:

    for i in range(100):

raises eyebrows.  I know it's all syntactic sugar, but it comes up in the
first hour of every course I've taught...

Thanks,

Greg




From nowonder at nowonder.de  Tue Aug 29 02:41:57 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Tue, 29 Aug 2000 00:41:57 +0000
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
Message-ID: <39AB06D5.BD99855@nowonder.de>

Greg Wilson wrote:
> 
> I would vote -1 on [0...100:10] --- even range(0, 100, 10) reads better,

I don't like [0...100] either. It just looks bad.
But I really *do* like [0..100] (maybe that's Pascal being my first
serious language).

That said, I prefer almost any form of range literals over the current
situation. range(0,100) has no meaning to me (maybe because English is
not my mother tongue), but [0..100] looks like "from 0 to 100"
(although one might expect len([1..100]) == 100).

> but in my experience, students coming to Python from other languages seem
> to expect to be able to say "do this N times" very simply.  Even:
> 
>     for i in range(100):
> 
> raises eyebrows.  I know it's all syntactic sugar, but it comes up in the
> first hour of every course I've taught...

I fully agree on that one, although I think range(N) to
iterate N times isn't as bad as range(len(SEQUENCE)) to
iterate over the indices of a sequence.
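
The two idioms side by side (enumerate is a later addition to Python, in
2.3, but it shows what iterating over the indices can look like without
range(len(...))):

```python
seq = ['a', 'b', 'c']

# The idiom under discussion: indices via range(len(...)).
for i in range(len(seq)):
    print(i, seq[i])

# The same loop without range(len(...)), using the later-added
# enumerate built-in.
for i, item in enumerate(seq):
    print(i, item)
```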

not-voting---but-you-might-be-able-to-guess-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From cgw at fnal.gov  Tue Aug 29 00:47:30 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Mon, 28 Aug 2000 17:47:30 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
Message-ID: <14762.60418.53633.223999@buffalo.fnal.gov>

I guess I'm in the minority here because I kind of like the range
literal syntax.

Guido van Rossum writes:
 > I notice that:
 > 
 >   for i in [:100]: print i
 > 
 > looks a bit too much like line noise.  I remember that Randy Pausch
 > once mentioned that a typical newbie will read this as:
 > 
 >   for i in 100 print i

When I was a complete Python newbie (back around 1994) I thought that
the syntax

l2 = l1[:]

for copying lists looked pretty mysterious and weird.  But after
spending some time programming Python I've come to think that the
slice syntax is perfectly natural.  Should constructs be banned from
the language simply because they might confuse newbies?  I don't think
so.

I for one like Thomas' range literals.  They fit very naturally into
the existing Python concept of slices.

 > and they will have a heck of a time to reconstruct the punctuation,
 > with all sorts of errors lurking, e.g.:
 > 
 >   for i in [100]: print i
 >   for i in [100:]: print i
 >   for i in :[100]: print i

This argument seems a bit weak to me; you could take just about any
Python expression and mess up the punctuation with misplaced colons.

 > Is there anyone who wants to champion this?

I don't know about "championing" it but I'll give it a +1, if that
counts for anything.




From gvwilson at nevex.com  Tue Aug 29 01:02:29 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Mon, 28 Aug 2000 19:02:29 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14762.60418.53633.223999@buffalo.fnal.gov>
Message-ID: <Pine.LNX.4.10.10008281901370.12053-100000@akbar.nevex.com>

> Charles wrote:
> When I was a complete Python newbie (back around 1994) I thought that
> the syntax
> 
> l2 = l1[:]
> 
> for copying lists looked pretty mysterious and weird.  But after
> spending some time programming Python I've come to think that the
> slice syntax is perfectly natural.  Should constructs be banned from
> the language simply because they might confuse newbies?

Greg writes:
Well, it *is* the reason we switched from Perl to Python in our software
engineering course...

Greg




From guido at beopen.com  Tue Aug 29 02:33:01 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 19:33:01 -0500
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: Your message of "Mon, 28 Aug 2000 19:02:29 -0400."
             <Pine.LNX.4.10.10008281901370.12053-100000@akbar.nevex.com> 
References: <Pine.LNX.4.10.10008281901370.12053-100000@akbar.nevex.com> 
Message-ID: <200008290033.TAA30757@cj20424-a.reston1.va.home.com>

> > Charles wrote:
> > When I was a complete Python newbie (back around 1994) I thought that
> > the syntax
> > 
> > l2 = l1[:]
> > 
> > for copying lists looked pretty mysterious and weird.  But after
> > spending some time programming Python I've come to think that the
> > slice syntax is perfectly natural.  Should constructs be banned from
> > the language simply because they might confuse newbies?
> 
> Greg writes:
> Well, it *is* the reason we switched from Perl to Python in our software
> engineering course...

And the original proposal for range literals also came from the
Numeric corner of the world (I believe Paul Dubois first suggested it
to me).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Tue Aug 29 03:51:33 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 28 Aug 2000 21:51:33 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000828230630.I500@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>

Just brain-dumping here:

Thomas did an excellent job on the patch!  It's clean & crisp and, I think,
bulletproof.  Just want that to be clear.

As the reviewer, I spent about 2 hours playing with it, trying it out in my
code.  And I simply liked it less the more I used it; e.g.,

for i in [:len(a)]:
    a[i] += 1

struck me as clumsier and uglier than

for i in range(len(a)):
    a[i] += 1

at once-- which I expected due to the novelty --but didn't grow on me at
*all*.  Which is saying something, since I'm the world's longest-standing
fan of "for i indexing a" <wink>; i.e., I'm *no* fan of the range(len(...))
business, and this seems even worse.  Despite that I should know 100x better
at all levels, I kept finding myself trying to write stuff like

for i in [:a]:  # or [len(a)] a couple times, even [a:] once
    a[i] += 1

Charles likes slices.  Me too!  I *love* them.  But as a standalone notation
(i.e., not as a subscript), part of the glory of slicing breaks down:  for
the list a, a[:] makes good sense, but when *iterating* over a,  it's
suddenly [:len(a)] because there's no context to supply a correct upper
bound.

For 2.0, the question is solely yes-or-no on this specific notation.  If it
goes in, it will never go away.  I was +0 at first, at best -0 now.  It does
nothing for me I can't do just as easily-- and I think more clearly --with
range.  The kinds of "extensions"/variations mentioned in the PEP make me
shiver, too.

Post 2.0, who knows.  I'm not convinced Python actually needs another
arithmetic-progression *list* notation.  If it does, I've always been fond
of Haskell's range literals (but note that they include the endpoint):

Prelude> [1..10]
[1,2,3,4,5,6,7,8,9,10]
Prelude> [1, 3 .. 10]
[1,3,5,7,9]
Prelude> [10, 9 .. 1]
[10,9,8,7,6,5,4,3,2,1]
Prelude> [10, 7 .. -5]
[10,7,4,1,-2,-5]
Prelude>

Of course Haskell is profoundly lazy too, so "infinite" literals are just as
normal there:

Prelude> take 5 [1, 100 ..]
[1,100,199,298,397]
Prelude> take 5 [3, 2 ..]
[3,2,1,0,-1]

It's often easier to just list the first two terms than to figure out the
*last* term and name the stride.  I like notations that let me chuckle "hey,
you're the computer, *you* figure out the silly details" <wink>.
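
A rough Python analogue of those lazy progressions, with itertools.count
and islice standing in for Haskell's laziness (the helper name
`progression` is made up for this sketch):

```python
from itertools import count, islice

def progression(first, second=None):
    """Like Haskell's [first ..] or [first, second ..]: an infinite
    arithmetic progression whose step is inferred from the first two
    terms (step 1 if only one term is given)."""
    step = 1 if second is None else second - first
    return count(first, step)

# take 5 [1, 100 ..]
print(list(islice(progression(1, 100), 5)))  # [1, 100, 199, 298, 397]
# take 5 [3, 2 ..]
print(list(islice(progression(3, 2), 5)))    # [3, 2, 1, 0, -1]
```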





From dgoodger at bigfoot.com  Tue Aug 29 05:05:41 2000
From: dgoodger at bigfoot.com (David Goodger)
Date: Mon, 28 Aug 2000 23:05:41 -0400
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <39AABF1E.171BFD00@lemburg.com>
References: <m13TRlA-000wcEC@swing.co.at><39AABF1E.171BFD00@lemburg.com>
Message-ID: <B5D0A0C4.82E1%dgoodger@bigfoot.com>

on 2000-08-28 15:35, M.-A. Lemburg (mal at lemburg.com) wrote:

> Christian Tanzer wrote:
>> 
>> "M.-A. Lemburg" <mal at lemburg.com> wrote:
>> 
>>>> IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
>>>> __docs__ dictionary is a better solution:
>>>> 
>>>> - It provides all docstrings for the attributes of an object in a
>>>> single place.
>>>> 
>>>> * Handy in interactive mode.
>>>> * This simplifies the generation of documentation considerably.
>>>> 
>>>> - It is easier to explain in the documentation
>>> 
>>> The downside is that it doesn't work well together with
>>> class inheritance: docstrings of the above form can
>>> be overridden or inherited just like any other class
>>> attribute.
>> 
>> Yep. That's why David also proposed a `doc' function combining the
>> `__docs__' of a class with all its ancestors' __docs__.
> 
> The same can be done for __doc__<attrname>__ style attributes:
> a helper function would just need to look at dir(Class) and then
> extract the attribute doc strings it finds. It could also do
> a DFS search to find a complete API description of the class
> by emulating attribute lookup and combine method and attribute
> docstrings to produce some nice online documentation output.

Using dir(Class) wouldn't find any inherited attributes of the class. A
depth-first search would be required for any use of attribute docstrings.


From cgw at fnal.gov  Tue Aug 29 06:38:41 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Mon, 28 Aug 2000 23:38:41 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
References: <20000828230630.I500@xs4all.nl>
	<LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <14763.15953.563107.722452@buffalo.fnal.gov>

Tim Peters writes:

 > As the reviewer, I spent about 2 hours playing with it, trying it out in my
 > code.  And I simply liked it less the more I used it

That's 2 hours more than I (and probably most other people) spent
trying it out.

 > For 2.0, the question is solely yes-or-no on this specific notation.  If it
 > goes in, it will never go away.

This strikes me as an extremely strong argument.  If the advantages
aren't really all that clear, then adopting this syntax for range
literals now removes the possibility to come up with a better way at a
later date ("opportunity cost", as the economists say).

The Haskell examples you shared are pretty neat.

FWIW, I retract my earlier +1.




From ping at lfw.org  Tue Aug 29 07:09:39 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 00:09:39 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>

On Mon, 28 Aug 2000, Tim Peters wrote:
> Post 2.0, who knows.  I'm not convinced Python actually needs another
> arithmetic-progression *list* notation.  If it does, I've always been fond
> of Haskell's range literals (but note that they include the endpoint):
> 
> Prelude> [1..10]
> [1,2,3,4,5,6,7,8,9,10]
> Prelude> [1, 3 .. 10]
> [1,3,5,7,9]
> Prelude> [10, 9 .. 1]
> [10,9,8,7,6,5,4,3,2,1]
> Prelude> [10, 7 .. -5]
> [10,7,4,1,-2,-5]

I think these examples are beautiful.  There is no reason why we couldn't
fit something like this into Python.  Imagine this:

    - The ".." operator produces a tuple (or generator) of integers.
      It should probably have precedence just above "in".
    
    - "a .. b", where a and b are integers, produces the sequence
      of integers (a, a+1, a+2, ..., b).

    - If the left argument is a tuple of two integers, as in
      "a, b .. c", then we get the sequence of integers from
      a to c with step b-a, up to and including c if c-a happens
      to be a multiple of b-a (exactly as in Haskell).

And, optionally:

    - The "..!" operator produces a tuple (or generator) of integers.
      It functions exactly like the ".." operator except that the
      resulting sequence does not include the endpoint.  (If you read
      "a .. b" as "go from a up to b", then read "a ..! b" as "go from
      a up to, but not including b".)

If this operator existed, we could then write:

    for i in 2, 4 .. 20:
        print i

    for i in 1 .. 10:
        print i*i

    for i in 0 ..! len(a):
        a[i] += 1

...and these would all do the obvious things.
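
A sketch of those semantics in today's Python, with a plain function
(here arbitrarily named `dotdot`) standing in for the proposed operator:

```python
def dotdot(start, nxt, end=None):
    """dotdot(a, b)    ~ 'a .. b'    : a, a+1, ..., b
    dotdot(a, b, c) ~ 'a, b .. c' : step b - a; the endpoint c is
    included only if it lies exactly on the progression, as in Haskell."""
    if end is None:
        step, end = 1, nxt
    else:
        step = nxt - start
    if step == 0:
        raise ValueError("zero step")
    out, x = [], start
    while (x <= end) if step > 0 else (x >= end):
        out.append(x)
        x += step
    return out

print(dotdot(1, 10))       # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(dotdot(2, 4, 20))    # [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
print(dotdot(10, 7, -5))   # [10, 7, 4, 1, -2, -5]
```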


-- ?!ng





From greg at cosc.canterbury.ac.nz  Tue Aug 29 07:04:05 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 29 Aug 2000 17:04:05 +1200 (NZST)
Subject: [Python-Dev] Python 2.0 License Discussion Mailing List Created
In-Reply-To: <200008282230.RAA30148@cj20424-a.reston1.va.home.com>
Message-ID: <200008290504.RAA17003@s454.cosc.canterbury.ac.nz>

> meaning that Python
> 2.0 can be redistributed under the Python 2.0 license or under the
> GPL

Are you sure that's possible? Doesn't the CNRI license
require that its terms be passed on to users of derivative
works? If so, a user of Python 2.0 couldn't just remove the
CNRI license and replace it with the GPL.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Tue Aug 29 07:17:38 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 29 Aug 2000 17:17:38 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>
Message-ID: <200008290517.RAA17013@s454.cosc.canterbury.ac.nz>

Ka-Ping Yee <ping at lfw.org>:

>    for i in 1 .. 10:
>        print i*i

That looks quite nice to me!

>    for i in 0 ..! len(a):
>        a[i] += 1

And that looks quite ugly. Couldn't it just as well be

    for i in 0 .. len(a)-1:
        a[i] += 1

and be vastly clearer?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+




From bwarsaw at beopen.com  Tue Aug 29 07:30:31 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 29 Aug 2000 01:30:31 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>
	<200008290517.RAA17013@s454.cosc.canterbury.ac.nz>
Message-ID: <14763.19063.973751.122546@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    GE> Ka-Ping Yee <ping at lfw.org>:

    >> for i in 1 .. 10: print i*i

    GE> That looks quite nice to me!

Indeed.

    >> for i in 0 ..! len(a): a[i] += 1

    GE> And that looks quite ugly. Couldn't it just as well be

    |     for i in 0 .. len(a)-1:
    |         a[i] += 1

    GE> and be vastly clearer?

I agree.  While I read 1 ..! 10 as "from one to not 10", that doesn't
exactly tell me what the sequence /does/ run to. ;)


-Barry



From effbot at telia.com  Tue Aug 29 09:09:02 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 09:09:02 +0200
Subject: [Python-Dev] Lukewarm about range literals
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <01c501c01188$a21232e0$766940d5@hagrid>

tim peters wrote:
> Charles likes slices.  Me too!  I *love* them.  But as a standalone notation
> (i.e., not as a subscript), part of the glory of slicing breaks down:  for
> the list a, a[:] makes good sense, but when *iterating* over a,  it's
> suddenly [:len(a)] because there's no context to supply a correct upper
> bound.

agreed.  ranges and slices are two different things.  giving
them the same syntax is a lousy idea.

> Post 2.0, who knows.  I'm not convinced Python actually needs another
> arithmetic-progression *list* notation.  If it does, I've always been fond
> of Haskell's range literals (but note that they include the endpoint):
> 
> Prelude> [1..10]
> [1,2,3,4,5,6,7,8,9,10]
> Prelude> [1, 3 .. 10]
> [1,3,5,7,9]

isn't that taken from SETL?

(the more I look at SETL, the more Pythonic it looks.  not too
bad for something that was designed in the late sixties ;-)

talking about SETL, now that the range literals are gone, how
about revisiting an old proposal:

    "...personally, I prefer their "tuple former" syntax over the the
    current PEP202 proposal:

        [expression : iterator]

        [n : n in range(100)]
        [(x**2, x) : x in range(1, 6)]
        [a : a in y if a > 5]

    (all examples are slightly pythonified; most notably, they
    use "|" or "st" (such that) instead of "if")

    the expression can be omitted if it's the same thing as the
    loop variable, *and* there's at least one "if" clause:

        [a in y if a > 5]

    also note that their "for-in" statement can take qualifiers:

        for a in y if a > 5:
            ...
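For comparison, those tuple formers come out like this under the actual PEP 202 syntax (my transcription; `y` is a made-up sample list):

```python
y = [3, 6, 9, 2, 8]

numbers = [n for n in range(100)]         # SETL: [n : n in range(100)]
pairs = [(x**2, x) for x in range(1, 6)]  # SETL: [(x**2, x) : x in range(1, 6)]
big = [a for a in y if a > 5]             # SETL: [a : a in y if a > 5]
```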

</F>




From tanzer at swing.co.at  Tue Aug 29 08:42:17 2000
From: tanzer at swing.co.at (Christian Tanzer)
Date: Tue, 29 Aug 2000 08:42:17 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings 
In-Reply-To: Your message of "Mon, 28 Aug 2000 21:35:58 +0200."
             <39AABF1E.171BFD00@lemburg.com> 
Message-ID: <m13Tf69-000wcDC@swing.co.at>

"M.-A. Lemburg" <mal at lemburg.com> wrote:

> > > > IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
> > > > __docs__ dictionary is a better solution:
(snip)
> > > The downside is that it doesn't work well together with
> > > class inheritance: docstrings of the above form can
> > > be overridden or inherited just like any other class
> > > attribute.
> > 
> > Yep. That's why David also proposed a `doc' function combining the
> > `__docs__' of a class with all its ancestor's __docs__.
> 
> The same can be done for __doc__<attrname>__ style attributes:
> a helper function would just need to look at dir(Class) and then
> extract the attribute doc strings it finds. It could also do
> a DFS search to find a complete API description of the class
> by emulating attribute lookup and combine method and attribute
> docstrings to produce some nice online documentation output.

Of course, one can get at all docstrings by using `dir'. But it is a
pain and slow as hell. And nothing one would use in interactive mode.

As Python already handles the analogous case for `__dict__' and
`getattr', it seems to be just a SMOP to do it for `__docs__', too. 

> > > > Normally, Python concatenates adjacent strings. It doesn't do this
> > > > with docstrings. I think Python's behavior would be more consistent
> > > > if docstrings were concatenated like any other strings.
> > >
> > > Huh ? It does...
> > >
> > > >>> class C:
> > > ...     "first line"\
> > > ...     "second line"
> > > ...
> > > >>> C.__doc__
> > > 'first linesecond line'
> > >
> > > And the same works for the attribute doc strings too.
> > 
> > Surprise. I tried it this morning. Didn't use a backslash, though. And almost
> > overlooked it now.
> 
> You could also wrap the doc string in parenthesis or use a triple
> quote string.

Wrapping a docstring in parentheses doesn't work in 1.5.2:

Python 1.5.2 (#5, Jan  4 2000, 11:37:02)  [GCC 2.7.2.1] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> class C:
...   ("first line"
...    "second line")
... 
>>> C.__doc__ 
>>> 

Triple quoted strings work -- that's what I'm constantly using. The
downside is that the docstrings either contain spurious whitespace
or mess up the layout of the code (if you start subsequent lines
in the first column).

-- 
Christian Tanzer                                         tanzer at swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92




From mal at lemburg.com  Tue Aug 29 11:00:49 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 11:00:49 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13TRlA-000wcEC@swing.co.at><39AABF1E.171BFD00@lemburg.com> <B5D0A0C4.82E1%dgoodger@bigfoot.com>
Message-ID: <39AB7BC1.5670ACDC@lemburg.com>

David Goodger wrote:
> 
> on 2000-08-28 15:35, M.-A. Lemburg (mal at lemburg.com) wrote:
> 
> > Christian Tanzer wrote:
> >>
> >> "M.-A. Lemburg" <mal at lemburg.com> wrote:
> >>
> >>>> IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
> >>>> __docs__ dictionary is a better solution:
> >>>>
> >>>> - It provides all docstrings for the attributes of an object in a
> >>>> single place.
> >>>>
> >>>> * Handy in interactive mode.
> >>>> * This simplifies the generation of documentation considerably.
> >>>>
> >>>> - It is easier to explain in the documentation
> >>>
> >>> The downside is that it doesn't work well together with
> >>> class inheritance: docstrings of the above form can
> >>> be overridden or inherited just like any other class
> >>> attribute.
> >>
> >> Yep. That's why David also proposed a `doc' function combining the
> >> `__docs__' of a class with all its ancestor's __docs__.
> >
> > The same can be done for __doc__<attrname>__ style attributes:
> > a helper function would just need to look at dir(Class) and then
> > extract the attribute doc strings it finds. It could also do
> > a DFS search to find a complete API description of the class
> > by emulating attribute lookup and combine method and attribute
> > docstrings to produce some nice online documentation output.
> 
> Using dir(Class) wouldn't find any inherited attributes of the class. A
> depth-first search would be required for any use of attribute docstrings.

Uhm, yes... that's what I wrote in the last paragraph.

> The advantage of the __doc__attribute__ name-mangling scheme (over __docs__
> dictionaries) would be that the attribute docstrings would be accessible
> from subclasses and class instances. But since "these attributes are meant
> for tools to use, not humans," this is not an issue.

I understand that you would rather like a "frozen" version
of the class docs, but this simply doesn't work out for
the common case of mixin classes and classes which are built
at runtime.

The name mangling is meant for internal use and just to give
the beast a name ;-) 

Doc tools can then take whatever action
they find necessary and apply the needed lookup, formatting
and content extraction. They might even add a frozen __docs__
attribute to classes which are known not to change after
creation.

I use such a function which I call freeze() to optimize many
static classes in my applications: the function scans all
available attributes in the inheritance tree and adds them
directly to the class in question. This gives some noticeable
speedups for deeply nested class structures or ones which
use many mixin classes.
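A minimal sketch of such a freeze() function in modern Python (my reconstruction from the description above, not the actual code):

```python
def freeze(klass):
    # Copy every attribute found along the inheritance tree directly
    # onto the class, so lookups no longer have to walk the bases.
    # Nearer bases are visited first, preserving override semantics.
    for base in klass.__mro__[1:]:
        for name, value in vars(base).items():
            if not name.startswith('__') and name not in vars(klass):
                setattr(klass, name, value)
    return klass
```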

> Just to *find* all attribute names, in order to extract the docstrings, you
> would *have* to go through a depth-first search of all base classes. Since
> you're doing that anyway, why not collect docstrings as you collect
> attributes? There would be no penalty. In fact, such an optimized function
> could be written and included in the standard distribution.
> 
> A perfectly good model exists in __dict__ and dir(). Why not imitate it?

Sure, but let's do that in a doc() utility function.

I want to keep the implementation of this PEP clean and simple.
All meta-logic should be applied by external helpers.
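Such an external doc() helper might look roughly like this (a sketch only; the `__doc_<attrname>__` mangling follows the PEP's proposal, and the merge-up-the-bases behaviour is the one described above):

```python
def doc(klass):
    # Walk the bases from most distant to nearest so that a subclass's
    # attribute docstring overrides an inherited one.
    docs = {}
    for c in reversed(klass.__mro__):
        for name, value in vars(c).items():
            if (name.startswith('__doc_') and name.endswith('__')
                    and name != '__doc__'):
                docs[name[6:-2]] = value
    return docs
```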

> on 2000-08-28 04:28, M.-A. Lemburg (mal at lemburg.com) wrote:
> > This would not work well together with class inheritance.
> 
> It seems to me that it would work *exactly* as does class inheritance,
> cleanly and elegantly.

Right, and that's why I'm proposing to use attributes for the
docstrings as well: the docstrings will then behave just like
the attributes they describe.

> The __doc__attribute__ name-mangling scheme strikes
> me as un-Pythonic, to be honest.

It may look a bit strange, but it's certainly not un-Pythonic:
just look at private name mangling or the many __xxx__ hooks
which Python uses.
 
> Let me restate: I think the idea of attribute docstring is great. It brings
> a truly Pythonic, powerful auto-documentation system (a la POD or JavaDoc)
> closer. And I'm willing to help!

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Tue Aug 29 11:41:15 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 11:41:15 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13Tf69-000wcDC@swing.co.at>
Message-ID: <39AB853B.217402A2@lemburg.com>

Christian Tanzer wrote:
> 
> > > > >>> class C:
> > > > ...     "first line"\
> > > > ...     "second line"
> > > > ...
> > > > >>> C.__doc__
> > > > 'first linesecond line'
> > > >
> > > > And the same works for the attribute doc strings too.
> > >
> > > Surprise. I tried it this morning. Didn't use a backslash, though. And almost
> > > overlooked it now.
> >
> > You could also wrap the doc string in parenthesis or use a triple
> > quote string.
> 
> Wrapping a docstring in parentheses doesn't work in 1.5.2:
> 
> Python 1.5.2 (#5, Jan  4 2000, 11:37:02)  [GCC 2.7.2.1] on linux2
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> >>> class C:
> ...   ("first line"
> ...    "second line")
> ...
> >>> C.__doc__
> >>>

Hmm, looks like you're right... the parentheses probably only
work for "if" and function calls. This works:

function("firstline"
	 "secondline")

> Triple quoted strings work -- that's what I'm constantly using. The
> downside is that the docstrings either contain spurious whitespace
> or mess up the layout of the code (if you start subsequent lines
> in the first column).

Just a question of how smart your doc string extraction
tools are. Have a look at hack.py:

	http://starship.python.net/~lemburg/hack.py

and its docs() API:

>>> class C:
...     """ first line
...         second line
...         third line
...     """
... 
>>> import hack 
>>> hack.docs(C)
Class  :
    first line
    second line
    third line

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jack at oratrix.nl  Tue Aug 29 11:44:30 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 29 Aug 2000 11:44:30 +0200
Subject: [Python-Dev] Pragma-style declaration syntax 
In-Reply-To: Message by "M.-A. Lemburg" <mal@lemburg.com> ,
	     Mon, 28 Aug 2000 20:57:26 +0200 , <39AAB616.460FA0A8@lemburg.com> 
Message-ID: <20000829094431.AA5DB303181@snelboot.oratrix.nl>

> The basic syntax in the above examples is:
> 
> 	"pragma" NAME "=" (NUMBER | STRING+)
> 
> It has to be that simple to allow the compiler to use the information
> at compilation time.

Can we have a bit more syntax, so other packages that inspect the source 
(freeze and friends come to mind) can also use the pragma scheme?

Something like
	"pragma" NAME ("." NAME)+ "=" (NUMBER | STRING+)
should allow freeze to use something like

pragma freeze.exclude = "win32ui, sunaudiodev, linuxaudiodev"

which would be ignored by the compiler but interpreted by freeze.
And, if they're stored in the __pragma__ dictionary too, as was suggested 
here, you can also add pragmas specific for class browsers, debuggers and such.
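A tool like freeze could pick such dotted pragmas out of the source with a few lines of code. A hypothetical sketch (`pragma` never became real syntax, so this simply scans the text):

```python
import re

# Matches: pragma NAME ("." NAME)* "=" value
PRAGMA = re.compile(
    r'^\s*pragma\s+([A-Za-z_]\w*(?:\.[A-Za-z_]\w*)*)\s*=\s*(.+?)\s*$')

def find_pragmas(source):
    # Map each dotted pragma name to its (unparsed) value string.
    found = {}
    for line in source.splitlines():
        m = PRAGMA.match(line)
        if m:
            found[m.group(1)] = m.group(2)
    return found
```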
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From tim_one at email.msn.com  Tue Aug 29 11:45:24 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 29 Aug 2000 05:45:24 -0400
Subject: Actually about PEP 202 (listcomps), not (was RE: [Python-Dev] Lukewarm about range literals)
In-Reply-To: <01c501c01188$a21232e0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com>

[/F]
> agreed.  ranges and slices are two different things.  giving
> them the same syntax is a lousy idea.

Don't know about *that*, but it doesn't appear to work as well as was hoped.

[Tim]
>> Post 2.0, who knows.  I'm not convinced Python actually needs
>> another arithmetic-progression *list* notation.  If it does, I've
>> always been fond of Haskell's range literals (but note that they
>> include the endpoint):
>>
>> Prelude> [1..10]
>> [1,2,3,4,5,6,7,8,9,10]
>> Prelude> [1, 3 .. 10]
>> [1,3,5,7,9]

> isn't that taken from SETL?

Sure looks like it to me.  The Haskell designers explicitly credited SETL
for list comprehensions, but I don't know that they do for this gimmick too.
Of course Haskell's "infinite" list builders weren't in SETL, and, indeed,
expressions like [1..] are pretty common in Haskell programs.  One of the
prettiest programs ever in any language ever:

primes = sieve [2..]
         where sieve (x:xs) = x :
                              sieve [n | n <- xs, n `mod` x /= 0]

which defines the list of all primes.
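The same idea transcribes almost line for line into a Python generator (a latter-day sketch; generators did not exist when this was written):

```python
from itertools import count, islice

def primes():
    # x : sieve [n | n <- xs, n `mod` x /= 0], as a recursive generator.
    def sieve(ns):
        x = next(ns)
        yield x
        yield from sieve(n for n in ns if n % x != 0)
    return sieve(count(2))
```

Then `list(islice(primes(), 10))` yields the first ten primes.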

> (the more I look at SETL, the more Pythonic it looks.  not too
> bad for something that was designed in the late sixties ;-)

It was way ahead of its time.  Still is!  Check out its general loop
construct, though -- now *that's* a kitchen sink.  Guido mentioned that
ABC's Lambert Meertens spent a year's sabbatical at NYU when SETL was in its
heyday, and I figure that's where ABC got quantifiers in boolean expressions
(if each x in list has p(x); if no x in list has p(x); if some x in list has
p(x)).  Have always wondered why Python didn't have that too; I ask that
every year, but so far Guido has never answered it <wink>.

> talking about SETL, now that the range literals are gone, how
> about revisiting an old proposal:
>
>     "...personally, I prefer their "tuple former" syntax over the the
>     current PEP202 proposal:
>
>         [expression : iterator]
>
>         [n : n in range(100)]
>         [(x**2, x) : x in range(1, 6)]
>         [a : a in y if a > 5]
>
>     (all examples are slightly pythonified; most notably, they
>     use "|" or "st" (such that) instead of "if")
>
>     the expression can be omitted if it's the same thing as the
>     loop variable, *and* there's at least one "if" clause:
>
>         [a in y if a > 5]
>
>     also note that their "for-in" statement can take qualifiers:
>
>         for a in y if a > 5:
>             ...

You left off the last sentence from the first time you posted this:

>     is there any special reason why we cannot use colon instead
>     of "for"?

Guido then said we couldn't use a colon because that would make [x : y] too
hard to parse, because range literals were of the same form.  Thomas went on
to point out that it's worse than that, it's truly ambiguous.

Now I expect you prefaced this with "now that the range literals are gone"
expecting that everyone would just remember all that <wink>.  Whether they
did or not, now they should.

I counted two replies beyond those.  One from Peter Schneider-Kamp was
really selling another variant.  The other from Marc-Andre Lemburg argued
that while the shorthand is convenient for mathematicians, "I doubt that
CP4E users get the grasp of this".

Did I miss anything?

Since Guido didn't chime in again, I assumed he was happy with how things
stood.  I further assume he picked on a grammar technicality to begin with
because that's the way he usually shoots down a proposal he doesn't want to
argue about -- "no new keywords" has served him extremely well that way
<wink>.  That is, I doubt that "now that the range literals are gone" (if
indeed they are!) will make any difference to him, and with the release one
week away he'd have to get real excited real fast.

I haven't said anything about it, but I'm with Marc-Andre on this:  sets
were *extremely* heavily used in SETL, and brevity in their expression was a
great virtue there because of it.  listcomps won't be that heavily used in
Python, and I think it's downright Pythonic to leave them wordy in order to
*discourage* fat hairy listcomp expressions.  They've been checked in for
quite a while now, and I like them fine as they are in practice.

I've also got emails like this one in pvt:

    The current explanation "[for and if clauses] nest in the same way
    for loops and if statements nest now." is pretty clear and easy to
    remember.

That's important too, because despite pockets of hysteria to the contrary on
c.l.py, this is still Python.  When I first saw your first example:

     [n : n in range(100)]

I immediately read "n in range(100)" as a true/false expression, because
that's what it *is* in 1.6 unless immediately preceded by "for".  The
current syntax preserves that.  Saving two characters (":" vs "for") isn't
worth it in Python.  The vertical bar *would* be "worth it" to me, because
that's what's used in SETL, Haskell *and* common mathematical practice for
"such that".  Alas, as Guido is sure to point out, that's too hard to parse
<0.9 wink>.

consider-it-channeled-unless-he-thunders-back-ly y'rs  - tim





From mal at lemburg.com  Tue Aug 29 12:40:11 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 12:40:11 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
References: <20000829094431.AA5DB303181@snelboot.oratrix.nl>
Message-ID: <39AB930B.F34673AB@lemburg.com>

Jack Jansen wrote:
> 
> > The basic syntax in the above examples is:
> >
> >       "pragma" NAME "=" (NUMBER | STRING+)
> >
> > It has to be that simple to allow the compiler to use the information
> > at compilation time.
> 
> Can we have a bit more syntax, so other packages that inspect the source
> (freeze and friends come to mind) can also use the pragma scheme?
> 
> Something like
>         "pragma" NAME ("." NAME)+ "=" (NUMBER | STRING+)
> should allow freeze to use something like
> 
> pragma freeze.exclude = "win32ui, sunaudiodev, linuxaudiodev"
> 
> which would be ignored by the compiler but interpreted by freeze.
> And, if they're stored in the __pragma__ dictionary too, as was suggested
> here, you can also add pragmas specific for class browsers, debuggers and such.

Hmm, freeze_exclude would have also done the trick.

The only thing that will have to be ensured is that the
arguments are readily available at compile time. Adding
a dot shouldn't hurt ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Tue Aug 29 13:02:14 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 13:02:14 +0200
Subject: Actually about PEP 202 (listcomps), not (was RE: [Python-Dev] Lukewarm about range literals)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Tue, Aug 29, 2000 at 05:45:24AM -0400
References: <01c501c01188$a21232e0$766940d5@hagrid> <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com>
Message-ID: <20000829130214.L500@xs4all.nl>

On Tue, Aug 29, 2000 at 05:45:24AM -0400, Tim Peters wrote:

> Saving two characters (":" vs "for") isn't worth it in Python.  The
> vertical bar *would* be "worth it" to me, because that's what's used in
> SETL, Haskell *and* common mathematical practice for "such that".  Alas,
> as Guido is sure to point out, that's too hard to parse

It's impossible to parse, of course, unless you require the parentheses
around the expression preceding it :)

[ (n) | n in range(100) if n%2 ]

I-keep-writing-'where'-instead-of-'if'-in-those-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gvwilson at nevex.com  Tue Aug 29 13:28:58 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 07:28:58 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008290517.RAA17013@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>

> > Ka-Ping Yee <ping at lfw.org>:
> >    for i in 1 .. 10:
> >        print i*i
> >    for i in 0 ..! len(a):
> >        a[i] += 1

Greg Wilson writes:

The problem with using ellipsis is that there's no obvious way to include
a stride --- how do you hit every second (or n'th) element, rather than
every element?  I'd rather stick to range() than adopt:

    for i in [1..10:5]

Thanks,
Greg

BTW, I understand from side conversations that adding a 'keys()' method to
sequences, so that arbitrary collections could be iterated over using:

    for i in S.keys():
        print i, S[i]

was considered and rejected.  If anyone knows why, I'd be grateful for a
recap.
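For the record, the effect is easy to get without a new method, since a sequence's "keys" are just its indices. A hypothetical helper, not a proposal from the thread:

```python
def keys(seq):
    # The "keys" of a sequence are simply 0 .. len(seq)-1.
    return list(range(len(seq)))

S = ['a', 'b', 'c']
for i in keys(S):
    print(i, S[i])
```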





From guido at beopen.com  Tue Aug 29 14:36:38 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 07:36:38 -0500
Subject: [Python-Dev] Python 2.0 License Discussion Mailing List Created
In-Reply-To: Your message of "Tue, 29 Aug 2000 17:04:05 +1200."
             <200008290504.RAA17003@s454.cosc.canterbury.ac.nz> 
References: <200008290504.RAA17003@s454.cosc.canterbury.ac.nz> 
Message-ID: <200008291236.HAA32070@cj20424-a.reston1.va.home.com>

[Greg Ewing]
> > meaning that Python
> > 2.0 can be redistributed under the Python 2.0 license or under the
> > GPL
> 
> Are you sure that's possible? Doesn't the CNRI license
> require that its terms be passed on to users of derivative
> works? If so, a user of Python 2.0 couldn't just remove the
> CNRI license and replace it with the GPL.

I don't know the answer to this, but Bob Weiner, BeOpen's CTO, claims
that according to BeOpen's lawyer this is okay.  I'll ask him about
it.

I'll post his answer (when I get it) on the license-py20 list.  I
encourage you to subscribe and repost this question there for the
archives!

(There were some early glitches with the list address, but they have
been fixed.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Aug 29 14:41:53 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 07:41:53 -0500
Subject: [Python-Dev] SETL (was: Lukewarm about range literals)
In-Reply-To: Your message of "Tue, 29 Aug 2000 09:09:02 +0200."
             <01c501c01188$a21232e0$766940d5@hagrid> 
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>  
            <01c501c01188$a21232e0$766940d5@hagrid> 
Message-ID: <200008291241.HAA32136@cj20424-a.reston1.va.home.com>

> isn't that taken from SETL?
> 
> (the more I look at SETL, the more Pythonic it looks.  not too
> bad for something that was designed in the late sixties ;-)

You've got it backwards: Python's predecessor, ABC, was inspired by
SETL -- Lambert Meertens spent a year with the SETL group at NYU
before coming up with the final ABC design!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From nowonder at nowonder.de  Tue Aug 29 15:41:30 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Tue, 29 Aug 2000 13:41:30 +0000
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>
Message-ID: <39ABBD8A.B9B3136@nowonder.de>

Greg Wilson wrote:
> 
> BTW, I understand from side conversations that adding a 'keys()' method to
> sequences, so that arbitrary collections could be iterated over using:
> 
>     for i in S.keys():
>         print i, S[i]
> 
> was considered and rejected.  If anyone knows why, I'd be grateful for a
> recap.

If I remember correctly, it was rejected because adding
keys(), items() etc. methods to sequences would make all
objects (in this case sequences and mappings) look the same.

More accurate information from:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101178&group_id=5470

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From fredrik at pythonware.com  Tue Aug 29 13:49:17 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 13:49:17 +0200
Subject: Actually about PEP 202 (listcomps), not (was RE: [Python-Dev] Lukewarm about range literals)
References: <01c501c01188$a21232e0$766940d5@hagrid> <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com> <20000829130214.L500@xs4all.nl>
Message-ID: <01ae01c011af$2a3550a0$0900a8c0@SPIFF>

thomas wrote:
> > Saving two characters (":" vs "for") isn't worth it in Python.  The
> > vertical bar *would* be "worth it" to me, because that's what's used in
> > SETL, Haskell *and* common mathematical practice for "such that".  Alas,
> > as Guido is sure to point out, that's too hard to parse
>
> It's impossible to parse, of course, unless you require the parentheses
> around the expression preceding it :)
>
> [ (n) | n in range(100) if n%2 ]

I'm pretty sure Tim meant "|" instead of "if".  the SETL syntax is:

    [ n : n in range(100) | n%2 ]

(that is, ":" instead of for, and "|" or "st" instead of "if".  and yes,
they have nice range literals too, so don't take that "range" too
literally ;-)

in SETL, that can also be abbreviated to:

    [ n in range(100) | n%2 ]

which, of course, is a perfectly valid (though slightly obscure)
python expression...

</F>




From guido at beopen.com  Tue Aug 29 14:53:32 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 07:53:32 -0500
Subject: [Python-Dev] SETL (was: Lukewarm about range literals)
In-Reply-To: Your message of "Tue, 29 Aug 2000 07:41:53 EST."
             <200008291241.HAA32136@cj20424-a.reston1.va.home.com> 
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <01c501c01188$a21232e0$766940d5@hagrid>  
            <200008291241.HAA32136@cj20424-a.reston1.va.home.com> 
Message-ID: <200008291253.HAA32332@cj20424-a.reston1.va.home.com>

> It was way ahead of its time.  Still is!  Check out its general loop
> construct, though -- now *that's* a kitchen sink.  Guido mentioned that
> ABC's Lambert Meertens spent a year's sabbatical at NYU when SETL was in its
> heyday, and I figure that's where ABC got quantifiers in boolean expressions
> (if each x in list has p(x); if no x in list has p(x); if some x in list has
> p(x)).  Have always wondered why Python didn't have that too; I ask that
> every year, but so far Guido has never answered it <wink>.

I don't recall you asking me that even *once* before now.  Proof,
please?

Anyway, the answer is that I saw diminishing returns from adding more
keywords and syntax.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Tue Aug 29 16:46:23 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 09:46:23 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <39AB06D5.BD99855@nowonder.de>
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
	<39AB06D5.BD99855@nowonder.de>
Message-ID: <14763.52415.747655.334938@beluga.mojam.com>

    Peter> I don't like [0...100] either. It just looks bad.  But I really
    Peter> *do* like [0..100] (maybe that's Pascal being my first serious
    Peter> language).

Which was why I proposed "...".  It's sort of like "..", but has the
advantage of already being a recognized token.  I doubt there would be much
problem adding ".." as a token either.

What we really want, I think, is something that evokes the following in the
mind of the reader:

    for i from START to END incrementing by STEP:

without gobbling up all those keywords.  That might be one of the following:

    for i in [START..END,STEP]:
    for i in [START:END:STEP]:
    for i in [START..END:STEP]:

I'm sure there are other possibilities, but given the constraints of putting
the range literal in square brackets and not allowing a comma as the first
separator, the choices seem limited.

Perhaps it will just have to wait until Py3K when a little more grammar
fiddling is possible.

Skip



From thomas at xs4all.net  Tue Aug 29 16:52:21 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 16:52:21 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52415.747655.334938@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 29, 2000 at 09:46:23AM -0500
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com> <39AB06D5.BD99855@nowonder.de> <14763.52415.747655.334938@beluga.mojam.com>
Message-ID: <20000829165221.N500@xs4all.nl>

On Tue, Aug 29, 2000 at 09:46:23AM -0500, Skip Montanaro wrote:

> Which was why I proposed "...".  It's sort of like "..", but has the
> advantage of already being a recognized token.  I doubt there would be much
> problem adding ".." as a token either.

"..." is not a token, it's three tokens:

subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]

So adding ".." should be no problem.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gvwilson at nevex.com  Tue Aug 29 16:55:34 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 10:55:34 -0400 (EDT)
Subject: [Python-Dev] pragmas as callbacks
In-Reply-To: <39AAB616.460FA0A8@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008291023160.21280-100000@akbar.nevex.com>

If a mechanism for providing meta-information about code is going to be
added to Python, then I would like it to be flexible enough for developers
to define/add their own.  It's just like allowing developers to extend the
type system with new classes, rather than handing them a fixed set of
built-in types and saying, "Good luck".  (Most commercial Fortran
compilers take the second approach, by providing a bunch of inflexible,
vendor-specific pragmas.  It's a nightmare...)

I think that pragmas are essentially callbacks into the interpreter.  When
I put:

    pragma encoding = "UTF-16"

I am telling the interpreter to execute its 'setEncoding()' method right
away.

So, why not present pragmas in that way?  I.e., why not expose the Python
interpreter as a callable object while the source is being parsed and
compiled?  I think that:

    __python__.setEncoding("UTF-16")

is readable, and can be extended in lots of well-structured ways by
exposing exactly as much of the interpreter as is deemed safe. Arguments
could be restricted to constants, or built-in operations on constants, to
start with, without compromising future extensibility.

Greg





From skip at mojam.com  Tue Aug 29 16:55:49 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 09:55:49 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
References: <20000828230630.I500@xs4all.nl>
	<LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <14763.52981.603640.415652@beluga.mojam.com>

One of the original arguments for range literals as I recall was that
indexing of loops could get more efficient.  The compiler would know that
[0:100:2] represents a series of integers and could conceivably generate
more efficient loop indexing code (and so could Python2C and other compilers
that generated C code).  This argument doesn't seem to be showing up here at
all.  Does it carry no weight in the face of the relative inscrutability of
the syntax?

Skip



From cgw at fnal.gov  Tue Aug 29 17:29:20 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 10:29:20 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000829165221.N500@xs4all.nl>
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
	<39AB06D5.BD99855@nowonder.de>
	<14763.52415.747655.334938@beluga.mojam.com>
	<20000829165221.N500@xs4all.nl>
Message-ID: <14763.54992.458188.483296@buffalo.fnal.gov>

Thomas Wouters writes:
 > On Tue, Aug 29, 2000 at 09:46:23AM -0500, Skip Montanaro wrote:
 > 
 > > Which was why I proposed "...".  It's sort of like "..", but has the
 > > advantage of already being a recognized token.  I doubt there would be much
 > > problem adding ".." as a token either.
 > 
 > "..." is not a token, it's three tokens:
 > 
 > subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
 > 
 > So adding ".." should be no problem.

I have another idea.  I don't think it's been discussed previously,
but I came late to this party.  Sorry if this is old hat.


How about a:b to indicate the range starting at a and ending with b-1?

I claim that this syntax is already implicit in Python.

Think about the following:  if S is a sequence and i an index,

     S[i]

means the pairing of the sequence S with the index i.  Sequences and
indices are `dual' in the sense that pairing them together yields a
value.  I am amused by the fact that in the C language, 

     S[i] = *(S+i) = *(i+S) = i[S] 

which really shows this duality.

Now we already have

     S[a:b]

to denote the slice operation, but this can also be described as the
pairing of S with the range literal a:b

According to this view, the square braces indicate the pairing or
mapping operation itself, they are not part of the range literal.
They shouldn't be part of the range literal syntax.  Thinking about
this gets confused by the additional use of `[' for list construction.
If you take them away, you could even defend having 1:5 create an
xrange-like object rather than a list.

I think this also shows why [a:b] is *not* the natural syntax for a
range literal.

This is beautifully symmetric to me - 1..3 looks like it should be a
closed interval (including the endpoints), but it's very natural and
Pythonic that a:b is semi-open: the existing "slice invariance" 

     S[a:b] + S[b:c] = S[a:c] 

could be expressed as 

     a:b + b:c = a:c

which is very attractive to me, but of course there are problems.
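(The invariant is easy to verify with today's slices — a quick editorial check:)

```python
# S[a:b] + S[b:c] == S[a:c] for any sequence S and any a <= b <= c,
# a direct consequence of slices being half-open.
S = list("abcdefgh")
a, b, c = 1, 4, 6

assert S[a:b] + S[b:c] == S[a:c]
print(S[a:b], "+", S[b:c], "==", S[a:c])

# Slices also clip out-of-range indices instead of raising:
assert S[0:100] == S
```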


The syntax Tim disfavored:

     for i in [:len(a)]:

now becomes

     for i in 0:len(a):  
     #do not allow elided endpoints outside of a [ context

which doesn't look so bad to me, but is probably ambiguous.  Hmmm,
could this possibly work or is it too much of a collision with the use
of `:' to indicate block structure?

Tim - I agree that the Haskell prime-number printing program is indeed
one of the prettiest programs ever.  Thanks for posting it.

Hold-off-on-range-literals-for-2.0-ly yr's,
				-C




From mal at lemburg.com  Tue Aug 29 17:37:39 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 17:37:39 +0200
Subject: [Python-Dev] pragmas as callbacks
References: <Pine.LNX.4.10.10008291023160.21280-100000@akbar.nevex.com>
Message-ID: <39ABD8C3.DABAAA6B@lemburg.com>

Greg Wilson wrote:
> 
> If a mechanism for providing meta-information about code is going to be
> added to Python, then I would like it to be flexible enough for developers
> to define/add their own.  It's just like allowing developers to extend the
> type system with new classes, rather than handing them a fixed set of
> built-in types and saying, "Good luck".  (Most commercial Fortran
> compilers take the second approach, by providing a bunch of inflexible,
> vendor-specific pragmas.  It's a nightmare...)

I don't think that Python will move in that direction. pragmas are
really only meant to add some form of meta-information to a Python
source file which would otherwise have to be passed to the compiler
in order to produce correct output. It's merely a way of defining
compile time flags for Python modules which allow more flexible
compilation.

Other tools might also make use of these pragmas, e.g. freeze,
to allow inspection of a module without having to execute it.

> I think that pragmas are essentially callbacks into the interpreter.  When
> I put:
> 
>     pragma encoding = "UTF-16"
> 
> I am telling the interpreter to execute its 'setEncoding()' method right
> away.

pragmas have a different target: they tell the compiler (or some
other non-executing tool) to make a certain assumption about the
code it is currently busy compiling.

The compiler is not expected to execute any Python code when it
sees a pragma, it will only set a few internal variables according
to the values stated in the pragma or simply ignore it if the
pragma uses an unknown key and then proceed with compiling.
 
> So, why not present pragmas in that way?  I.e., why not expose the Python
> interpreter as a callable object while the source is being parsed and
> compiled?  I think that:
> 
>     __python__.setEncoding("UTF-16")
> 
> is readable, and can be extended in lots of well-structured ways by
> exposing exactly as much of the interpreter as is deemed safe. Arguments
> could be restricted to constants, or built-in operations on constants, to
> start with, without compromising future extensibility.

The natural place for these APIs would be the sys module... 
no need for an extra __python__ module or object.

I'd rather not add complicated semantics to pragmas -- they
should be able to set flags, but not much more.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From moshez at math.huji.ac.il  Tue Aug 29 17:40:39 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Tue, 29 Aug 2000 18:40:39 +0300 (IDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.54992.458188.483296@buffalo.fnal.gov>
Message-ID: <Pine.GSO.4.10.10008291838220.13338-100000@sundial>

On Tue, 29 Aug 2000, Charles G Waldman wrote:

> I have another idea.  I don't think it's been discussed previously,
> but I came late to this party.  Sorry if this is old hat.
> 
> How about a:b to indicate the range starting at a and ending with b-1?

I think it's nice. I'm not sure I like it yet, but it's an interesting
idea. Someone's gonna yell ": is ambiguous". Well, you know how, when
you know Python, you go around telling people "() don't create tuples,
commas do" and feeling all wonderful? Well, we can do the same with
ranges <wink>.
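(The tuple half of that analogy, spelled out as an editorial aside:)

```python
# Parentheses don't create tuples; commas do.  The argument here is
# that brackets likewise needn't be part of a range literal -- the
# colon would be the operator, the brackets mere grouping.
t1 = 1,        # a one-element tuple: the comma does the work
t2 = (1)       # just the int 1, despite the parentheses
print(type(t1).__name__, type(t2).__name__)  # tuple int
```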

(:)-ly y'rs, Z.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From guido at beopen.com  Tue Aug 29 18:37:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 11:37:40 -0500
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: Your message of "Tue, 29 Aug 2000 10:29:20 EST."
             <14763.54992.458188.483296@buffalo.fnal.gov> 
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com> <39AB06D5.BD99855@nowonder.de> <14763.52415.747655.334938@beluga.mojam.com> <20000829165221.N500@xs4all.nl>  
            <14763.54992.458188.483296@buffalo.fnal.gov> 
Message-ID: <200008291637.LAA04186@cj20424-a.reston1.va.home.com>

> How about a:b to indicate the range starting at a and ending with b-1?

I believe this is what the Nummies originally suggested.

> which doesn't look so bad to me, but is probably ambiguous.  Hmmm,
> could this possibly work or is it too much of a collision with the use
> of `:' to indicate block structure?

Alas, it could never work.  Look at this:

  for i in a:b:c

Does it mean

  for i in (a:b) : c

or

  for i in a: (b:c)

?

So we're back to requiring *some* form of parentheses.

I'm postponing this discussion until after Python 2.0 final is
released -- the feature freeze is real!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From gvwilson at nevex.com  Tue Aug 29 17:54:23 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 11:54:23 -0400 (EDT)
Subject: [Python-Dev] pragmas as callbacks
In-Reply-To: <39ABD8C3.DABAAA6B@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008291149040.23391-100000@akbar.nevex.com>

> Marc-Andre Lemburg wrote:
> I'd rather not add complicated semantics to pragmas -- they should be
> able to set flags, but not much more.

Greg Wilson writes:

That's probably what every Fortran compiler vendor said at first --- "just
a couple of on/off flags".  Then it was, "Set numeric values (like the
debugging level)".  A full-blown HPF compiler's pragmas are now a complete
programming language, so that you can (for example) specify how to
partition one array based on the partitioning in another.

Same thing happened with the C preprocessor --- more and more directives
crept in over time.  And the Microsoft C++ compiler.  And I'm sure this
list's readers could come up with dozens more examples.

Pragmas are a way to give instructions to the interpreter; when you let
people give something instructions, you're letting them program it, and I
think it's best to design your mechanism from the start to support that.

Greg "oh no, not another parallelization directive" Wilson





From skip at mojam.com  Tue Aug 29 18:19:27 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 11:19:27 -0500 (CDT)
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
Message-ID: <14763.57999.57444.678054@beluga.mojam.com>

Don't know if this should concern us in preparation for 2.0b1 release, but
the following came across c.l.py this morning.  

FYI.

Skip

------- start of forwarded message (RFC 934 encapsulation) -------
X-Digest: Python-list digest, Vol 1 #3344 - 13 msgs
Message: 11
Newsgroups: comp.lang.python
Organization: Concentric Internet Services
Lines: 41
Message-ID: <39ABD9A1.A8ECDEC8 at faxnet.com>
NNTP-Posting-Host: 208.36.195.178
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Path: news!uunet!ffx.uu.net!newsfeed.mathworks.com!feeder.via.net!newshub2.rdc1.sfba.home.com!news.home.com!newsfeed.concentric.net!global-news-master
Xref: news comp.lang.python:110026
Precedence: bulk
List-Id: General discussion list for the Python programming language <python-list.python.org>
From: Jon LaCour <jal at faxnet.com>
Sender: python-list-admin at python.org
To: python-list at python.org
Subject: Python Problem - Important!
Date: 29 Aug 2000 15:41:00 GMT
Reply-To: jal at faxnet.com

I am beginning development of a very large web application, and I would
like to use Python (no, Zope is not an option).  PyApache seems to be my
best bet, but there is a MASSIVE problem with Python/PyApache that
prevents it from being even marginally useful to me, and to most major
software companies.

My product requires database access, and the database module that I use
for connecting depends on a library called mxDateTime.  This is a very
robust library that is in use all over our system (it has never given us
problems).  Yet, when I use PyApache to connect to a database, I have
major issues.

I have seen this same problem posted to this newsgroup and to the
PyApache mailing list several times from over a year ago, and it appears
to be unresolved.  The essential problem is this: the second time a
module is loaded, Python has cleaned up its dictionaries in its cleanup
mechanism, and does not allow a re-init.  With mxDateTime this gives an
error:

    "TypeError:  call of non-function (type None)"

Essentially, this is a major problem in either the Python internals, or
in PyApache.  After tracing the previous discussions on this issue, it
appears that this is a Python problem.  I am very serious when I say
that this problem *must* be resolved before Python can be taken
seriously for use in web applications, especially when Zope is not an
option.  I require the use of Apache's security features, and several
other Apache extensions.  If anyone knows how to resolve this issue, or
can even point out a way that I can resolve this *myself* I would love
to hear it.

This is the single stumbling block standing in the way of my company
converting almost entirely to Python development, and I am hoping that
python developers will take this bug and smash it quickly.

Thanks in advance, please cc: all responses to my email address at
jal at faxnet.com.

Jonathan LaCour
Developer, VertiSoft

------- end -------



From effbot at telia.com  Tue Aug 29 18:50:24 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 18:50:24 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <14763.57999.57444.678054@beluga.mojam.com>
Message-ID: <003301c011d9$3c1bbc80$766940d5@hagrid>

skip wrote:
> Don't know if this should concern us in preparation for 2.0b1 release, but
> the following came across c.l.py this morning.  

http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470

    "The problem you describe is an artifact of the way mxDateTime 
    tries to reuse the time.time() API available through the 
    standard Python time module"

> Essentially, this is a major problem in either the Python internals, or
> in PyApache.

ah, the art of debugging...

</F>




From mal at lemburg.com  Tue Aug 29 18:46:29 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 18:46:29 +0200
Subject: [Python-Dev] pragmas as callbacks
References: <Pine.LNX.4.10.10008291149040.23391-100000@akbar.nevex.com>
Message-ID: <39ABE8E5.44073A09@lemburg.com>

Greg Wilson wrote:
> 
> > Marc-Andre Lemburg wrote:
> > I'd rather not add complicated semantics to pragmas -- they should be
> > able to set flags, but not much more.
> 
> Greg Wilson writes:
> 
> That's probably what every Fortran compiler vendor said at first --- "just
> a couple of on/off flags".  Then it was, "Set numeric values (like the
> debugging level)".  A full-blown HPF compiler's pragmas are now a complete
> programming language, so that you can (for example) specify how to
> partition one array based on the partitioning in another.
> 
> Same thing happened with the C preprocessor --- more and more directives
> crept in over time.  And the Microsoft C++ compiler.  And I'm sure this
> list's readers could come up with dozens more examples.
>
> Pragmas are a way to give instructions to the interpreter; when you let
> people give something instructions, you're letting them program it, and I
> think it's best to design your mechanism from the start to support that.

I don't get your point: you can "program" the interpreter by
calling various sys module APIs to set interpreter flags already.

Pragmas are needed to tell the compiler what to do with a
source file. They extend the command line flags which are already
available to a more fine-grained mechanism. That's all -- nothing
more.

If a programmer wants to influence compilation globally,
then she would have to set the sys module flags prior to invoking
compile().

(This is already possible using mx.Tools additional sys builtins,
e.g. you can tell the compiler to work in optimizing mode prior
to invoking compile(). Some version of these will most likely go
into 2.1.)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From cgw at fnal.gov  Tue Aug 29 18:48:43 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 11:48:43 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008291637.LAA04186@cj20424-a.reston1.va.home.com>
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
	<39AB06D5.BD99855@nowonder.de>
	<14763.52415.747655.334938@beluga.mojam.com>
	<20000829165221.N500@xs4all.nl>
	<14763.54992.458188.483296@buffalo.fnal.gov>
	<200008291637.LAA04186@cj20424-a.reston1.va.home.com>
Message-ID: <14763.59755.137579.785257@buffalo.fnal.gov>

Guido van Rossum writes:

 > Alas, it could never work.  Look at this:
 > 
 >   for i in a:b:c
 > 
 > Does it mean
 > 
 >   for i in (a:b) : c
 > 
 > or
 > 
 >   for i in a: (b:c)

Of course, it means "for i in the range from a to b-1 with stride c", but as written it's
illegal because you'd need another `:' after the c.  <wink>

 > I'm postponing this discussion until after Python 2.0 final is
 > released -- the feature freeze is real!

Absolutely.  I won't bring this up again, until the appropriate time.




From mal at lemburg.com  Tue Aug 29 18:54:49 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 18:54:49 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <14763.57999.57444.678054@beluga.mojam.com> <003301c011d9$3c1bbc80$766940d5@hagrid>
Message-ID: <39ABEAD9.B106E53E@lemburg.com>

Fredrik Lundh wrote:
> 
> skip wrote:
> > Don't know if this should concern us in preparation for 2.0b1 release, but
> > the following came across c.l.py this morning.
> 
> http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470
> 
>     "The problem you describe is an artifact of the way mxDateTime
>     tries to reuse the time.time() API available through the
>     standard Python time module"
> 

Here is a pre-release version of mx.DateTime which should fix
the problem (the new release will use the top-level mx package
-- it does contain a backward compatibility hack though):

http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip

Please let me know if it fixes your problem... I don't use PyApache.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jal at ns1.quickrecord.com  Tue Aug 29 19:17:17 2000
From: jal at ns1.quickrecord.com (Jonathan LaCour)
Date: Tue, 29 Aug 2000 13:17:17 -0400 (EDT)
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
In-Reply-To: <39ABEAD9.B106E53E@lemburg.com>
Message-ID: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com>

Well, it appears that this version raises a different problem. Do I need
to be running anything higher than python-1.5.2?  Possibly this has
something to do with how I installed this pre-release.  I simply moved 
the old DateTime directory out of the site-packages directory, and
then moved the mx, and DateTime directories from the zip that was
provided into the site-packages directory, and restarted. Here is the
traceback from the apache error log:

patientSearchResults.py failed for 192.168.168.130, reason: the script
raised an unhandled exception. Script's traceback follows:
Traceback (innermost last):
  File "/home/httpd/html/py-bin/patientSearchResults.py", line 3, in ?
    import ODBC.Solid
  File "/usr/lib/python1.5/site-packages/ODBC/__init__.py", line 21, in ?
    import DateTime # mxDateTime package must be installed first !
  File "/usr/lib/python1.5/site-packages/DateTime/__init__.py", line 17, in ?
    from mx.DateTime import *
  File "/usr/lib/python1.5/site-packages/mx/DateTime/__init__.py", line 20, in ?
    from DateTime import *
  File "/usr/lib/python1.5/site-packages/mx/DateTime/DateTime.py", line 8, in ?
    from mxDateTime import *
  File "/usr/lib/python1.5/site-packages/mx/DateTime/mxDateTime/__init__.py", line 12, in ?
    setnowapi(time.time)
NameError: setnowapi


On Tue, 29 Aug 2000, M.-A. Lemburg wrote:

> Fredrik Lundh wrote:
> > 
> > skip wrote:
> > > Don't know if this should concern us in preparation for 2.0b1 release, but
> > > the following came across c.l.py this morning.
> > 
> > http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470
> > 
> >     "The problem you describe is an artifact of the way mxDateTime
> >     tries to reuse the time.time() API available through the
> >     standard Python time module"
> > 
> 
> Here is a pre-release version of mx.DateTime which should fix
> the problem (the new release will use the top-level mx package
> -- it does contain a backward compatibility hack though):
> 
> http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
> 
> Please let me know if it fixes your problem... I don't use PyApache.
> 
> Thanks,
> -- 
> Marc-Andre Lemburg
> ______________________________________________________________________
> Business:                                      http://www.lemburg.com/
> Python Pages:                           http://www.lemburg.com/python/
> 




From gvwilson at nevex.com  Tue Aug 29 19:21:52 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 13:21:52 -0400 (EDT)
Subject: [Python-Dev] Re: pragmas as callbacks
In-Reply-To: <39ABE8E5.44073A09@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008291316590.23391-100000@akbar.nevex.com>

> > > Marc-Andre Lemburg wrote:
> > > I'd rather not add complicated semantics to pragmas -- they should be
> > > able to set flags, but not much more.

> > Greg Wilson writes:
> > Pragmas are a way to give instructions to the interpreter; when you let
> > people give something instructions, you're letting them program it, and I
> > think it's best to design your mechanism from the start to support that.

> Marc-Andre Lemburg:
> I don't get your point: you can "program" the interpreter by
> calling various sys module APIs to set interpreter flags already.
> 
> Pragmas are needed to tell the compiler what to do with a
> source file. They extend the command line flags which are already
> available to a more fine-grained mechanism. That's all -- nothing
> more.

Greg Wilson writes:
I understand, but my experience with other languages indicates that once
you have a way to set the parser's flags from within the source file being
parsed, people are going to want to be able to do it conditionally, i.e.
to set one flag based on the value of another.  Then they're going to want
to see if particular flags have been set to something other than their
default values, and so on.  Pragmas are a way to embed programs for the
parser in the file being parsed.  If we're going to allow this at all, we
will save ourselves a lot of future grief by planning for this now.

Thanks,
Greg




From mal at lemburg.com  Tue Aug 29 19:24:08 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 19:24:08 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com>
Message-ID: <39ABF1B8.426B7A6@lemburg.com>

Jonathan LaCour wrote:
> 
> Well, it appears that this version raises a different problem. Do I need
> to be running anything higher than python-1.5.2?  Possibly this has
> something to do with how I installed this pre-release.  I simply moved
> the old DateTime directory out of the site-packages directory, and
> then moved the mx, and DateTime directories from the zip that was
> provided into the site-packages directory, and restarted. Here is the
> traceback from the apache error log:
> 
> patientSearchResults.py failed for 192.168.168.130, reason: the script
> raised an unhandled exception. Script's traceback follows:
> Traceback (innermost last):
>   File "/home/httpd/html/py-bin/patientSearchResults.py", line 3, in ?
>     import ODBC.Solid
>   File "/usr/lib/python1.5/site-packages/ODBC/__init__.py", line 21, in ?
>     import DateTime # mxDateTime package must be installed first !
>   File "/usr/lib/python1.5/site-packages/DateTime/__init__.py", line 17,
> in ?
>     from mx.DateTime import *
>   File "/usr/lib/python1.5/site-packages/mx/DateTime/__init__.py", line
> 20, in ?    from DateTime import *
>   File "/usr/lib/python1.5/site-packages/mx/DateTime/DateTime.py", line 8,
> in ?
>     from mxDateTime import *
>   File
> "/usr/lib/python1.5/site-packages/mx/DateTime/mxDateTime/__init__.py",
> line 12, in ?
>     setnowapi(time.time)
> NameError: setnowapi

This API is new... could it be that you didn't recompile the
mxDateTime C extension inside the package ?

> On Tue, 29 Aug 2000, M.-A. Lemburg wrote:
> 
> > Fredrik Lundh wrote:
> > >
> > > skip wrote:
> > > > Don't know if this should concern us in preparation for 2.0b1 release, but
> > > > the following came across c.l.py this morning.
> > >
> > > http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470
> > >
> > >     "The problem you describe is an artifact of the way mxDateTime
> > >     tries to reuse the time.time() API available through the
> > >     standard Python time module"
> > >
> >
> > Here is a pre-release version of mx.DateTime which should fix
> > the problem (the new release will use the top-level mx package
> > -- it does contain a backward compatibility hack though):
> >
> > http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
> >
> > Please let me know if it fixes your problem... I don't use PyApache.
> >
> > Thanks,
> > --
> > Marc-Andre Lemburg
> > ______________________________________________________________________
> > Business:                                      http://www.lemburg.com/
> > Python Pages:                           http://www.lemburg.com/python/
> >

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mwh21 at cam.ac.uk  Tue Aug 29 19:34:15 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 29 Aug 2000 18:34:15 +0100
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: Skip Montanaro's message of "Tue, 29 Aug 2000 09:55:49 -0500 (CDT)"
References: <20000828230630.I500@xs4all.nl> <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <14763.52981.603640.415652@beluga.mojam.com>
Message-ID: <m38ztgyns8.fsf@atrus.jesus.cam.ac.uk>

Skip Montanaro <skip at mojam.com> writes:

> One of the original arguments for range literals as I recall was that
> indexing of loops could get more efficient.  The compiler would know that
> [0:100:2] represents a series of integers and could conceivably generate
> more efficient loop indexing code (and so could Python2C and other compilers
> that generated C code).  This argument doesn't seem to be showing up here at
> all.  Does it carry no weight in the face of the relative inscrutability of
> the syntax?

IMHO, no.  A compiler sufficiently smart to optimize range literals
ought to be sufficiently smart to optimize most calls to "range".  At
least, I think so.  I also think the inefficiency of list construction
in Python loops is a red herring; executing the list body involves
going round & round the eval loop, and I'd be amazed if that didn't
dominate (note that - on my system at least - loops involving range
are often (marginally) faster than ones using xrange, presumably due
to the special casing of list[integer] in eval_code2).

Sure, it would be nice if this aspect got optimized, but let's speed up
the rest of the interpreter enough so you can notice first!
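(An editorial sanity check of the claim — numbers are machine-dependent, and the range/xrange distinction is gone in modern Python, so this compares range against a prebuilt list; treat the output as illustrative only:)

```python
import timeit

n = 10_000
# Time the same empty loop with two ways of producing the index sequence.
t_range = timeit.timeit("for i in range(n): pass",
                        globals={"n": n}, number=100)
t_list = timeit.timeit("for i in lst: pass",
                       globals={"lst": list(range(n))}, number=100)

# Either way, the cost is dominated by going round the eval loop.
print(f"range: {t_range:.4f}s   prebuilt list: {t_list:.4f}s")
```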

Cheers,
Michael

-- 
  Very rough; like estimating the productivity of a welder by the
  amount of acetylene used.         -- Paul Svensson, comp.lang.python
    [on the subject of the measuring programmer productivity by LOC]




From mal at lemburg.com  Tue Aug 29 19:41:25 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 19:41:25 +0200
Subject: [Python-Dev] Re: pragmas as callbacks
References: <Pine.LNX.4.10.10008291316590.23391-100000@akbar.nevex.com>
Message-ID: <39ABF5C5.6605CF55@lemburg.com>

Greg Wilson wrote:
> 
> > > > Marc-Andre Lemburg wrote:
> > > > I'd rather not add complicated semantics to pragmas -- they should be
> > > > able to set flags, but not much more.
> 
> > > Greg Wilson writes:
> > > Pragmas are a way to give instructions to the interpreter; when you let
> > > people give something instructions, you're letting them program it, and I
> > > think it's best to design your mechanism from the start to support that.
> 
> > Marc-Andre Lemburg:
> > I don't get your point: you can "program" the interpreter by
> > calling various sys module APIs to set interpreter flags already.
> >
> > Pragmas are needed to tell the compiler what to do with a
> > source file. They extend the command line flags which are already
> > available to a more fine-grained mechanism. That's all -- nothing
> > more.
> 
> Greg Wilson writes:
> I understand, but my experience with other languages indicates that once
> you have a way to set the parser's flags from within the source file being
> parsed, people are going to want to be able to do it conditionally, i.e.
> to set one flag based on the value of another.  Then they're going to want
> to see if particular flags have been set to something other than their
> default values, and so on.  Pragmas are a way to embed programs for the
> parser in the file being parsed.  If we're going to allow this at all, we
> will save ourselves a lot of future grief by planning for this now.

I don't think mixing compilation with execution is a good idea.

If we ever want to add this feature, we can always use a
pragma for it ;-) ...

def mysettings(compiler, locals, globals, target):
    compiler.setoptimization(2)

# Call the above hook for every new compilation block
pragma compiler_hook = "mysettings"

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Tue Aug 29 20:42:41 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 29 Aug 2000 14:42:41 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <14764.1057.909517.977904@bitdiddle.concentric.net>

Does anyone have suggestions for how to detect unbounded recursion in
the Python core on Unix platforms?

Guido assigned me bug 112943 yesterday and gave it priority 9.
http://sourceforge.net/bugs/?func=detailbug&bug_id=112943&group_id=5470

The bug in question causes a core dump on Unix because of a broken
__radd__.  There's another bug (110615) that does that same thing with
recursive invocations of __repr__.

And, of course, there's:
def foo(x): 
    return foo(x)
foo(None)

I believe that these bugs have been fixed on Windows.  Fredrik
confirmed this for one of them, but I don't remember which one.  Would
someone mind confirming and updating the records in the bug tracker?

I don't see an obvious solution.  Is there any way to implement
PyOS_CheckStack on Unix?  I imagine that each platform would have its
own variant and that there is no hope of getting them debugged before
2.0b1. 

We could add some counters in eval_code2 and raise an exception after
some arbitrary limit is reached.  Arbitrary limits seem bad -- and any
limit would have to be fairly restrictive because each variation on
the bug involves a different number of C function calls between
eval_code2 invocations.
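The counter approach, as it exists for Python-level recursion today, can be
sketched like so (the limit value is arbitrary; C-level recursion through
__radd__ and friends is exactly the part such a counter misses):

```python
import sys

# An arbitrary counter limit, checked on every interpreter-level call.
# This catches the pure-Python foo() case, but not recursion that
# bounces through C code between eval_code2 invocations.
sys.setrecursionlimit(1000)

def foo(x):
    return foo(x)

try:
    foo(None)
    caught = False
except RuntimeError:   # RecursionError in modern Python, a subclass
    caught = True

print(caught)
```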

We could special case each of the __special__ methods in C to raise an
exception upon recursive calls with the same arguments, but this is
complicated and expensive.  It does not catch the simplest version, 
the foo function above.

Does stackless raise exceptions cleanly on each of these bugs?  That
would be an argument worth mentioning in the PEP, eh?

Any other suggestions are welcome.

Jeremy



From effbot at telia.com  Tue Aug 29 21:15:30 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 21:15:30 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <00af01c011ed$86671dc0$766940d5@hagrid>

jeremy wrote:
>  Guido assigned me bug 112943 yesterday and gave it priority 9.
> http://sourceforge.net/bugs/?func=detailbug&bug_id=112943&group_id=5470
> 
> The bug in question causes a core dump on Unix because of a broken
> __radd__.  There's another bug (110615) that does that same thing with
> recursive invocations of __repr__.
> 
> And, of course, there's:
> def foo(x): 
>     return foo(x)
> foo(None)
> 
> I believe that these bugs have been fixed on Windows.  Fredrik
> confirmed this for one of them, but I don't remember which one.  Would
> someone mind confirming and updating the records in the bug tracker?

my checkstack patch fixes #110615 and #112943 on windows.
cannot login to sourceforge right now, so I cannot update the
descriptions.

> I don't see an obvious solution.  Is there any way to implement
> PyOS_CheckStack on Unix?

not that I know...  you better get a real operating system ;-)

</F>




From mal at lemburg.com  Tue Aug 29 21:26:52 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 21:26:52 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <39AC0E7C.922536AA@lemburg.com>

Jeremy Hylton wrote:
> 
> Does anyone have suggestions for how to detect unbounded recursion in
> the Python core on Unix platforms?
> 
> Guido assigned me bug 112943 yesterday and gave it priority 9.
> http://sourceforge.net/bugs/?func=detailbug&bug_id=112943&group_id=5470
> 
> The bug in question causes a core dump on Unix because of a broken
> __radd__.  There's another bug (110615) that does that same thing with
> recursive invocations of __repr__.
> 
> And, of course, there's:
> def foo(x):
>     return foo(x)
> foo(None)
> 
> I believe that these bugs have been fixed on Windows.  Fredrik
> confirmed this for one of them, but I don't remember which one.  Would
> someone mind confirming and updating the records in the bug tracker?
> 
> I don't see an obvious solution.  Is there any way to implement
> PyOS_CheckStack on Unix?  I imagine that each platform would have its
> own variant and that there is no hope of getting them debugged before
> 2.0b1.

I've looked around in the include files for Linux but haven't
found any APIs which could be used to check the stack size.
Not even getrusage() returns anything useful for the current
stack size.

For the foo() example I found that on my machine the core dump
happens at depth 9821 (counted from 0), so setting the recursion
limit to something around 9000 should fix it at least for
Linux2.

> We could add some counters in eval_code2 and raise an exception after
> some arbitrary limit is reached.  Arbitrary limits seem bad -- and any
> limit would have to be fairly restrictive because each variation on
> the bug involves a different number of C function calls between
> eval_code2 invocations.
> 
> We could special case each of the __special__ methods in C to raise an
> exception upon recursive calls with the same arguments, but this is
> complicated and expensive.  It does not catch the simplest version,
> the foo function above.
> 
> Does stackless raise exceptions cleanly on each of these bugs?  That
> would be an argument worth mentioning in the PEP, eh?
> 
> Any other suggestions are welcome.
> 
> Jeremy
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Tue Aug 29 21:40:49 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 29 Aug 2000 15:40:49 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AC0E7C.922536AA@lemburg.com>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
Message-ID: <14764.4545.972459.760991@bitdiddle.concentric.net>

>>>>> "MAL" == M -A Lemburg <mal at lemburg.com> writes:

  >> I don't see an obvious solution.  Is there any way to implement
  >> PyOS_CheckStack on Unix?  I imagine that each platform would have
  >> its own variant and that there is no hope of getting them
  >> debugged before 2.0b1.

  MAL> I've looked around in the include files for Linux but haven't
  MAL> found any APIs which could be used to check the stack size.
  MAL> Not even getrusage() returns anything useful for the current
  MAL> stack size.

Right.  

  MAL> For the foo() example I found that on my machine the core dump
  MAL> happens at depth 9821 (counted from 0), so setting the
  MAL> recursion limit to something around 9000 should fix it at least
  MAL> for Linux2.

Right.  I had forgotten about the MAX_RECURSION_LIMIT.  It would
probably be better to set the limit lower on Linux only, right?  If
so, what's the cleanest way to make the value depend on the platform?

Jeremy



From mal at lemburg.com  Tue Aug 29 21:42:08 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 21:42:08 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
		<39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net>
Message-ID: <39AC1210.18703B0B@lemburg.com>

Jeremy Hylton wrote:
> 
> >>>>> "MAL" == M -A Lemburg <mal at lemburg.com> writes:
> 
>   >> I don't see an obvious solution.  Is there any way to implement
>   >> PyOS_CheckStack on Unix?  I imagine that each platform would have
>   >> its own variant and that there is no hope of getting them
>   >> debugged before 2.0b1.
> 
>   MAL> I've looked around in the include files for Linux but haven't
>   MAL> found any APIs which could be used to check the stack size.
>   MAL> Not even getrusage() returns anything useful for the current
>   MAL> stack size.
> 
> Right.
> 
>   MAL> For the foo() example I found that on my machine the core dump
>   MAL> happens at depth 9821 (counted from 0), so setting the
>   MAL> recursion limit to something around 9000 should fix it at least
>   MAL> for Linux2.
> 
> Right.  I had forgotten about the MAX_RECURSION_LIMIT.  It would
> probably be better to set the limit lower on Linux only, right?  If
> so, what's the cleanest way to make the value depend on the platform?

Perhaps a naive test in the configure script might help. I used
the following script to determine the limit:

import resource
i = 0    
def foo(x):
    global i
    print i,resource.getrusage(resource.RUSAGE_SELF)   
    i = i + 1
    foo(x)
foo(None)

Perhaps a configure script could emulate the stack requirements
of eval_code2() by declaring a buffer of a certain size.
The script would then run in a similar way to the one
above, printing the current stack depth, and then dump core at
some point. The configure script would then have to remove the
core file and use the last written depth number as the basis
for setting the MAX_RECURSION_LIMIT.
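That probe can be simulated in modern Python without an actual core dump,
substituting a RecursionError for the segfault (all numbers and names here
are illustrative):

```python
import subprocess, sys

# Child prints its depth at every call; the parent reads the last
# number the child managed to write before blowing up.  Here the
# blow-up is a RecursionError rather than a real core dump.
probe = r"""
import sys
sys.setrecursionlimit(2000)
i = 0
def foo():
    global i
    print(i)
    i += 1
    foo()
try:
    foo()
except RecursionError:
    pass
"""
out = subprocess.run([sys.executable, "-c", probe],
                     capture_output=True, text=True).stdout
last_depth = int(out.strip().splitlines()[-1])
print(last_depth)
```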

E.g. for the above Python script I get:

9818 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
9819 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
9820 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
9821 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
Segmentation fault (core dumped)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From ping at lfw.org  Tue Aug 29 22:09:46 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:09:46 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>
Message-ID: <Pine.LNX.4.10.10008291508500.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Greg Wilson wrote:
> The problem with using ellipsis is that there's no obvious way to include
> a stride --- how do you hit every second (or n'th) element, rather than
> every element?

As explained in the examples i posted,

    1, 3 .. 20

could produce

    (1, 3, 5, 7, 9, 11, 13, 15, 17, 19)
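The proposed semantics can be pinned down with a small helper (the name
steprange is made up for illustration):

```python
def steprange(first, second, stop):
    """Sketch of the proposed '1, 3 .. 20' form: the first two
    elements fix the start and the step; stop is exclusive."""
    return tuple(range(first, stop, second - first))

print(steprange(1, 3, 20))  # -> (1, 3, 5, 7, 9, 11, 13, 15, 17, 19)
```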


-- ?!ng




From thomas at xs4all.net  Tue Aug 29 21:49:12 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 21:49:12 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.1057.909517.977904@bitdiddle.concentric.net>; from jeremy@beopen.com on Tue, Aug 29, 2000 at 02:42:41PM -0400
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <20000829214912.O500@xs4all.nl>

On Tue, Aug 29, 2000 at 02:42:41PM -0400, Jeremy Hylton wrote:

> Is there any way to implement PyOS_CheckStack on Unix?  I imagine that
> each platform would have its own variant and that there is no hope of
> getting them debugged before 2.0b1.

I can think of three mechanisms: using getrusage() and getrlimit() to find
out the current stack size and the stack limit is most likely to give
accurate numbers, but it is only available on most UNIX systems, not all of
them. (I hear there are systems that don't implement getrusage/getrlimit ;-)

int PyOS_CheckStack(void)
{
    struct rlimit rlim;
    struct rusage rusage;

    if (getrusage(RUSAGE_SELF, &rusage) != 0)
        return 0; /* getrusage failed -- ignore, or raise an error? */
    if (getrlimit(RLIMIT_STACK, &rlim) != 0)
        return 0; /* ditto */
    return rlim.rlim_cur > rusage.ru_isrss + PYOS_STACK_MARGIN;
}

(Note that it's probably necessary to repeat the getrlimit as well as the
getrusage, because even the 'hard' limit can change -- a Python program can
change the limits using the 'resource' module.) There are currently no
autoconf checks for rlimit/rusage, but we can add those without problem.
(and enable the resource module automagically while we're at it ;)

If that fails, I don't think there is a way to get the stack limit (unless
it's in platform-dependant ways) but there might be a way to get the
approximate size of the stack by comparing the address of a local variable
with the stored address of a local variable set at the start of Python.
Something like

static long stack_start_addr;

[... in some init function ...]
    int dummy;
    stack_start_addr = (long) &dummy;
[ or better yet, use a real variable from that function, but one that won't
get optimized away (or you might lose that optimization) ]

#define PY_STACK_LIMIT 0x200000 /* 2Mbyte */

int PyOS_CheckStack(void)
{
    int dummy;
    return abs(stack_start_addr - (long)&dummy) < PY_STACK_LIMIT;
}

This is definitely sub-optimal, with the fixed stack limit, which might be
either too high or too low. Note that the abs() is necessary to accommodate
both stacks that grow downwards and those that grow upwards, though I'm
hard-pressed at the moment to name a UNIX system with an upwards-growing
stack. And this solution is likely to get bitten in the unshapely behind by
optimizing, too-smart-for-their-own-good compilers, possibly requiring a
'volatile' qualifier to make them keep their hands off of it.

But the final solution, using alloca() like the Windows check does, is even
less portable... alloca() isn't available on some systems (more than
getrlimit isn't available on, I think, but the two sets are likely to
intersect) and I've heard rumours that on some systems it's even an alias
for malloc(), leading to memory leaks and other weird behaviour.

I'm thinking that a combination of #1 and #2 is best, where #1 is used when
getrlimit/getrusage are available, but #2 if they are not. However, I'm not
sure if either works, so it's a bit soon for that kind of thoughts :-)

> Does stackless raise exceptions cleanly on each of these bugs?  That
> would be an argument worth mentioning in the PEP, eh?

No, I don't think it does. Stackless gets bitten much later by recursive
behaviour, though, and just retains the current 'recursion depth' counter,
possibly set a bit higher. (I'm not sure, but I'm sure a true stackophobe
will clarify ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From bwarsaw at beopen.com  Tue Aug 29 21:52:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 29 Aug 2000 15:52:32 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
	<14764.4545.972459.760991@bitdiddle.concentric.net>
	<39AC1210.18703B0B@lemburg.com>
Message-ID: <14764.5248.979275.341242@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    |     print i,resource.getrusage(resource.RUSAGE_SELF)   

My experience echoes yours here, MAL -- I've never seen anything
from getrusage() that would be useful in this context. :/

A configure script test would be useful, but you'd have to build a
minimal Python interpreter first to run the script, wouldn't you?

-Barry



From bwarsaw at beopen.com  Tue Aug 29 21:53:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 29 Aug 2000 15:53:32 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>
	<Pine.LNX.4.10.10008291508500.302-100000@server1.lfw.org>
Message-ID: <14764.5308.529148.181749@anthem.concentric.net>

>>>>> "KY" == Ka-Ping Yee <ping at lfw.org> writes:

    KY> As explained in the examples i posted,

    KY>     1, 3 .. 20

What would

    1, 3, 7 .. 99

do? :)

-Barry



From ping at lfw.org  Tue Aug 29 22:20:03 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:20:03 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14764.5308.529148.181749@anthem.concentric.net>
Message-ID: <Pine.LNX.4.10.10008291517380.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Barry A. Warsaw wrote:
> 
> What would
> 
>     1, 3, 7 .. 99
> 
> do? :)

    ValueError: too many elements on left side of ".." operator

or

    ValueError: at most two elements permitted on left side of ".."

You get the idea.


-- ?!ng




From prescod at prescod.net  Tue Aug 29 22:00:55 2000
From: prescod at prescod.net (Paul)
Date: Tue, 29 Aug 2000 15:00:55 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14764.5308.529148.181749@anthem.concentric.net>
Message-ID: <Pine.LNX.4.21.0008291457410.6330-100000@amati.techno.com>

On Tue, 29 Aug 2000, Barry A. Warsaw wrote:

> 
> >>>>> "KY" == Ka-Ping Yee <ping at lfw.org> writes:
> 
>     KY> As explained in the examples i posted,
> 
>     KY>     1, 3 .. 20
> 
> What would
> 
>     1, 3, 7 .. 99

consider:

rangeRecognizers.register( primeHandler )
rangeRecognizers.register( fibHandler )
rangeRecognizers.register( compositeHandler )
rangeRecognizers.register( randomHandler )

(you want to fall back on random handler last so it needs to be registered
last)

 Paul Prescod





From thomas at xs4all.net  Tue Aug 29 22:02:27 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:02:27 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.5248.979275.341242@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 29, 2000 at 03:52:32PM -0400
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net> <39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net>
Message-ID: <20000829220226.P500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:52:32PM -0400, Barry A. Warsaw wrote:

> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     |     print i,resource.getrusage(resource.RUSAGE_SELF)   
> 
> My experience echoes yours here, MAL -- I've never seen anything
> from getrusage() that would be useful in this context. :/

Ack. indeed. Nevermind my longer post then, getrusage() is usageless. (At
least on Linux.)

> A configure script test would be useful, but you'd have to build a
> minimal Python interpreter first to run the script, wouldn't you?

Nah, as long as you can test how many recursions it would take to run out of
stack... But it's still not optimal: we're doing a check at compiletime (or
rather, configure-time) on a limit which can change during the course of a
single process, nevermind a single installation ;P And I don't really like
doing a configure test that's just a program that tries to run out of
memory... it might turn out troublesome for systems with decent sized
stacks.

(getrlimit *does* work, so if we have getrlimit, we can 'calculate' the
maximum number of recursions from that.)
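For illustration, getrlimit is exposed to Python via the resource module, so
the calculation would look roughly like this (the per-frame byte count is a
guess, not a measured value):

```python
import resource

# Soft limit on the stack segment, in bytes (may be RLIM_INFINITY).
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

FRAME_ESTIMATE = 2048   # hypothetical bytes of C stack per recursion
SAFETY_MARGIN = 2       # stay well clear of the hard edge

if soft == resource.RLIM_INFINITY:
    max_recursions = 10000          # fall back to an arbitrary limit
else:
    max_recursions = soft // (FRAME_ESTIMATE * SAFETY_MARGIN)

print(max_recursions)
```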

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Tue Aug 29 22:05:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:05:13 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000829220226.P500@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 29, 2000 at 10:02:27PM +0200
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net> <39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net> <20000829220226.P500@xs4all.nl>
Message-ID: <20000829220513.Q500@xs4all.nl>

On Tue, Aug 29, 2000 at 10:02:27PM +0200, Thomas Wouters wrote:

> Ack. indeed. Nevermind my longer post then, getrusage() is usageless. (At
> least on Linux.)

And on BSDI, too.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Tue Aug 29 22:05:32 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 29 Aug 2000 16:05:32 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008291517380.302-100000@server1.lfw.org>
References: <14764.5308.529148.181749@anthem.concentric.net>
	<Pine.LNX.4.10.10008291517380.302-100000@server1.lfw.org>
Message-ID: <14764.6028.121193.410374@cj42289-a.reston1.va.home.com>

On Tue, 29 Aug 2000, Barry A. Warsaw wrote:
 > What would
 > 
 >     1, 3, 7 .. 99
 > 
 > do? :)

Ka-Ping Yee writes:
 >     ValueError: too many elements on left side of ".." operator
...
 >     ValueError: at most two elements permitted on left side of ".."

  Looks like a SyntaxError to me.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From mal at lemburg.com  Tue Aug 29 22:10:02 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 22:10:02 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
		<39AC0E7C.922536AA@lemburg.com>
		<14764.4545.972459.760991@bitdiddle.concentric.net>
		<39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net>
Message-ID: <39AC189A.95846E0@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     |     print i,resource.getrusage(resource.RUSAGE_SELF)
> 
> My experience echoes yours here, MAL -- I've never seen anything
> from getrusage() that would be useful in this context. :/
> 
> A configure script test would be useful, but you'd have to build a
> minimal Python interpreter first to run the script, wouldn't you?

I just experimented with this a bit: I can't seem to get
a plain C program to behave like the Python interpreter.

The C program can suck memory in large chunks and consume
great amounts of stack; it just doesn't dump core... (I don't
know what I'm doing wrong here).

Yet the Python 2.0 interpreter only uses about 5MB of
memory at the time it dumps core -- seems strange to me,
since the plain C program can easily consume more than 20Megs
and still continues to run.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Tue Aug 29 22:09:29 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 29 Aug 2000 16:09:29 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000829220226.P500@xs4all.nl>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
	<14764.4545.972459.760991@bitdiddle.concentric.net>
	<39AC1210.18703B0B@lemburg.com>
	<14764.5248.979275.341242@anthem.concentric.net>
	<20000829220226.P500@xs4all.nl>
Message-ID: <14764.6265.460762.479910@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > (getrlimit *does* work, so if we have getrlimit, we can 'calculate' the
 > maximum number of recursions from that.)

  Still no go -- we can calculate the number of recursions for a
particular call frame size (or expected mix of frame sizes, which is
really the same), but we can't predict recursive behavior inside a C
extension, which is a significant part of the problem (witness the SRE
experience).  That's why PyOS_CheckStack() actually has to do more
than test a counter -- if the counter is low but the call frames are
larger than our estimate, it won't help.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From skip at mojam.com  Tue Aug 29 22:12:57 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 15:12:57 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.1057.909517.977904@bitdiddle.concentric.net>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <14764.6473.859814.216436@beluga.mojam.com>

    Jeremy> Does anyone have suggestions for how to detect unbounded
    Jeremy> recursion in the Python core on Unix platforms?

On most (all?) processors in common usage, the stack grows down toward the
heap and the heap grows upward, so what you really want to do is detect that
collision.  brk and sbrk are used to manipulate the end of the heap.  A
local variable in the current scope should be able to tell you roughly where
the top of stack is.

Of course, you really can't call brk or sbrk safely.  You have to leave that
to malloc.  You might get some ideas of how to estimate the current end of
the heap by peering at the GNU malloc code.

This might be a good reason to experiment with Vladimir's obmalloc package.
It could easily be modified to remember the largest machine address it
returns via malloc or realloc calls.  That value could be compared with the
current top of stack.  If obmalloc brks() memory back to the system (I've
never looked at it - I'm just guessing) it could lower the saved value to
the last address in the block below the just recycled block.
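A toy illustration of that high-water-mark idea in Python (relying on the
CPython detail that id() returns an object's address; a real version would
sit inside the allocator itself):

```python
high_water = 0   # largest address handed out so far

def tracked_alloc(nbytes):
    """Toy stand-in for a malloc that records its high-water mark."""
    global high_water
    buf = bytearray(nbytes)
    # id() is the object's memory address in CPython -- an
    # implementation detail, good enough for illustration.
    high_water = max(high_water, id(buf) + nbytes)
    return buf

blocks = [tracked_alloc(4096) for _ in range(10)]
print(hex(high_water))
```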

(I agree this probably won't be done very well before 2.0 release.)

Skip



From tim_one at email.msn.com  Tue Aug 29 22:14:16 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 29 Aug 2000 16:14:16 -0400
Subject: [Python-Dev] SETL (was: Lukewarm about range literals)
In-Reply-To: <200008291253.HAA32332@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEMEHCAA.tim_one@email.msn.com>

[Tim]
> ...
> Have always wondered why Python didn't have that [ABC's boolean
> quantifiers] too; I ask that every year, but so far Guido has never
> answered it <wink>.

[Guido]
> I don't recall you asking me that even *once* before now.  Proof,
> please?

That's too time-consuming until DejaNews regains its memory.  I never asked
*directly*, it simply comes up at least once a year on c.l.py (and has since
the old days!), and then I always mention that it comes up every year but
that Guido never jumps into those threads <wink>.  The oldest reference I
can find in DejaNews today is just from January 1st of this year, at the end
of

    http://www.deja.com/getdoc.xp?AN=567219971

There it got mentioned offhandedly.  Much earlier threads were near-PEP'ish
in their development of how this could work in Python.  I'll attach the
earliest one I have in my personal email archive, from a bit over 4 years
ago.  All my personal email much before that got lost in KSR's bankruptcy
bit bucket.

> Anyway, the answer is that I saw diminishing returns from adding more
> keywords and syntax.

Yes, I've channeled that too -- that's why I never bugged you directly
<wink>.



-----Original Message-----
From: python-list-request at cwi.nl [mailto:python-list-request at cwi.nl]
Sent: Saturday, August 03, 1996 4:42 PM
To: Marc-Andre Lemburg; python-list at cwi.nl
Subject: RE: \exists and \forall in Python ?!


> [Marc-Andre Lemburg]
> ... [suggesting "\exists" & "\forall" quantifiers] ...

Python took several ideas from CWI's ABC language, and this is one that
didn't make the cut.  I'd be interested to hear Guido's thoughts on this!
They're certainly very nice to have, although I wouldn't say they're of
core importance.  But then a lot of "nice to have but hardly crucial"
features did survive the cut (like, e.g., "x < y < z" as shorthand for
"x < y and y < z"), and it's never clear where to draw the line.

In ABC, the additional keywords were "some", "each", "no" and "has", as in
(importing the ABC semantics into a virtual Python):

if some d in range(2,n) has n % d == 0:
    print n, "not prime; it's divisible by", d
else:
    print n, "is prime"

or

if no d in range(2,n) has n % d == 0:
    print n, "is prime"
else:
    print n, "not prime; it's divisible by", d

or

if each d in range(2,n) has n % d == 0:
    print n, "is <= 2; test vacuously true"
else:
    print n, "is not divisible by, e.g.,", d

So "some" is a friendly spelling of "there exists", "no" of "not there
exists", and "each" of "for all".  In addition to testing the condition,
"some" also bound the test vrbls to "the first"  witness if there was one,
and
"no" and "each" to the first counterexample if there was one.  I think ABC
got
that all exactly right, so (a) it's the right model to follow if Python were
to add this, and (b) the (very useful!) business of binding the test vrbls
if
& only if the test succeeds (for "some") or fails (for "no" and "each")
makes
it much harder to fake (comprehensibly & efficiently) via map & reduce
tricks.
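As an aside, the "some ... has ..." form, including the witness binding, can
be approximated in present-day Python with a generator expression and next():

```python
n = 91

# "if some d in range(2, n) has n % d == 0", with d bound to the
# first witness when the test succeeds, None otherwise.
d = next((d for d in range(2, n) if n % d == 0), None)
if d is not None:
    print(n, "not prime; it's divisible by", d)
else:
    print(n, "is prime")
```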

side-effects-are-your-friends-ly y'rs  - tim

Tim Peters    tim_one at msn.com, tim at dragonsys.com
not speaking for Dragon Systems Inc.





From skip at mojam.com  Tue Aug 29 22:17:10 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 15:17:10 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000829220226.P500@xs4all.nl>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
	<14764.4545.972459.760991@bitdiddle.concentric.net>
	<39AC1210.18703B0B@lemburg.com>
	<14764.5248.979275.341242@anthem.concentric.net>
	<20000829220226.P500@xs4all.nl>
Message-ID: <14764.6726.985174.85964@beluga.mojam.com>

    Thomas> Nah, as long as you can test how many recursions it would take
    Thomas> to run out of stack... But it's still not optimal: we're doing a
    Thomas> check at compiletime (or rather, configure-time) on a limit
    Thomas> which can change during the course of a single process,
    Thomas> nevermind a single installation ;P And I don't really like doing
    Thomas> a configure test that's just a program that tries to run out of
    Thomas> memory... it might turn out troublesome for systems with decent
    Thomas> sized stacks.

Not to mention which you'll get different responses depending on how heavily
the system is using VM, right?  If you are unlucky enough to build on a
memory-rich system then copy the python interpreter over to a memory-starved
system (or just run the interpreter while you have Emacs, StarOffice and
Netscape running), you may well run out of virtual memory a lot sooner than
your configure script thought.

Skip



From skip at mojam.com  Tue Aug 29 22:19:16 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 15:19:16 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AC189A.95846E0@lemburg.com>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
	<14764.4545.972459.760991@bitdiddle.concentric.net>
	<39AC1210.18703B0B@lemburg.com>
	<14764.5248.979275.341242@anthem.concentric.net>
	<39AC189A.95846E0@lemburg.com>
Message-ID: <14764.6852.672716.587046@beluga.mojam.com>

    MAL> The C program can suck memory in large chunks and consume great
    MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
    MAL> doing wrong here).

Are you overwriting all that memory you malloc with random junk?  If not,
the stack and the heap may have collided but not corrupted each other.

Skip



From ping at lfw.org  Tue Aug 29 22:43:23 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:43:23 -0500 (CDT)
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings 
In-Reply-To: <m13Tf69-000wcDC@swing.co.at>
Message-ID: <Pine.LNX.4.10.10008291524310.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Christian Tanzer wrote:
> Triple quoted strings work -- that's what I'm constantly using. The
> downside is, that the docstrings either contain spurious white space
> or it messes up the layout of the code (if you start subsequent lines
> in the first column).

The "inspect" module (see http://www.lfw.org/python/) handles this nicely.

    Python 1.5.2 (#4, Jul 21 2000, 18:28:23) [C] on sunos5
    Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
    >>> import inspect
    >>> class Foo:
    ...     """First line.
    ...        Second line.
    ...            An indented line.
    ...        Some more text."""
    ... 
    >>> inspect.getdoc(Foo)
    'First line.\012Second line.\012    An indented line.\012Some more text.'
    >>> print _
    First line.
    Second line.
        An indented line.
    Some more text.
    >>>        

I suggested "inspect.py" for the standard library quite some time ago
(long before the feature freeze, and before ascii.py, which has since
made it in).  MAL responded pretty enthusiastically
(http://www.python.org/pipermail/python-dev/2000-July/013511.html).
Could i request a little more feedback from others?

It's also quite handy for other purposes.  It can get the source
code for a given function or class:

    >>> func = inspect.getdoc
    >>> inspect.getdoc(func)
    'Get the documentation string for an object.'
    >>> inspect.getfile(func)
    'inspect.py'
    >>> lines, lineno = inspect.getsource(func)
    >>> print string.join(lines)
    def getdoc(object):
         """Get the documentation string for an object."""
         if not hasattr(object, "__doc__"):
             raise TypeError, "arg has no __doc__ attribute"
         if object.__doc__:
             lines = string.split(string.expandtabs(object.__doc__), "\n")
             margin = None
             for line in lines[1:]:
                 content = len(string.lstrip(line))
                 if not content: continue
                 indent = len(line) - content
                 if margin is None: margin = indent
                 else: margin = min(margin, indent)
             if margin is not None:
                 for i in range(1, len(lines)): lines[i] = lines[i][margin:]
             return string.join(lines, "\n")

And it can get the argument spec for a function:

    >>> inspect.getargspec(func)
    (['object'], None, None, None)
    >>> apply(inspect.formatargspec, _)
    '(object)'

Here's a slightly more challenging example:

    >>> def func(a, (b, c), (d, (e, f), (g,)), h=3): pass
    ... 
    >>> inspect.getargspec(func)
    (['a', ['b', 'c'], ['d', ['e', 'f'], ['g']], 'h'], None, None, (3,))
    >>> apply(inspect.formatargspec, _)
    '(a, (b, c), (d, (e, f), (g,)), h=3)'
    >>> 



-- ?!ng




From cgw at fnal.gov  Tue Aug 29 22:22:03 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 15:22:03 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6265.460762.479910@cj42289-a.reston1.va.home.com>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
	<14764.4545.972459.760991@bitdiddle.concentric.net>
	<39AC1210.18703B0B@lemburg.com>
	<14764.5248.979275.341242@anthem.concentric.net>
	<20000829220226.P500@xs4all.nl>
	<14764.6265.460762.479910@cj42289-a.reston1.va.home.com>
Message-ID: <14764.7019.100780.127130@buffalo.fnal.gov>

The situation on Linux is damn annoying, because, from a few minutes
of rummaging around in the kernel it's clear that this information
*is* available to the kernel, just not exposed to the user in a useful
way.  The file /proc/<pid>/statm [1] gives as field 5 "drs", which is
"number of pages of data/stack".  If only the data and stack weren't
lumped together in this number, we could actually do something with
it!

[1]: Present on Linux 2.2 only.  See /usr/src/linux/Documentation/proc.txt
for description of this (fairly obscure) file.




From mal at lemburg.com  Tue Aug 29 22:24:00 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 22:24:00 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
		<39AC0E7C.922536AA@lemburg.com>
		<14764.4545.972459.760991@bitdiddle.concentric.net>
		<39AC1210.18703B0B@lemburg.com>
		<14764.5248.979275.341242@anthem.concentric.net>
		<39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com>
Message-ID: <39AC1BE0.FFAA9100@lemburg.com>

Skip Montanaro wrote:
> 
>     MAL> The C program can suck memory in large chunks and consume great
>     MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
>     MAL> doing wrong here).
> 
> Are you overwriting all that memory you malloc with random junk?  If not,
> the stack and the heap may have collided but not corrupted each other.

Not random junk, but all 1s:

#include <stdio.h>
#include <string.h>

void recurse(int depth)
{
    char buffer[2048];
    memset(buffer, 1, sizeof(buffer));

    /* Call recursively */
    printf("%d\n", depth);
    recurse(depth + 1);
}

int main(void)
{
    recurse(0);
    return 0;
}

Perhaps I need to go up a bit on the stack to trigger the
collision (i.e. go down two levels, then up one, etc.) ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From ping at lfw.org  Tue Aug 29 22:49:28 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:49:28 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14764.6028.121193.410374@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008291545410.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Fred L. Drake, Jr. wrote:
> Ka-Ping Yee writes:
>  >     ValueError: too many elements on left side of ".." operator
> ...
>  >     ValueError: at most two elements permitted on left side of ".."
> 
>   Looks like a SyntaxError to me.  ;)

I would have called "\xgh" a SyntaxError too, but Guido argued
convincingly that it's consistently ValueError for bad literals.
So i'm sticking with that.  See the thread of replies to

    http://www.python.org/pipermail/python-dev/2000-August/014629.html


-- ?!ng




From thomas at xs4all.net  Tue Aug 29 22:26:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:26:53 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6852.672716.587046@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 29, 2000 at 03:19:16PM -0500
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net> <39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net> <39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com>
Message-ID: <20000829222653.R500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:19:16PM -0500, Skip Montanaro wrote:

>     MAL> The C program can suck memory in large chunks and consume great
>     MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
>     MAL> doing wrong here).

Are you sure you are consuming *stack* ?

> Are you overwriting all that memory you malloc with random junk?  If not,
> the stack and the heap may have collided but not corrupted each other.

malloc() does not consume stackspace, it consumes heapspace. Don't bother
using malloc(). You have to allocate huge tracts o' land in 'automatic'
variables, or use alloca() (which isn't portable.) Depending on your arch,
you might need to actually write to every, ooh, 1024th int or so.

{
    int *spam[0x2000];
	(etc)
}

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Tue Aug 29 22:26:43 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 29 Aug 2000 16:26:43 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008291545410.302-100000@server1.lfw.org>
References: <14764.6028.121193.410374@cj42289-a.reston1.va.home.com>
	<Pine.LNX.4.10.10008291545410.302-100000@server1.lfw.org>
Message-ID: <14764.7299.991437.132621@cj42289-a.reston1.va.home.com>

Ka-Ping Yee writes:
 > I would have called "\xgh" a SyntaxError too, but Guido argued
 > convincingly that it's consistently ValueError for bad literals.

  I understand the idea about bad literals.  I don't think that's what
this is.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Aug 29 22:40:16 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 22:40:16 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>	<39AC0E7C.922536AA@lemburg.com>	<14764.4545.972459.760991@bitdiddle.concentric.net>	<39AC1210.18703B0B@lemburg.com>	<14764.5248.979275.341242@anthem.concentric.net>	<39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com> <39AC1BE0.FFAA9100@lemburg.com>
Message-ID: <00eb01c011f9$59d47a80$766940d5@hagrid>

mal wrote:
> #include <stdio.h>
> #include <string.h>
> 
> void recurse(int depth)
> {
>     char buffer[2048];
>     memset(buffer, 1, sizeof(buffer));
> 
>     /* Call recursively */
>     printf("%d\n", depth);
>     recurse(depth + 1);
> }
> 
> int main(void)
> {
>     recurse(0);
>     return 0;
> }
> 
> Perhaps I need to go up a bit on the stack to trigger the
> collision (i.e. go down two levels, then up one, etc.) ?!

or maybe the optimizer removed your buffer variable?

try printing the buffer address, to see how much memory
you're really consuming here.

     printf("%p %d\n", buffer, depth);

</F>




From thomas at xs4all.net  Tue Aug 29 22:31:08 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:31:08 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6473.859814.216436@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 29, 2000 at 03:12:57PM -0500
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <14764.6473.859814.216436@beluga.mojam.com>
Message-ID: <20000829223108.S500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:12:57PM -0500, Skip Montanaro wrote:

> On most (all?) processors in common usage, the stack grows down toward the
> heap and the heap grows upward, so what you really want to do is detect that
> collision.  brk and sbrk are used to manipulate the end of the heap.  A
> local variable in the current scope should be able to tell you roughly where
> the top of stack is.

I don't think that'll help, because the limit isn't the actual (physical)
memory limit, but mostly just administrative limits. 'limit' or 'limits',
depending on your shell.

> current top of stack.  If obmalloc brks() memory back to the system (I've
> never looked at it - I'm just guessing) it could lower the saved value to
> the last address in the block below the just recycled block.

Last I looked, obmalloc() worked on top of the normal system malloc (or its
replacement if you provide one) and doesn't brk/sbrk itself (thank god --
that would mean nastiness if extension modules or such used malloc, or if
python were embedded into a system using malloc!)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Tue Aug 29 22:39:18 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 22:39:18 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>	<39AC0E7C.922536AA@lemburg.com>	<14764.4545.972459.760991@bitdiddle.concentric.net>	<39AC1210.18703B0B@lemburg.com>	<14764.5248.979275.341242@anthem.concentric.net>	<39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com> <39AC1BE0.FFAA9100@lemburg.com> <00eb01c011f9$59d47a80$766940d5@hagrid>
Message-ID: <39AC1F76.41CCED9@lemburg.com>

Fredrik Lundh wrote:
> 
> ...
> 
> or maybe the optimizer removed your buffer variable?
> 
> try printing the buffer address, to see how much memory
> you're really consuming here.
> 
>      printf("%p %d\n", buffer, depth);

I got some more insight using:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int checkstack(int depth)
{
    if (depth <= 0)
	return 0;
    return checkstack(depth - 1);
}

void recurse(int depth)
{
    char stack[2048];
    char *heap;

    memset(stack, depth % 256, sizeof(stack));
    heap = (char*) malloc(2048);

    /* Call recursively */
    printf("stack %p heap %p depth %d\n", stack, heap, depth);
    checkstack(depth);
    recurse(depth + 1);
}

int main(void)
{
    recurse(0);
    return 0;
}

This prints lines like these:
stack 0xbed4b118 heap 0x92a1cb8 depth 9356

... or in other words over 3GB of space between the stack and
the heap. No wonder I'm not seeing any core dumps.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Tue Aug 29 22:44:18 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 29 Aug 2000 16:44:18 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52981.603640.415652@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEMIHCAA.tim_one@email.msn.com>

[Skip Montanaro]
> One of the original arguments for range literals as I recall was that
> indexing of loops could get more efficient.  The compiler would know
> that [0:100:2] represents a series of integers and could conceivably
> generate more efficient loop indexing code (and so could Python2C and
> other compilers that generated C code).  This argument doesn't seem to
> be showing up here at all.  Does it carry no weight in the face of the
> relative inscrutability of the syntax?

It carries no weight at all *for 2.0* because the patch didn't exploit the
efficiency possibilities.

Which I expect are highly overrated (maybe 3% in a "good case" real-life
loop) anyway.  Even if they aren't, the same argument would apply to any
other new syntax for this too, so in no case is it an argument in favor of
this specific new syntax over alternative new syntaxes.

There are also well-known ways to optimize the current "range" exactly the
way Python works today; e.g., compile two versions of the loop, one assuming
range is the builtin, the other assuming it may be anything, then a quick
runtime test to jump to the right one.  Guido hates that idea just because
it's despicable <wink>, but that's the kind of stuff optimizing compilers
*do*, and if we're going to get excited about efficiency then we need to
consider *all sorts of* despicable tricks like that.

In any case, I've spent 5 hours straight now digging thru Python email, have
more backed up than when I started, and have gotten nothing done today
toward moving 2.0b1 along.  I'd love to talk more about all this, but there
simply isn't the time for it now ...





From cgw at fnal.gov  Tue Aug 29 23:05:21 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 16:05:21 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.1057.909517.977904@bitdiddle.concentric.net>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <14764.9617.203071.639126@buffalo.fnal.gov>

Jeremy Hylton writes:
 > Does anyone have suggestions for how to detect unbounded recursion in
 > the Python core on Unix platforms?

Hey, check this out! - it's not portable in general, but it works for Linux,
which certainly covers a large number of the systems out there in the world.

#!/usr/bin/env python

def getstack():
    for l in open("/proc/self/status").readlines():
        if l.startswith('VmStk'):
            t = l.split()
            return 1024 * int(t[1])


def f():
    print getstack()
    f()

f()


I'm working up a version of this in C; you can do a "getrlimit" to
find the maximum stack size, then read /proc/self/status to get
current stack usage, and compare these values.

As far as people using systems that have a broken getrusage and also
no /proc niftiness, well... get yourself a real operating system ;-)
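
The plan Charles describes (getrlimit for the maximum, /proc/self/status
for the current usage) can be sketched in modern Python as follows.  The
name stack_headroom() is my own invention, this is Linux-only, and the
function just returns None wherever procfs or the resource module is
unavailable:

    import os

    def stack_headroom():
        """Return (stack_bytes_used, soft_limit_bytes), or None if unsupported."""
        try:
            import resource
        except ImportError:
            return None                     # no Unix resource API
        if not os.path.exists("/proc/self/status"):
            return None                     # no Linux-style procfs
        used = None
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmStk"):
                    used = 1024 * int(line.split()[1])   # reported in kB
                    break
        if used is None:
            return None
        soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
        return used, soft

    if __name__ == "__main__":
        print(stack_headroom())

Note that the soft limit may come back as RLIM_INFINITY, in which case
comparing the two numbers tells you nothing useful.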







From effbot at telia.com  Tue Aug 29 23:03:43 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 23:03:43 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com> <39ABF1B8.426B7A6@lemburg.com>
Message-ID: <012d01c011fe$31d23900$766940d5@hagrid>

mal wrote:
> Here is a pre-release version of mx.DateTime which should fix
> the problem (the new release will use the top-level mx package
> -- it does contain a backward compatibility hack though):
>
> http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
>
> Please let me know if it fixes your problem... I don't use PyApache.

mal, can you update the bug database.  this bug is still listed
as an open bug in the python core...

http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470

</F>




From mal at lemburg.com  Tue Aug 29 23:12:45 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 23:12:45 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com> <39ABF1B8.426B7A6@lemburg.com> <012d01c011fe$31d23900$766940d5@hagrid>
Message-ID: <39AC274D.AD9856C7@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > Here is a pre-release version of mx.DateTime which should fix
> > the problem (the new release will use the top-level mx package
> > -- it does contain a backward compatibility hack though):
> >
> > http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
> >
> > Please let me know if it fixes your problem... I don't use PyApache.
> 
> mal, can you update the bug database.  this bug is still listed
> as an open bug in the python core...
> 
> http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470

Hmm, I thought I had already closed it... done.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From barry at scottb.demon.co.uk  Tue Aug 29 23:21:04 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Tue, 29 Aug 2000 22:21:04 +0100
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AC1F76.41CCED9@lemburg.com>
Message-ID: <000e01c011ff$09e3ca70$060210ac@private>

Use the problem as the solution.

The problem is that you get a SIGSEGV after you fall off the end of the stack.
(I'm assuming you always have guard pages between the stack end and other memory
zones. Otherwise you will not get the SEGV).

If you probe ahead of the stack to trigger the SIGSEGV you can use the signal
handler to trap the probe and recover gracefully. Use posix signal handling
everywhere for portability (don't mix posix and not and expect signals to work
BTW).

jmp_buf probe_env;

int CheckStack()	/* untested */
	{
	if( setjmp( probe_env ) == 0 )
		{
		char buf[32];
		/* need code to deal with direction of stack */
		if( grow_down )
			buf[-65536] = 1;
		else
			buf[65536] = 1;
		return 1; /* stack is fine for at least 64k */
		}
	else
		{
		return 0; /* will run out of stack soon */
		}
	}

void sigsegv_handler( int sig )
	{
	longjmp( probe_env, 1 );
	}

			Barry (not just a Windows devo <wink>)





From thomas at xs4all.net  Tue Aug 29 23:43:29 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 23:43:29 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.9617.203071.639126@buffalo.fnal.gov>; from cgw@fnal.gov on Tue, Aug 29, 2000 at 04:05:21PM -0500
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <14764.9617.203071.639126@buffalo.fnal.gov>
Message-ID: <20000829234329.T500@xs4all.nl>

On Tue, Aug 29, 2000 at 04:05:21PM -0500, Charles G Waldman wrote:

> Jeremy Hylton writes:
>  > Does anyone have suggestions for how to detect unbounded recursion in
>  > the Python core on Unix platforms?

> Hey, check this out! - it's not portable in general, but it works for Linux,
> which certainly covers a large number of the systems out there in the world.

'large' in terms of "number of instances", perhaps, but not very large in
terms of total number of operating system types/versions, I think. I know of
two operating systems that implement that info in /proc (FreeBSD and Linux)
and one where it's optional (but default off and probably untested: BSDI.) I
also think that this is a very costly thing to do every ten (or even every
hundred) recursions.... I would go for the auto-vrbl-address-check, in
combination with either a fixed stack limit, or getrlimit() - which does
seem to work. Or perhaps the alloca() check for systems that have it (which
can be checked) and seems to work properly (which can be checked, too, but
not as reliable.)

The vrbl-address check only does a few integer calculations, and we can
forgo the getrlimit() call if we do it somewhere during Python init, and
after every call of resource.setrlimit(). (Or just do it anyway: it's
probably not *that* expensive, and if we don't do it each time, we can still
run into trouble if another extension module sets limits, or if python is
embedded in something that changes limits on the fly.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Wed Aug 30 01:10:25 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 01:10:25 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <Pine.LNX.4.10.10008291524310.302-100000@server1.lfw.org>; from ping@lfw.org on Tue, Aug 29, 2000 at 03:43:23PM -0500
References: <m13Tf69-000wcDC@swing.co.at> <Pine.LNX.4.10.10008291524310.302-100000@server1.lfw.org>
Message-ID: <20000830011025.V500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:43:23PM -0500, Ka-Ping Yee wrote:

> The "inspect" module (see http://www.lfw.org/python/) handles this nicely.

[snip example]

> I suggested "inspect.py" for the standard library quite some time ago
> (long before the feature freeze, and before ascii.py, which has since
> made it in).  MAL responded pretty enthusiastically
> (http://www.python.org/pipermail/python-dev/2000-July/013511.html).
> Could i request a little more feedback from others?

Looks fine to me, would fit nicely in with the other introspective things we
already have (dis, profile, etc) -- but wasn't it going to be added to the
'help' (or what was it) stdlib module ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From greg at cosc.canterbury.ac.nz  Wed Aug 30 04:11:41 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 30 Aug 2000 14:11:41 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52415.747655.334938@beluga.mojam.com>
Message-ID: <200008300211.OAA17125@s454.cosc.canterbury.ac.nz>

> I doubt there would be much
> problem adding ".." as a token either.

If we're going to use any sort of ellipsis syntax here, I
think it would be highly preferable to use the ellipsis
token we've already got. I can't see any justification for
having two different ellipsis-like tokens in the language,
when there would be no ambiguity in using one for both
purposes.

> What we really want I think is something that evokes the following in the
> mind of the reader
> 
>     for i from START to END incrementing by STEP:

Am I right in thinking that the main motivation here is
to clean up the "for i in range(len(a))" idiom? If so,
what's wrong with a built-in:

  def indices(a):
    return range(len(a))
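
A quick sketch of how that would read at the call site (indices() is
hypothetical here, not an existing builtin):

    def indices(a):
        return range(len(a))

    colours = ['red', 'green', 'blue']
    for i in indices(colours):
        print(i, colours[i])    # replaces "for i in range(len(colours))"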

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From skip at mojam.com  Wed Aug 30 05:43:57 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 22:43:57 -0500 (CDT)
Subject: [Python-Dev] MacPython 2.0?
Message-ID: <14764.33533.218103.763531@beluga.mojam.com>

Has Jack or anyone else been building Mac versions of 2.0 and making them
available somewhere?  I seem to have fallen off the MacPython list and
haven't taken the time to investigate (perhaps I set subscription to NOMAIL
and forgot that crucial point).  I have no compilation tools on my Mac, so
while I'd like to try testing things a little bit there, I am entirely
dependent on others to provide me with something runnable.

Thx,

Skip



From tanzer at swing.co.at  Wed Aug 30 08:23:08 2000
From: tanzer at swing.co.at (Christian Tanzer)
Date: Wed, 30 Aug 2000 08:23:08 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings 
In-Reply-To: Your message of "Tue, 29 Aug 2000 11:41:15 +0200."
             <39AB853B.217402A2@lemburg.com> 
Message-ID: <m13U1HA-000wcEC@swing.co.at>

"M.-A. Lemburg" <mal at lemburg.com> wrote:

> > Triple quoted strings work -- that's what I'm constantly using. The
> > downside is, that the docstrings either contain spurious white space
> > or it messes up the layout of the code (if you start subsequent lines
> > in the first column).
> 
> Just a question of how smart your doc string extraction
> tools are. Have a look at hack.py:

Come on. There are probably hundreds of hacks around to massage
docstrings. I've written one myself. Ka-Ping Yee suggested
inspect.py...

My point was that in such cases it is much better if the language does
it than if everybody does his own kludge. If a change of the Python
parser concerning this point is out of the question, why not have a
standard module providing this functionality (Ka-Ping Yee offered one
<nudge>, <nudge>).

Regards,
Christian

-- 
Christian Tanzer                                         tanzer at swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92




From mal at lemburg.com  Wed Aug 30 10:35:00 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 30 Aug 2000 10:35:00 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13U1HA-000wcEC@swing.co.at>
Message-ID: <39ACC734.6F436894@lemburg.com>

Christian Tanzer wrote:
> 
> "M.-A. Lemburg" <mal at lemburg.com> wrote:
> 
> > > Triple quoted strings work -- that's what I'm constantly using. The
> > > downside is, that the docstrings either contain spurious white space
> > > or it messes up the layout of the code (if you start subsequent lines
> > > in the first column).
> >
> > Just a question of how smart your doc string extraction
> > tools are. Have a look at hack.py:
> 
> Come on. There are probably hundreds of hacks around to massage
> docstrings. I've written one myself. Ka-Ping Yee suggested
> inspect.py...

That's the point I wanted to make: there's no need to care much
about """-string formatting while writing them as long as you have
tools which do it for you at extraction time.
 
> My point was that in such cases it is much better if the language does
> it than if everybody does his own kludge. If a change of the Python
> parser concerning this point is out of the question, why not have a
> standard module providing this functionality (Ka-Ping Yee offered one
> <nudge>, <nudge>).

Would be a nice addition for Python's stdlib, yes. Maybe for 2.1,
since we are in feature freeze for 2.0...
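
Such an extraction-time tool is small; here is a sketch of the
margin-stripping step, along the lines of the inspect.getdoc() source
shown earlier in this thread (and of what textwrap.dedent does for whole
blocks today).  trim() is a hypothetical name, not a stdlib function:

    import sys

    def trim(docstring):
        if not docstring:
            return ""
        lines = docstring.expandtabs().split("\n")
        # Smallest indentation over the non-blank lines after the first.
        margin = sys.maxsize
        for line in lines[1:]:
            stripped = line.lstrip()
            if stripped:
                margin = min(margin, len(line) - len(stripped))
        trimmed = [lines[0].strip()]
        if margin < sys.maxsize:
            trimmed.extend(line[margin:].rstrip() for line in lines[1:])
        return "\n".join(trimmed)

    class Foo:
        """First line.
           Second line.
               An indented line."""

    print(trim(Foo.__doc__))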

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From pf at artcom-gmbh.de  Wed Aug 30 10:39:39 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Wed, 30 Aug 2000 10:39:39 +0200 (MEST)
Subject: Memory overcommitment and guessing about stack size (was Re: [Python-Dev] stack check on Unix: any suggestions?)
In-Reply-To: <39AC1BE0.FFAA9100@lemburg.com> from "M.-A. Lemburg" at "Aug 29, 2000 10:24: 0 pm"
Message-ID: <m13U3PH-000Dm9C@artcom0.artcom-gmbh.de>

Hi,

Any attempts to *reliably* predict the amount of virtual memory (stack+heap)
available to a process are *DOOMED TO FAIL* in principle on any unixoid
system.

Some of you might have missed all those repeated threads about virtual memory
allocation and the overcommitment strategy in the various Linux groups.  

M.-A. Lemburg:
> Skip Montanaro wrote:
> > 
> >     MAL> The C program can suck memory in large chunks and consume great
> >     MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
> >     MAL> doing wrong here).
> > 
> > Are you overwriting all that memory you malloc with random junk?  If not,
> > the stack and the heap may have collided but not corrupted each other.
> 
> Not random junk, but all 1s:
[...]

For anyone interested in more details, I attach an email written by
Linus Torvalds in the thread 'Re: Linux is 'creating' memory ?!'
on 'comp.os.linux.development.apps' on Mar 20th 1995, since I was
unable to locate this article on Deja (you know).

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)

From martin at loewis.home.cs.tu-berlin.de  Wed Aug 30 11:12:58 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 30 Aug 2000 11:12:58 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>

> Does anyone have suggestions for how to detect unbounded recursion
> in the Python core on Unix platforms?

I just submitted patch 101352, at

http://sourceforge.net/patch/?func=detailpatch&patch_id=101352&group_id=5470

This patch works on the realistic assumption that reliable stack usage
is not available through getrusage on most systems, so it uses an
estimate instead. The upper stack boundary is determined on thread
creation; the lower stack boundary inside the check. This must allow
for initial stack frames (main, _entry, etc), and for pages allocated
on the stack by the system. At least on Linux, argv and env pages
count towards the stack limit.

If some systems are known to return good results from getrusage, that
should be used instead.

I have tested this patch on a Linux box to detect recursion in both
the example of bug 112943, as well as the foo() recursion; the latter
would crash with stock CVS python only when I reduced the stack limit
from 8MB to 1MB.

Since the patch uses a heuristic to determine stack exhaustion, it is
probably possible to find cases where it does not work. I.e. it might
diagnose exhaustion, where it could run somewhat longer (rather,
deeper), or it fails to diagnose exhaustion when it is really out of
stack. It is also likely that there are better heuristics. Overall, I
believe this patch is an improvement.

While this patch claims to support all of Unix, it only works where
getrlimit(RLIMIT_STACK) works. Unix(tm) does guarantee this API; it
should work on *BSD and many other Unices as well.

Comments?

Martin
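
For comparison, the Python level already guards recursion with a fixed,
configurable counter rather than an exact stack measurement -- the same
kind of heuristic Martin describes: it may trip early, but it trips
safely.  A modern-Python illustration (not Martin's C patch):

    import sys

    def blow_stack(depth=0):
        return blow_stack(depth + 1)

    old_limit = sys.getrecursionlimit()
    sys.setrecursionlimit(100)        # make the guard trip quickly
    try:
        blow_stack()
    except RuntimeError:              # RecursionError in modern Python
        print("guard tripped before the C stack ran out")
    finally:
        sys.setrecursionlimit(old_limit)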



From mal at lemburg.com  Wed Aug 30 11:56:31 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 30 Aug 2000 11:56:31 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
Message-ID: <39ACDA4F.3EF72655@lemburg.com>

"Martin v. Loewis" wrote:
> 
> > Does anyone have suggestions for how to detect unbounded recursion
> > in the Python core on Unix platforms?
> 
> I just submitted patch 101352, at
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101352&group_id=5470
> 
> This patch works on the realistic assumption that reliable stack usage
> is not available through getrusage on most systems, so it uses an
> estimate instead. The upper stack boundary is determined on thread
> creation; the lower stack boundary inside the check. This must allow
> for initial stack frames (main, _entry, etc.), and for pages allocated
> on the stack by the system. At least on Linux, argv and env pages
> count towards the stack limit.
> 
> If some systems are known to return good results from getrusage, that
> should be used instead.
> 
> I have tested this patch on a Linux box to detect recursion in both
> the example of bug 112943, as well as the foo() recursion; the latter
> would crash with stock CVS python only when I reduced the stack limit
> from 8MB to 1MB.
> 
> Since the patch uses a heuristic to determine stack exhaustion, it is
> probably possible to find cases where it does not work: it might
> diagnose exhaustion where the program could in fact run somewhat
> longer (rather, deeper), or it might fail to diagnose exhaustion when
> the stack really is exhausted. It is also likely that there are better
> heuristics. Overall, I believe this patch is an improvement.
> 
> While this patch claims to support all of Unix, it only works where
> getrlimit(RLIMIT_STACK) works. Unix(tm) does guarantee this API; it
> should work on *BSD and many other Unices as well.
> 
> Comments?

See my comments in the patch manager... the patch looks fine
except for two things: getrlimit() should be tested for
usability in the configure script and the call frequency
of PyOS_CheckStack() should be lowered to only use it for
potentially recursive programs.

Apart from that, this looks like the best alternative so far :-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From nowonder at nowonder.de  Wed Aug 30 13:58:51 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Wed, 30 Aug 2000 11:58:51 +0000
Subject: [Python-Dev] Lukewarm about range literals
References: <200008300211.OAA17125@s454.cosc.canterbury.ac.nz>
Message-ID: <39ACF6FB.66BAB739@nowonder.de>

Greg Ewing wrote:
> 
> Am I right in thinking that the main motivation here is
> to clean up the "for i in range(len(a))" idiom? If so,
> what's wrong with a built-in:
> 
>   def indices(a):
>     return range(len(a))

As far as I know adding a builtin indices() has been
rejected as an idea.
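For reference, the rejected helper is a one-liner, transcribed directly from Greg's sketch (with a modern lazy range(), list() shows the values):

```python
def indices(a):
    # Greg's proposed builtin: the valid indices of a sequence
    return range(len(a))

assert list(indices("abc")) == [0, 1, 2]
```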

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From effbot at telia.com  Wed Aug 30 12:27:12 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 30 Aug 2000 12:27:12 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com>
Message-ID: <000d01c0126c$dfe700c0$766940d5@hagrid>

mal wrote:
> See my comments in the patch manager... the patch looks fine
> except for two things: getrlimit() should be tested for
> usability in the configure script and the call frequency
> of PyOS_CheckStack() should be lowered to only use it for
> potentially recursive programs.

the latter would break windows and mac versions of Python,
where Python can run on very small stacks (not to mention
embedded systems...)

for those platforms, CheckStack is designed to work with an
8k safety margin (PYOS_STACK_MARGIN)

:::

one way to address this is to introduce a scale factor, so that
you can add checks based on the default 8k limit, but auto-
magically apply them less often on platforms where the safety
margin is much larger...

/* checkstack, but with a "scale" factor */
#if windows or mac
/* default safety margin */
#define PYOS_CHECKSTACK(v, n)\
    (((v) % (n) == 0) && PyOS_CheckStack())
#elif linux
/* at least 10 times the default safety margin */
#define PYOS_CHECKSTACK(v, n)\
    (((v) % ((n)*10) == 0) && PyOS_CheckStack())
#endif

 if (PYOS_CHECKSTACK(tstate->recursion_depth, 10))
    ...

</F>




From mal at lemburg.com  Wed Aug 30 12:42:39 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 30 Aug 2000 12:42:39 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid>
Message-ID: <39ACE51F.3AEC75AB@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > See my comments in the patch manager... the patch looks fine
> > except for two things: getrlimit() should be tested for
> > usability in the configure script and the call frequency
> > of PyOS_CheckStack() should be lowered to only use it for
> > potentially recursive programs.
> 
> the latter would break windows and mac versions of Python,
> where Python can run on very small stacks (not to mention
> embedded systems...)
> 
> for those platforms, CheckStack is designed to work with an
> 8k safety margin (PYOS_STACK_MARGIN)

Ok, I don't mind calling it every ten levels deep, but I'd
rather not have it start at level 0. The reason is
that many programs probably don't make much use of
recursion anyway and have a maximum call depth of around
10-50 levels (Python programs usually use shallow class hierarchies).
These programs should not be bothered by calling PyOS_CheckStack()
all the time. Recursive programs will easily reach the 100 mark -- 
those should call PyOS_CheckStack often enough to notice the 
stack problems.

So the check would look something like this:

if (tstate->recursion_depth >= 50 &&
    tstate->recursion_depth % 10 == 0 &&
    PyOS_CheckStack()) {
    PyErr_SetString(PyExc_MemoryError, "Stack overflow");
    return NULL;
}
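The gate in that C snippet is easy to model and sanity-check in Python (the function name and thresholds are illustrative, mirroring the values above):

```python
def needs_stack_check(depth, start=50, interval=10):
    # probe the C stack only after `start` call levels, and then only
    # every `interval` levels, so shallow programs never pay for it
    return depth >= start and depth % interval == 0

assert not needs_stack_check(49)   # shallow code: never probed
assert needs_stack_check(50)       # threshold reached
assert not needs_stack_check(55)   # between probe points
assert needs_stack_check(60)       # probed again every 10 levels
```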

> :::
> 
> one way to address this is to introduce a scale factor, so that
> you can add checks based on the default 8k limit, but auto-
> magically apply them less often platforms where the safety
> margin is much larger...
> 
> /* checkstack, but with a "scale" factor */
> #if windows or mac
> /* default safety margin */
> #define PYOS_CHECKSTACK(v, n)\
>     (((v) % (n) == 0) && PyOS_CheckStack())
> #elif linux
> /* at least 10 times the default safety margin */
> #define PYOS_CHECKSTACK(v, n)\
>     (((v) % ((n)*10) == 0) && PyOS_CheckStack())
> #endif
> 
>  if (PYOS_CHECKSTACK(tstate->recursion_depth, 10))
>     ...

I'm not exactly sure how large the safety margin is with
Martin's patch, but this seems a good idea.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From moshez at math.huji.ac.il  Wed Aug 30 12:49:59 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 30 Aug 2000 13:49:59 +0300 (IDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6265.460762.479910@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008301348150.2545-100000@sundial>

On Tue, 29 Aug 2000, Fred L. Drake, Jr. wrote:

> 
> Thomas Wouters writes:
>  > (getrlimit *does* work, so if we have getrlimit, we can 'calculate' the
>  > maximum number of recursions from that.)
> 
>   Still no go -- we can calculate the number of recursions for a
> particular call frame size (or expected mix of frame sizes, which is
> really the same), but we can't predict recursive behavior inside a C
> extension, which is a significant part of the problem (witness the SRE
> experience).  That's why PyOS_StackCheck() actually has to do more
> than test a counter -- if the counter is low but the call frames are
> larger than our estimate, it won't help.

Can my trick (which works only if Python has control of the main) of
comparing addresses of local variables against addresses of local 
variables from main() and against the stack limit be used? 99% of the
people are using the plain Python interpreter with extensions, so it'll
solve 99% of the problem?
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From jack at oratrix.nl  Wed Aug 30 13:30:01 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 30 Aug 2000 13:30:01 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions? 
In-Reply-To: Message by Jeremy Hylton <jeremy@beopen.com> ,
	     Tue, 29 Aug 2000 14:42:41 -0400 (EDT) , <14764.1057.909517.977904@bitdiddle.concentric.net> 
Message-ID: <20000830113002.44CE7303181@snelboot.oratrix.nl>

My SGI has getrlimit(RLIMIT_STACK) which should do the trick. But maybe this 
is an sgi-ism? Otherwise RLIMIT_VMEM and subtracting brk() may do the trick.

While thinking about this, though, I suddenly realised that my (new, faster) 
Mac implementation of PyOS_CheckStack will fail miserably in any other than 
the main thread, something I'll have to fix shortly.

Unix code will also have to differentiate between running on the main stack 
and a sub-thread stack, probably. And I haven't looked at the way 
PyOS_CheckStack is implemented on Windows, but it may well also share this 
problem.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From jack at oratrix.nl  Wed Aug 30 13:38:55 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 30 Aug 2000 13:38:55 +0200
Subject: [Python-Dev] MacPython 2.0? 
In-Reply-To: Message by Skip Montanaro <skip@mojam.com> ,
	     Tue, 29 Aug 2000 22:43:57 -0500 (CDT) , <14764.33533.218103.763531@beluga.mojam.com> 
Message-ID: <20000830113855.B1F2F303181@snelboot.oratrix.nl>

> Has Jack or anyone else been building Mac versions of 2.0 and making them
> available somewhere?  I seem to have fallen off the MacPython list and
> haven't taken the time to investigate (perhaps I set subscription to NOMAIL
> and forgot that crucial point).  I have no compilation tools on my Mac, so
> while I'd like to try testing things a little bit there, I am entirely
> dependent on others to provide me with something runnable.

I'm waiting for Guido to release a 2.0 and then I'll quickly follow suit. I 
have almost everything in place for the first alpha/beta.

But, if you're willing to be my guinea pig I'd be happy to build you a 
distribution of the current state of things tonight or tomorrow, let me know.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From jack at oratrix.nl  Wed Aug 30 13:53:32 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 30 Aug 2000 13:53:32 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions? 
In-Reply-To: Message by "M.-A. Lemburg" <mal@lemburg.com> ,
	     Wed, 30 Aug 2000 12:42:39 +0200 , <39ACE51F.3AEC75AB@lemburg.com> 
Message-ID: <20000830115332.5CA4A303181@snelboot.oratrix.nl>

A completely different way to go about getting the stacksize on Unix is by 
actually committing the space once in a while. Something like (typed in as I'm 
making it up):

STACK_INCREMENT=128000

prober() {
    char space[STACK_INCREMENT];

    space[0] = 1;
    /* or maybe for(i=0;i<STACK_INCREMENT; i+=PAGESIZE) or so */
    space[STACK_INCREMENT-1] = 1;
}

jmp_buf buf;
catcher() {
    longjmp(buf, 1);
    return 1;
}

PyOS_CheckStack() {
    static char *known_safe;
    char *here;

    if (we-are-in-a-thread())
	go do different things;
    if ( &here > known_safe )
	return 1;
    keep-old-SIGSEGV-handler;
    if ( setjmp(buf) )
	return 0;
    signal(SIGSEGV, catcher);
    prober();
    restore-old-SIGSEGV-handler;
    known_safe = &here - (STACK_INCREMENT - PYOS_STACK_MARGIN);
    return 1;
}
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From thomas at xs4all.net  Wed Aug 30 14:25:42 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 14:25:42 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000830113002.44CE7303181@snelboot.oratrix.nl>; from jack@oratrix.nl on Wed, Aug 30, 2000 at 01:30:01PM +0200
References: <jeremy@beopen.com> <20000830113002.44CE7303181@snelboot.oratrix.nl>
Message-ID: <20000830142542.A12695@xs4all.nl>

On Wed, Aug 30, 2000 at 01:30:01PM +0200, Jack Jansen wrote:

> My SGI has getrlimit(RLIMIT_STACK) which should do the trick. But maybe this 
> is an sgi-ism? Otherwise RLIMIT_VMEM and subtracting brk() may do the trick.

No, getrlimit(RLIMIT_STACK, &rlim) is the way to go. 'getrlimit' isn't
available everywhere, but the RLIMIT_STACK constant is universal, as far as
I know. And we can use autoconf to figure out if it's available.
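Where Python is built with the resource module, the same limit is visible from the interpreter itself; a quick probe (assumes a POSIX system where RLIMIT_STACK is defined):

```python
import resource

# the per-process stack size limit that getrlimit(RLIMIT_STACK) reports,
# in bytes; RLIM_INFINITY means the stack is unlimited
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("soft limit:", soft, "hard limit:", hard)
```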

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fredrik at pythonware.com  Wed Aug 30 15:30:23 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 30 Aug 2000 15:30:23 +0200
Subject: [Python-Dev] Lukewarm about range literals
References: <200008300211.OAA17125@s454.cosc.canterbury.ac.nz>
Message-ID: <04d101c01286$7444d6c0$0900a8c0@SPIFF>

greg wrote:
> If we're going to use any sort of ellipsis syntax here, I
> think it would be highly preferable to use the ellipsis
> token we've already got. I can't see any justification for
> having two different ellipsis-like tokens in the language,
> when there would be no ambiguity in using one for both
> purposes.

footnote: "..." isn't really token:

>>> class Spam:
...     def __getitem__(self, index):
...         print index
...
>>> spam = Spam()
>>> spam[...]
Ellipsis
>>> spam[. . .]
Ellipsis
>>> spam[.
... .
... .
... ]
Ellipsis

(etc)
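The underlying behaviour is easy to check programmatically: whatever the parser does with the dots, __getitem__ receives the Ellipsis singleton. (In modern Python "..." has since become a real token, so the spaced spellings above no longer parse, but the singleton check still holds.)

```python
class Spam:
    def __getitem__(self, index):
        return index

spam = Spam()
# the subscript hands __getitem__ the built-in Ellipsis object
assert spam[...] is Ellipsis
```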

</F>




From amk1 at erols.com  Wed Aug 30 15:26:20 2000
From: amk1 at erols.com (A.M. Kuchling)
Date: Wed, 30 Aug 2000 09:26:20 -0400
Subject: [Python-Dev] Cookie.py security
Message-ID: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>

[CC'ed to python-dev and Tim O'Malley]

The Cookie module recently added to 2.0 provides three classes of cookie:
SimpleCookie, which treats cookie values as simple strings;
SerialCookie, which treats cookie values as pickles and unpickles them;
and SmartCookie, which figures out whether the value is a pickle or not.

Unpickling untrusted data is unsafe.  (Correct?)  Therefore,
SerialCookie and SmartCookie really shouldn't be used, and Moshe's
docs for the module say so.

Question: should SerialCookie and SmartCookie be removed?  If they're
not there, people won't accidentally use them because they didn't read
the docs and missed the warning.

Con: breaks backward compatibility with the existing cookie module and
forks the code.  

(Are marshals safer than pickles?  What if SerialCookie used marshal
instead?)
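A quick demonstration of both points, using a harmless callable in place of a real attack: a pickle payload may name any importable callable plus arguments, and unpickling then *calls* it, whereas marshal refuses anything but primitive types.

```python
import marshal
import pickle

class Evil:
    # __reduce__ tells pickle "to rebuild me, call this function" --
    # an attacker-supplied pickle can name any callable here
    def __reduce__(self):
        return (sorted, ("cba",))

result = pickle.loads(pickle.dumps(Evil()))
assert result == ["a", "b", "c"]   # sorted() ran during unpickling

# marshal round-trips primitives fine...
assert marshal.loads(marshal.dumps({"visits": 3})) == {"visits": 3}
# ...but cannot serialize a callable at all
try:
    marshal.dumps(sorted)
    smuggled = True
except ValueError:
    smuggled = False
assert not smuggled
```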

--amk




From fdrake at beopen.com  Wed Aug 30 16:09:16 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 30 Aug 2000 10:09:16 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
Message-ID: <14765.5516.877559.786344@cj42289-a.reston1.va.home.com>

A.M. Kuchling writes:
 > (Are marshals safer than pickles?  What if SerialCookie used marshal
 > instead?)

  A bit safer, I think, but this maintains the backward compatibility
issue.
  If it is useful to change the API, this is the best time to do it,
but we'd probably want to rename the module as well.  Shared
maintenance is also an issue -- Tim's opinion is very valuable here!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From trentm at ActiveState.com  Wed Aug 30 18:18:29 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 30 Aug 2000 09:18:29 -0700
Subject: [Python-Dev] NetBSD compilation bug - I need help (was: Re: Python bug)
In-Reply-To: <14764.32658.941039.258537@bitdiddle.concentric.net>; from jeremy@beopen.com on Tue, Aug 29, 2000 at 11:29:22PM -0400
References: <14764.32658.941039.258537@bitdiddle.concentric.net>
Message-ID: <20000830091829.C14776@ActiveState.com>

On Tue, Aug 29, 2000 at 11:29:22PM -0400, Jeremy Hylton wrote:
> You have one open Python bug that is assigned to you and given a
> priority seven or higher.  I would like to resolve this bugs before
> the 2.0b1 release.
> 
> The bug is:
> 112289 | NetBSD1.4.2 build issue 
>

Sorry to have let this one get a little stale. I can give it a try. A couple
of questions:

1. Who reported this bug? He talked about providing more information and I
would like to speak with him. I cannot find his email address.
2. Does anybody have a NetBSD1.4.2 (or close) machine that I can get shell
access to? Do you know if they have such a machine at SourceForge that users
can get shell access to? Or failing that can someone with such a machine give
me the full ./configure and make output and maybe run this command:
   find /usr/include -name "*" -type f | xargs -l grep -nH _TELL64
and give me the output.


If I come up blank on both of these then I can't really expect to fix this
bug.


Thanks,
Trent


-- 
Trent Mick
TrentM at ActiveState.com



From pf at artcom-gmbh.de  Wed Aug 30 18:37:16 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Wed, 30 Aug 2000 18:37:16 +0200 (MEST)
Subject: os.remove() behaviour on empty directories (was Re: [Python-Dev] If you thought there were too many PEPs...)
In-Reply-To: <200008271828.NAA14847@cj20424-a.reston1.va.home.com> from Guido van Rossum at "Aug 27, 2000  1:28:46 pm"
Message-ID: <m13UArU-000Dm9C@artcom0.artcom-gmbh.de>

Hi,

effbot:
> > btw, Python's remove/unlink implementation is slightly
> > broken -- they both map to unlink, but that's not the
> > right way to do it:
> > 
> > from SUSv2:
> > 
> >     int remove(const char *path);
> > 
> >     If path does not name a directory, remove(path)
> >     is equivalent to unlink(path). 
> > 
> >     If path names a directory, remove(path) is equi-
> >     valent to rmdir(path). 
> > 
> > should I fix this?

BDFL:
> That's a new one -- didn't exist when I learned Unix.

Yes, this 'remove()' was added relatively late to Unix.  It didn't
exist, for example, in SCO XENIX 386 (the first "real" OS available
for relatively inexpensive IBM-PC arch boxes long before the advent
of Linux).

Changing the behaviour of Python's 'os.remove()' on Unices might break 
some existing code (although such code is not portable to WinXX anyway):

pf at artcom0:ttyp3 ~ 7> mkdir emptydir
pf at artcom0:ttyp3 ~ 8> python
Python 1.5.2 (#1, Jul 23 1999, 06:38:16)  [GCC egcs-2.91.66 19990314/Linux (egcs- on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> import os
>>> try:
...     os.remove('emptydir')
... except OSError:
...     print 'emptydir is a directory'
... 
emptydir is a directory
>>> 

> I guess we can fix this in 2.1.

Please don't do this without a heavy duty warning in a section about
expected upgrade problems.  

This change might annoy people, who otherwise don't care about
portability and use Python on Unices only.  I imagine people using
something like this:

    def cleanup_junkfiles(targetdir):
        for n in os.listdir(targetdir):
            try:
                os.remove(os.path.join(targetdir, n))
            except OSError:
                pass
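On POSIX systems today, os.remove() still has plain unlink() semantics, which is exactly what code like the loop above relies on; a quick check:

```python
import os
import tempfile

d = tempfile.mkdtemp()
try:
    os.remove(d)    # unlink() semantics: refuses to remove a directory
    raised = False
except OSError:
    raised = True
os.rmdir(d)         # clean up with the directory-specific call
assert raised
```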

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)



From thomas at xs4all.net  Wed Aug 30 19:39:48 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 19:39:48 +0200
Subject: [Python-Dev] Threads & autoconf
Message-ID: <20000830193948.C12695@xs4all.nl>

I'm trying to clean up the autoconf (and README) mess wrt. threads a bit,
but I think I need some hints ;) I can't figure out why there is a separate
--with-dec-threads option... Is there a reason it can't be autodetected like
we do for other thread systems ? Does DEC Unix do something very different
but functional when leaving out the '-threads' option (which is the only
thing -dec- adds) or is it just "hysterical raisins" ? 

And then the systems that need different library/compiler flags/settings...
I suspect noone here has one of those machines ? It'd be nice if we could
autodetect this without trying every combination of flags/libs in autoconf
:P (But then, if we could autodetect, I assume it would've been done long
ago... right ? :)

Do we know if those systems still need those separate flags/libs ? Should we
leave a reference to them in the README, or add a separate README.threads
file with more extensive info about threads and how to disable them ? (I
think README is a bit oversized, but that's probably just me.) And are we
leaving threads on by default ? If not, the README will have to be
re-adjusted again :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gward at mems-exchange.org  Wed Aug 30 19:52:36 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Wed, 30 Aug 2000 13:52:36 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>; from ping@lfw.org on Tue, Aug 29, 2000 at 12:09:39AM -0500
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>
Message-ID: <20000830135235.A8465@ludwig.cnri.reston.va.us>

On 29 August 2000, Ka-Ping Yee said:
> I think these examples are beautiful.  There is no reason why we couldn't
> fit something like this into Python.  Imagine this:
> 
>     - The ".." operator produces a tuple (or generator) of integers.
>       It should probably have precedence just above "in".
>     
>     - "a .. b", where a and b are integers, produces the sequence
>       of integers (a, a+1, a+2, ..., b).
> 
>     - If the left argument is a tuple of two integers, as in
>       "a, b .. c", then we get the sequence of integers from
>       a to c with step b-a, up to and including c if c-a happens
>       to be a multiple of b-a (exactly as in Haskell).

I guess I haven't been paying much attention, or I would have squawked
at the idea of using *anything* other than ".." for a literal range.

> If this operator existed, we could then write:
> 
>     for i in 2, 4 .. 20:
>         print i
> 
>     for i in 1 .. 10:
>         print i*i

Yup, beauty.  +1 on this syntax.  I'd vote to scuttle the [1..10] patch
and wait for an implementation of The Right Syntax, as illustrated by Ping.
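Ping's pair-step semantics can be emulated with a small helper today (the name and signature are illustrative, not part of any accepted proposal):

```python
def pair_range(a, b, c):
    # "a, b .. c": the step is b - a; c is included when c - a is a
    # multiple of the step, exactly as in Haskell
    step = b - a
    stop = c + (1 if step > 0 else -1)
    return list(range(a, stop, step))

assert pair_range(2, 4, 20) == [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
assert pair_range(10, 8, 2) == [10, 8, 6, 4, 2]
```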


>     for i in 0 ..! len(a):
>         a[i] += 1

Ugh.  I agree with everyone else on this: why not "0 .. len(a)-1"?

        Greg



From thomas at xs4all.net  Wed Aug 30 20:04:03 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 20:04:03 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000830135235.A8465@ludwig.cnri.reston.va.us>; from gward@mems-exchange.org on Wed, Aug 30, 2000 at 01:52:36PM -0400
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org> <20000830135235.A8465@ludwig.cnri.reston.va.us>
Message-ID: <20000830200402.E12695@xs4all.nl>

On Wed, Aug 30, 2000 at 01:52:36PM -0400, Greg Ward wrote:
> I'd vote to scuttle the [1..10] patch
> and wait for an implementation of The Right Syntax, as illustrated by Ping.

There *is* no [1..10] patch. There is only the [1:10] patch. See the PEP ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From martin at loewis.home.cs.tu-berlin.de  Wed Aug 30 20:32:30 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 30 Aug 2000 20:32:30 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39ACE51F.3AEC75AB@lemburg.com> (mal@lemburg.com)
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com>
Message-ID: <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>

> So the check would look something like this:
> 
> if (tstate->recursion_depth >= 50 &&
>     tstate->recursion_depth % 10 == 0 &&
>     PyOS_CheckStack()) {
>     PyErr_SetString(PyExc_MemoryError, "Stack overflow");
>     return NULL;
> }

That sounds like a good solution to me. A recursion depth of 50 should
be guaranteed on most systems supported by Python.

> I'm not exactly sure how large the safety margin is with
> Martin's patch, but this seems a good idea.

I chose 3% of the rlimit, which must accommodate the space above the
known start of stack plus a single page. That number was chosen
arbitrarily; on my Linux system, the stack limit is 8MB, so 3% give
200k. Given the maximum limitation of environment pages and argv
pages, I felt that this is safe enough. OTOH, if you've used more than
7MB of stack, it is likely that the last 200k won't help, either.

Regards,
Martin




From martin at loewis.home.cs.tu-berlin.de  Wed Aug 30 20:37:56 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 30 Aug 2000 20:37:56 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <200008301837.UAA00743@loewis.home.cs.tu-berlin.de>

> My SGI has getrlimit(RLIMIT_STACK) which should do the trick

It tells you how much stack you've got; it does not tell you how much
of that is actually in use.

> Unix code will also have to differentiate between running on the
> main stack and a sub-thread stack, probably.

My patch computes (or, rather, estimates) a start-of-stack for each
thread, and then saves that in the thread context.

> And I haven't looked at the way PyOS_CheckStack is implemented on
> Windows

It should work for multiple threads just fine. It tries to allocate 8k
on the current stack, and then catches the error if any.

Regards,
Martin




From timo at timo-tasi.org  Wed Aug 30 20:51:52 2000
From: timo at timo-tasi.org (timo at timo-tasi.org)
Date: Wed, 30 Aug 2000 14:51:52 -0400
Subject: [Python-Dev] Re: Cookie.py security
In-Reply-To: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>; from A.M. Kuchling on Wed, Aug 30, 2000 at 09:26:20AM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
Message-ID: <20000830145152.A24581@illuminatus.timo-tasi.org>

hola.

On Wed, Aug 30, 2000 at 09:26:20AM -0400, A.M. Kuchling wrote:
> Question: should SerialCookie and SmartCookie be removed?  If they're
> not there, people won't accidentally use them because they didn't read
> the docs and missed the warning.
> 
> Con: breaks backward compatibility with the existing cookie module and
> forks the code.  

I had a thought about this - kind of an intermediate solution.

Right now, the shortcut 'Cookie.Cookie()' returns an instance of the
SmartCookie, which uses Pickle.  Most extant examples of using the
Cookie module use this shortcut.

We could change 'Cookie.Cookie()' to return an instance of SimpleCookie,
which does not use Pickle.  Unfortunately, this may break existing code
(like Mailman), but there is a lot of code out there that it won't break.

Also, people could still use the SmartCookie and SerialCookie classes,
but now they would be more likely to read about them in the documentation
because they are "outside the beaten path".




From timo at timo-tasi.org  Wed Aug 30 21:09:13 2000
From: timo at timo-tasi.org (timo at timo-tasi.org)
Date: Wed, 30 Aug 2000 15:09:13 -0400
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.5516.877559.786344@cj42289-a.reston1.va.home.com>; from Fred L. Drake, Jr. on Wed, Aug 30, 2000 at 10:09:16AM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.5516.877559.786344@cj42289-a.reston1.va.home.com>
Message-ID: <20000830150913.B24581@illuminatus.timo-tasi.org>

hola.

On Wed, Aug 30, 2000 at 10:09:16AM -0400, Fred L. Drake, Jr. wrote:
> 
> A.M. Kuchling writes:
>  > (Are marshals safer than pickles?  What if SerialCookie used marshal
>  > instead?)
> 
>   A bit safer, I think, but this maintains the backward compatibility
> issue.

Is this true?
  Marshal is backwards compatible to Pickle?

If it is true, that'd be kinda cool.

>   If it is useful to change the API, this is the best time to do it,
> but we'd probably want to rename the module as well.  Shared
> maintenance is also an issue -- Tim's opinion is very valuable here!

I agree -- if this is the right change, then now is the right time.

If a significant change is warranted, then the name change is probably
the right way to signal this change.  I'd vote for 'httpcookie.py'.

I've been thinking about the shared maintenance issue, too.  The right
thing is for the Cookie.py (or renamed version thereof) to be the 
official version.  I would probably keep the latest version up on
my web site but mark it as 'deprecated' once Python 2.0 gets released.

thoughts..?

e



From thomas at xs4all.net  Wed Aug 30 21:22:22 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 21:22:22 +0200
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830150913.B24581@illuminatus.timo-tasi.org>; from timo@timo-tasi.org on Wed, Aug 30, 2000 at 03:09:13PM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.5516.877559.786344@cj42289-a.reston1.va.home.com> <20000830150913.B24581@illuminatus.timo-tasi.org>
Message-ID: <20000830212222.F12695@xs4all.nl>

On Wed, Aug 30, 2000 at 03:09:13PM -0400, timo at timo-tasi.org wrote:
> hola.
> On Wed, Aug 30, 2000 at 10:09:16AM -0400, Fred L. Drake, Jr. wrote:
> > A.M. Kuchling writes:
> >  > (Are marshals safer than pickles?  What if SerialCookie used marshal
> >  > instead?)

> >   A bit safer, I think, but this maintains the backward compatibility
> > issue.

> Is this true?
>   Marshal is backwards compatible to Pickle?

No, what Fred meant is that it maintains the backward compatibility *issue*,
not compatibility itself. It's still a problem for people who want to read
cookies made by the 'old' version, or otherwise want to read in 'old'
cookies.

I think it would be possible to provide a 'safe' unpickle, that only
unpickles primitives, for example, but that might *still* maintain the
backwards compatibility issue, even if it's less of an issue then. And it's
a bloody lot of work, too :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Wed Aug 30 23:45:28 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 30 Aug 2000 17:45:28 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830150913.B24581@illuminatus.timo-tasi.org>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
	<14765.5516.877559.786344@cj42289-a.reston1.va.home.com>
	<20000830150913.B24581@illuminatus.timo-tasi.org>
Message-ID: <14765.32888.769808.560154@cj42289-a.reston1.va.home.com>

On Wed, Aug 30, 2000 at 10:09:16AM -0400, Fred L. Drake, Jr. wrote:
 >   A bit safer, I think, but this maintains the backward compatibility
 > issue.

timo at timo-tasi.org writes:
 > Is this true?
 >   Marshal is backwards compatible to Pickle?
 > 
 > If it is true, that'd be kinda cool.

  Would be, but my statement wasn't clear: it maintains the *issue*,
not compatibility.  ;(  The data formats are not interchangeable in any
way.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Thu Aug 31 00:54:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 30 Aug 2000 18:54:25 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52415.747655.334938@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEPOHCAA.tim_one@email.msn.com>

[Skip Montanaro]
> ...
> What we really want I think is something that evokes the following in the
> mind of the reader
>
>     for i from START to END incrementing by STEP:
>
> without gobbling up all those keywords.

Note that they needn't be keywords, though, any more than "as" became a
keyword in the new "import x as y".  I love the Haskell notation in Haskell
because it fits so nicely with "infinite" lists there too.  I'm not sure
about in Python -- 100s of languages have straightforward integer index
generation, and Python's range(len(seq)) is hard to see as much more than
gratuitous novelty when viewed against that background.

    for i = 1 to 10:           #  1 to 10 inclusive
    for i = 10 to 1 by -1:     #  10 down to 1 inclusive
    for i = 1 upto 10:         #  1 to 9 inclusive
    for i = 10 upto 1 by -1:   #  10 down to 2 inclusive

are all implementable right now without new keywords, and would pretty much
*have* to be "efficient" from the start because they make no pretense at
being just one instance of an infinitely extensible object iteration
protocol.  They are what they are, and that's it -- simplicity isn't
*always* a bad thing <wink>.

>     for i in [START..END,STEP]:
>     for i in [START:END:STEP]:
>     for i in [START..END:STEP]:

The difference in easy readability should squawk for itself.

>     for i in 0 ..! len(a):
>         a[i] += 1

Looks like everybody hates that, and that's understandable, but I can't
imagine why

     for i in 0 .. len(a)-1:

isn't *equally* hated!  Requiring "-1" in the most common case is simply bad
design.  Check out the Python-derivative CORBAscript, where Python's "range"
was redefined to *include* the endpoint.  Virtually every program I've seen
in it bristles with ugly

    for i in range(len(a)-1)

lines.  Yuck.

but-back-to-2.0-ly y'rs  - tim





From jeremy at beopen.com  Thu Aug 31 01:34:14 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 30 Aug 2000 19:34:14 -0400 (EDT)
Subject: [Python-Dev] Release deadline looming (patches by Aug. 31)
Message-ID: <14765.39414.944199.794554@bitdiddle.concentric.net>

[Apologies for the short notice here; this was lost in a BeOpen mail
server for about 24 hours.]

We are still on schedule to release 2.0b1 on Sept. 5 (Tuesday).  There
are a few outstanding items that we need to resolve.  In order to
leave time for the administrivia necessary to produce a release, we will
need to have a code freeze soon.

Guido says that typically, all the patches should be in two days
before the release.  The two-day deadline may be earlier than
expected, because Monday is a holiday in the US and at BeOpen.  So two
days before the release is midnight Thursday.

That's right.  All patches need to be completed by Aug. 31 at
midnight.  If this deadline is missed, the change won't make it into
2.0b1.

If you've got bugs assigned to you with a priority higher than 5,
please try to take a look at them before the deadline.

Jeremy



From jeremy at beopen.com  Thu Aug 31 03:21:23 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 30 Aug 2000 21:21:23 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
Message-ID: <14765.45843.401319.187156@bitdiddle.concentric.net>

>>>>> "AMK" == A M Kuchling <amk1 at erols.com> writes:

  AMK> (Are marshals safer than pickles?  What if SerialCookie used
  AMK> marshal instead?)

I would guess that pickle makes attacks easier: It has more features,
e.g. creating instances of arbitrary classes (provided that the attacker
knows what classes are available).

But neither marshal nor pickle is safe.  It is possible to cause a
core dump by passing marshal invalid data.  It may also be possible to
launch a stack overflow attack -- not sure.
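The class of attack that pickle enables can be sketched in a few lines (a
modern illustration of the point, with a deliberately harmless callable
standing in for something like os.system):

```python
import pickle

# A malicious pickle invokes an arbitrary callable on load; __reduce__
# controls what gets called. Here the callable is harmless (abs), but
# an attacker could just as easily name os.system.
class Evil:
    def __reduce__(self):
        return (abs, (-42,))

payload = pickle.dumps(Evil())
print(pickle.loads(payload))  # abs(-42) runs inside loads() -> 42
```

The call happens during unpickling itself, before the caller ever sees the
resulting object.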

Jeremy



From gstein at lyra.org  Thu Aug 31 03:53:10 2000
From: gstein at lyra.org (Greg Stein)
Date: Wed, 30 Aug 2000 18:53:10 -0700
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.45843.401319.187156@bitdiddle.concentric.net>; from jeremy@beopen.com on Wed, Aug 30, 2000 at 09:21:23PM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net>
Message-ID: <20000830185310.I3278@lyra.org>

On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
>...
> But neither marshal nor pickle is safe.  It is possible to cause a
> core dump by passing marshal invalid data.  It may also be possible to
> launch a stack overflow attack -- not sure.

I believe those core dumps were fixed. Seems like I remember somebody doing
some work on that.

??


Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From greg at cosc.canterbury.ac.nz  Thu Aug 31 03:47:10 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 13:47:10 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <04d101c01286$7444d6c0$0900a8c0@SPIFF>
Message-ID: <200008310147.NAA17316@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <fredrik at pythonware.com>:

> footnote: "..." isn't really token:

Whatever it is technically, it's an existing part of the
language, and it seems redundant and confusing to introduce
another very similar one.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From jeremy at beopen.com  Thu Aug 31 03:55:24 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 30 Aug 2000 21:55:24 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830185310.I3278@lyra.org>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
	<14765.45843.401319.187156@bitdiddle.concentric.net>
	<20000830185310.I3278@lyra.org>
Message-ID: <14765.47884.801312.292059@bitdiddle.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

  GS> On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
  >> ...  But neither marshal nor pickle is safe.  It is possible to
  >> cause a core dump by passing marshal invalid data.  It may also
  >> be possible to launch a stack overflow attack -- not sure.

  GS> I believe those core dumps were fixed. Seems like I remember
  GS> somebody doing some work on that.

  GS> ??

Aha!  I hadn't noticed that patch sneaking in.  I brought it up with
Guido a few months ago and he didn't want to make changes to marshal
because, IIRC, marshal exists only because .pyc files need it.

Jeremy



From greg at cosc.canterbury.ac.nz  Thu Aug 31 03:59:34 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 13:59:34 +1200 (NZST)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.32888.769808.560154@cj42289-a.reston1.va.home.com>
Message-ID: <200008310159.NAA17320@s454.cosc.canterbury.ac.nz>

"Fred L. Drake, Jr." <fdrake at beopen.com>:

> it maintains the *issue*, not compatibility.  ;( 

A confusing choice of word! Usually one only talks about
"maintaining" something that one *wants* maintained...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 31 04:33:36 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 14:33:36 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEPOHCAA.tim_one@email.msn.com>
Message-ID: <200008310233.OAA17325@s454.cosc.canterbury.ac.nz>

Tim Peters <tim_one at email.msn.com>:

> I can't imagine why
> 
>     for i in 0 .. len(a)-1:
> 
> isn't *equally* hated!  Requiring "-1" in the most common case is simply bad
> design.

I agree with that. I didn't mean to suggest that I thought it was
a good idea.

The real problem is in defining a..b to include b, which gives
you a construct that is intuitive but not very useful in the
context of the rest of the language.

On the other hand, if a..b *doesn't* include b, it's more
useful, but less intuitive.

(By "intuitive" here, I mean "does what you would expect based
on your experience with similar notations in other programming
languages or in mathematics".)

I rather like the a:b idea, because it ties in with the half-open 
property of slices. Unfortunately, it gives the impression that
you should be able to say

   a = [1,2,3,4,5,6]
   b = 2:5
   c = a[b]

and get c == [3,4,5].
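For what it's worth, the slice() builtin already spells exactly that
object, just without the literal syntax (a sketch):

```python
a = [1, 2, 3, 4, 5, 6]
b = slice(2, 5)   # the object a hypothetical "2:5" literal would create
c = a[b]
print(c)          # [3, 4, 5] -- half-open, like ordinary slicing
```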

>    for i = 1 to 10:           #  1 to 10 inclusive

Endpoint problem again. You would be forever saying

   for i = 0 to len(a)-1:

I do like the idea of keywords, however. All we need to do
is find a way of spelling

   for i = 0 uptobutnotincluding len(a):

without running out of breath.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 31 04:37:00 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 14:37:00 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <39ACF6FB.66BAB739@nowonder.de>
Message-ID: <200008310237.OAA17328@s454.cosc.canterbury.ac.nz>

Peter Schneider-Kamp <nowonder at nowonder.de>:

> As far as I know adding a builtin indices() has been
> rejected as an idea.

But why? I know it's been suggested, but I don't remember seeing any
convincing arguments against it. Or much discussion at all.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 31 04:57:07 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 14:57:07 +1200 (NZST)
Subject: [Python-Dev] Pragmas: Just say "No!"
In-Reply-To: <Pine.LNX.4.10.10008291316590.23391-100000@akbar.nevex.com>
Message-ID: <200008310257.OAA17332@s454.cosc.canterbury.ac.nz>

Greg Wilson <gvwilson at nevex.com>:

> Pragmas are a way to embed programs for the
> parser in the file being parsed.

I hope the BDFL has the good sense to run screaming from
anything that has the word "pragma" in it. As this discussion
demonstrates, it's far too fuzzy and open-ended a concept --
nobody can agree on what sort of thing a pragma is supposed
to be.

INTERVIEWER: Tell us how you came to be drawn into the
world of pragmas.

COMPILER WRITER: Well, it started off with little things. Just
a few boolean flags, a way to turn asserts on and off, debug output,
that sort of thing. I thought, what harm can it do? It's not like
I'm doing anything you couldn't do with command line switches,
right? Then it got a little bit heavier, integer values for
optimisation levels, even the odd string or two. Before I
knew it I was doing the real hard stuff, constant expressions,
conditionals, the whole shooting box. Then one day when I put
in a hook for making arbitrary calls into the interpreter, that
was when I finally realised I had a problem...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From trentm at ActiveState.com  Thu Aug 31 06:34:44 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 30 Aug 2000 21:34:44 -0700
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830185310.I3278@lyra.org>; from gstein@lyra.org on Wed, Aug 30, 2000 at 06:53:10PM -0700
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net> <20000830185310.I3278@lyra.org>
Message-ID: <20000830213444.C20461@ActiveState.com>

On Wed, Aug 30, 2000 at 06:53:10PM -0700, Greg Stein wrote:
> On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
> >...
> > But neither marshal nor pickle is safe.  It is possible to cause a
> > core dump by passing marshal invalid data.  It may also be possible to
> > launch a stack overflow attack -- not sure.
> 
> I believe those core dumps were fixed. Seems like I remember somebody doing
> some work on that.
> 
> ??

Nope, I think that there may have been a few small patches but the
discussions to fix some "brokenness" in marshal did not bear fruit:

http://www.python.org/pipermail/python-dev/2000-June/011132.html


Oh, I take that back. Here is a patch that supposedly fixed some core dumping:

http://www.python.org/pipermail/python-checkins/2000-June/005997.html
http://www.python.org/pipermail/python-checkins/2000-June/006029.html
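Either way, it is easy to check what marshal does with junk input. In
current CPython, malformed data raises ValueError rather than dumping core
(an illustrative sketch of today's behaviour, not a guarantee for 2.0):

```python
import marshal

# Feed marshal deliberately invalid data; a hardened loader should
# raise an exception rather than crash the interpreter.
try:
    marshal.loads(b"\xff\xff\xff\xff")
    print("accepted (unexpected)")
except ValueError as exc:
    print("rejected:", exc)
```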


Trent


-- 
Trent Mick
TrentM at ActiveState.com



From bwarsaw at beopen.com  Thu Aug 31 06:50:20 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 00:50:20 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
References: <LNBBLJKPBEHFEDALKOLCCEPOHCAA.tim_one@email.msn.com>
	<200008310233.OAA17325@s454.cosc.canterbury.ac.nz>
Message-ID: <14765.58380.529345.814715@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    GE> I do like the idea of keywords, however. All we need to do
    GE> is find a way of spelling

    GE>    for i = 0 uptobutnotincluding len(a):

    GE> without running out of breath.

for i until len(a):

-Barry



From nhodgson at bigpond.net.au  Thu Aug 31 08:21:06 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Thu, 31 Aug 2000 16:21:06 +1000
Subject: [Python-Dev] Pragmas: Just say "No!"
References: <200008310257.OAA17332@s454.cosc.canterbury.ac.nz>
Message-ID: <005301c01313$a66a3ae0$8119fea9@neil>

Greg Ewing:
> Greg Wilson <gvwilson at nevex.com>:
>
> > Pragmas are a way to embed programs for the
> > parser in the file being parsed.
>
> I hope the BDFL has the good sense to run screaming from
> anything that has the word "pragma" in it. As this discussion
> demonstrates, it's far too fuzzy and open-ended a concept --
> nobody can agree on what sort of thing a pragma is supposed
> to be.

   It is a good idea, however, to claim a piece of syntactic turf as early
as possible so that if/when it is needed, it is unlikely to cause problems
with previously written code. My preference would be to introduce a reserved
word 'directive' for future expansion here. 'pragma' has connotations of
'ignorable compiler hint' but most of the proposed compiler directives will
cause incorrect behaviour if ignored.

   Neil





From m.favas at per.dem.csiro.au  Thu Aug 31 08:11:31 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 31 Aug 2000 14:11:31 +0800
Subject: [Python-Dev] Threads & autoconf
Message-ID: <39ADF713.53E6B37D@per.dem.csiro.au>

[Thomas]
>I'm trying to clean up the autoconf (and README) mess wrt. threads a bit,
>but I think I need some hints ;) I can't figure out why there is a separate
>--with-dec-threads option... Is there a reason it can't be autodetected like
>we do for other thread systems ? Does DEC Unix do something very different
>but functional when leaving out the '-threads' option (which is the only
>thing -dec- adds) or is it just "hysterical raisins" ?

Yes, DEC Unix does do something very different without the "-threads"
option to the "cc" line that finally builds the python executable - the
following are unresolved:

cc   python.o \
          ../libpython2.0.a -L/usr/local/lib -ltk8.0 -ltcl8.0 -lX11   
-ldb     
 -L/usr/local/lib -lz  -lnet  -lpthreads -lm  -o python 
ld:
Unresolved:
_PyGC_Insert
_PyGC_Remove
__pthread_mutex_init
__pthread_mutex_destroy
__pthread_mutex_lock
__pthread_mutex_unlock
__pthread_cond_init
__pthread_cond_destroy
__pthread_cond_signal
__pthread_cond_wait
__pthread_create
__pthread_detach
make[1]: *** [link] Error 1

So, it is still needed. It should be possible, though, to detect that
the system is OSF1 during configure and set this without having to do
"--with-dec-threads". I think DEC/Compaq/Tru64 Unix is the only
current Unix that reports itself as OSF1. If there are other legacy
systems that do, and don't need "-threads", they could do "configure
--without-dec-threads" <grin>.

Mark
 
-- 
Email - m.favas at per.dem.csiro.au        Postal - Mark C Favas
Phone - +61 8 9333 6268, 041 892 6074            CSIRO Exploration & Mining
Fax   - +61 8 9333 6121                          Private Bag No 5
                                                 Wembley, Western Australia 6913



From effbot at telia.com  Thu Aug 31 08:41:20 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 08:41:20 +0200
Subject: [Python-Dev] Cookie.py security
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net>
Message-ID: <004301c01316$7ef57e40$766940d5@hagrid>

jeremy wrote:
> I would guess that pickle makes attacks easier: It has more features,
> e.g. creating instances of arbitrary classes (provided that the attacker
> knows what classes are available).

well, if nothing else, he's got the whole standard library to
play with...

:::

(I haven't looked at the cookie code, so I don't really know
what I'm talking about here)

can't you force the user to pass in a list of valid classes to
the cookie constructor, and use a subclass of pickle.Unpickler
to get a little more control over what's imported:

    class myUnpickler(Unpickler):
        def __init__(self, data, classes):
            self.__classes = classes
            Unpickler.__init__(self, StringIO.StringIO(data))
        def find_class(self, module, name):
            for cls in self.__classes:
                if cls.__module__ == module and cls.__name__ == name:
                    return cls
            raise SystemError, "failed to import class"

> But neither marshal nor pickle is safe.  It is possible to cause a
> core dump by passing marshal invalid data.  It may also be possible to
> launch a stack overflow attack -- not sure.

</F>




From fdrake at beopen.com  Thu Aug 31 09:09:33 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 03:09:33 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <200008310702.AAA32318@slayer.i.sourceforge.net>
References: <200008310702.AAA32318@slayer.i.sourceforge.net>
Message-ID: <14766.1197.987441.118202@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Fix grouping: this is how I intended it, misguided as I was in boolean
 > operator associativity.

  And to think I spent time digging out my reference material to make
sure I didn't change anything!
  This is why compilers have warnings like that!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Thu Aug 31 09:22:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 09:22:13 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <14766.1197.987441.118202@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Aug 31, 2000 at 03:09:33AM -0400
References: <200008310702.AAA32318@slayer.i.sourceforge.net> <14766.1197.987441.118202@cj42289-a.reston1.va.home.com>
Message-ID: <20000831092213.G12695@xs4all.nl>

On Thu, Aug 31, 2000 at 03:09:33AM -0400, Fred L. Drake, Jr. wrote:

> Thomas Wouters writes:
>  > Fix grouping: this is how I intended it, misguided as I was in boolean
>  > operator associativity.

>   And to think I spent time digging out my reference material to make
> sure I didn't change anything!

Well, if you'd dug out the PEP, you'd have known what way the parentheses
were *intended* to go :-) 'HASINPLACE' is a macro that does a
Py_HasFeature() for the _inplace_ struct members, and those struct members
shouldn't be dereferenced if HASINPLACE is false :)

>   This is why compilers have warnings like that!

Definitely ! Now if only there was a permanent way to add -Wall.... hmm...
Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From m.favas at per.dem.csiro.au  Thu Aug 31 09:23:43 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 31 Aug 2000 15:23:43 +0800
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
Message-ID: <39AE07FF.478F413@per.dem.csiro.au>

(Tru64 Unix) - test_gettext fails with the message:
IOError: [Errno 0] Bad magic number: './xx/LC_MESSAGES/gettext.mo'

This is because the magic number is read in by the code in
Lib/gettext.py as FFFFFFFF950412DE (hex) (using unpack('<i',
buf[:4])[0]), and checked against LE_MAGIC (defined as 950412DE) and
BE_MAGIC (calculated as FFFFFFFFDE120495 using
struct.unpack('>i', struct.pack('<i', LE_MAGIC))[0]). These format
strings work for machines where a Python integer is the same size as
a C int, but not for machines where a Python integer is larger than a
C int. The problem arises because the LE_MAGIC number is negative as
a 32-bit int, but positive when Python integers are 64-bit. Replacing
the "i" in the code that generates BE_MAGIC and reads in "magic" with
"I" makes the test work for me, but there are other uses of "i" and
"ii" when the rest of the .mo file is processed that I'm unsure about
with different inputs.
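The signedness mismatch is easy to reproduce with struct directly (a
sketch; with an explicit '<' prefix, 'i' is always a 4-byte signed field
and 'I' its unsigned counterpart, regardless of the platform's int width):

```python
import struct

LE_MAGIC = 0x950412de  # GNU .mo little-endian magic number

buf = struct.pack("<I", LE_MAGIC)
signed = struct.unpack("<i", buf)[0]    # negative: the high bit is set
unsigned = struct.unpack("<I", buf)[0]  # always 0x950412de
print(hex(signed), hex(unsigned))
```

The signed reading comes out as LE_MAGIC - 2**32, which is why the
comparison against a positive LE_MAGIC fails.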

Mark
-- 
Email - m.favas at per.dem.csiro.au        Postal - Mark C Favas
Phone - +61 8 9333 6268, 041 892 6074            CSIRO Exploration & Mining
Fax   - +61 8 9333 6121                          Private Bag No 5
                                                 Wembley, Western Australia 6913



From tim_one at email.msn.com  Thu Aug 31 09:24:35 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 03:24:35 -0400
Subject: [Python-Dev] FW: test_largefile cause kernel panic in Mac OS X DP4
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBJHDAA.tim_one@email.msn.com>


-----Original Message-----
From: python-list-admin at python.org
[mailto:python-list-admin at python.org]On Behalf Of Sachin Desai
Sent: Thursday, August 31, 2000 2:49 AM
To: python-list at python.org
Subject: test_largefile cause kernel panic in Mac OS X DP4



Has anyone experienced this. I updated my version of python to the latest
source from the CVS repository and successfully built it. Upon executing a
"make test", my machine ended up in a kernel panic when the test being
executed was "test_largefile".

My configuration is:
    Powerbook G3
    128M RAM
    Mac OS X DP4

I guess my next step is to log a bug with Apple.




-- 
http://www.python.org/mailman/listinfo/python-list




From fdrake at beopen.com  Thu Aug 31 09:37:24 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 03:37:24 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <20000831092213.G12695@xs4all.nl>
References: <200008310702.AAA32318@slayer.i.sourceforge.net>
	<14766.1197.987441.118202@cj42289-a.reston1.va.home.com>
	<20000831092213.G12695@xs4all.nl>
Message-ID: <14766.2868.120933.306616@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Definitely ! Now if only there was a permanent way to add -Wall.... hmm...
 > Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)

  I'd be happy with this.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From moshez at math.huji.ac.il  Thu Aug 31 09:45:19 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 31 Aug 2000 10:45:19 +0300 (IDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects
 abstract.c,2.49,2.50
In-Reply-To: <14766.2868.120933.306616@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008311045010.20952-100000@sundial>

On Thu, 31 Aug 2000, Fred L. Drake, Jr. wrote:

> 
> Thomas Wouters writes:
>  > Definitely ! Now if only there was a permanent way to add -Wall.... hmm...
>  > Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)
> 
>   I'd be happy with this.

For 2.1, I suggest going for -Werror too.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Thu Aug 31 10:06:01 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 10:06:01 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <Pine.GSO.4.10.10008311045010.20952-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 31, 2000 at 10:45:19AM +0300
References: <14766.2868.120933.306616@cj42289-a.reston1.va.home.com> <Pine.GSO.4.10.10008311045010.20952-100000@sundial>
Message-ID: <20000831100601.H12695@xs4all.nl>

On Thu, Aug 31, 2000 at 10:45:19AM +0300, Moshe Zadka wrote:
> > Thomas Wouters writes:
> >  > Definitely ! Now if only there was a permanent way to add -Wall.... hmm...
> >  > Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)

> For 2.1, I suggest going for -Werror too.

No, don't think so. -Werror is severe: it would cause compile-failures on
systems not quite the same as ours. For instance, when using
Linux-2.4.0-test-kernels (bleeding edge ;) I consistently get a warning
about a redefine in <sys/resource.h>. That isn't Python's fault, and we
can't do anything about it, but with -Werror it would cause
compile-failures. The warning is annoying, but not fatal.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From bwarsaw at beopen.com  Thu Aug 31 12:47:34 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 06:47:34 -0400 (EDT)
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
References: <39AE07FF.478F413@per.dem.csiro.au>
Message-ID: <14766.14278.609327.610929@anthem.concentric.net>

>>>>> "MF" == Mark Favas <m.favas at per.dem.csiro.au> writes:

    MF> This is because the magic number is read in by the code in
    MF> Lib/gettext.py as FFFFFFFF950412DE (hex) (using unpack('<i',
    MF> buf[:4])[0]), and checked against LE_MAGIC (defined as
    MF> 950412DE) and BE_MAGIC (calculated as FFFFFFFFDE120495 using
    MF> struct.unpack('>i',struct.pack('<i', LE_MAGIC))[0])

I was trying to be too clever.  Just replace the BE_MAGIC value with
0xde120495, as in the included patch.

    MF> Replacing the "i" in the code that generates BE_MAGIC and
    MF> reads in "magic" by "I" makes the test work for me, but
    MF> there's other uses of "i" and "ii" when the rest of the .mo
    MF> file is processed that I'm unsure about with different inputs.

Should be fine, I think.  With < and > leading characters, those
format strings should select `standard' sizes:

    Standard size and alignment are as follows: no alignment is
    required for any type (so you have to use pad bytes); short is 2
    bytes; int and long are 4 bytes. float and double are 32-bit and
    64-bit IEEE floating point numbers, respectively.
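A quick sanity check of those standard sizes (a sketch; calcsize reports
the packed width of a format string):

```python
import struct

# With a '<' or '>' prefix, struct uses the standard sizes quoted
# above, independent of the platform's native C type widths.
assert struct.calcsize("<h") == 2  # short
assert struct.calcsize("<i") == 4  # int
assert struct.calcsize("<l") == 4  # long (standard size, not native)
assert struct.calcsize("<d") == 8  # double
print("standard sizes confirmed")
```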

Please run the test again with this patch and let me know.
-Barry

Index: gettext.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/gettext.py,v
retrieving revision 1.4
diff -u -r1.4 gettext.py
--- gettext.py	2000/08/30 03:29:58	1.4
+++ gettext.py	2000/08/31 10:40:41
@@ -125,7 +125,7 @@
 class GNUTranslations(NullTranslations):
     # Magic number of .mo files
     LE_MAGIC = 0x950412de
-    BE_MAGIC = struct.unpack('>i', struct.pack('<i', LE_MAGIC))[0]
+    BE_MAGIC = 0xde120495
 
     def _parse(self, fp):
         """Override this method to support alternative .mo formats."""




From mal at lemburg.com  Thu Aug 31 14:33:28 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 14:33:28 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
Message-ID: <39AE5098.36746F4B@lemburg.com>

"Martin v. Loewis" wrote:
> 
> > So the check would look something like this:
> >
> > if (tstate->recursion_depth >= 50 &&
> >     tstate->recursion_depth%10 == 0 &&
> >     PyOS_CheckStack()) {
> >                 PyErr_SetString(PyExc_MemoryError, "Stack overflow");
> >                 return NULL;
> >         }
> 
> That sounds like a good solution to me. A recursion depth of 50 should
> be guaranteed on most systems supported by Python.

Jeremy: Could you get at least this patch into 2.0b1 ?
 
> > I'm not exactly sure how large the safety margin is with
> > Martin's patch, but this seems a good idea.
> 
> I chose 3% of the rlimit, which must accommodate the space above the
> known start of stack plus a single page. That number was chosen
> arbitrarily; on my Linux system, the stack limit is 8MB, so 3% gives
> about 240k. Given the maximum limitation of environment pages and argv
> pages, I felt that this is safe enough. OTOH, if you've used more than
> 7MB of stack, it is likely that the last 240k won't help, either.

Looks like I don't have any limits set on my dev-machine...
Linux has no problems offering me 3GB of (virtual) stack space
even though it only has 64MB real memory and 200MB swap
space available ;-)

I guess the proposed user-settable recursion depth limit is the
best way to go. Finding the right limit is easy enough with a
bit of trial and error in Python.

At least for my Linux installation a limit of 9000 seems
reasonable. Perhaps everybody on the list could do a quick
check on their platform ?

Here's a sample script:

i = 0
def foo(x):
    global i
    print i
    i = i + 1
    foo(x)

foo(None)
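A gentler variant of the same probe (a sketch in modern Python, where
blowing the interpreter's recursion limit raises RecursionError instead of
letting the process fall over):

```python
import sys

def probe(n=0):
    """Recurse until the interpreter's limit stops us; return the depth."""
    try:
        return probe(n + 1)
    except RecursionError:
        return n

print("recursion limit:", sys.getrecursionlimit())
print("reached depth:", probe())
```

The reported depth comes out a little below sys.getrecursionlimit(),
since some frames are already on the stack when probe() starts.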

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From gstein at lyra.org  Thu Aug 31 14:48:04 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 05:48:04 -0700
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AE5098.36746F4B@lemburg.com>; from mal@lemburg.com on Thu, Aug 31, 2000 at 02:33:28PM +0200
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com>
Message-ID: <20000831054804.A3278@lyra.org>

On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:
>...
> At least for my Linux installation a limit of 9000 seems
> reasonable. Perhaps everybody on the list could do a quick
> check on their platform ?
> 
> Here's a sample script:
> 
> i = 0
> def foo(x):
>     global i
>     print i
>     i = i + 1
>     foo(x)
> 
> foo(None)

10k iterations on my linux box

-g

-- 
Greg Stein, http://www.lyra.org/



From thomas at xs4all.net  Thu Aug 31 14:46:45 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 14:46:45 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AE5098.36746F4B@lemburg.com>; from mal@lemburg.com on Thu, Aug 31, 2000 at 02:33:28PM +0200
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com>
Message-ID: <20000831144645.I12695@xs4all.nl>

On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:

> At least for my Linux installation a limit of 9000 seems
> reasonable. Perhaps everybody on the list could do a quick
> check on their platform ?

On BSDI, which has a 2Mbyte default stack limit (but soft limit: users can
set it higher even without help from root, and much higher with help) I can
go as high as 8k recursions of the simple python-function type, and 5k
recursions of one involving a C call (like a recursive __str__()).

I don't remember ever seeing a system with less than 2Mbyte stack, except
for seriously memory-deprived systems. I do know that the 2Mbyte stack limit
on BSDI is enough to cause 'pine' (sucky but still popular mail program) much
distress when handling large mailboxes, so we usually set the limit higher
anyway.

Mutt-forever-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Thu Aug 31 15:32:41 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 15:32:41 +0200
Subject: [Python-Dev] Pragmas: Just say "No!"
References: <200008310257.OAA17332@s454.cosc.canterbury.ac.nz> <005301c01313$a66a3ae0$8119fea9@neil>
Message-ID: <39AE5E79.C2C91730@lemburg.com>

Neil Hodgson wrote:
> 
> Greg Ewing:
> > Greg Wilson <gvwilson at nevex.com>:
> >
> > > Pragmas are a way to embed programs for the
> > > parser in the file being parsed.
> >
> > I hope the BDFL has the good sense to run screaming from
> > anything that has the word "pragma" in it. As this discussion
> > demonstrates, it's far too fuzzy and open-ended a concept --
> > nobody can agree on what sort of thing a pragma is supposed
> > to be.
> 
>    It is a good idea, however, to claim a piece of syntactic turf as early
> as possible so that if/when it is needed, it is unlikely to cause problems
> with previously written code. My preference would be to introduce a reserved
> word 'directive' for future expansion here. 'pragma' has connotations of
> 'ignorable compiler hint' but most of the proposed compiler directives will
> cause incorrect behaviour if ignored.

The objectives behind the "pragma" statement should be clear
by now. If it's just the word itself that's bugging you, then
we can have a separate discussion on that. Perhaps "assume"
or "declare" would be better candidates.

We need some kind of logic of this sort in Python. Otherwise
important features like source code encoding will not be
possible.

As I said before, I'm not advocating adding compiler
programs to Python, just a simple way of passing information
to the compiler.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From nascheme at enme.ucalgary.ca  Thu Aug 31 15:53:21 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Thu, 31 Aug 2000 07:53:21 -0600
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.45843.401319.187156@bitdiddle.concentric.net>; from Jeremy Hylton on Wed, Aug 30, 2000 at 09:21:23PM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net>
Message-ID: <20000831075321.A3099@keymaster.enme.ucalgary.ca>

On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
> I would guess that pickle makes attacks easier: It has more features,
> e.g. creating instances of arbitrary classes (provided that the attacker
> knows what classes are available).

marshal can handle code objects.  That seems pretty scary to me.  I
would vote for not including these insecure classes in the standard
distribution.  Software that expects them should include its own
version of Cookie.py or be fixed.

  Neil



From mal at lemburg.com  Thu Aug 31 15:58:55 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 15:58:55 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <20000831144645.I12695@xs4all.nl>
Message-ID: <39AE649F.A0E818C1@lemburg.com>

Thomas Wouters wrote:
> 
> On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:
> 
> > At least for my Linux installation a limit of 9000 seems
> > reasonable. Perhaps everybody on the list could do a quick
> > check on their platform ?
> 
> On BSDI, which has a 2Mbyte default stack limit (but soft limit: users can
> set it higher even without help from root, and much higher with help) I can
> go as high as 8k recursions of the simple python-function type, and 5k
> recursions of one involving a C call (like a recursive __str__()).

Ok, this gives us a 5000 limit as default... anyone with less ;-)

(Note that with the limit being user settable making a lower limit
 the default shouldn't hurt anyone.)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Thu Aug 31 16:06:23 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 16:06:23 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AE649F.A0E818C1@lemburg.com>; from mal@lemburg.com on Thu, Aug 31, 2000 at 03:58:55PM +0200
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <20000831144645.I12695@xs4all.nl> <39AE649F.A0E818C1@lemburg.com>
Message-ID: <20000831160623.J12695@xs4all.nl>

On Thu, Aug 31, 2000 at 03:58:55PM +0200, M.-A. Lemburg wrote:
> Thomas Wouters wrote:

> > On BSDI, which has a 2Mbyte default stack limit (but soft limit: users can
> > set it higher even without help from root, and much higher with help) I can
> > go as high as 8k recursions of the simple python-function type, and 5k
> > recursions of one involving a C call (like a recursive __str__()).

> Ok, this gives us a 5000 limit as default... anyone with less ;-)

I would suggest going for something a lot less than 5000, though, to account
for 'large' frames. Say, 2000 or so, max.

> (Note that with the limit being user settable making a lower limit
>  the default shouldn't hurt anyone.)

Except that it requires yet another step ... ;P It shouldn't hurt anyone if
it isn't *too* low. However, I have no clue how 'high' it would have to be
for, for instance, Zope, or any of the other 'large' Python apps.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jack at oratrix.nl  Thu Aug 31 16:20:45 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Thu, 31 Aug 2000 16:20:45 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions? 
Message-ID: <20000831142046.20C21303181@snelboot.oratrix.nl>

I'm confused now: how is this counting-stack-limit different from the maximum 
recursion depth we already have?

The whole point of PyOS_StackCheck is to do an _actual_ check of whether 
there's space left for the stack so we can hopefully have an orderly cleanup 
before we hit the hard limit.

If computing it is too difficult because getrlimit isn't available or doesn't 
do what we want we should probe it, as the windows code does or my example 
code posted yesterday does. Note that the testing only has to be done every 
*first* time the stack goes past a certain boundary: the probing can remember 
the deepest currently known valid stack location, and everything that is 
shallower is okay from that point on (making PyOS_StackCheck a subroutine call 
and a compare in the normal case).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From mal at lemburg.com  Thu Aug 31 16:44:09 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 16:44:09 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <20000831142046.20C21303181@snelboot.oratrix.nl>
Message-ID: <39AE6F39.2DAEB3E9@lemburg.com>

Jack Jansen wrote:
> 
> I'm confused now: how is this counting-stack-limit different from the maximum
> recursion depth we already have?
> 
> The whole point of PyOS_StackCheck is to do an _actual_ check of whether
> there's space left for the stack so we can hopefully have an orderly cleanup
> before we hit the hard limit.
> 
> If computing it is too difficult because getrlimit isn't available or doesn't
> do what we want we should probe it, as the windows code does or my example
> code posted yesterday does. Note that the testing only has to be done every
> *first* time the stack goes past a certain boundary: the probing can remember
> the deepest currently known valid stack location, and everything that is
> shallower is okay from that point on (making PyOS_StackCheck a subroutine call
> and a compare in the normal case).

getrlimit() will not always work: in case there is no limit
imposed on the stack, it will return huge numbers (e.g. 2GB)
which wouldn't make any valid assumption possible. 

Note that you can't probe for this since you cannot be sure whether
the OS overcommits memory or not. Linux does this heavily, and
I haven't yet found out why my small C program happily consumes
20MB of memory without segfaulting at recursion level 60000 while Python
already segfaults at recursion level 9xxx with a memory footprint
of around 5MB.

So, at least for Linux, the only safe way seems to make the
limit a user option and to set a reasonably low default.
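
(A sketch of what that getrlimit() check looks like from Python, via the
stdlib resource module; RLIM_INFINITY is the "huge number / no limit" case
described above, from which no useful bound can be derived.)

```python
import resource

# Read the soft and hard stack limits, as getrlimit(RLIMIT_STACK) reports them.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
if soft == resource.RLIM_INFINITY:
    # No limit imposed: the number tells us nothing about real stack space.
    print("stack soft limit: unlimited")
else:
    print("stack soft limit: %d bytes" % soft)
```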

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From cgw at fnal.gov  Thu Aug 31 16:50:01 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 09:50:01 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000831142046.20C21303181@snelboot.oratrix.nl>
References: <20000831142046.20C21303181@snelboot.oratrix.nl>
Message-ID: <14766.28825.35228.221474@buffalo.fnal.gov>

Jack Jansen writes:
 > I'm confused now: how is this counting-stack-limit different from
 > the maximum recursion depth we already have?

Because on Unix the maximum allowable stack space is not fixed (it can
be controlled by "ulimit" or "setrlimit"), so a hard-coded maximum
recursion depth is not appropriate.

 > The whole point of PyOS_StackCheck is to do an _actual_ check of
 > whether there's space left for the stack before we hit the hard limit.

 > If computing it is too difficult because getrlimit isn't available
 > or doesn't do what we want we should probe it

getrlimit is available and works fine.  It's getrusage that is
problematic.

I seriously think that instead of trying to slip this in `under the
wire' we should defer it past 2.0b1 and try to do it right in the
next 2.0.x release.  Getting this stuff right on Unix, portably, is
tricky.  There may be a lot of different tricks required to make this
work right on different flavors of Unix.






From guido at beopen.com  Thu Aug 31 17:58:49 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 10:58:49 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 14:33:28 +0200."
             <39AE5098.36746F4B@lemburg.com> 
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>  
            <39AE5098.36746F4B@lemburg.com> 
Message-ID: <200008311558.KAA15649@cj20424-a.reston1.va.home.com>

> Here's a sample script:
> 
> i = 0
> def foo(x):
>     global i
>     print i
>     i = i + 1
>     foo(x)
> 
> foo(None)

Please try this again on various platforms with this version:

    i = 0
    class C:
      def __getattr__(self, name):
	  global i
	  print i
	  i += 1
	  return self.name # common beginners' mistake

    C() # This tries to get __init__, triggering the recursion

I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
have no idea what units).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug 31 18:07:16 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 11:07:16 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 16:20:45 +0200."
             <20000831142046.20C21303181@snelboot.oratrix.nl> 
References: <20000831142046.20C21303181@snelboot.oratrix.nl> 
Message-ID: <200008311607.LAA15693@cj20424-a.reston1.va.home.com>

> I'm confused now: how is this counting-stack-limit different from
> the maximum recursion depth we already have?
> 
> The whole point of PyOS_StackCheck is to do an _actual_ check of
> whether there's space left for the stack so we can hopefully have an
> orderly cleanup before we hit the hard limit.
> 
> If computing it is too difficult because getrlimit isn't available
> or doesn't do what we want we should probe it, as the windows code
> does or my example code posted yesterday does. Note that the testing
> only has to be done every *first* time the stack goes past a certain
> boundary: the probing can remember the deepest currently known valid
> stack location, and everything that is shallower is okay from that
> point on (making PyOS_StackCheck a subroutine call and a compare in
> the normal case).

The point is that there's no portable way to do PyOS_CheckStack().
Not even for Unix.  So we use a double strategy:

(1) Use a user-settable recursion limit with a conservative default.
This can be done portably.  It is set low by default so that under
reasonable assumptions it will stop runaway recursion long before the
stack is actually exhausted.  Note that Emacs Lisp has this feature
and uses a default of 500.  I would set it to 1000 in Python.  The
occasional user who is fond of deep recursion can set it higher and
tweak his ulimit -s to provide the actual stack space if necessary.

(2) Where implementable, use actual stack probing with
PyOS_CheckStack().  This provides an additional safeguard for e.g. (1)
extensions allocating lots of C stack space during recursion; (2)
users who set the recursion limit too high; (3) long-running server
processes.
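
(Point (1) is essentially the interface that later shipped in CPython as
sys.getrecursionlimit()/sys.setrecursionlimit(); a quick sketch of how it is
used:)

```python
import sys

# The conservative default (1000 on modern CPython) can be raised by users
# fond of deep recursion -- after tweaking `ulimit -s` if the C stack
# actually needs more room.
default = sys.getrecursionlimit()
sys.setrecursionlimit(default * 5)
assert sys.getrecursionlimit() == default * 5
sys.setrecursionlimit(default)  # restore the conservative default
```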

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From cgw at fnal.gov  Thu Aug 31 17:14:02 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 10:14:02 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
	<39ACDA4F.3EF72655@lemburg.com>
	<000d01c0126c$dfe700c0$766940d5@hagrid>
	<39ACE51F.3AEC75AB@lemburg.com>
	<200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
	<39AE5098.36746F4B@lemburg.com>
	<200008311558.KAA15649@cj20424-a.reston1.va.home.com>
Message-ID: <14766.30266.156124.961607@buffalo.fnal.gov>

Guido van Rossum writes:
 > 
 > I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
 > have no idea what units).

That would be Kb.  But -c controls core-file size, not stack.  
You wanted -s.  ulimit -a shows all the limits.



From guido at beopen.com  Thu Aug 31 18:23:21 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 11:23:21 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 10:14:02 EST."
             <14766.30266.156124.961607@buffalo.fnal.gov> 
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>  
            <14766.30266.156124.961607@buffalo.fnal.gov> 
Message-ID: <200008311623.LAA15877@cj20424-a.reston1.va.home.com>

> That would be Kb.  But -c controls core-file size, not stack.  
> You wanted -s.  ulimit -a shows all the limits.

Typo.  I did use ulimit -s.  ulimit -a confirms that it's 8192 kbytes.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Thu Aug 31 17:24:58 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 17:24:58 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>  
	            <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
Message-ID: <39AE78CA.809E660A@lemburg.com>

Guido van Rossum wrote:
> 
> > Here's a sample script:
> >
> > i = 0
> > def foo(x):
> >     global i
> >     print i
> >     i = i + 1
> >     foo(x)
> >
> > foo(None)
> 
> Please try this again on various platforms with this version:
> 
>     i = 0
>     class C:
>       def __getattr__(self, name):
>           global i
>           print i
>           i += 1
>           return self.name # common beginners' mistake
> 
>     C() # This tries to get __init__, triggering the recursion
> 
> I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
> have no idea what units).

8192 refers to kB, i.e. 8 MB.

I get 6053 on SuSE Linux 6.2 without resource stack limit set.

Strangely enough, if I put the above inside a script, the class
isn't instantiated. The recursion only starts when I manually
trigger C() in interactive mode or do something like
'print C()'. Is this a bug or a feature ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From Vladimir.Marangozov at inrialpes.fr  Thu Aug 31 17:32:29 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 31 Aug 2000 17:32:29 +0200 (CEST)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311558.KAA15649@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 31, 2000 10:58:49 AM
Message-ID: <200008311532.RAA04028@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> Please try this again on various platforms with this version:
> 
>     i = 0
>     class C:
>       def __getattr__(self, name):
> 	  global i
> 	  print i
> 	  i += 1
> 	  return self.name # common beginners' mistake
> 
>     C() # This tries to get __init__, triggering the recursion
> 

            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Are you sure?

Although strange, this is not the case and instantiating C succeeds
(try "python rec.py", where rec.py is the above code).

A closer look at the code shows that Instance_New goes on calling
getattr2 which calls class_lookup, which returns NULL, etc, etc,
but the presence of __getattr__ is not checked in this path.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From trentm at ActiveState.com  Thu Aug 31 17:28:21 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 31 Aug 2000 08:28:21 -0700
Subject: [Python-Dev] FW: test_largefile cause kernel panic in Mac OS X DP4
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBJHDAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 31, 2000 at 03:24:35AM -0400
References: <LNBBLJKPBEHFEDALKOLCEEBJHDAA.tim_one@email.msn.com>
Message-ID: <20000831082821.B3569@ActiveState.com>

Tim (or anyone with python-list logs), can you forward this to Sachin (who
reported the bug).

On Thu, Aug 31, 2000 at 03:24:35AM -0400, Tim Peters wrote:
> 
> 
> -----Original Message-----
> From: python-list-admin at python.org
> [mailto:python-list-admin at python.org]On Behalf Of Sachin Desai
> Sent: Thursday, August 31, 2000 2:49 AM
> To: python-list at python.org
> Subject: test_largefile cause kernel panic in Mac OS X DP4
> 
> 
> 
> Has anyone experienced this. I updated my version of python to the latest
> source from the CVS repository and successfully built it. Upon executing a
> "make test", my machine ended up in a kernel panic when the test being
> executed was "test_largefile".
> 
> My configuration is:
>     Powerbook G3
>     128M RAM
>     Mac OS X DP4
> 
> I guess my next step is to log a bug with Apple.
> 

I added this test module. It would be nice to have a little bit more
information seeing as I have never played on a Mac (OS X acts like BSD under
the hood, right?)

1. Can you tell me, Sachin, *where* in test_largefile it is failing? The file
   is python/dist/src/Lib/test/test_largefile.py. Try running it directly:
   > python Lib/test/test_largefile.py
2. If it dies before it produces any output can you tell me if it died on
   line 18:
      f.seek(2147483649L)
   which, I suppose is possible. Maybe this is not a good way to determine if
   the system has largefile support.


Jeremy, Tim, Guido, 
As with the NetBSD compile bug, I won't have time to fix this by the freeze
today unless I get info from the people who encountered these bugs and
it is *really* easy to fix.


Trent
    

-- 
Trent Mick
TrentM at ActiveState.com



From guido at beopen.com  Thu Aug 31 18:30:48 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 11:30:48 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 17:24:58 +0200."
             <39AE78CA.809E660A@lemburg.com> 
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>  
            <39AE78CA.809E660A@lemburg.com> 
Message-ID: <200008311630.LAA16022@cj20424-a.reston1.va.home.com>

> > Please try this again on various platforms with this version:
> > 
> >     i = 0
> >     class C:
> >       def __getattr__(self, name):
> >           global i
> >           print i
> >           i += 1
> >           return self.name # common beginners' mistake
> > 
> >     C() # This tries to get __init__, triggering the recursion
> > 
> > I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
> > have no idea what units).
> 
> 8192 refers to kB, i.e. 8 MB.
> 
> I get 6053 on SuSE Linux 6.2 without resource stack limit set.
> 
> Strangely enough, if I put the above inside a script, the class
> isn't instantiated. The recursion only starts when I manually
> trigger C() in interactive mode or do something like
> 'print C()'. Is this a bug or a feature ?

Aha.  I was wrong -- it's happening in repr(), not during
construction.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at fnal.gov  Thu Aug 31 17:50:38 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 10:50:38 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311630.LAA16022@cj20424-a.reston1.va.home.com>
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
	<39ACDA4F.3EF72655@lemburg.com>
	<000d01c0126c$dfe700c0$766940d5@hagrid>
	<39ACE51F.3AEC75AB@lemburg.com>
	<200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
	<39AE5098.36746F4B@lemburg.com>
	<200008311558.KAA15649@cj20424-a.reston1.va.home.com>
	<39AE78CA.809E660A@lemburg.com>
	<200008311630.LAA16022@cj20424-a.reston1.va.home.com>
Message-ID: <14766.32462.663536.177308@buffalo.fnal.gov>

Guido van Rossum writes:
 > > > Please try this again on various platforms with this version:
 > > > 
 > > >     i = 0
 > > >     class C:
 > > >       def __getattr__(self, name):
 > > >           global i
 > > >           print i
 > > >           i += 1
 > > >           return self.name # common beginners' mistake
 > > > 
 > > >     C() # This tries to get __init__, triggering the recursion
 > > > 
 > > > I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
 > > > have no idea what units).

I get a core dump after 4824 iterations on a not-quite-Red-Hat box,
with an 8MB stack limit.

What about the idea that was suggested to use a sigsegv catcher?  Or
reading info from /proc (yes, there is a lot of overhead here, but if
we do it infrequently enough we might just get away with it.  It could
be a configure-time option disabled by default).  I still think there
are even more tricks possible here, and we should pursue this after
2.0b1.  I volunteer to help work on it ;-)






From bwarsaw at beopen.com  Thu Aug 31 17:53:19 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 11:53:19 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
	<39ACDA4F.3EF72655@lemburg.com>
	<000d01c0126c$dfe700c0$766940d5@hagrid>
	<39ACE51F.3AEC75AB@lemburg.com>
	<200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
	<39AE5098.36746F4B@lemburg.com>
	<20000831054804.A3278@lyra.org>
Message-ID: <14766.32623.705548.109625@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> 10k iterations on my linux box

9143 on mine.

I'll note that Emacs has a similar concept, embodied in
max-lisp-eval-depth.  The documentation for this variable clearly
states that its purpose is to avoid infinite recursions that would
overflow the C stack and crash Emacs.  On my XEmacs 21.1.10,
max-lisp-eval-depth is 500.  Lisp tends to be more recursive than
Python, but it's also possible that there are fewer ways to `hide'
lots of C stack between Lisp function calls.

So random.choice(range(500, 9143)) seems about right to me <wink>.

-Barry



From jeremy at beopen.com  Thu Aug 31 17:56:20 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 11:56:20 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000831075321.A3099@keymaster.enme.ucalgary.ca>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
	<14765.45843.401319.187156@bitdiddle.concentric.net>
	<20000831075321.A3099@keymaster.enme.ucalgary.ca>
Message-ID: <14766.32804.933498.914265@bitdiddle.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

  NS> On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
  >> I would guess that pickle makes attacks easier: It has more
  >> features, e.g. creating instances of arbitrary classes (provided
  >> that the attacker knows what classes are available).

  NS> marshal can handle code objects.  That seems pretty scary to me.
  NS> I would vote for not including these unsecure classes in the
  NS> standard distribution.  Software that expects them should
  NS> include their own version of Cookie.py or be fixed.

If a server is going to use cookies that contain marshal or pickle
data, they ought to be encrypted or protected by a secure hash.

Jeremy



From effbot at telia.com  Thu Aug 31 19:47:45 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 19:47:45 +0200
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
Message-ID: <008701c01373$95ced1e0$766940d5@hagrid>

iirc, I've been bitten by this a couple of times too
(before I switched to asyncore...)

any special reason why the input socket is unbuffered
by default?

</F>

----- Original Message ----- 
From: "Andy Bond" <bond at dstc.edu.au>
Newsgroups: comp.lang.python
Sent: Thursday, August 31, 2000 8:41 AM
Subject: SocketServer and makefile()


> I've been working with BaseHTTPServer which in turn uses SocketServer to
> write a little web server.  It is used to accept PUT requests of 30MB chunks
> of data.  I was having a problem where data was flowing at the rate of
> something like 64K per second over a 100MB network.  Weird.  Further tracing
> showed that the rfile variable from SocketServer (used to suck in data to
> the http server) was created using makefile on the original socket
> descriptor.  It was created with an option of zero for buffering (see
> SocketServer.py) which means unbuffered.
> 
> Now some separate testing with socket.py showed that I could whip a 30MB
> file across using plain sockets and send/recv but if I made the receiver use
> makefile on the socket and then read, it slowed down to my 1 sec per 64K.
> If I specify a buffer (something big but less than 64K ... IP packet size?)
> then I am back in speedy territory.  The unbuffered mode seems almost like
> it is sending the data 1 char at a time AND this is the default mode used in
> SocketServer and subsequently BaseHTTPServer ...
> 
> This is on solaris 7, python 1.5.2.  Anyone else found this to be a problem
> or am I doing something wrong?
> 
> andy
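
(The behaviour is easy to reproduce with a socket pair: makefile(..., 0)
wraps the socket unbuffered, so every read turns into a tiny recv(), while a
real buffer size restores bulk reads. A sketch, using the Python 3 spelling
of makefile():)

```python
import socket

# A connected pair standing in for the HTTP server's accepted socket.
a, b = socket.socketpair()
a.sendall(b"x" * 4096)

# buffering=0 (the SocketServer default under discussion) forces one recv()
# per read; a sizeable buffer batches the underlying recv() traffic instead.
rfile = b.makefile("rb", buffering=64 * 1024)
data = rfile.read(4096)
assert data == b"x" * 4096

rfile.close()
a.close()
b.close()
```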




From jeremy at beopen.com  Thu Aug 31 20:34:23 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 14:34:23 -0400 (EDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
Message-ID: <14766.42287.968420.289804@bitdiddle.concentric.net>

Is the test for linuxaudiodev supposed to play the Spanish Inquisition
.au file?  I just realized that the test does absolutely nothing on my
machine.  (I guess I need to get my ears to raise an exception if they
don't hear anything.)

I can play the .au file and I use a variety of other audio tools
regularly.  Is Peter still maintaining it or can someone else offer
some assistance?

Jeremy



From guido at beopen.com  Thu Aug 31 21:57:17 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 14:57:17 -0500
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: Your message of "Thu, 31 Aug 2000 14:34:23 -0400."
             <14766.42287.968420.289804@bitdiddle.concentric.net> 
References: <14766.42287.968420.289804@bitdiddle.concentric.net> 
Message-ID: <200008311957.OAA22338@cj20424-a.reston1.va.home.com>

> Is the test for linuxaudiodev supposed to play the Spanish Inquisition
> .au file?  I just realized that the test does absolutely nothing on my
> machine.  (I guess I need to get my ears to raise an exception if they
> don't hear anything.)

Correct.

> I can play the .au file and I use a variety of other audio tools
> regularly.  Is Peter still maintaining it or can someone else offer
> some assistance?

Does your machine have a sound card & speakers?  Mine doesn't.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at fnal.gov  Thu Aug 31 21:04:15 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 14:04:15 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.42287.968420.289804@bitdiddle.concentric.net>
References: <14766.42287.968420.289804@bitdiddle.concentric.net>
Message-ID: <14766.44079.900005.766299@buffalo.fnal.gov>

The problem is that the test file is

audiotest.au: Sun/NeXT audio data: 8-bit ISDN u-law, mono, 8000 Hz

while the linuxaudiodev module seems to be (implicitly) expecting ".wav"
format (Microsoft RIFF).

If you open a .wav file and write it to the linuxaudiodev object, it works.

There is a function in linuxaudiodev to set the audio format; there
doesn't seem to be much documentation, but the source has:

if (!PyArg_ParseTuple(args, "iiii:setparameters",
                          &rate, &ssize, &nchannels, &fmt))
        return NULL;
  
and when I do

x = linuxaudiodev.open('w')
x.setparameters(8000, 1, 8, linuxaudiodev.AFMT_MU_LAW )

I get:
linuxaudiodev.error: (0, 'Error')

I also tried '1' for the sample size, thinking it might be in bytes.

The sample size really ought to be implicit in the format.  

The code in linuxaudiodev.c looks sort of dubious to me.  This model
is a little too simple for the variety of audio hardware and software
on Linux systems.  I have some homebrew audio stuff I've written which
I think works better, but it's nowhere near ready for distribution.
Maybe I'll clean it up and submit it for inclusion post-1.6

In the meantime, you could ship a .wav file for use on Linux (and
Windows?) machines.  (Windows doesn't usually like .au either)







From jeremy at beopen.com  Thu Aug 31 21:11:18 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 15:11:18 -0400 (EDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <200008311957.OAA22338@cj20424-a.reston1.va.home.com>
References: <14766.42287.968420.289804@bitdiddle.concentric.net>
	<200008311957.OAA22338@cj20424-a.reston1.va.home.com>
Message-ID: <14766.44502.812468.677142@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

  >> I can play the .au file and I use a variety of other audio tools
  >> regularly.  Is Peter still maintaining it or can someone else
  >> offer some assistance?

  GvR> Does your machine have a sound card & speakers?  Mine doesn't.

Yes.  (I bought the Cambridge Soundworks speakers that were on my old
machine from CNRI.)

Jeremy



From gstein at lyra.org  Thu Aug 31 21:18:26 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 12:18:26 -0700
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: <008701c01373$95ced1e0$766940d5@hagrid>; from effbot@telia.com on Thu, Aug 31, 2000 at 07:47:45PM +0200
References: <008701c01373$95ced1e0$766940d5@hagrid>
Message-ID: <20000831121826.F11297@lyra.org>

I ran into this same problem on the client side.

The server does a makefile() so that it can do readline() to fetch the HTTP
request line and then the MIME headers. The *problem* is that if you do
something like:

    f = sock.makefile()
    line = f.readline()
    data = sock.recv(1000)

You're screwed if you have buffering enabled. "f" will read in a bunch of
data -- past the end of the line. That data now sits inside f's buffer and
is not available to the sock.recv() call.

If you forget about sock and just stick to f, then you'd be okay. But
SocketServer and/or BaseHTTPServer doesn't -- it uses both objects to do the
reading.
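[Archive note: the pitfall Greg describes is easy to reproduce with a short
sketch against a modern Python; socketpair() stands in for a real client
connection, and the assumption is just that the whole request arrives in one
chunk.]

```python
import socket

# One end sends an HTTP-ish request; the other end mixes a buffered
# file wrapper with a raw recv() -- exactly the bug described above.
a, b = socket.socketpair()
a.sendall(b"GET / HTTP/1.0\r\nHost: example\r\n\r\nBODY")
a.close()

f = b.makefile("rb")    # buffered by default
line = f.readline()     # reads *past* the request line into f's buffer
rest = b.recv(1000)     # typically empty: the remaining bytes already
                        # sit in f's buffer, not in the socket
print(line, rest)
```

Once the headers (and possibly part of the body) are in f's buffer, no
amount of recv() on the underlying socket will get them back.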

Solution? Don't use rfile for reading, but go for the socket itself. Or
revamp the two classes to forget about the socket once the files (wfile and
rfile) are created. The latter might not be possible, tho.

Dunno why the unbuffered reading would be slow. I'd think it would still
read large chunks at a time when you request it.

Cheers,
-g

On Thu, Aug 31, 2000 at 07:47:45PM +0200, Fredrik Lundh wrote:
> iirc, I've been bitten by this a couple of times too
> (before I switched to asyncore...)
> 
> any special reason why the input socket is unbuffered
> by default?
> 
> </F>
> 
> ----- Original Message ----- 
> From: "Andy Bond" <bond at dstc.edu.au>
> Newsgroups: comp.lang.python
> Sent: Thursday, August 31, 2000 8:41 AM
> Subject: SocketServer and makefile()
> 
> 
> > I've been working with BaseHTTPServer which in turn uses SocketServer to
> > write a little web server.  It is used to accept PUT requests of 30MB chunks
> > of data.  I was having a problem where data was flowing at the rate of
> > something like 64K per second over a 100MB network.  Weird.  Further tracing
> > showed that the rfile variable from SocketServer (used to suck in data to
> > the http server) was created using makefile on the original socket
> > descriptor.  It was created with an option of zero for buffering (see
> > SocketServer.py) which means unbuffered.
> > 
> > Now some separate testing with socket.py showed that I could whip a 30MB
> > file across using plain sockets and send/recv but if I made the receiver use
> > makefile on the socket and then read, it slowed down to my 1 sec per 64K.
> > If I specify a buffer (something big but less than 64K ... IP packet size?)
> > then I am back in speedy territory.  The unbuffered mode seems almost like
> > it is sending the data 1 char at a time AND this is the default mode used in
> > SocketServer and subsequently BaseHTTPServer ...
> > 
> > This is on solaris 7, python 1.5.2.  Anyone else found this to be a problem
> > or am I doing something wrong?
> > 
> > andy
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/



From cgw at fnal.gov  Thu Aug 31 21:11:30 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 14:11:30 -0500 (CDT)
Subject: Silly correction to: [Python-Dev] linuxaudiodev test does nothing
Message-ID: <14766.44514.531109.440309@buffalo.fnal.gov>

I wrote:

 >  x.setparameters(8000, 1, 8, linuxaudiodev.AFMT_MU_LAW )

where I meant:

 > x.setparameters(8000, 8, 1, linuxaudiodev.AFMT_MU_LAW )

In fact I tried just about every combination of arguments, closing and
reopening the device each time, but still no go.

I also wrote:

 > Maybe I'll clean it up and submit it for inclusion post-1.6

where of course I meant to say post-2.0b1





From effbot at telia.com  Thu Aug 31 21:46:54 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 21:46:54 +0200
Subject: [Python-Dev] one last SRE headache
Message-ID: <023301c01384$39b2bdc0$766940d5@hagrid>

can anyone tell me how Perl treats this pattern?

    r'((((((((((a))))))))))\41'

in SRE, this is currently a couple of nested groups, surrounding
a single literal, followed by a back reference to the fourth group,
followed by a literal "1" (since there are fewer than 41 groups).

in PRE, it turns out that this is a syntax error; there's no group 41.

however, this test appears in the test suite under the section "all
test from perl", but they're commented out:

# Python does not have the same rules for \\41 so this is a syntax error
#    ('((((((((((a))))))))))\\41', 'aa', FAIL),
#    ('((((((((((a))))))))))\\41', 'a!', SUCCEED, 'found', 'a!'),

if I understand this correctly, Perl treats this as an *octal* escape
(chr(041) == "!").

now, should I emulate PRE, Perl, or leave it as it is...

</F>

PS. in case anyone wondered why I haven't seen this before, it's
because I just discovered that the test suite masks syntax errors
under some circumstances...




From guido at beopen.com  Thu Aug 31 22:48:16 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 15:48:16 -0500
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: Your message of "Thu, 31 Aug 2000 12:18:26 MST."
             <20000831121826.F11297@lyra.org> 
References: <008701c01373$95ced1e0$766940d5@hagrid>  
            <20000831121826.F11297@lyra.org> 
Message-ID: <200008312048.PAA23324@cj20424-a.reston1.va.home.com>

> I ran into this same problem on the client side.
> 
> The server does a makefile() so that it can do readline() to fetch the HTTP
> request line and then the MIME headers. The *problem* is that if you do
> something like:
> 
>     f = sock.makefile()
>     line = f.readline()
>     data = sock.recv(1000)
> 
> You're screwed if you have buffering enabled. "f" will read in a bunch of
> data -- past the end of the line. That data now sits inside f's buffer and
> is not available to the sock.recv() call.
> 
> If you forget about sock and just stick to f, then you'd be okay. But
> SocketServer and/or BaseHTTPServer doesn't -- it uses both objects to do the
> reading.
> 
> Solution? Don't use rfile for reading, but go for the socket itself. Or
> revamp the two classes to forget about the socket once the files (wfile and
> rfile) are created. The latter might not be possible, tho.

I was about to say that you have it backwards, and that you should
only use rfile & wfile, when I realized that CGIHTTPServer.py needs
this!  The subprocess needs to be able to read the rest of the socket,
for POST requests.  So you're right.

Solution?  The buffer size should be an instance or class variable.
Then SocketServer can set it to buffered by default, and CGIHTTPServer
can set it to unbuffered.
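[Archive note: this is essentially the design the standard library adopted;
a sketch against today's socketserver module (the renamed SocketServer),
where the buffer sizes are class variables on StreamRequestHandler that a
subclass can override.]

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    # The buffer sizes are class variables, so a subclass (e.g. a CGI
    # server) can flip them without touching the base implementation.
    rbufsize = -1   # fully buffered reads
    wbufsize = 0    # unbuffered writes

    def handle(self):
        self.wfile.write(self.rfile.readline())

srv = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
with socket.create_connection(srv.server_address) as c:
    c.sendall(b"hello\n")
    reply = c.makefile("rb").readline()
srv.shutdown()
srv.server_close()
print(reply)
```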

> Dunno why the unbuffered reading would be slow. I'd think it would still
> read large chunks at a time when you request it.

System call overhead?  I had the same complaint about Windows, where
apparently winsock makes you pay more of a performance penalty than
Unix does in the same case.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From akuchlin at mems-exchange.org  Thu Aug 31 21:46:03 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 31 Aug 2000 15:46:03 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <023301c01384$39b2bdc0$766940d5@hagrid>; from effbot@telia.com on Thu, Aug 31, 2000 at 09:46:54PM +0200
References: <023301c01384$39b2bdc0$766940d5@hagrid>
Message-ID: <20000831154603.A15688@kronos.cnri.reston.va.us>

On Thu, Aug 31, 2000 at 09:46:54PM +0200, Fredrik Lundh wrote:
>can anyone tell me how Perl treats this pattern?
>    r'((((((((((a))))))))))\41'

>if I understand this correctly, Perl treats this as an *octal* escape
>(chr(041) == "!").

Correct.  From perlre:

       You may have as many parentheses as you wish.  If you have more
       than 9 substrings, the variables $10, $11, ... refer to the
       corresponding substring.  Within the pattern, \10, \11,
       etc. refer back to substrings if there have been at least that
       many left parentheses before the backreference.  Otherwise (for
       backward compatibility) \10 is the same as \010, a backspace,
       and \11 the same as \011, a tab.  And so on.  (\1 through \9
       are always backreferences.)  

In other words, if there were 41 groups, \41 would be a backref to
group 41; if there aren't, it's an octal escape.  This magical
behaviour was deemed not Pythonic, so pre uses a different rule: it's
always a character inside a character class ([\41] isn't a syntax
error), and outside a character class it's a character if there are
exactly 3 octal digits; otherwise it's a backref.  So \41 is a backref
to group 41, but \041 is the literal character ASCII 33.
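[Archive note: the split amk describes survives in the modern re module and
can be checked directly; a short sketch, assuming only current CPython
behaviour.]

```python
import re

# a small digit count after a backslash is a backreference
assert re.match(r"(a)\1", "aa")

# exactly three octal digits is a character: octal 41 == ASCII 33 == '!'
assert re.match(r"\041", "!")

# inside a character class it is always a character, never a backref
assert re.match(r"[\41]", "!")
```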

--amk




From gstein at lyra.org  Thu Aug 31 22:04:18 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 13:04:18 -0700
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: <200008312048.PAA23324@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 31, 2000 at 03:48:16PM -0500
References: <008701c01373$95ced1e0$766940d5@hagrid> <20000831121826.F11297@lyra.org> <200008312048.PAA23324@cj20424-a.reston1.va.home.com>
Message-ID: <20000831130417.K11297@lyra.org>

On Thu, Aug 31, 2000 at 03:48:16PM -0500, Guido van Rossum wrote:
> I wrote:
>...
> > Solution? Don't use rfile for reading, but go for the socket itself. Or
> > revamp the two classes to forget about the socket once the files (wfile and
> > rfile) are created. The latter might not be possible, tho.
> 
> I was about to say that you have it backwards, and that you should
> only use rfile & wfile, when I realized that CGIHTTPServer.py needs
> this!  The subprocess needs to be able to read the rest of the socket,
> for POST requests.  So you're right.

Ooh! I hadn't considered that case. Yes: you can't transfer the contents of
a FILE's buffer to the CGI, but you can pass a file descriptor (the socket).

> Solution?  The buffer size should be an instance or class variable.
> Then SocketServer can set it to buffered by default, and CGIHTTPServer
> can set it to unbuffered.

Seems reasonable.

> > Dunno why the unbuffered reading would be slow. I'd think it would still
> > read large chunks at a time when you request it.
> 
> System call overhead?  I had the same complaint about Windows, where
> apparently winsock makes you pay more of a performance penalty than
> Unix does in the same case.

Shouldn't be. There should still be an rfile.read(1000) in that example app
(with the big transfers). That read() should be quite fast -- the buffering
should have almost no effect.

So... what is the underlying problem?

[ IOW, there are two issues: the sock vs file thing; and why rfile is so
  darn slow; I have no insights on the latter. ]

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From effbot at telia.com  Thu Aug 31 22:08:23 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 22:08:23 +0200
Subject: [Python-Dev] one last SRE headache
References: <023301c01384$39b2bdc0$766940d5@hagrid> <20000831154603.A15688@kronos.cnri.reston.va.us>
Message-ID: <027f01c01387$3ae9fde0$766940d5@hagrid>

amk wrote:
> outside a character class it's a character if there are exactly
> 3 octal digits; otherwise it's a backref.  So \41 is a backref
> to group 41, but \041 is the literal character ASCII 33.

so what's the right way to parse this?

read up to three digits, check if they're a valid octal
number, and treat them as a decimal group number if
not?

</F>




From guido at beopen.com  Thu Aug 31 23:10:19 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 16:10:19 -0500
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: Your message of "Thu, 31 Aug 2000 13:04:18 MST."
             <20000831130417.K11297@lyra.org> 
References: <008701c01373$95ced1e0$766940d5@hagrid> <20000831121826.F11297@lyra.org> <200008312048.PAA23324@cj20424-a.reston1.va.home.com>  
            <20000831130417.K11297@lyra.org> 
Message-ID: <200008312110.QAA23506@cj20424-a.reston1.va.home.com>

> > > Dunno why the unbuffered reading would be slow. I'd think it would still
> > > read large chunks at a time when you request it.
> > 
> > System call overhead?  I had the same complaint about Windows, where
> > apparently winsock makes you pay more of a performance penalty than
> > Unix does in the same case.
> 
> Shouldn't be. There should still be an rfile.read(1000) in that example app
> (with the big transfers). That read() should be quite fast -- the buffering
> should have almost no effect.
> 
> So... what is the underlying problem?
> 
> [ IOW, there are two issues: the sock vs file thing; and why rfile is so
>   darn slow; I have no insights on the latter. ]

Should, shouldn't...

It's a quality of implementation issue in stdio.  If stdio, when
seeing a large read on an unbuffered file, doesn't do something smart
but instead calls getc() for each character, that would explain this.
It's dumb, but not illegal.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug 31 23:12:29 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 16:12:29 -0500
Subject: [Python-Dev] one last SRE headache
In-Reply-To: Your message of "Thu, 31 Aug 2000 22:08:23 +0200."
             <027f01c01387$3ae9fde0$766940d5@hagrid> 
References: <023301c01384$39b2bdc0$766940d5@hagrid> <20000831154603.A15688@kronos.cnri.reston.va.us>  
            <027f01c01387$3ae9fde0$766940d5@hagrid> 
Message-ID: <200008312112.QAA23526@cj20424-a.reston1.va.home.com>

> amk wrote:
> > outside a character class it's a character if there are exactly
> > 3 octal digits; otherwise it's a backref.  So \41 is a backref
> > to group 41, but \041 is the literal character ASCII 33.
> 
> so what's the right way to parse this?
> 
> read up to three digits, check if they're a valid octal
> number, and treat them as a decimal group number if
> not?

Suggestion:

If there are fewer than 3 digits, it's a group.

If there are exactly 3 digits and you have 100 or more groups, it's a
group -- too bad, you lose octal number support.  Use \x. :-)

If there are exactly 3 digits and you have at most 99 groups, it's an
octal escape.

(Can you even have more than 99 groups in SRE?)
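[Archive note: Guido's three rules are easy to prototype; a sketch with
hypothetical names, assuming digits is the full run of decimal digits
that followed the backslash and ngroups is the pattern's group count.]

```python
def classify_escape(digits, ngroups):
    """Classify a '\\NNN' escape per the rules suggested above
    (hypothetical helper, not part of any real module)."""
    if len(digits) < 3:
        return ("group", int(digits))     # fewer than 3 digits: a group
    if ngroups >= 100:
        return ("group", int(digits))     # 3 digits, 100+ groups: a group
    return ("char", chr(int(digits, 8)))  # 3 digits, <= 99 groups: octal

assert classify_escape("41", 10) == ("group", 41)
assert classify_escape("041", 10) == ("char", "!")
```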

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From m.favas at per.dem.csiro.au  Thu Aug 31 22:17:14 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 04:17:14 +0800
Subject: [Fwd: [Python-Dev] test_gettext.py fails on 64-bit architectures]
Message-ID: <39AEBD4A.55ABED9E@per.dem.csiro.au>


-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913
-------------- next part --------------
An embedded message was scrubbed...
From: Mark Favas <m.favas at per.dem.csiro.au>
Subject: Re: [Python-Dev] test_gettext.py fails on 64-bit architectures
Date: Fri, 01 Sep 2000 04:16:01 +0800
Size: 2964
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000901/b5f46724/attachment.eml>

From effbot at telia.com  Thu Aug 31 22:33:11 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 22:33:11 +0200
Subject: [Python-Dev] one last SRE headache
References: <023301c01384$39b2bdc0$766940d5@hagrid> <20000831154603.A15688@kronos.cnri.reston.va.us>              <027f01c01387$3ae9fde0$766940d5@hagrid>  <200008312112.QAA23526@cj20424-a.reston1.va.home.com>
Message-ID: <028d01c0138a$b2de46a0$766940d5@hagrid>

guido wrote:
> Suggestion:
> 
> If there are fewer than 3 digits, it's a group.
> 
> If there are exactly 3 digits and you have 100 or more groups, it's a
> group -- too bad, you lose octal number support.  Use \x. :-)
> 
> If there are exactly 3 digits and you have at most 99 groups, it's an
> octal escape.

I had to add one rule:

    If it starts with a zero, it's always an octal number.
    Up to two more octal digits are accepted after the
    leading zero.

but this still fails on this pattern:

    r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'

where the last part is supposed to be a reference to
group 11, followed by a literal '9'.

more ideas?
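[Archive note: the rule the modern re module settled on handles this case
the intended way: the parser only keeps a third digit if it would complete
a three-digit octal escape, and '9' is not an octal digit, so \119 parses
as group 11 plus a literal '9'. A quick check, assuming current CPython.]

```python
import re

pat = r"(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119"
# \11 is a backreference to group 11 ('k'); the trailing '9' is literal
assert re.match(pat, "abcdefghijklk9")
assert not re.match(pat, "abcdefghijklx9")
```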

> (Can you even have more than 99 groups in SRE?)

yes -- the current limit is 100 groups.  but that's an
artificial limit, and it should be removed.

</F>




From m.favas at per.dem.csiro.au  Thu Aug 31 22:32:52 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 04:32:52 +0800
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <39AEC0F4.746656E2@per.dem.csiro.au>

On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:
>...
> At least for my Linux installation a limit of 9000 seems
> reasonable. Perhaps everybody on the list could do a quick
> check on their platform ?
> 
> Here's a sample script:
> 
> i = 0
> def foo(x):
>     global i
>     print i
>     i = i + 1
>     foo(x)
> 
> foo(None)

On my DEC/Compaq/OSF1/Tru64 Unix box with the default stacksize of 2048k
I get 6225 iterations before seg faulting...
-- 
Mark



From ping at lfw.org  Thu Aug 31 23:04:26 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 31 Aug 2000 16:04:26 -0500 (CDT)
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <028d01c0138a$b2de46a0$766940d5@hagrid>
Message-ID: <Pine.LNX.4.10.10008311559180.10613-100000@server1.lfw.org>

On Thu, 31 Aug 2000, Fredrik Lundh wrote:
> I had to add one rule:
> 
>     If it starts with a zero, it's always an octal number.
>     Up to two more octal digits are accepted after the
>     leading zero.

Fewer rules are better.  Let's not arbitrarily rule out
the possibility of more than 100 groups.

The octal escapes are a different kind of animal than the
backreferences: for a backreference, there is *actually*
a backslash followed by a number in the regular expression;
but we already have a reasonable way to put funny characters
into regular expressions.

That is, i propose *removing* the translation of octal
escapes from the regular expression engine.  That's the
job of the string literal:

    r'\011'    is a backreference to group 11

    '\\011'    is a backreference to group 11

    '\011'     is a tab character

This makes automatic construction of regular expressions
a tractable problem.  We don't want to introduce so many
exceptional cases that an attempt to automatically build
regular expressions will turn into a nightmare of special
cases.
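[Archive note: the division of labour Ping proposes is exactly what Python
string literals already perform; a two-line check.]

```python
# raw string: the backslash and digits reach the regex engine untouched
assert r'\011' == '\\011' and len(r'\011') == 4

# plain string: the *literal* does the octal translation -- it's a tab
assert '\011' == '\t'
```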
    

-- ?!ng




From jeremy at beopen.com  Thu Aug 31 22:47:39 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 16:47:39 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AEC0F4.746656E2@per.dem.csiro.au>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
Message-ID: <14766.50283.758598.632542@bitdiddle.concentric.net>

I've just checked in Misc/find_recursionlimit.py that uses recursion
through various __ methods (e.g. __repr__) to generate infinite
recursion.  These tend to use more C stack frames than a simple
recursive function.

I've set the Python recursion_limit down to 2500, which is safe for
all tests in find_recursionlimit on my Linux box.  The limit can be
bumped back up, so I'm happy to have it set low by default.
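[Archive note: the interpreter-level side of this is easy to probe without
risking a C-stack crash; a sketch that finds where the configured Python
recursion limit bites. RecursionError is the modern exception name; older
interpreters raised RuntimeError instead.]

```python
import sys

def probe(depth=0):
    """Recurse until the interpreter's limit stops us; return the depth."""
    try:
        return probe(depth + 1)
    except RecursionError:
        return depth

sys.setrecursionlimit(2500)
measured = probe()
print(measured)   # a bit under 2500: frames already on the stack count too
```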

Does anyone have a platform where this limit is not low enough?

Jeremy



From ping at lfw.org  Thu Aug 31 23:07:32 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 31 Aug 2000 16:07:32 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008310237.OAA17328@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008311604500.10613-100000@server1.lfw.org>

On Thu, 31 Aug 2000, Greg Ewing wrote:
> Peter Schneider-Kamp <nowonder at nowonder.de>:
> 
> > As far as I know adding a builtin indices() has been
> > rejected as an idea.
> 
> But why? I know it's been suggested, but I don't remember seeing any
> convincing arguments against it. Or much discussion at all.

I submitted a patch to add indices() and irange() previously.  See:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101129&group_id=5470

Guido rejected it:

    gvanrossum: 2000-Aug-17 12:16
        I haven't seen the debate! But I'm asked to pronounce
        anyway, and I just don't like this. Learn to write code
        that doesn't need the list index!

    tim_one: 2000-Aug-15 15:08
        Assigned to Guido for Pronouncement.  The debate's been
        debated, close it out one way or the other.

    ping: 2000-Aug-09 03:00
        There ya go.  I have followed the style of the builtin_range()
        function, and docstrings are included.


-- ?!ng




From bwarsaw at beopen.com  Thu Aug 31 22:55:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 16:55:32 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules Makefile.pre.in,1.63,1.64
References: <200008311656.JAA20666@slayer.i.sourceforge.net>
Message-ID: <14766.50756.893007.253356@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake <fdrake at users.sourceforge.net> writes:

    Fred> If Setup is older than Setup.in, issue a bold warning that
    Fred> the Setup may need to be checked to make sure all the latest
    Fred> information is present.

    Fred> This closes SourceForge patch #101275.

Not quite.  When I run make in the top-level directory, I see this
message:

-------------------------------------------
./Setup.in is newer than Setup;
check to make sure you have all the updates
you need in your Setup file.
-------------------------------------------

I have to hunt around in my compile output to notice that, oh, make
cd'd into Modules so it must be talking about /that/ Setup file.
"Then why did it say ./Setup.in?" :)

The warning should say Modules/Setup.in is newer than Modules/Setup.

-Barry



From cgw at fnal.gov  Thu Aug 31 22:59:12 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 15:59:12 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.44502.812468.677142@bitdiddle.concentric.net>
References: <14766.42287.968420.289804@bitdiddle.concentric.net>
	<200008311957.OAA22338@cj20424-a.reston1.va.home.com>
	<14766.44502.812468.677142@bitdiddle.concentric.net>
Message-ID: <14766.50976.102853.695767@buffalo.fnal.gov>

Jeremy Hylton writes:
 >   >> I can play the .au file and I use a variety of other audio tools
 >   >> regularly.  Is Peter still maintaining it or can someone else
 >   >> offer some assistance?

The Linux audio programming docs do clearly state:

>    There are three parameters which affect quality (and memory/bandwidth requirements) of sampled audio
>    data. These parameters are the following:		    
>
>           Sample format (sometimes called as number of bits) 
>           Number of channels (mono/stereo) 
>           Sampling rate (speed) 
>
>           NOTE!  
>              It is important to set these parameters always in the above order. Setting speed before
>              number of channels doesn't work with all devices.  

linuxaudiodev.c does this:
    ioctl(self->x_fd, SOUND_PCM_WRITE_RATE, &rate)
    ioctl(self->x_fd, SNDCTL_DSP_SAMPLESIZE, &ssize)
    ioctl(self->x_fd, SNDCTL_DSP_STEREO, &stereo)
    ioctl(self->x_fd, SNDCTL_DSP_SETFMT, &audio_types[n].a_fmt)

which is exactly the reverse order of what is recommended!

Alas, even after fixing this, I *still* can't get linuxaudiodev to
play the damned .au file.  It works fine for the .wav formats.

I'll continue hacking on this as time permits.



From m.favas at per.dem.csiro.au  Thu Aug 31 23:04:48 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 05:04:48 +0800
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <39AEC0F4.746656E2@per.dem.csiro.au> <14766.50283.758598.632542@bitdiddle.concentric.net>
Message-ID: <39AEC870.3E1CDAFD@per.dem.csiro.au>

Compaq/DEC/OSF1/Tru64 Unix, default stacksize 2048k:
I get "Limit of 2100 is fine" before stack overflow and segfault.
(On Guido's test script, I got 3532 before crashing, and 6225 on MAL's
test).

Mark

Jeremy Hylton wrote:
> 
> I've just checked in Misc/find_recursionlimit.py that uses recursion
> through various __ methods (e.g. __repr__) to generate infinite
> recursion.  These tend to use more C stack frames than a simple
> recursive function.
> 
> I've set the Python recursion_limit down to 2500, which is safe for
> all tests in find_recursionlimit on my Linux box.  The limit can be
> bumped back up, so I'm happy to have it set low by default.
> 
> Does anyone have a platform where this limit is not low enough?



From bwarsaw at beopen.com  Thu Aug 31 23:14:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 17:14:59 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc find_recursionlimit.py,NONE,1.1
References: <200008311924.MAA03080@slayer.i.sourceforge.net>
Message-ID: <14766.51923.685753.319113@anthem.concentric.net>

I wonder if find_recursionlimit.py shouldn't go in Tools and perhaps
be run as a separate rule in the Makefile (with a corresponding
cleanup of the inevitable core file, and a printing of the last
reasonable value returned).  Or you can write a simple Python wrapper
around find_recursionlimit.py that did the parenthetical tasks.

-Barry



From jeremy at beopen.com  Thu Aug 31 23:22:20 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 17:22:20 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc find_recursionlimit.py,NONE,1.1
In-Reply-To: <14766.51923.685753.319113@anthem.concentric.net>
References: <200008311924.MAA03080@slayer.i.sourceforge.net>
	<14766.51923.685753.319113@anthem.concentric.net>
Message-ID: <14766.52364.742061.188332@bitdiddle.concentric.net>

>>>>> "BAW" == Barry A Warsaw <bwarsaw at beopen.com> writes:

  BAW> I wonder if find_recursionlimit.py shouldn't go in Tools and
  BAW> perhaps be run as a separate rule in the Makefile (with a
  BAW> corresponding cleanup of the inevitable core file, and a
  BAW> printing of the last reasonable value returned).  Or you can
  BAW> write a simple Python wrapper around find_recursionlimit.py
  BAW> that did the parenthetical tasks.

Perhaps.  I did not imagine we would use the results to change the
recursion limit at compile time or run time automatically.  It seemed
a bit hackish, so I put it in Misc.  Maybe Tools would be better, but
that would require an SF admin request (right?).

Jeremy



From skip at mojam.com  Thu Aug 31 23:32:58 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 31 Aug 2000 16:32:58 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.50283.758598.632542@bitdiddle.concentric.net>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
Message-ID: <14766.53002.467504.523298@beluga.mojam.com>

    Jeremy> Does anyone have a platform where this limit is no low enough?

Yes, apparently I do.  My laptop is configured so:

     Pentium III
     128MB RAM
     211MB swap
     Mandrake Linux 7.1

It spits out 2400 as the last successful test, even fresh after a reboot
with no swap space in use and lots of free memory and nothing else running
besides boot-time daemons.

Skip



From bwarsaw at beopen.com  Thu Aug 31 23:43:54 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 17:43:54 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc find_recursionlimit.py,NONE,1.1
References: <200008311924.MAA03080@slayer.i.sourceforge.net>
	<14766.51923.685753.319113@anthem.concentric.net>
	<14766.52364.742061.188332@bitdiddle.concentric.net>
Message-ID: <14766.53658.752985.58503@anthem.concentric.net>

>>>>> "JH" == Jeremy Hylton <jeremy at beopen.com> writes:

    JH> Perhaps.  It did not imagine we would use the results to
    JH> change the recursion limit at compile time or run time
    JH> automatically.  It seemed a bit hackish, so I put it in Misc.
    JH> Maybe Tools would be better, but that would require an SF
    JH> admin request (right?).

Yes, to move the ,v file, but there hasn't been enough revision
history to worry about it.  Just check it in someplace in Tools and
cvsrm it from Misc.



From cgw at fnal.gov  Thu Aug 31 23:45:15 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 16:45:15 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.53002.467504.523298@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
Message-ID: <200008312145.QAA10295@buffalo.fnal.gov>

Skip Montanaro writes:
 >      211MB swap
 >      Mandrake Linux 7.1
 > 
 > It spits out 2400 as the last successful test, even fresh after a reboot
 > with no swap space in use and lots of free memory and nothing else running
 > besides boot-time daemons.

I get the exact same value.  Of course the amount of other stuff
running makes no difference; you get the core dump because you've hit
the RLIMIT for stack usage, not because you've exhausted memory.
The amount of RAM in the machine, or the swap space in use, has nothing
to do with it.  Do "ulimit -s unlimited" and see what happens...

There can be no universally applicable default value here because
different people will have different rlimits depending on how their
sysadmins chose to set this up.
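A script can inspect the stack rlimit Charles is talking about via the
standard-library resource module (Unix-only; a sketch, not part of the
original discussion):

```python
# Read the current soft and hard stack-size limits, the same values
# that "ulimit -s" reports from the shell.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

def show(limit):
    # RLIM_INFINITY corresponds to "ulimit -s unlimited".
    return "unlimited" if limit == resource.RLIM_INFINITY else str(limit)

print("stack soft limit:", show(soft))
print("stack hard limit:", show(hard))
```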




From cgw at fnal.gov  Thu Aug 31 23:52:29 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 16:52:29 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.54008.173276.72324@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
	<14766.54008.173276.72324@beluga.mojam.com>
Message-ID: <14766.54173.228568.55862@buffalo.fnal.gov>

Skip Montanaro writes:

 > Makes no difference:

All right, I'm confused; I'll shut up now ;-)



From skip at mojam.com  Thu Aug 31 23:52:33 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 31 Aug 2000 16:52:33 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.53381.634928.615048@buffalo.fnal.gov>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
Message-ID: <14766.54177.584090.198596@beluga.mojam.com>


    Charles> I get the exact same value.  Of course the amount of other
    Charles> stuff running makes no difference; you get the core dump
    Charles> because you've hit the RLIMIT for stack usage, not because
    Charles> you've exhausted memory.  Amount of RAM in the machine, or swap
    Charles> space in use has nothing to do with it.  Do "ulimit -s
    Charles> unlimited" and see what happens...

Makes no difference:

    % ./python
    Python 2.0b1 (#81, Aug 31 2000, 15:53:42)  [GCC 2.95.3 19991030 (prerelease)] on linux2
    Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
    Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
    >>>
    % ulimit -a
    core file size (blocks)     0
    data seg size (kbytes)      unlimited
    file size (blocks)          unlimited
    max locked memory (kbytes)  unlimited
    max memory size (kbytes)    unlimited
    open files                  1024
    pipe size (512 bytes)       8
    stack size (kbytes)         unlimited
    cpu time (seconds)          unlimited
    max user processes          2048
    virtual memory (kbytes)     unlimited
    % ./python Misc/find_recursionlimit.py
    ...
    Limit of 2300 is fine
    recurse
    add
    repr
    init
    getattr
    getitem
    Limit of 2400 is fine
    recurse
    add
    repr
    Segmentation fault

Skip



From tim_one at email.msn.com  Thu Aug 31 23:55:56 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 17:55:56 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <023301c01384$39b2bdc0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEEIHDAA.tim_one@email.msn.com>

The PRE documentation expresses the true intent:

    \number
    Matches the contents of the group of the same number. Groups
    are numbered starting from 1. For example, (.+) \1 matches 'the the'
    or '55 55', but not 'the end' (note the space after the group). This
    special sequence can only be used to match one of the first 99 groups.
    If the first digit of number is 0, or number is 3 octal digits long,
    it will not be interpreted as a group match, but as the character with
    octal value number. Inside the "[" and "]" of a character class, all
    numeric escapes are treated as characters.

This was discussed at length when we decided to go the Perl-compatible
route, and Perl's rules for backreferences were agreed to be just too ugly
to emulate.  The meaning of \oo in Perl depends on how many groups precede
it!  In this case, there are fewer than 41 groups, so Perl says "octal
escape"; but if 41 or more groups had preceded, it would mean
"backreference" instead(!).  Simply unbearably ugly and error-prone.
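The documented rules can be checked directly (a sketch using today's re
module, which kept these semantics):

```python
import re

# \1 is a backreference to group 1.
assert re.match(r'(.+) \1', 'the the')
assert re.match(r'(.+) \1', 'the end') is None

# Three octal digits make an octal escape: \041 is chr(0o41) == '!'.
assert re.match(r'\041', '!')

# With 12 groups, \119 parses as a reference to group 11 followed by
# a literal '9' (since '9' is not an octal digit, \119 cannot be an
# octal escape).
pat = r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'
assert re.match(pat, 'abcdefghijklk9')
```

Note that, unlike Perl, the group-vs-octal decision never depends on how
many groups precede the escape.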

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Fredrik Lundh
> Sent: Thursday, August 31, 2000 3:47 PM
> To: python-dev at python.org
> Subject: [Python-Dev] one last SRE headache
>
>
> can anyone tell me how Perl treats this pattern?
>
>     r'((((((((((a))))))))))\41'
>
> in SRE, this is currently a couple of nested groups, surrounding
> a single literal, followed by a back reference to the fourth group,
> followed by a literal "1" (since there are fewer than 41 groups)
>
> in PRE, it turns out that this is a syntax error; there's no group 41.
>
> however, this test appears in the test suite under the section "all
> test from perl", but they're commented out:
>
> # Python does not have the same rules for \\41 so this is a syntax error
> #    ('((((((((((a))))))))))\\41', 'aa', FAIL),
> #    ('((((((((((a))))))))))\\41', 'a!', SUCCEED, 'found', 'a!'),
>
> if I understand this correctly, Perl treats as an *octal* escape
> (chr(041) == "!").
>
> now, should I emulate PRE, Perl, or leave it as it is...
>
> </F>
>
> PS. in case anyone wondered why I haven't seen this before, it's
> because I just discovered that the test suite masks syntax errors
> under some circumstances...
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev





From m.favas at per.dem.csiro.au  Thu Aug 31 23:56:25 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 05:56:25 +0800
Subject: [Python-Dev] Syntax error in Makefile for "make install"
Message-ID: <39AED489.F953E9EE@per.dem.csiro.au>

Makefile in the libainstall target of "make install" uses the following
construct:
                @if [ "$(MACHDEP)" == "beos" ] ; then \
This "==" is illegal in all the /bin/sh's I have lying around, and leads
to make failing with:
/bin/sh: test: unknown operator ==
make: *** [libainstall] Error 1
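The portable spelling uses a single "=", which every /bin/sh accepts
(a sketch of the corrected fragment, with the variable set inline for
illustration):

```shell
# POSIX test(1) only defines "=" for string comparison; "==" is a
# bash extension that other shells reject.
MACHDEP=beos
if [ "$MACHDEP" = "beos" ]; then
    echo "beos branch taken"
fi
```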

-- 
Mark



From tim_one at email.msn.com  Thu Aug 31 23:01:10 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 17:01:10 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <200008312112.QAA23526@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEJHDAA.tim_one@email.msn.com>

> Suggestion:
>
> If there are fewer than 3 digits, it's a group.

Unless it begins with a 0 (that's what's documented today -- read the docs
<wink>).

> If there are exactly 3 digits and you have 100 or more groups, it's a
> group -- too bad, you lose octal number support.  Use \x. :-)

The docs say you can't use backreferences for groups higher than 99.

> If there are exactly 3 digits and you have at most 99 groups, it's an
> octal escape.

If we make the meaning depend on the number of preceding groups, we may as
well emulate *all* of Perl's ugliness here.





From thomas at xs4all.net  Thu Aug 31 23:38:59 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 23:38:59 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311558.KAA15649@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 31, 2000 at 10:58:49AM -0500
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
Message-ID: <20000831233859.K12695@xs4all.nl>

On Thu, Aug 31, 2000 at 10:58:49AM -0500, Guido van Rossum wrote:

>     C() # This tries to get __init__, triggering the recursion

> I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
> have no idea what units).

That's odd... On BSDI, with a 2Mbyte stacklimit (ulimit -s says 2048) I get
almost as many recursions: 5136. That's very much not what I would expect...
With a stack limit of 8192, I can go as high as 19997 recursions! I wonder
why that is...

Wait a minute... The Linux SEGV isn't stacksize related at all! Observe:

centurion:~ > limit stacksize 8192
centurion:~ > python teststack.py | tail -3
5134
5135
5136
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 65536
centurion:~ > python teststack.py | tail -3
5134
5135
5136
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 2048
centurion:~ > python teststack.py | tail -3
5134
5135
5136
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 128
centurion:~ > python teststack.py | tail -3
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 1024
centurion:~ > python teststack.py | tail -3
2677
2678
26Segmentation fault (core dumped) 

centurion:~ > limit stacksize 1500
centurion:~ > python teststack.py | tail -3
3496
3497
349Segmentation fault (core dumped) 

I don't have time to pursue this, however. I'm trying to get my paid work
finished tomorrow, so that I can finish my *real* work over the weekend:
augassign docs & some autoconf changes :-) 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Thu Aug 31 23:07:37 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 17:07:37 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <028d01c0138a$b2de46a0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEELHDAA.tim_one@email.msn.com>

[/F]
> I had to add one rule:
>
>     If it starts with a zero, it's always an octal number.
>     Up to two more octal digits are accepted after the
>     leading zero.
>
> but this still fails on this pattern:
>
>     r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'
>
> where the last part is supposed to be a reference to
> group 11, followed by a literal '9'.

But 9 isn't an octal digit, so it fits w/ your new rule just fine.  \117
here instead would be an octal escape.





From skip at mojam.com  Tue Aug  1 00:07:02 2000
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 31 Jul 2000 17:07:02 -0500 (CDT)
Subject: [Python-Dev] SET_LINENO and python options
In-Reply-To: <20000730080718.A22903@newcnri.cnri.reston.va.us>
References: <LNBBLJKPBEHFEDALKOLCAEOEGMAA.tim_one@email.msn.com>
	<200007300239.EAA21825@python.inrialpes.fr>
	<20000730080718.A22903@newcnri.cnri.reston.va.us>
Message-ID: <14725.63622.190585.197392@beluga.mojam.com>

    amk> It always seemed odd to me that the current line number is always
    amk> kept up to date, even though 99.999% of the time, no one will care.
    amk> Why not just keep a small table that holds the offset in the
    amk> bytecode at which each line starts, and look it up when it's
    amk> needed?

(I'm probably going to wind up seeming like a fool, responding late to this
thread without having read it end-to-end, but...)

Isn't that what the code object's co_lnotab is for?  I thought the idea was
to dispense with SET_LINENO altogether and just compute line numbers using
co_lnotab on those rare occasions (debugging, tracebacks, etc) when you
needed them.
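That lazy scheme is exactly how things can work: line numbers recovered
after the fact from the code object's line table (co_lnotab in CPython of
this era; the dis module hides the exact encoding). A sketch:

```python
# Map bytecode offsets back to source line numbers on demand,
# instead of tracking the current line at run time.
import dis

def f(x):
    y = x + 1
    return y * 2

starts = list(dis.findlinestarts(f.__code__))
for offset, lineno in starts:
    print(offset, lineno)
```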

Skip



From greg at cosc.canterbury.ac.nz  Tue Aug  1 01:45:02 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 01 Aug 2000 11:45:02 +1200 (NZST)
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended
 slicing for lists)
In-Reply-To: <Pine.LNX.4.10.10007290934240.5008-100000@localhost>
Message-ID: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz>

I think there are some big conceptual problems with allowing
negative steps in a slice.

With ordinary slices, everything is very clear if you think
of the indices as labelling the points between the list
elements.

With a step, this doesn't work any more, and you have to
think in terms of including the lower index but excluding the
upper index.

But what do "upper" and "lower" mean when the step is negative?
There are several things that a[i:j:-1] could plausibly mean:

   [a[i], a[i-1], ..., a[j+1]]

   [a[i-1], a[i-2], ..., a[j]]

   [a[j], a[j-1], ..., a[i+1]]

   [a[j-1], a[j-2], ..., a[i]]

And when you consider negative starting and stopping values,
it just gets worse. These have no special meaning to range(),
but in list indexing they do. So what do they mean in a slice
with a step? Whatever is chosen, it can't be consistent with
both.

In the face of such confusion, the only Pythonic thing would
seem to be to disallow these things.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Tue Aug  1 02:01:45 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 01 Aug 2000 12:01:45 +1200 (NZST)
Subject: [Python-Dev] PEP 203 Augmented Assignment
In-Reply-To: <200007281147.GAA04007@cj20424-a.reston1.va.home.com>
Message-ID: <200008010001.MAA10295@s454.cosc.canterbury.ac.nz>

> The way I understand this, mixing indices and slices is used all
> the time to reduce the dimensionality of an array.

I wasn't really suggesting that they should be disallowed.
I was trying to point out that their existence makes it
hard to draw a clear distinction between indexing and slicing.

If it were the case that

   a[i,j,...,k]

was always equivalent to

   a[i][j]...[k]

then there would be no problem -- you could consider each
subscript individually as either an index or a slice. But
that's not the way it is.
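The distinction Greg draws is observable: a[i, j, ..., k] is a single
__getitem__ call receiving one tuple, while a[i][j]...[k] is a chain of
separate calls. A small recording class (a sketch, not from the thread)
makes this visible:

```python
# Record every key passed to __getitem__ to show how many calls occur.
class Probe:
    def __init__(self):
        self.calls = []
    def __getitem__(self, key):
        self.calls.append(key)
        return self

p = Probe()
p[1, 2, 3]
assert p.calls == [(1, 2, 3)]   # one call, one tuple

q = Probe()
q[1][2][3]
assert q.calls == [1, 2, 3]     # three separate calls
```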

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Tue Aug  1 02:07:08 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 01 Aug 2000 12:07:08 +1200 (NZST)
Subject: [Python-Dev] Should repr() of string should observe locale?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEPDGMAA.tim_one@email.msn.com>
Message-ID: <200008010007.MAA10298@s454.cosc.canterbury.ac.nz>

Tim Peters:

> The problem isn't that repr sticks in backslash escapes, the problem is that
> repr gets called when repr is inappropriate.

Seems like we need another function that does something in
between str() and repr(). It would be just like repr() except
that it wouldn't put escape sequences in strings unless
absolutely necessary, and it would apply this recursively
to sub-objects.

Not sure what to call it -- goofy() perhaps :-)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From bwarsaw at beopen.com  Tue Aug  1 02:25:43 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 31 Jul 2000 20:25:43 -0400 (EDT)
Subject: [Python-Dev] Should repr() of string should observe locale?
References: <LNBBLJKPBEHFEDALKOLCKEPDGMAA.tim_one@email.msn.com>
	<200008010007.MAA10298@s454.cosc.canterbury.ac.nz>
Message-ID: <14726.6407.729299.113509@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    GE> Seems like we need another function that does something in
    GE> between str() and repr().

I'd bet most people don't even understand why there have to be two
functions that do almost the same thing.

-Barry



From guido at beopen.com  Tue Aug  1 05:32:18 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 31 Jul 2000 22:32:18 -0500
Subject: [Python-Dev] test_re fails with re==pre
In-Reply-To: Your message of "Mon, 31 Jul 2000 23:59:34 +0200."
             <20000731215940.28A11E266F@oratrix.oratrix.nl> 
References: <20000731215940.28A11E266F@oratrix.oratrix.nl> 
Message-ID: <200008010332.WAA25069@cj20424-a.reston1.va.home.com>

> Test_re now works fine if re is sre, but it still fails if re is pre.
> 
> Is this an artifact of the test harness or is there still some sort of
> incompatibility lurking in there?

It's because the tests are actually broken for sre: it prints a bunch
of "=== Failed incorrectly ..." messages.  We added these as "expected
output" to the test/output/test_re file.  The framework just notices
there's a difference and blames pre.

Effbot has promised a new SRE "real soon now" ...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Aug  1 06:01:34 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 31 Jul 2000 23:01:34 -0500
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended slicing for lists)
In-Reply-To: Your message of "Tue, 01 Aug 2000 11:45:02 +1200."
             <200007312345.LAA10291@s454.cosc.canterbury.ac.nz> 
References: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz> 
Message-ID: <200008010401.XAA25180@cj20424-a.reston1.va.home.com>

> I think there are some big conceptual problems with allowing
> negative steps in a slice.
> 
> With ordinary slices, everything is very clear if you think
> of the indices as labelling the points between the list
> elements.
> 
> With a step, this doesn't work any more, and you have to
> think in terms of including the lower index but excluding the
> upper index.
> 
> But what do "upper" and "lower" mean when the step is negative?
> There are several things that a[i:j:-1] could plausibly mean:
> 
>    [a[i], a[i-1], ..., a[j+1]]
> 
>    [a[i-1], a[i-2], ..., a[j]]
> 
>    [a[j], a[j-1], ..., a[i+1]]
> 
>    [a[j-1], a[j-2], ..., a[i]]
> 
> And when you consider negative starting and stopping values,
> it just gets worse. These have no special meaning to range(),
> but in list indexing they do. So what do they mean in a slice
> with a step? Whatever is chosen, it can't be consistent with
> both.
> 
> In the face of such confusion, the only Pythonic thing would
> seem to be to disallow these things.

You have a point!  I just realized today that my example L[9:-1:-1]
does *not* access L[0:10] backwards, because of the way the first -1
is interpreted as one before the end of the list L. :(
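Under the semantics eventually adopted for built-in sequences (extended
slicing for lists landed in Python 2.3), the pitfall is real; a sketch:

```python
L = list(range(10))

# The stop value -1 is normalized to len(L) - 1 == 9, the same as the
# start, so the slice is empty rather than "L[0:10] backwards".
assert L[9:-1:-1] == []

# Omitting the stop gives the intended reversal.
assert L[9::-1] == [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
assert L[::-1] == list(range(9, -1, -1))
```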

But I'm not sure we can forbid this behavior (in general) because the
NumPy folks are already using this.  Since these semantics are up to
the object, and no built-in objects support extended slices (yet), I'm
not sure that this behavior has been documented anywhere except in
NumPy.

However, for built-in lists I think it's okay to forbid a negative
step until we've resolved this...

This is something to consider for patch 100998 which currently
implements (experimental) extended slices for lists...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From ping at lfw.org  Tue Aug  1 02:02:40 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Mon, 31 Jul 2000 17:02:40 -0700 (PDT)
Subject: [Python-Dev] Reordering opcodes (PEP 203 Augmented Assignment)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIEIGDCAA.MarkH@ActiveState.com>
Message-ID: <Pine.LNX.4.10.10007311701050.5008-100000@localhost>

On Mon, 31 Jul 2000, Mark Hammond wrote:
> IDLE and Pythonwin are able to debug arbitrary programs once they have
> started - and they are both written in Python.

But only if you start them *in* IDLE or Pythonwin, right?

> * You do not want to debug the IDE itself, just a tiny bit of code running
> under the IDE.  Making the IDE take the full hit simply because it wants to
> run a debugger occasionally isn't fair.

Well, running with trace hooks in place is no different from
the way things run now.

> The end result is that all IDEs will run with debugging enabled.

Right -- that's what currently happens.  I don't see anything wrong
with that.

> * Python often is embedded, for example, in a Web Server, or used for CGI.
> It should be possible to debug these programs directly.

But we don't even have a way to do this now.  Attaching to an
external running process is highly system-dependent trickery.

If printing out tracebacks and other information isn't enough
and you absolutely have to step the program under a debugger,
the customary way of doing this now is to run a non-forking
server under the debugger.  In that case, you just start a
non-forking server under IDLE which sets -g, and you're fine.


Anyway, i suppose this is all rather moot now that Vladimir has a
clever scheme for tracing even without SET_LINENO.  Go Vladimir!
Your last proposal sounded great.


-- ?!ng




From effbot at telia.com  Tue Aug  1 08:20:01 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 1 Aug 2000 08:20:01 +0200
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended slicing for lists)
References: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz>
Message-ID: <001a01bffb80$87514860$f2a6b5d4@hagrid>

greg wrote:

> I think there are some big conceptual problems with allowing
> negative steps in a slice.

wasn't "slices" supposed to work the same way as "ranges"?

from PEP-204:

    "Extended slices do show, however, that there is already a
    perfectly valid and applicable syntax to denote ranges in a way
    that solve all of the earlier stated disadvantages of the use of
    the range() function"

> In the face of such confusion, the only Pythonic thing would
> seem to be to disallow these things.

...and kill PEP-204 at the same time.

</F>




From tim_one at email.msn.com  Tue Aug  1 08:16:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 02:16:41 -0400
Subject: [Python-Dev] Should repr() of string should observe locale?
In-Reply-To: <14726.6407.729299.113509@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEEPGNAA.tim_one@email.msn.com>

[Barry A. Warsaw]
> I'd bet most people don't even understand why there has to be two
> functions that do almost the same thing.

Indeed they do not.  The docs are too vague about the intended differences
between str and repr; in 1.5.2 and earlier, string was just about the only
builtin type that actually had distinct str() and repr() implementations, so
it was easy to believe that strings were somehow a special case with unique
behavior; 1.6 extends that (just) to floats, where repr(float) now displays
enough digits so that the output can be faithfully converted back to the
float you started with.  This is starting to bother people in the way that
distinct __str__ and __repr__ functions have long frustrated me in my own
classes:  the default (repr) at the prompt leads to bloated output that's
almost always not what I want to see.  Picture repr() applied to a matrix
object!  If it meets the goal of producing a string sufficient to reproduce
the object when eval'ed, it may spray megabytes of string at the prompt.
Many classes implement __repr__ to do what __str__ was intended to do as a
result, just to get bearable at-the-prompt behavior.  So "learn by example"
too often teaches the wrong lesson too.  I'm not surprised that users are
confused!

Python is *unusual* in trying to cater to more than one form of to-string
conversion across the board.  It's a mondo cool idea that hasn't got the
praise it deserves, but perhaps that's just because the current
implementation doesn't really work outside the combo of the builtin types +
plain-ASCII strings.  Unescaping locale printables in repr() is the wrong
solution to a small corner of the right problem.
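The str/repr split Tim describes, in miniature (a sketch; modern Python
behaves the same way for these cases):

```python
# repr() aims for an eval-able form; str() aims for a friendly one.
s = 'one\ntwo'
assert str(s) == 'one\ntwo'        # the raw two-line string
assert repr(s) == "'one\\ntwo'"    # quotes and escapes added
assert eval(repr(s)) == s          # faithful round trip

# The float change Tim mentions: repr() emits enough digits that the
# output converts back to the exact same float.
x = 0.1
assert float(repr(x)) == x
```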





From effbot at telia.com  Tue Aug  1 08:27:15 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 1 Aug 2000 08:27:15 +0200
Subject: [Python-Dev] Reordering opcodes (PEP 203 Augmented Assignment)
References: <Pine.LNX.4.10.10007311701050.5008-100000@localhost>
Message-ID: <006401bffb81$89a7ed20$f2a6b5d4@hagrid>

ping wrote:

> > * Python often is embedded, for example, in a Web Server, or used for CGI.
> > It should be possible to debug these programs directly.
> 
> But we don't even have a way to do this now.  Attaching to an
> external running process is highly system-dependent trickery.

not under Python: just add an import statement to the script, tell
the server to reload it, and off you go...

works on all platforms.

</F>




From paul at prescod.net  Tue Aug  1 08:34:53 2000
From: paul at prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 02:34:53 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBAEEGDCAA.mhammond@skippinet.com.au>
Message-ID: <39866F8D.FCFA85CB@prescod.net>

Mark Hammond wrote:
> 
> >   Interesting; I'd understood from Paul that you'd given approval to
> > this module.
> 
> Actually, it was more more along the lines of me promising to spend some
> time "over the next few days", and not getting to it.  However, I believe
> it was less than a week before it was just checked in.

It was checked in the day before the alpha was supposed to go out. I
thought that was what you wanted! On what schedule would you have
preferred us to do it?

> I fear this may be a general symptom of the new flurry of activity; no-one
> with a real job can keep up with this list, meaning valuable feedback on
> many proposals is getting lost.  For example, DigiCool have some obviously
> smart people, but they are clearly too busy to offer feedback on anything
> lately.  That is a real shame, and a good resource we are missing out on.

From my point of view, it was the appearance of _winreg that prompted
the "flurry of activity" that led to winreg. I would never have bothered
with winreg if I were not responding to the upcoming "event" of the
defacto standardization of _winreg. It was clearly designed (and I use
the word loosely) by various people at Microsoft over several years --
with sundry backwards and forwards compatibility hacks embedded.

I'm all for slow and steady, deliberate design. I'm sorry _winreg was
rushed but I could only work with the time I had and the interest level
of the people around. Nobody else wanted to discuss it. Nobody wanted to
review the module. Hardly anyone here even knew what was in the OLD
module.

> I am quite interested to hear from people like Gordon and Bill
> about their thoughts.

I am too. I would *also* be interested in hearing from people who have
not spent the last five years with the Microsoft API because _winreg was
a very thin wrapper over it and so will be obvious to those who already
know it.

I have the feeling that an abstraction over the APIs would never be as
"comfortable" as the Microsoft API you've been using for all of these
years.
-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"





From paul at prescod.net  Tue Aug  1 09:16:30 2000
From: paul at prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 03:16:30 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>
Message-ID: <3986794E.ADBB938C@prescod.net>

(reorganizing the important stuff to the top)

Mark Hammond wrote:
> Still-can't-see-the-added-value-ly,

I had no personal interest in an API for the windows registry but I
could not, in good conscience, let the original one become the 
standard Python registry API. 

Here are some examples:

(num_subkeys, num_values, last_modified) = winreg.QueryInfoKey(key)
for i in range(num_values):
    (name, value, type) = winreg.EnumValue(key, i)
    if name == valuename: print "found"

Why am I enumerating but not using the Python enumeration protocol? Why
do I have to get a bogus 3-tuple before I begin enumerating? Where else
are the words "Query" and "Enum" used in Python APIs?

and

winreg.SetValueEx( key, "ProgramFilesDir", None, winreg.REG_SZ,
r"c:\programs" )

Note that the first argument is the key object (so why isn't this a
method?) and the third argument is documented as bogus. In fact, in
the OpenKey documentation you are requested to "always pass 0 please".

All of that was appropriate when winreg was documented "by reference" to
the Microsoft documentation but if it is going to be a real, documented
module in the Python library then the bogus MS junk should go.
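For contrast, the iterator style Paul is arguing for can be sketched as a
thin wrapper over the Query/Enum pair (all names here are hypothetical
stand-ins, with fake implementations just to show the shape of the loop):

```python
# Wrap a QueryInfoKey/EnumValue-style API in a plain Python generator,
# so callers use ordinary iteration instead of manual index bookkeeping.
def iter_values(key, query_info, enum_value):
    num_subkeys, num_values, last_modified = query_info(key)
    for i in range(num_values):
        yield enum_value(key, i)

# Fake stand-ins for the real registry calls:
def fake_query_info(key):
    return (0, 2, 0)

def fake_enum_value(key, i):
    return [("ProgramFilesDir", r"c:\programs", 1),
            ("Other", "x", 1)][i]

found = [name for name, value, typ in
         iter_values(None, fake_query_info, fake_enum_value)
         if name == "ProgramFilesDir"]
```

The search then reads as a single comprehension rather than a counted
loop over a bogus 3-tuple.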

The truth is I would prefer NOT to work on winreg and leave both 
versions out of the library. But since nobody else was willing to 
design and implement a decent API, I took that burden upon myself 
rather than see more weird stuff in the Python API.

So the value add is:

 * uses Python iteration protocol
 * uses Python mapping protocol
 * uses Python method invocation syntax
 * uses only features that will be documented
 * does not expose integers as object handles (even for HKLM etc.)
 * uses inspectable, queryable objects even as docstrings
 * has a much more organized module dictionary (do a dir( _winreg))

If you disagree with those principles then we are in trouble. If you
have quibbles about specifics then let's talk.

> I've just updated the test suite so that test_winreg2.py actually works.
> 
> It appears that the new winreg.py module is still in a state of flux, but
> all work has ceased.  The test for this module has lots of placeholders
> that are not filled in.  Worse, the test code was checked in in an obviously
> broken state (presumably "to be done", but guess who the bunny who had to
> do it was :-(

The tests ran fine on my machine. Fred had to make minor changes before
he checked it in for me because of module name changes. It's possible
that he mistyped a search and replace or something...or that I had a 
system dependency. Since I changed jobs I no longer have access to 
Visual C++ and have not had luck getting GCC to compile _winreg. This
makes further testing difficult until someone cuts a Windows binary 
build of Python (which is perpetually imminent).

The test cases are not all filled in. The old winreg test tested each
method on average one time. The new winreg tries harder to test each in
a variety of situations. Rather than try to keep all cases in my head I
created empty function bodies. Now we have clear documentation of what
is done and tested and what is to be tested still. Once an alpha is cut,
(or I fix my compiler situation) I can finish that process.

> Browsing the source made it clear that the module docstrings are still
> incomplete (eg "For information on the key API, open a key and look at its
> docstring.").  

The docstrings are not complete, but they are getting better and the old
winreg documentation was certainly not complete either! I admit I got
into a little bit of recursive projects wherein I didn't want to write
the winreg, minidom, SAX, etc. documentation twice so I started working
on stuff that would extract the docstrings and generate LaTeX. That's
almost done and I'll finish up the documentation. That's what the beta
period is for, right?

> Eg, the specific example I had a problem with was:
> 
> key[value]
> 
> Returns a result that includes the key index!  This would be similar to a
> dictionary index _always_ returning the tuple, and the first element of the
> tuple is _always_ the key you just indexed.

There is a very clearly labelled (and docstring-umented) getValueData
method:

key.getValueData("FOO") 

That gets only the value. Surely that's no worse than the original:

winreg.QueryValue( key, "FOO" )

If this is your only design complaint then I don't see cause for alarm
yet.

Here's why I did it that way:

You can fetch data values by their names or by their indexes. If
you've just indexed by the name then of course you know it. If you've
just fetched by the numeric index then you don't. I thought it was more
consistent to have the same value no matter how you indexed. Also, when
you get a value, you should also get a type, because the types can be
important. In that case it still has to be a tuple, so it's just a
question of a two-tuple or a three-tuple. Again, I thought that the
three-tuple was more consistent. Also, this is the same return value
returned by the existing EnumValue function.
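For what it's worth, the consistency argument can be sketched in a few
lines of plain Python (the class and method names here are hypothetical
illustrations of the design idea, not the actual winreg API):

```python
# Hypothetical sketch of the rationale above, NOT the real winreg
# module: a fetch returns the same (name, data, type) three-tuple
# whether you look a value up by name or by numeric index.

class RegKey:
    def __init__(self, values):
        # values: ordered list of (name, data, type) tuples
        self._values = values

    def get_value(self, index_or_name):
        """Return the full (name, data, type) tuple either way."""
        if isinstance(index_or_name, int):
            return self._values[index_or_name]
        for value in self._values:
            if value[0] == index_or_name:
                return value
        raise KeyError(index_or_name)

    def get_value_data(self, name):
        """Convenience accessor when only the data is wanted."""
        return self.get_value(name)[1]

key = RegKey([("FOO", "bar", 1), ("SPAM", 42, 4)])
assert key.get_value(0) == key.get_value("FOO")   # same tuple both ways
assert key.get_value_data("FOO") == "bar"         # data-only shortcut
```

The point is that get_value() returns an identical tuple whichever way
the caller indexed, while get_value_data() covers the common "just give
me the data" case.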

> Has anyone else actually looked at or played with this, and still believe
> it is an improvement over _winreg?  I personally find it unintuitive, and
> will personally continue to use _winreg.  If we can't find anyone to
> complete it, document it, and stand up and say they really like it, I
> suggest we pull it.

I agree that it needs more review. I could not get anyone interested in
a discussion of how the API should look, other than pointing at old
threads.

You are, of course, welcome to use whatever you want but I think it
would be productive to give the new API a workout in real code and then
report specific design complaints. If others concur, we can change it.

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"





From mwh21 at cam.ac.uk  Tue Aug  1 08:59:11 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 01 Aug 2000 07:59:11 +0100
Subject: [Python-Dev] Negative slice steps considered unhealthy (extended slicing for lists)
In-Reply-To: "Fredrik Lundh"'s message of "Tue, 1 Aug 2000 08:20:01 +0200"
References: <200007312345.LAA10291@s454.cosc.canterbury.ac.nz> <001a01bffb80$87514860$f2a6b5d4@hagrid>
Message-ID: <m34s55a2m8.fsf@atrus.jesus.cam.ac.uk>

"Fredrik Lundh" <effbot at telia.com> writes:

> greg wrote:
> 
> > I think there are some big conceptual problems with allowing
> > negative steps in a slice.
> 
> wasn't "slices" supposed to work the same way as "ranges"?

The problem is that for slices (& indexes in general) negative
indices have a special interpretation:

range(10,-1,-1)
range(10)[:-1]

Personally I don't think it's that bad (you just have to remember to
write :: instead of :-1: when you want to step all the way back to the
beginning).  More serious is what you do with out of range indices -
and NumPy is a bit odd with this one, it seems:

>>> l = Numeric.arrayrange(10)
>>> l[30::-2]
array([0, 8, 6, 4, 2, 0])

What's that initial "0" doing there?  Can someone who actually
understands NumPy explain this?
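Both points can be demonstrated on plain Python lists (list objects
only grew extended slicing later, so this shows the behavior of
current CPython rather than the NumPy result above):

```python
# The "::" vs ":-1:" trap, and clipping of out-of-range indices,
# shown on a plain list.

l = list(range(10))

# A stop of -1 means "the last element", so combining it with a
# negative step yields nothing -- hence "write :: instead of :-1:":
assert l[5::-1] == [5, 4, 3, 2, 1, 0]   # steps all the way to index 0
assert l[5:-1:-1] == []                 # stop -1 == index 9: empty slice

# Out-of-range start indices are clipped to the ends, not wrapped,
# which makes the repeated element in the NumPy output look like a bug:
assert l[30::-2] == [9, 7, 5, 3, 1]
```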

Cheers,
M.

(PS: PySlice_GetIndices is in practice a bit useless because when it
fails it offers no explanation of why!  Does anyone use this
function, or should I submit a patch to make it a bit more helpful (&
support longs)?)

-- 
    -Dr. Olin Shivers,
     Ph.D., Cranberry-Melon School of Cucumber Science
                                           -- seen in comp.lang.scheme




From tim_one at email.msn.com  Tue Aug  1 09:57:06 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 03:57:06 -0400
Subject: [Python-Dev] Should repr() of string should observe locale?
In-Reply-To: <200008010007.MAA10298@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEFEGNAA.tim_one@email.msn.com>

[Greg Ewing]
> Seems like we need another function that does something in
> between str() and repr(). It would be just like repr() except
> that it wouldn't put escape sequences in strings unless
> absolutely necessary, and it would apply this recursively
> to sub-objects.
>
> Not sure what to call it -- goofy() perhaps :-)

In the previous incarnation of this debate, a related (more str-like than
repr-like) intermediate was named ssctsoos().  Meaning, of course <wink>,
"str() special casing the snot out of strings".  It was purely a hack, and I
was too busy working at Dragon at the time to give it the thought it needed.
Now I'm too busy working at PythonLabs <0.5 wink>.

not-a-priority-ly y'rs  - tim





From MarkH at ActiveState.com  Tue Aug  1 09:59:22 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 1 Aug 2000 17:59:22 +1000
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <39866F8D.FCFA85CB@prescod.net>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>

I am going to try very hard to avoid antagonistic statements - it doesn't
help anyone or anything when we are so obviously at each other's throats.

Let me start by being conciliatory:  I do appreciate the fact that you made
the effort on the winreg module, and accept it was done for all the right
reasons.  The fact that I don't happen to like it doesn't imply any personal
criticism - I believe we simply have a philosophical disagreement.  But
then again, they are the worst kinds of disagreements to have!

> > Actually, it was more along the lines of me promising to
> spend some
> > time "over the next few days", and not getting to it.
> However, I believe
> > it was less than a week before it was just checked in.
>
> It was checked in the day before the alpha was supposed to go out. I
> thought that was what you wanted! On what schedule would you have
> preferred us to do it?

I'm not sure, but one that allowed everyone with relevant input to give it.
Guido also stated he was not happy with the process.  I would have
preferred to have it miss the alpha than to go out with a design we are not
happy with.

> >From my point of view, it was the appearance of _winreg that prompted
> the "flurry of activity" that led to winreg. I would never have bothered
> with winreg if I were not responding to the upcoming "event" of the
> defacto standardization of _winreg. It was clearly designed (and I use
> the word loosely) by various people at Microsoft over several years --
> with sundry backwards and forwards compatibility hacks embedded.

Agreed.  However, the main problem was that people were assuming win32api
was around to get at the registry.  The win32api module's registry
functions have been around for _ages_.  None of its users have ever
proposed a more Pythonic API.  Thus I find it a little strange that someone
without much experience in the API should find it such an abomination,
while experienced users of the API were clearly happy (ok - maybe "happy"
isn't the right word - but not unhappy enough to complain :-)

If nothing else, it allows the proliferation of documentation on the Win32
API to apply to Python.  This is clearly not true with the new module.

This is also a good case for using the .NET API.  However, it still would
not provide Python indexing, iteration etc.  As I state below, though, I'm
not convinced this is a problem.

> I'm all for slow and steady, deliberate design. I'm sorry _winreg was
> rushed but I could only work with the time I had and the interest level
> of the people around. Nobody else wanted to discuss it. Nobody wanted to
> review the module. Hardly anyone here even knew what was in the OLD
> module.

I don't believe that is fair.  As I said, plenty of people have used win32api,
and were sick of insisting their users install my extensions.  distutils
was the straw that broke the serpent's back, IIRC.

It is simply the sheer volume of people who _did_ use the win32api registry
functions that forced the new winreg module.

The fact that no one else wanted to discuss it, or review it, or generally
seemed to care should have been indication that the new winreg was not
really required, rather than taken as proof that a half-baked module that
has not had any review should be checked in.

> I am too. I would *also* be interested in hearing from people who have
> not spent the last five years with the Microsoft API because _winreg was
> a very thin wrapper over it and so will be obvious to those who already
> know it.

Agreed - but it isn't going to happen.  There are not enough people on this
list who are not experienced with Windows but who also intend to get that
experience during the beta cycle.  I hope you would agree that adding an
experimental module to Python simply as a social experiment is not the
right thing to do.  Once winreg is released, it will be too late to remove,
even if the consensus is that it should never have been released in the
first place.

> I have the feeling that an abstraction over the APIs would never be as
> "comfortable" as the Microsoft API you've been using for all of these
> years.

Again agreed - although we should replace the "you've" with "you and every
other Windows programmer" - which tends to make the case for _winreg
stronger, IMO.

Moving to the second mail:

> All of that was appropriate when winreg was documented "by reference" to
> the Microsoft documentation but if it is going to be a real, documented
> module in the Python library then the bogus MS junk should go.

I agree in principle, but IMO it is obvious this will not happen.  It hasn't
happened yet, and you yourself have moved into more interesting PEPs.  How
do you propose this better documentation will happen?

> The truth is I would prefer NOT to work on winreg and leave both
> versions out of the library.

Me too - it has just cost me work so far, and offers me _zero_ benefit (if
anyone in the world can assume that the win32api module is around, it
surely must be me ;-).  However, this is a debate you need to take up with
the distutils people, and everyone else who has asked for registry access
in the core.  Guido also appears to have heard these calls, hence we had
his complete support for some sort of registry module for the core.

> So the value add is:
...
> If you disagree with those principles then we are in trouble. If you
> have quibbles about specifics then let's talk.

I'm afraid we are in a little trouble ;-)  These appear dubious to me.  If
I weigh in the number of calls over the years for a more Pythonic API over
the win32api functions, I become more convinced.

The registry is a tree structure similar to a file system.  There haven't
been any calls I have heard of to move the os.listdir() function or the glob
module to a more "OO" style.  I don't see a "directory" object that supports
Python-style indexing or iteration.  I don't see any of your other benefits
being applied to Python's view of the file system - so why is the registry
so different?

To try and get more productive:  Bill, Gordon et al appear to have the
sense to stay out of this debate.  Unless other people do chime in, Paul
and I will remain at an impasse, and no one will be happy.  I would much
prefer to move this forward than to vent at each other regarding mails
neither of us can remember in detail ;-)

So what to do?  Anyone?  If even _one_ experienced Windows developer on
this list can say they believe "winreg" is appropriate and intuitive, I am
happy to shut up (and then simply push for better winreg documentation ;-)

Mark.




From moshez at math.huji.ac.il  Tue Aug  1 10:36:29 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Tue, 1 Aug 2000 11:36:29 +0300 (IDT)
Subject: [Python-Dev] Access to the Bug Database
Message-ID: <Pine.GSO.4.10.10008011134540.9510-100000@sundial>

Hi!

I think I need access to the bug database -- but in the meantime,
anyone who wants to mark 110612 as closed is welcome to.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Tue Aug  1 10:40:53 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 04:40:53 -0400
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFFGNAA.tim_one@email.msn.com>

FWIW, I ignored all the winreg modules, and all the debate about them.  Why?
Just because Mark's had been in use for years already, so was already
battle-tested.  There's no chance that any other platform will ever make use
of this module, and given that its appeal is thus solely to Windows users,
it was fine by me if it didn't abstract *anything* away from MS's Win32 API.
MS's APIs are hard enough to understand without somebody else putting their
own layers of confusion <0.9 wink> on top of them.

May as well complain that the SGI-specific cd.open() function warns that if
you pass anything at all to its optional "mode" argument, it had better be
the string "r" (maybe that makes some kind of perverse sense to SGI weenies?
fine by me if so).

So, sorry, but I haven't even looked at Paul's code.  I probably should,
but-- jeez! --there are so many other things that *need* to get done.  I did
look at Mark's (many months ago) as part of helping him reformat it to
Guido's tastes, and all I remember thinking about it then is "yup, looks a
whole lot like the Windows registry API -- when I need it I'll be able to
browse the MS docs lightly and use it straight off -- good!".

So unless Mark went and did something like clean it up <wink>, I still think
it's good.





From tim_one at email.msn.com  Tue Aug  1 11:27:59 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 1 Aug 2000 05:27:59 -0400
Subject: [Python-Dev] Access to the Bug Database
In-Reply-To: <Pine.GSO.4.10.10008011134540.9510-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFGGNAA.tim_one@email.msn.com>

[Moshe Zadka]
> I think I need access to the bug database

Indeed, you had no access to the SF bug database at all.  Neither did a
bunch of others.  I have a theory about that:  I mentioned several weeks ago
that IE5 simply could not display the Member Permissions admin page
correctly, after it reached a certain size.  I downloaded a stinking
Netscape then, and that worked fine until it reached *another*, larger size,
at which point the display of some number of the bottom-most entries (like
where moshez lives) gets totally screwed up *sometimes*.  My *suspicion* is
that if an admin changes a permission while either IE5 or NS is in this
screwed-up state, it wreaks havoc with the permissions of the members whose
display lines are screwed up.  It's a weak suspicion <wink>, but a real one:
I've only seen NS screw up some contiguous number of the bottom-most lines,
I expect all the admins are using NS, and it was all and only a contiguous
block of developers at the bottom of the page who had their Bug Manager
permissions set to None (along with other damaged values) when I brought up
the page.

So, admins, look out for that!

Anyway, I just went thru and gave every developer admin privileges on the SF
Bug Manager.  Recall that it will probably take about 6 hours to take
effect, though.

> -- but in the meantime, anyone who wants to mark 110612 as
> closed is welcome to.

No, they're not:  nobody who doesn't know *why* the bug is being closed
should even think about closing it.  It's still open.

you're-welcome<wink>-ly y'rs  - tim





From Vladimir.Marangozov at inrialpes.fr  Tue Aug  1 11:53:36 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 1 Aug 2000 11:53:36 +0200 (CEST)
Subject: [Python-Dev] SET_LINENO and python options
In-Reply-To: <14725.63622.190585.197392@beluga.mojam.com> from "Skip Montanaro" at Jul 31, 2000 05:07:02 PM
Message-ID: <200008010953.LAA02082@python.inrialpes.fr>

Skip Montanaro wrote:
> 
> Isn't that what the code object's co_lnotab is for?  I thought the idea was
> to dispense with SET_LINENO altogether and just compute line numbers using
> co_lnotab on those rare occasions (debugging, tracebacks, etc) when you
> needed them.

Don't worry about it anymore. It's all in Postponed patch #101022 at SF.
It makes the current "-O" the default (and uses co_lnotab), and reverts
to the current default with "-d".
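As a sketch of the idea Skip describes (this uses the modern dis
module's helper for illustration, not the patch itself): the line
numbers can be decoded from the code object's table on demand, with no
SET_LINENO opcodes executed at runtime.

```python
import dis

def f():
    x = 1
    y = 2
    return x + y

# dis.findlinestarts decodes the code object's packed line-number
# table (co_lnotab in the CPython of this era) into (offset, lineno)
# pairs -- all a traceback printer or debugger needs.
linenos = [line for _, line in dis.findlinestarts(f.__code__)]

first = f.__code__.co_firstlineno
assert first + 1 in linenos   # x = 1
assert first + 2 in linenos   # y = 2
assert first + 3 in linenos   # return x + y
```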

I give myself a break on this. You guys need to test it now and report
some feedback and impressions. If only to help Guido make up his mind
and give him a chance to pronounce on it <wink>.

[?!ng]
> Anyway, i suppose this is all rather moot now that Vladimir has a
> clever scheme for tracing even without SET_LINENO.  Go Vladimir!
> Your last proposal sounded great.

Which one? They are all the latest <wink>.
See also the log msg of the latest tiny patch update at SF.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From nhodgson at bigpond.net.au  Tue Aug  1 12:47:12 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Tue, 1 Aug 2000 20:47:12 +1000
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>
Message-ID: <010501bffba5$db4ebf90$8119fea9@neil>

> So what to do?  Anyone?  If even _one_ experienced Windows developer on
> this list can say they believe "winreg" is appropriate and intuitive, I am
> happy to shut up (and then simply push for better winreg documentation ;-)

   Sorry but my contribution isn't going to help much with breaking the
impasse.

   Registry code tends to be little lumps of complexity you don't touch once
it is working. The Win32 Reg* API is quite ugly - RegCreateKeyEx
takes/returns 10 parameters but you only normally want 3 and the return
status and everyone asks for KEY_ALL_ACCESS until the installation testers
tell you it fails for non-Administrators. So it would be good if the API was
simpler and defaulted everything you don't need to set.
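A sketch of that "default everything you don't need to set" idea; all
names here are hypothetical, and the many-parameter low-level call is
stubbed out rather than the real Win32 function:

```python
# Hypothetical wrapper illustrating sane defaults over a verbose
# low-level call. KEY_READ's value is borrowed from the Win32 headers.

KEY_READ = 0x20019

def _reg_create_key_ex(key, sub_key, reserved, cls, options,
                       sam_desired, security_attributes):
    # Stand-in for the real RegCreateKeyEx, which takes/returns 10
    # parameters in C; here it just echoes what it was given.
    return ("handle", key, sub_key, sam_desired)

def create_key(parent, name, access=KEY_READ):
    """The 3 things you normally want; everything else defaulted."""
    return _reg_create_key_ex(parent, name, 0, None, 0, access, None)

handle = create_key("HKEY_CURRENT_USER", r"Software\MyApp")
assert handle == ("handle", "HKEY_CURRENT_USER",
                  r"Software\MyApp", KEY_READ)
```

Note that requesting KEY_READ by default, rather than KEY_ALL_ACCESS,
is exactly the non-Administrator-friendly behavior described above.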

   But I only hack the registry about once a year with Python. So if it's
closer to the Win32 API then that helps me to use existing knowledge and
documentation.

   When writing an urllib patch recently, winreg seemed OK. Is it complete
enough? Are the things you can't do with it important for its role? IMO, if
winreg can handle the vast majority of cases (say, 98%) then it's a useful
tool and people who need RegSetKeySecurity and similar can go to win32api.
Do the distutils developers know how much registry access they need?

   Enough fence sitting for now,

   Neil






From MarkH at ActiveState.com  Tue Aug  1 13:08:58 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 1 Aug 2000 21:08:58 +1000
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <010501bffba5$db4ebf90$8119fea9@neil>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCELIDCAA.MarkH@ActiveState.com>

Just to clarify (or confuse) the issue:

>    When writing an urllib patch recently, winreg seemed OK. Is
> it complete
> enough? Are the things you can't do with it important for its
> role? IMO, if
> winreg can handle the vast majority of cases (say, 98%) then its a useful
> tool and people who need RegSetKeySecurity and similar can go to
> win32api.

Note that Neil was actually using _winreg - the exposure of the raw Win32
API.  Part of my applying the patch was to rename the usage of "winreg" to
"_winreg".

Between the time of you writing the original patch and it being applied,
the old "winreg" module was renamed to "_winreg", and Paul's new
"winreg.py" was added.  The bone of contention is the new "winreg.py"
module, which urllib does _not_ use.

Mark.




From jim at interet.com  Tue Aug  1 15:28:40 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Tue, 01 Aug 2000 09:28:40 -0400
Subject: [Python-Dev] InfoWorld July 17 looks at Zope and Python
References: <397DB146.C68F9CD0@interet.com> <398654A8.37EB17BA@prescod.net>
Message-ID: <3986D088.E82E2162@interet.com>

Paul Prescod wrote:
> 
> Would you mind giving me the jist of the review? 20-word summary, if you
> don't mind.

Please note that I don't necessarily agree with the
reviews.  Also, there is no such thing as bad publicity.

Page 50: "Zope is a powerful application server.  Version
2.2 beta scales well, but enterprise capability, Python
language raise costs beyond the competition's."

The author claims he must purchase ZEO for $25-50K, which is
too expensive.  Zope is dedicated to OOP, but shops not
doing OOP will have problems understanding it.  Python
expertise is necessary, but shops already know VB, C++ and
JavaScript.

Page 58:  "After many tutorials, I'm still waiting to
become a Zope addict."

Zope is based on Python, but that is no problem because
you do most programming in DTML, which is like HTML.  It is
hard to get started in Zope because of the lack of documentation,
it is hard to write code in a browser text box, and the OOP-to-the-max
philosophy is unlike a familiar relational database.
Zope has an unnecessarily high nerd factor.  It fails to
automate simple tasks.


My point in all this is that we design features to
appeal to computer scientists instead of "normal users".

JimA



From billtut at microsoft.com  Tue Aug  1 15:57:37 2000
From: billtut at microsoft.com (Bill Tutt)
Date: Tue, 1 Aug 2000 06:57:37 -0700 
Subject: [Python-Dev] New winreg module really an improvement?
Message-ID: <58C671173DB6174A93E9ED88DCB0883D0A610A@red-msg-07.redmond.corp.microsoft.com>

Mark wrote: 
> To try and get more productive:  Bill, Gordon et al appear to have the
> sense to stay out of this debate.  Unless other people do chime in, Paul
> and I will remain at an impasse, and no one will be happy.  I would much
> prefer to move this forward than to vent at each other regarding mails
> neither of us can remember in detail ;-)

I'm actually in the process of checking it out, and am hoping to compose
some comments on it later today.
I do know this about abstracting the registry APIs. If it doesn't allow you
to do everything you can do with the normal APIs, then you've failed in your
abstraction. (Which is probably why I've never yet seen a successful
abstraction of the API. :) )
The registry is indeed a bizarre critter. Keys have values, and values have
values. Ugh.... It's enough to drive a sane man bonkers, and here I was
being annoyed by the person who originally designed the NT console APIs,
silly me....

Bill





From gmcm at hypernet.com  Tue Aug  1 18:16:54 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 1 Aug 2000 12:16:54 -0400
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEELEDCAA.MarkH@ActiveState.com>
References: <39866F8D.FCFA85CB@prescod.net>
Message-ID: <1246975873-72274187@hypernet.com>

[Mark]
> To try and get more productive:  Bill, Gordon et al appear to
> have the sense to stay out of this debate.  

Wish I had that much sense...

I'm only +0 on the philosophy of a friendly Pythonic wrapper: 
the registry is only rarely the "right" solution. You need it 
when you have small amounts of persistent data that needs to 
be available to multiple apps and / or Windows. I actively 
discourage use of the registry for any other purposes. So 
making it easy to use is of very low priority for me. 

In addition, I doubt that a sane wrapper can be put over it. At 
first blush, it looks like a nested dict. But the keys are 
ordered. And a leaf is more like a list of tuples [(value, data), ]. 
But if you pull up regedit and look at how it's used, the (user-
speak) "value" may be a (MS-speak) "key", "value" or "data". 
Just note the number of entries where a keyname has one 
(value, data) pair that consists of ("(Default)", "(value not 
set)"). Or the number where keyname must be opened, but 
the (value, data) pair is ("(Default)", something). (It doesn't 
help that "key" may mean "keyname" or "keyhandle", and 
"name" may mean "keyname" or "valuename" and "value" 
may mean "valuename" or "datavalue".)

IOW, this isn't like passing lists (instead of FD_SETs) to  
select. No known abstract container matches the registry. My 
suspicion is that any attempt to map it just means the user 
will have to understand both the underlying API and the 
mapping.
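A toy model (not a proposed API) makes the structural mismatch
concrete: each node carries both a namespace of subkeys and its own
ordered list of (valuename, data) pairs, so neither a nested dict nor
a list of tuples fits on its own.

```python
# Toy model of a registry key: BOTH a dict of subkeys AND an ordered
# list of (valuename, data) pairs, including the ever-present
# "(Default)" entry. All names here are illustrative.

class ToyKey:
    def __init__(self):
        self.subkeys = {}                    # keyname -> ToyKey
        self.values = [("(Default)", None)]  # ordered (valuename, data)

root = ToyKey()
root.subkeys["Software"] = ToyKey()
root.subkeys["Software"].values.append(("InstallDir", r"C:\App"))

# A naive "nested dict" cannot represent this: a subkey named
# "Software" and a value named "Software" can coexist under one key.
root.values.append(("Software", "a value, not a subkey"))
assert "Software" in root.subkeys
assert ("Software", "a value, not a subkey") in root.values
```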

As a practical matter, it looks to me like winreg (under any but 
the most well-informed usage) may well leak handles. If so, 
that would be a disaster. But I don't have time to check it out.

In sum:
 - I doubt the registry can be made to look elegant
 - I use it so little I don't really care

- Gordon



From paul at prescod.net  Tue Aug  1 18:52:45 2000
From: paul at prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 12:52:45 -0400
Subject: [Python-Dev] Winreg recap
Message-ID: <3987005C.9C45D7B6@prescod.net>

I specifically asked everyone here if an abstraction was a good idea. I
got three + votes and no - votes. One of the + votes requested that we
still ship the underlying module. Fine. I was actually pointed (on
python-dev) to specs for an abstraction layer that AFAIK had been
designed *on Python-dev*.

Back then, I said:

> > I've just had a chance to look at the winreg module. I think that it is
> > too low-level.

Mark Hammond said:
> I agree. There was a proposal (from Thomas Heller, IIRC) to do just this.
> I successfully argued there should be _2_ modules for Python - the raw
> low-level API, which guarantees you can do (almost) anything.  A
> higher-level API could cover the 80% of cases.
> ...
> I have no real problem with your proposed design, as long as it it written
> in Python, _using_ the low-level API.  It could be called "registry" or I
> would even be happy for "winreg.pyd" -> "_winreg.pyd" and your new module
> to be called "winreg.py"

Gordon pointed me to the spec. I took it and expanded on it to cover a
wider range of cases.

So now I go off and code it up and in addition to complaining about one
detail, I'm also told that there is no real point to having a high level
API. Windows users are accustomed to hacks and pain so crank it up!

> FWIW, I ignored all the winreg modules, and all the debate about them.  Why?
> Just because Mark's had been in use for years already, so was already
> battle-tested.  There's no chance that any other platform will ever make use
> of this module, and given that its appeal is thus solely to Windows users,
> it was fine by me if it didn't abstract *anything* away from MS's Win32 API.

It is precisely because it is for Windows users -- often coming from VB,
JavaScript or now C# -- that it needs to be abstracted.

I have the philosophy that I come to Python (both the language and the
library) because I want things to be easy and powerful at the same time.
Wherever feasible, our libraries *should* be cleaner and better than
the hacks that they cover up. Shouldn't they? I mean even *Microsoft*
abstracted over the registry API for VB, JavaScript, C# (and perhaps
Java). Are we really going to do less for our users?

To me, Python (language and library) is a refuge from the hackiness of
the rest of the world.

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From paul at prescod.net  Tue Aug  1 18:53:31 2000
From: paul at prescod.net (Paul Prescod)
Date: Tue, 01 Aug 2000 12:53:31 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBAEEGDCAA.mhammond@skippinet.com.au> <200007281206.HAA04102@cj20424-a.reston1.va.home.com>
Message-ID: <3987008B.35D5C2A2@prescod.net>

Guido van Rossum wrote:
> 
> I vaguely remember that I wasn't happy with the way this was handled
> either, but was too busy at the time to look into it.  (I can't say
> whether I'm happy with the module or not, since I've never tried to
> use it.  But I do feel unhappy about the process.)

I was also unhappy with the process but from a different perspective.

A new module appeared in the Python library: _winreg. It was based on
tried and true code, but its API had many placeholder arguments (where
Microsoft had placeholder arguments), used function call syntax for
things that were clearly methods (as Microsoft did for C), had an
enumeration mechanism that seems, to me, to be very unPythonic,
had many undocumented features and constants, and the documented
methods and properties often have weird Microsoft conventions
(e.g. SetValueEx).

The LaTeX documentation for the old winreg says of one method: "This is
Lame Lame Lame, DO NOT USE THIS!!!"

Now I am still working on new winreg. I got involved in a recursive 
project to avoid writing the docs twice in two different formats. We 
are still in the beta period so there is no need to panic about 
documentation yet.

I would love nothing more than to hear that Windows registry handling is
hereby delayed until Python 7 or until someone more interested wants to
work on it for the love of programming. But if that is not going to
happen then I will strongly advise against falling back to _winreg which
is severely non-Pythonic.

> I vaguely remember that Paul Prescod's main gripe with the _winreg API
> was that it's not object-oriented enough -- but that seems his main
> gripe about most code these days. :-)

In this case it wasn't a mild preference, it was a strong allergic
reaction!

> Paul, how much experience with using the Windows registry did you have
> when you designed the new API?

I use it off and on. There are still corners of _winreg that I don't
understand. That's part of why I thought it needed to be covered up with
something that could be fully documented. To get even the level of
understanding I have, of the *original* _winreg, I had to scour the Web.
The perl docs were the most helpful. :)

Anyhow, Mark isn't complaining about me misunderstanding it, he's
complaining about my mapping into the Python object model. That's fair.
That's what python-dev is for.

As far as Greg using _winreg, my impression was that that code predates
new winreg. I think that anyone who reads even just the docstrings for
the new one and the documentation for the other is going to feel that 
the new one is at the very least more organized and thought out. Whether
it is properly designed is up to users to decide.

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From guido at beopen.com  Tue Aug  1 20:20:23 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 13:20:23 -0500
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: Your message of "Tue, 01 Aug 2000 03:16:30 -0400."
             <3986794E.ADBB938C@prescod.net> 
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>  
            <3986794E.ADBB938C@prescod.net> 
Message-ID: <200008011820.NAA30284@cj20424-a.reston1.va.home.com>

Paul wrote:
> I had no personal interest in an API for the windows registry but I
> could not, in good conscience, let the original one become the 
> standard Python registry API.

and later:
> I use it off and on. There are still corners of _winreg that I don't
> understand. That's part of why I thought it needed to be covered up with
> something that could be fully documented. To get even the level of
> understanding I have, of the *original* _winreg, I had to scour the Web.
> The perl docs were the most helpful. :)

I believe this is the crux of the problem.  Your only mistake was that
you criticized and then tried to redesign a (poorly designed) API that
you weren't intimately familiar with.

My boss tries to do this occasionally; he has a tendency to complain
that my code doesn't contain enough classes.  I tell him to go away --
he only just started learning Python from a book that I've never seen,
so he wouldn't understand...

Paul, I think that the best thing to do now is to withdraw winreg.py,
and to keep (and document!) the _winreg extension with the
understanding that it's a wrapper around a poorly designed API but at
least it's very close to the C API.  The leading underscore should be
a hint that this is not a module for every day use.

Hopefully someday someone will eventually create a set of higher level
bindings modeled after the Java, VB or C# version of the API.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From fdrake at beopen.com  Tue Aug  1 19:43:16 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 1 Aug 2000 13:43:16 -0400 (EDT)
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <200008011820.NAA30284@cj20424-a.reston1.va.home.com>
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>
	<3986794E.ADBB938C@prescod.net>
	<200008011820.NAA30284@cj20424-a.reston1.va.home.com>
Message-ID: <14727.3124.622333.980689@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > and to keep (and document!) the _winreg extension with the
 > understanding that it's a wrapper around poorly designed API but at
 > least it's very close to the C API.  The leading underscore should be
 > a hint that this is not a module for every day use.

  It is documented (as _winreg), but I've not reviewed the section in
great detail (yet!).  It will need to be revised to not refer to the
winreg module as the preferred API.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From moshez at math.huji.ac.il  Tue Aug  1 20:30:48 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Tue, 1 Aug 2000 21:30:48 +0300 (IDT)
Subject: [Python-Dev] Bug Database
Message-ID: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>

I've just had a quick view over the database, and saw what we can prune at
no cost:

110647 -- Segmentation fault in "%.1200d" % 1. Fixed for me...
110649 -- Core dumps on compiling big expressions ('['+'1,'*100000+'1]'). 
          Fixed for me -- now throws a SyntaxError
110653 -- Complain about how
          class foo:

              def __init__(self):
                  self.bar1 = self.bar

              def bar(self):
                  pass
          Creates cycles. A notabug if I ever saw one.
110654 -- 1+0j tested false. The bug was fixed.
110679 -- math.log(0) dumps core. Gives OverflowError for me...(I'm using
          a different OS, but the same CPU family (intel))
110710 -- range(10**n) gave segfault. Works for me -- either works, or throws
          MemoryError
110711 -- apply(foo, bar, {}) throws MemoryError. Works for me. (But might
          be an SGI problem)
110712 -- seems to be a duplicate of 110711
110715 -- urllib.urlretrieve() segfaults under kernel 2.2.12. Works for
          me with 2.2.15. 
110740, 110741, 110743, 110745, 110746, 110747, 110749, 110750 -- dups of 110715
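For the record, the 110653 snippet boils down to a bound-method reference cycle: the instance holds a bound method, and the bound method holds the instance. A small sketch (modern Python, purely illustrative -- Python only grew a cycle collector in 2.0) of why the instance outlives its last external reference:

```python
import gc
import weakref

class Foo:
    def __init__(self):
        # Storing a bound method on the instance creates a cycle:
        # self -> self.bar1 (bound method) -> self
        self.bar1 = self.bar

    def bar(self):
        pass

f = Foo()
ref = weakref.ref(f)
del f
# The cycle keeps the instance alive past the last external reference...
assert ref() is not None
# ...until the cycle collector runs.
gc.collect()
assert ref() is None
```

So "notabug" is right: it is expected reference-counting behavior, not a leak in the interpreter.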

I've got to go to sleep now....

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From jeremy at beopen.com  Tue Aug  1 20:47:47 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 1 Aug 2000 14:47:47 -0400 (EDT)
Subject: [Python-Dev] Bug Database
In-Reply-To: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>
References: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>
Message-ID: <14727.6995.164586.983795@bitdiddle.concentric.net>

Thanks for doing some triage, Moshe!

I am in the process of moving the bug database from jitterbug to
SourceForge.  There are still a few kinks in the process, which I am
trying to finish today.  There are two problems you will see with the
current database:

    * Many bugs do not contain all the followup messages that
    Jitterbug has.  I am working to add them.

    * There are many duplicates of some bugs.  The duplicates are the
    result of the debugging process for my Jitterbug -> SF conversion
    script.  I will remove these before I am done.  Any bug numbered
    higher than 110740 is probably a duplicate at this point.

The conversion process has placed most of the Jitterbug entries in the
SF bug tracker.  The PR number is in the SF summary and most of the
relevant Jitterbug headers (submitter, date, os, version) are part of
the body of the SF bug.  Any followups to the Jitterbug report are
stored as followup comments in SF.

The SF bug tracker has several fields that we can use to manage bug
reports.

* Category: Describes what part of Python the bug applies to.  Current
values are parser/compiler, core, modules, library, build, windows,
documentation.  We can add more categories, e.g. library/xml, if that
is helpful.

* Priority: We can assign a value from 1 to 9, where 9 is the highest
priority.  We will have to develop some guidelines for what those
priorities mean.  Right now everything is priority 5 (medium).  I would
hazard a guess that bugs causing core dumps should have much higher
priority.

* Group: These reproduce some of the Jitterbug groups, like trash,
platform-specific, and irreproducible.  These are rough categories
that we can use, but I'm not sure how valuable they are.

* Resolution: What we plan to do about the bug.

* Assigned To: We can now assign bugs to specific people for
resolution.

* Status: Open or Closed.  When a bug has been fixed in the CVS
repository and a test case added to cover the bug, change its status
to Closed.

New bug reports should use the SourceForge interface.

Jeremy



From guido at beopen.com  Tue Aug  1 22:14:39 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 15:14:39 -0500
Subject: [Python-Dev] Bug Database
In-Reply-To: Your message of "Tue, 01 Aug 2000 14:47:47 -0400."
             <14727.6995.164586.983795@bitdiddle.concentric.net> 
References: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>  
            <14727.6995.164586.983795@bitdiddle.concentric.net> 
Message-ID: <200008012014.PAA31076@cj20424-a.reston1.va.home.com>

> * Category: Describes what part of Python the bug applies to.  Current
> values are parser/compiler, core, modules, library, build, windows,
> documentation.  We can add more categories, e.g. library/xml, if that
> is helpful.

Before it's too late, would it make sense to try and get the
categories to be the same in the Bug and Patch managers?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From m.favas at per.dem.csiro.au  Tue Aug  1 22:30:42 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Wed, 02 Aug 2000 04:30:42 +0800
Subject: [Python-Dev] regression test failure in test_tokenize?
Message-ID: <39873372.1C6F8CE1@per.dem.csiro.au>

Current CVS (Wed Aug  2 04:22:16 WST 2000) fails on Tru64 Unix:

./python Lib/test/regrtest.py test_tokenize.py 
test_tokenize
test test_tokenize failed -- Writing: "57,4-57,5:\011NUMBER\011'3'",
expected: "57,4-57,8:\011NUMBER\011'3."
1 test failed: test_tokenize

Test produces (snipped):
57,4-57,5:      NUMBER  '3'

Test should produce (if supplied output correct):
57,4-57,8:      NUMBER  '3.14'

Is this just me, or an un-checked checkin? (I noticed some new sre bits
in my current CVS version.)

Mark

-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913



From akuchlin at mems-exchange.org  Tue Aug  1 22:47:57 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 1 Aug 2000 16:47:57 -0400
Subject: [Python-Dev] regression test failure in test_tokenize?
In-Reply-To: <39873372.1C6F8CE1@per.dem.csiro.au>; from m.favas@per.dem.csiro.au on Wed, Aug 02, 2000 at 04:30:42AM +0800
References: <39873372.1C6F8CE1@per.dem.csiro.au>
Message-ID: <20000801164757.B27333@kronos.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 04:30:42AM +0800, Mark Favas wrote:
>Current CVS (Wed Aug  2 04:22:16 WST 2000) fails on Tru64 Unix:
>Is this just me, or an un-checked checkin? (I noticed some new sre bits
>in my current CVS version.)

test_tokenize works fine using the current CVS on Linux; perhaps this
is a 64-bit problem in sre manifesting itself?

--amk



From effbot at telia.com  Tue Aug  1 23:16:15 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 1 Aug 2000 23:16:15 +0200
Subject: [Python-Dev] regression test failure in test_tokenize?
References: <39873372.1C6F8CE1@per.dem.csiro.au> <20000801164757.B27333@kronos.cnri.reston.va.us>
Message-ID: <02ac01bffbfd$bc6b27a0$f2a6b5d4@hagrid>

andrew wrote:
> On Wed, Aug 02, 2000 at 04:30:42AM +0800, Mark Favas wrote:
> >Current CVS (Wed Aug  2 04:22:16 WST 2000) fails on Tru64 Unix:
> >Is this just me, or an un-checked checkin? (I noticed some new sre bits
> >in my current CVS version.)
> 
> test_tokenize works fine using the current CVS on Linux; perhaps this
> is a 64-bit problem in sre manifesting itself?

I've confirmed (and fixed) the bug reported by Mark.  It was a nasty
little off-by-one error in the "branch predictor" code...

But I think I know why you didn't see anything: Guido just checked
in the following change to re.py:

*** 21,26 ****
  #
  
! engine = "sre"
! # engine = "pre"
  
  if engine == "sre":
--- 21,26 ----
  #
  
! # engine = "sre"
! engine = "pre"
  
  if engine == "sre":

</F>




From guido at beopen.com  Wed Aug  2 00:21:51 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 17:21:51 -0500
Subject: [Python-Dev] regression test failure in test_tokenize?
In-Reply-To: Your message of "Tue, 01 Aug 2000 23:16:15 +0200."
             <02ac01bffbfd$bc6b27a0$f2a6b5d4@hagrid> 
References: <39873372.1C6F8CE1@per.dem.csiro.au> <20000801164757.B27333@kronos.cnri.reston.va.us>  
            <02ac01bffbfd$bc6b27a0$f2a6b5d4@hagrid> 
Message-ID: <200008012221.RAA05722@cj20424-a.reston1.va.home.com>

> But I think I know why you didn't see anything: Guido just checked
> in the following change to re.py:
> 
> *** 21,26 ****
>   #
>   
> ! engine = "sre"
> ! # engine = "pre"
>   
>   if engine == "sre":
> --- 21,26 ----
>   #
>   
> ! # engine = "sre"
> ! engine = "pre"
>   
>   if engine == "sre":

Ouch.  Did I really?  I didn't intend to!  I'll back out right away...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From barry at scottb.demon.co.uk  Wed Aug  2 01:01:29 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Wed, 2 Aug 2000 00:01:29 +0100
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <000701bff108$950ec9f0$060210ac@private>
Message-ID: <000801bffc0c$6d985490$060210ac@private>

If someone in the core of Python thinks a patch implementing
what I've outlined is useful, please let me know and I will
generate the patch.

	Barry




From MarkH at ActiveState.com  Wed Aug  2 01:13:31 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Wed, 2 Aug 2000 09:13:31 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <000801bffc0c$6d985490$060210ac@private>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIENBDCAA.MarkH@ActiveState.com>

> If someone in the core of Python thinks a patch implementing
> what I've outlined is useful please let me know and I will
> generate the patch.

Umm - I'm afraid that I don't keep my python-dev emails for that long, and
right now I'm too lazy/busy to dig around the archives.

Exactly what did you outline?  I know it went around a few times, and I
can't remember who said what.  For my money, I liked Fredrik's solution
best (check Py_IsInitialized() in Py_InitModule4()), but as mentioned that
only solves it for the next version of Python; it doesn't solve the fact
that 1.5 modules will crash under 1.6/2.0.

It would definitely be excellent to get _something_ in the CNRI 1.6
release, so the BeOpen 2.0 release can see the results.

But-I-doubt-anyone-will-release-extension-modules-for-1.6-anyway ly,

Mark.





From jeremy at beopen.com  Wed Aug  2 01:56:27 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 1 Aug 2000 19:56:27 -0400 (EDT)
Subject: [Python-Dev] Bug Database
In-Reply-To: <200008012014.PAA31076@cj20424-a.reston1.va.home.com>
References: <Pine.GSO.4.10.10008012128410.11190-100000@sundial>
	<14727.6995.164586.983795@bitdiddle.concentric.net>
	<200008012014.PAA31076@cj20424-a.reston1.va.home.com>
Message-ID: <14727.25515.570860.775496@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

  >> * Category: Describes what part of Python the bug applies to.
  >> Current values are parser/compiler, core, modules, library,
  >> build, windows, documentation.  We can add more categories,
  >> e.g. library/xml, if that is helpful.

  GvR> Before it's too late, would it make sense to try and get the
  GvR> categories to be the same in the Bug and Patch managers?

Yes, as best we can do.  We've got all the same names, though the
capitalization varies sometimes.

Jeremy



From gstein at lyra.org  Wed Aug  2 03:26:51 2000
From: gstein at lyra.org (Greg Stein)
Date: Tue, 1 Aug 2000 18:26:51 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PC _winreg.c,1.7,1.8
In-Reply-To: <200007280344.UAA12335@slayer.i.sourceforge.net>; from mhammond@users.sourceforge.net on Thu, Jul 27, 2000 at 08:44:43PM -0700
References: <200007280344.UAA12335@slayer.i.sourceforge.net>
Message-ID: <20000801182651.S19525@lyra.org>

This could be simplified quite a bit by using PyObject_AsReadBuffer() from
abstract.h ...

Cheers,
-g

On Thu, Jul 27, 2000 at 08:44:43PM -0700, Mark Hammond wrote:
> Update of /cvsroot/python/python/dist/src/PC
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv12325
> 
> Modified Files:
> 	_winreg.c 
> Log Message:
> Allow any object supporting the buffer protocol to be written as a binary object.
> 
> Index: _winreg.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/PC/_winreg.c,v
> retrieving revision 1.7
> retrieving revision 1.8
> diff -C2 -r1.7 -r1.8
> *** _winreg.c	2000/07/16 12:04:32	1.7
> --- _winreg.c	2000/07/28 03:44:41	1.8
> ***************
> *** 831,837 ****
>   				*retDataSize = 0;
>   			else {
> ! 				if (!PyString_Check(value))
> ! 					return 0;
> ! 				*retDataSize = PyString_Size(value);
>   				*retDataBuf = (BYTE *)PyMem_NEW(char,
>   								*retDataSize);
> --- 831,844 ----
>   				*retDataSize = 0;
>   			else {
> ! 				void *src_buf;
> ! 				PyBufferProcs *pb = value->ob_type->tp_as_buffer;
> ! 				if (pb==NULL) {
> ! 					PyErr_Format(PyExc_TypeError, 
> ! 						"Objects of type '%s' can not "
> ! 						"be used as binary registry values", 
> ! 						value->ob_type->tp_name);
> ! 					return FALSE;
> ! 				}
> ! 				*retDataSize = (*pb->bf_getreadbuffer)(value, 0, &src_buf);
>   				*retDataBuf = (BYTE *)PyMem_NEW(char,
>   								*retDataSize);
> ***************
> *** 840,847 ****
>   					return FALSE;
>   				}
> ! 				memcpy(*retDataBuf,
> ! 				       PyString_AS_STRING(
> ! 				       		(PyStringObject *)value),
> ! 				       *retDataSize);
>   			}
>   			break;
> --- 847,851 ----
>   					return FALSE;
>   				}
> ! 				memcpy(*retDataBuf, src_buf, *retDataSize);
>   			}
>   			break;
> 
> 
> _______________________________________________
> Python-checkins mailing list
> Python-checkins at python.org
> http://www.python.org/mailman/listinfo/python-checkins

-- 
Greg Stein, http://www.lyra.org/



From guido at beopen.com  Wed Aug  2 06:09:38 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 23:09:38 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
Message-ID: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>

We still don't have a new license for Python 1.6; Bob Kahn and Richard
Stallman need to talk before a decision can be made about how to deal
with the one remaining GPL incompatibility.  While we're all waiting,
we're preparing the CNRI 1.6 release at SourceForge (part of the deal
is that the PythonLabs group finishes the 1.6 release for CNRI).  The
last thing I committed today was the text (dictated by Bob Kahn) for
the new LICENSE file that will be part of the 1.6 beta 1 release.
(Modulo any changes that will be made to the license text to ensure
GPL compatibility.)

Since anyone with an anonymous CVS setup can now read the license
anyway, I might as well post a copy here so that you can all get used
to it...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

======== LICENSE =======================================================

A. HISTORY OF THE SOFTWARE

Python originated in 1991 at Stichting Mathematisch Centrum (CWI) in
the Netherlands as an outgrowth of a language called ABC.  Its
principal author was Guido van Rossum, although it included smaller
contributions from others at CWI and elsewhere.  The last version of
Python issued by CWI was Python 1.2.  In 1995, Mr. van Rossum
continued his work on Python at the Corporation for National Research
Initiatives (CNRI) in Reston, Virginia where several versions of the
software were generated.  Python 1.6 is the last of the versions
developed at CNRI.



B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING Python 1.6, beta 1


1. CNRI LICENSE AGREEMENT 

        PYTHON 1.6, beta 1

        CNRI OPEN SOURCE LICENSE AGREEMENT


IMPORTANT: PLEASE READ THE FOLLOWING AGREEMENT CAREFULLY.

BY CLICKING ON "ACCEPT" WHERE INDICATED BELOW, OR BY COPYING,
INSTALLING OR OTHERWISE USING PYTHON 1.6, beta 1 SOFTWARE, YOU ARE
DEEMED TO HAVE AGREED TO THE TERMS AND CONDITIONS OF THIS LICENSE
AGREEMENT.

1. This LICENSE AGREEMENT is between the Corporation for National
Research Initiatives, having an office at 1895 Preston White Drive,
Reston, VA 20191 ("CNRI"), and the Individual or Organization
("Licensee") accessing and otherwise using Python 1.6, beta 1 software
in source or binary form and its associated documentation, as released
at the www.python.org Internet site on August 5, 2000 ("Python
1.6b1").

2. Subject to the terms and conditions of this License Agreement, CNRI
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python 1.6b1
alone or in any derivative version, provided, however, that CNRI's
License Agreement is retained in Python 1.6b1, alone or in any
derivative version prepared by Licensee.

Alternately, in lieu of CNRI's License Agreement, Licensee may
substitute the following text (omitting the quotes): "Python 1.6, beta
1, is made available subject to the terms and conditions in CNRI's
License Agreement.  This Agreement may be located on the Internet
using the following unique, persistent identifier (known as a handle):
1895.22/1011.  This Agreement may also be obtained from a proxy server
on the Internet using the URL:http://hdl.handle.net/1895.22/1011".

3. In the event Licensee prepares a derivative work that is based on
or incorporates Python 1.6b1 or any part thereof, and wants to make the
derivative work available to the public as provided herein, then
Licensee hereby agrees to indicate in any such work the nature of the
modifications made to Python 1.6b1.

4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. This License Agreement shall be governed by and interpreted in all
respects by the law of the State of Virginia, excluding conflict of
law provisions.  Nothing in this License Agreement shall be deemed to
create any relationship of agency, partnership, or joint venture
between CNRI and Licensee.  This License Agreement does not grant
permission to use CNRI trademarks or trade name in a trademark sense
to endorse or promote products or services of Licensee, or any third
party.

8. By clicking on the "ACCEPT" button where indicated, or by copying,
installing or otherwise using Python 1.6b1, Licensee agrees to be
bound by the terms and conditions of this License Agreement.

        ACCEPT



2. CWI PERMISSIONS STATEMENT AND DISCLAIMER

Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
The Netherlands.  All rights reserved.

Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Stichting Mathematisch
Centrum or CWI not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.

STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

========================================================================



From guido at beopen.com  Wed Aug  2 06:42:30 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 01 Aug 2000 23:42:30 -0500
Subject: [Python-Dev] BeOpen statement about Python license
Message-ID: <200008020442.XAA01587@cj20424-a.reston1.va.home.com>

Bob Weiner, BeOpen's CTO, has this to say about the Python license:

  Here's the official word from BeOpen.com regarding any potential
  license change on Python 1.6 (the last CNRI Python release) and
  subsequent versions:

    The Python license is fully open source compliant, as certified by
    the Open Source Initiative.  That means that if you look at
    www.opensource.org/osd.html, then this license complies with those
    9 precepts, allowing broad freedom of use, distribution and
    modification.

    The Python license will continue to allow fully proprietary
    software development.

    The license issues are down to one point, which we are working to
    resolve together with CNRI, involving potential GPL
    compatibility.  It is a small point regarding a requirement
    that the license be interpreted under the terms of Virginia law.
    One lawyer has said that this doesn't affect GPL compatibility,
    but Richard Stallman of the FSF has felt differently; he views it
    as a potential additional restriction of rights beyond those
    listed in the GPL.  So work continues to resolve this point before
    the license is published or attached to any code.  We are
    presently waiting for follow-up from Stallman on this point.

  In summary, BeOpen.com is actively working to keep Python the
  extremely open platform it has traditionally been and to resolve
  legal issues such as this in ways that benefit Python users
  worldwide.  CNRI is working along the same lines as well.

  Please assure yourselves and your management that Python continues
  to allow for both open and closed software development.

  Regards,

  Bob Weiner

I (Guido) hope that this, together with the draft license text that I
just posted, clarifies matters for now!  I'll post more news as it
happens,

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Wed Aug  2 08:12:54 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 08:12:54 +0200
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: <200008012122.OAA22327@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Tue, Aug 01, 2000 at 02:22:20PM -0700
References: <200008012122.OAA22327@slayer.i.sourceforge.net>
Message-ID: <20000802081254.V266@xs4all.nl>

On Tue, Aug 01, 2000 at 02:22:20PM -0700, Guido van Rossum wrote:
> Update of /cvsroot/python/python/dist/src/Lib
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv22316

> Modified Files:
> 	re.py 
> Log Message:
> My fix to the URL accidentally also switched back to the "pre" module.
> Undo that!

This kind of thing is one of the reasons I wish 'cvs commit' would give you
the entire patch you're about to commit in the log-message-edit screen, as
CVS: comments, rather than just the modified files. It would also help with
remembering what the patch was supposed to do ;) Is this possible with CVS,
other than via an 'EDITOR' script that does this for you?
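One way to approximate this today is exactly such an EDITOR wrapper. A hypothetical sketch (the script name and behavior are my invention, not a CVS feature): point CVSEDITOR at a small Python script that pastes the output of 'cvs diff' into the log template as CVS: comments -- CVS strips lines beginning with "CVS:" from the final log message -- and then hands off to the real editor:

```python
#!/usr/bin/env python
"""Hypothetical CVSEDITOR wrapper: show the pending patch as CVS: comments."""
import os
import subprocess
import sys

def as_cvs_comments(diff_text):
    # Prefix every diff line with "CVS: " so cvs strips it from the log message.
    return "".join("CVS: %s\n" % line for line in diff_text.splitlines())

def main(logfile):
    # Grab the patch about to be committed.
    diff = subprocess.run(["cvs", "diff", "-u"],
                          capture_output=True, text=True).stdout
    with open(logfile, "a") as f:
        f.write(as_cvs_comments(diff))
    # Hand off to the user's real editor.
    editor = os.environ.get("EDITOR", "vi")
    os.execvp(editor, [editor, logfile])

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])
```

Then `CVSEDITOR=/path/to/this.py cvs commit` shows the full patch while you type the log message.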

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From paul at prescod.net  Wed Aug  2 09:30:30 2000
From: paul at prescod.net (Paul Prescod)
Date: Wed, 02 Aug 2000 03:30:30 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>  
	            <3986794E.ADBB938C@prescod.net> <200008011820.NAA30284@cj20424-a.reston1.va.home.com>
Message-ID: <3987CE16.DB3E72B8@prescod.net>

Guido van Rossum wrote:
> 
> ...
> 
> I believe this is the crux of the problem.  Your only mistake was that
> you criticized and then tried to redesign a (poorly designed) API that
> you weren't intimately familiar with.

I don't think that this has been demonstrated. We have one complaint
about one method from Mark and silence from everyone else (and about
everything else). The Windows registry is weird in its terminology, but
it isn't brain surgery.

Yes, I had to do some research on what various things do but I expect
that almost anyone would have had to do that. Some of the constants in
the module are meant to be used with functions that are not even exposed
in the module. This indicates to me that nobody has clearly thought out
all of the details (and also that _winreg is not a complete binding to
the API). I probably understand the original API as well as anyone and
more than most, by now.

Anyhow, the list at the bottom should demonstrate that I understand the
API at least as well as the Microsoftie that invented the .NET API for
Java, VB and everything else.

> Hopefully someday someone will eventually create a set of higher level
> bindings modeled after the Java, VB or C# version of the API.

Mark sent me those specs and I believe that the module I sent out *is*
very similar to that higher level API.

Specifically (>>> is Python version)

Equals (inherited from Object) 
>>> __cmp__

key.Name
>>> key.name

key.SubKeyCount
>>> len( key.getSubkeys() )

key.ValueCount
>>> len( key.getValues() )

Close
>>> key.close()

CreateSubKey
>>> key.createSubkey()

DeleteSubKey
>>> key.deleteSubkey()

DeleteSubKeyTree
>>> (didn't get around to implementing/testing something like this)

DeleteValue
>>> key.deleteValue()

GetSubKeyNames
>>> key.getSubkeyNames()

GetValue
>>> key.getValueData()

GetValueNames
>>> key.getValueNames()

OpenRemoteBaseKey
>>> key=RemoteKey( ... )

OpenSubKey
>>> key.openSubkey

SetValue
>>> key.setValue()

ToString
>>> str( key )

My API also has some features for enumerating that this does not have.
Mark has a problem with one of those. I don't see how that makes the
entire API "unintuitive", considering it is more or less a renaming of
the .NET API.

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From effbot at telia.com  Wed Aug  2 09:07:27 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 2 Aug 2000 09:07:27 +0200
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>              <3986794E.ADBB938C@prescod.net>  <200008011820.NAA30284@cj20424-a.reston1.va.home.com>
Message-ID: <004d01bffc50$522fa2a0$f2a6b5d4@hagrid>

guido wrote:
> Paul, I think that the best thing to do now is to withdraw winreg.py,
> and to keep (and document!) the _winreg extension with the
> understanding that it's a wrapper around poorly designed API but at
> least it's very close to the C API.  The leading underscore should be
> a hint that this is not a module for every day use.

how about letting _winreg export all functions with their
win32 names, and adding a winreg.py which looks something
like this:

    from _winreg import *

    class Key:
        ....

    HKEY_CLASSES_ROOT = Key(...)
    ...

where the Key class addresses the 80% level: open
keys and read NONE/SZ/EXPAND_SZ/DWORD values
(through a slightly extended dictionary API).

in 2.0, add support to create keys and write values of
the same types, and you end up supporting the needs
of 99% of all applications.
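A rough sketch of that 80%-level Key class (method names and the dictionary flavor are my guesses at Fredrik's intent; an in-memory table stands in for the real _winreg calls so the shape is visible off-Windows):

```python
class Key:
    """Hypothetical dict-flavored registry key wrapper.

    A real version would call _winreg.OpenKey / _winreg.QueryValueEx
    where this sketch consults the in-memory _values/_subkeys dicts.
    """

    def __init__(self, name, values=None, subkeys=None):
        self.name = name
        self._values = values or {}    # value name -> data
        self._subkeys = subkeys or {}  # subkey name -> Key

    def open(self, subkey):
        # Real code: Key wrapping _winreg.OpenKey(self._hkey, subkey)
        return self._subkeys[subkey]

    def __getitem__(self, valuename):
        # Real code: _winreg.QueryValueEx(self._hkey, valuename)[0]
        return self._values[valuename]

    def get(self, valuename, default=None):
        return self._values.get(valuename, default)

    def keys(self):
        return list(self._values)

# Usage sketch with a made-up registry tree:
HKEY_LOCAL_MACHINE = Key("HKEY_LOCAL_MACHINE", subkeys={
    "Software": Key("Software", values={"InstallPath": r"C:\Python"}),
})
sw = HKEY_LOCAL_MACHINE.open("Software")
assert sw["InstallPath"] == r"C:\Python"
```

Reading a value becomes one subscript instead of an OpenKey/QueryValueEx/CloseKey dance, which is the 80% case Fredrik is after.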

> Hopefully someday someone will eventually create a set of higher level
> bindings modeled after the Java, VB or C# version of the API.

how about Tcl?  I'm pretty sure their API (which is very
simple, iirc) addresses the 99% level...

</F>




From moshez at math.huji.ac.il  Wed Aug  2 09:00:40 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 10:00:40 +0300 (IDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python
 (fwd))
Message-ID: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>

Do we have a procedure for putting more batteries in the core? I'm
not talking about stuff like PEP-206, I'm talking about small, useful
modules like Cookies.py.


--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez

---------- Forwarded message ----------
Date: Tue, 01 Aug 2000 12:03:12 PDT
From: Brian Wisti <bwisti at hotmail.com>
To: tutor at python.org
Subject: Tangent to Re: [Tutor] CGI and Python




>In contrast, i've been motivated with questions like yours which pop up
>every now and then to create a separate chapter entirely devoted to CGI
>programming and in it, to provide an example that starts out simple and builds
>to something a little more complex.  there will be lots of screen captures
>too so that you can see what's going on.  finally, there will be a more
>"advanced" section towards the end which does the complicated stuff that
>everyone wants to do, like cookies, multivalued fields, and file uploads
>with multipart data.  sorry that the book isn't out yet... trying to get
>the weeds out of it right NOW!	;-)
>

I'm looking forward to seeing the book!

Got a question, that is almost relevant to the thread.  Does anybody know 
why cookie support isn't built in to the cgi module?  I had to dig around to 
find Cookie.py, which (excellent module that it is) should be in the cgi 
package somewhere.

Just a random thought from the middle of my workday...

Later,
Brian Wisti
________________________________________________________________________
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com


_______________________________________________
Tutor maillist  -  Tutor at python.org
http://www.python.org/mailman/listinfo/tutor




From mal at lemburg.com  Wed Aug  2 11:12:01 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 02 Aug 2000 11:12:01 +0200
Subject: [Python-Dev] Still no new license -- but draft text available
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>
Message-ID: <3987E5E1.A2B20241@lemburg.com>

Guido van Rossum wrote:
> 
> We still don't have a new license for Python 1.6; Bob Kahn and Richard
> Stallman need to talk before a decision can be made about how to deal
> with the one remaining GPL incompatibility.  While we're all waiting,
> we're preparing the CNRI 1.6 release at SourceForge (part of the deal
> is that the PythonLabs group finishes the 1.6 release for CNRI).  The
> last thing I committed today was the text (dictated by Bob Kahn) for
> the new LICENSE file that will be part of the 1.6 beta 1 release.
> (Modulo any changes that will be made to the license text to ensure
> GPL compatibility.)
> 
> Since anyone with an anonymous CVS setup can now read the license
> anyway, I might as well post a copy here so that you can all get used
> to it...

Is the license on 2.0 going to look the same ? I mean we now
already have two separate licenses, and if BeOpen adds another
two or three paragraphs we'll end up with a license two pages
long.

Oh, how I loved the old CWI license...

Some comments on the new version:
 
> A. HISTORY OF THE SOFTWARE
> 
> Python originated in 1991 at Stichting Mathematisch Centrum (CWI) in
> the Netherlands as an outgrowth of a language called ABC.  Its
> principal author was Guido van Rossum, although it included smaller
> contributions from others at CWI and elsewhere.  The last version of
> Python issued by CWI was Python 1.2.  In 1995, Mr. van Rossum
> continued his work on Python at the Corporation for National Research
> Initiatives (CNRI) in Reston, Virginia where several versions of the
> software were generated.  Python 1.6 is the last of the versions
> developed at CNRI.
> 
> B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING Python 1.6, beta 1
> 
> 1. CNRI LICENSE AGREEMENT
> 
>         PYTHON 1.6, beta 1
> 
>         CNRI OPEN SOURCE LICENSE AGREEMENT
> 
> IMPORTANT: PLEASE READ THE FOLLOWING AGREEMENT CAREFULLY.
> 
> BY CLICKING ON "ACCEPT" WHERE INDICATED BELOW, OR BY COPYING,
> INSTALLING OR OTHERWISE USING PYTHON 1.6, beta 1 SOFTWARE, YOU ARE
> DEEMED TO HAVE AGREED TO THE TERMS AND CONDITIONS OF THIS LICENSE
> AGREEMENT.
> 
> 1. This LICENSE AGREEMENT is between the Corporation for National
> Research Initiatives, having an office at 1895 Preston White Drive,
> Reston, VA 20191 ("CNRI"), and the Individual or Organization
> ("Licensee") accessing and otherwise using Python 1.6, beta 1 software
> in source or binary form and its associated documentation, as released
> at the www.python.org Internet site on August 5, 2000 ("Python
> 1.6b1").
> 
> 2. Subject to the terms and conditions of this License Agreement, CNRI
> hereby grants Licensee a nonexclusive, royalty-free, world-wide
> license to reproduce, analyze, test, perform and/or display publicly,
> prepare derivative works, distribute, and otherwise use Python 1.6b1
> alone or in any derivative version, provided, however, that CNRI's
> License Agreement is retained in Python 1.6b1, alone or in any
> derivative version prepared by Licensee.

I don't think the latter (retaining the CNRI license alone) is
possible: you always have to include the CWI license.
 
> Alternately, in lieu of CNRI's License Agreement, Licensee may
> substitute the following text (omitting the quotes): "Python 1.6, beta
> 1, is made available subject to the terms and conditions in CNRI's
> License Agreement.  This Agreement may be located on the Internet
> using the following unique, persistent identifier (known as a handle):
> 1895.22/1011.  This Agreement may also be obtained from a proxy server
> on the Internet using the URL:http://hdl.handle.net/1895.22/1011".

Do we really need this in the license text ? It's nice to have
the text available on the Internet, but why add long descriptions
about where to get it from to the license text itself ?
 
> 3. In the event Licensee prepares a derivative work that is based on
> or incorporates Python 1.6b1 or any part thereof, and wants to make the
> derivative work available to the public as provided herein, then
> Licensee hereby agrees to indicate in any such work the nature of the
> modifications made to Python 1.6b1.

In what way would those indications have to be made ? A patch
or just text describing the new features ?
 
What does "make available to the public" mean ? If I embed
Python in an application and make this application available
on the Internet for download would this fit the meaning ?

What about derived work that only uses the Python language
reference as basis for its task, e.g. new interpreters
or compilers which can read and execute Python programs ?

> 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> INFRINGE ANY THIRD PARTY RIGHTS.
> 
> 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.

I would make this "...SOME STATES AND COUNTRIES...". E.g. in
Germany the above text would only be valid after an initial
6 month period after installation, AFAIK (this period is
called "Gewährleistung"). Licenses from other vendors usually
add some extra license text to limit the liability in this period
to the carrier on which the software was received by the licensee,
e.g. the diskettes or CDs.
 
> 6. This License Agreement will automatically terminate upon a material
> breach of its terms and conditions.

Immediately ? Other licenses usually include a 30-60 day period
which allows the licensee to take actions. With the above text,
the license will put the Python copy in question into an illegal
state *prior* to having even been identified as conflicting with the
license.
 
> 7. This License Agreement shall be governed by and interpreted in all
> respects by the law of the State of Virginia, excluding conflict of
> law provisions.  Nothing in this License Agreement shall be deemed to
> create any relationship of agency, partnership, or joint venture
> between CNRI and Licensee.  This License Agreement does not grant
> permission to use CNRI trademarks or trade name in a trademark sense
> to endorse or promote products or services of Licensee, or any third
> party.

Would the name "Python" be considered a trademark in the above
sense ?
 
> 8. By clicking on the "ACCEPT" button where indicated, or by copying
> installing or otherwise using Python 1.6b1, Licensee agrees to be
> bound by the terms and conditions of this License Agreement.
> 
>         ACCEPT
> 
> 2. CWI PERMISSIONS STATEMENT AND DISCLAIMER
> 
> Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
> The Netherlands.  All rights reserved.
> 
> Permission to use, copy, modify, and distribute this software and its
> documentation for any purpose and without fee is hereby granted,
> provided that the above copyright notice appear in all copies and that
> both that copyright notice and this permission notice appear in
> supporting documentation, and that the name of Stichting Mathematisch
> Centrum or CWI not be used in advertising or publicity pertaining to
> distribution of the software without specific, written prior
> permission.
> 
> STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
> THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
> FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
> WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
> ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
> OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

...oh how I loved this one ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jack at oratrix.nl  Wed Aug  2 11:43:05 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 02 Aug 2000 11:43:05 +0200
Subject: [Python-Dev] Winreg recap 
In-Reply-To: Message by Paul Prescod <paul@prescod.net> ,
	     Tue, 01 Aug 2000 12:52:45 -0400 , <3987005C.9C45D7B6@prescod.net> 
Message-ID: <20000802094305.C3006303181@snelboot.oratrix.nl>

> I specifically asked everyone here if an abstraction was a good idea. I
> got three + votes and no - votes. One of the + votes requested that we
> still ship the underlying module. Fine. I was actually pointed (on
> python-dev) to specs for an abstraction layer that AFAIK had been
> designed *on Python-dev*.

This point I very much agree with: if we can abstract 90% of the use cases of 
the registry (while still giving access to the other 10%) in a clean interface, 
we can implement the same interface for Mac preference files, unix dot-files, 
X resources, etc.

A general mechanism whereby a Python program can get at a persistent setting 
that may have factory defaults, installation overrides and user overrides, and 
that is implemented in the logical way on each platform would be very powerful.

The initial call to open the preference database(s) and give identity 
information as to which app you are, etc is probably going to be machine 
dependent, but from that point on there should be a single API.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 
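[The layered-settings model Jack describes (factory defaults, installation
overrides, user overrides, resolved per key) can be sketched in a few lines
of Python. This is only an illustration of the lookup semantics, not a
proposed API; the class and method names here are invented.]

```python
# Hypothetical sketch of the layered-preferences lookup described above:
# factory defaults < installation overrides < user overrides.
# All names (PreferenceStore, add_layer, ...) are invented for illustration.

class PreferenceStore:
    def __init__(self, app_id):
        self.app_id = app_id   # identity info given when opening the database(s)
        self.layers = []       # lowest-priority layer first

    def add_layer(self, mapping):
        """Register one backing store (a dict here; on a real platform this
        could be registry keys, unix dot-files, or Mac preference files)."""
        self.layers.append(mapping)

    def get(self, key, default=None):
        # The highest-priority layer that defines the key wins.
        for layer in reversed(self.layers):
            if key in layer:
                return layer[key]
        return default

prefs = PreferenceStore("myapp")
prefs.add_layer({"color": "blue", "size": 10})   # factory defaults
prefs.add_layer({"color": "green"})              # user override
assert prefs.get("color") == "green"             # override wins
assert prefs.get("size") == 10                   # default shows through
```

Only the initial open/identity step would be platform-dependent; the
`get` semantics above would be the single cross-platform API.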





From moshez at math.huji.ac.il  Wed Aug  2 12:16:40 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 13:16:40 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
Message-ID: <Pine.GSO.4.10.10008021157040.20425-100000@sundial>

Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me


--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez





From thomas at xs4all.net  Wed Aug  2 12:41:12 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 12:41:12 +0200
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021157040.20425-100000@sundial>; from moshez@math.huji.ac.il on Wed, Aug 02, 2000 at 01:16:40PM +0300
References: <Pine.GSO.4.10.10008021157040.20425-100000@sundial>
Message-ID: <20000802124112.W266@xs4all.nl>

On Wed, Aug 02, 2000 at 01:16:40PM +0300, Moshe Zadka wrote:

> Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me

You can close bugs now, right, Moshe ? If not, you should be able to :P Just
do what I do: close them, assign them to yourself, set the status to 'Works
For Me', explain in the log message what you did to test it, and forward a
copy of the mail you get from SF to the original submitter.

A lot of the bugs are relatively old, so a fair lot of them are likely to be
fixed already. If they aren't fixed for the submitter (or someone else),
the bug can be re-opened and possibly updated at the same time.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Wed Aug  2 13:05:06 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 14:05:06 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <20000802124112.W266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008021402041.20425-100000@sundial>

On Wed, 2 Aug 2000, Thomas Wouters wrote:

> On Wed, Aug 02, 2000 at 01:16:40PM +0300, Moshe Zadka wrote:
> 
> > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me
> 
> You can close bugs now, right, Moshe?

I can, but to tell the truth, after what Tim posted here about closing
bugs, I'd appreciate a few more eyeballs before I close them.

> A lot of the bugs are relatively old, so a fair lot of them are likely to be
> fixed already. If they aren't fixed for the submitter (or someone else),
> the bug can be re-opened and possibly updated at the same time.

Hmmmmm.....OK.
But I guess I'll still wait for a goahead from the PythonLabs team. 
BTW: Does anyone know if SF has an e-mail notification of bugs, similar
to that of patches? If so, enabling it to send mail to a mailing list
similar to patches at python.org would be cool -- it would enable much more
peer review.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Wed Aug  2 13:21:47 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 13:21:47 +0200
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021402041.20425-100000@sundial>; from moshez@math.huji.ac.il on Wed, Aug 02, 2000 at 02:05:06PM +0300
References: <20000802124112.W266@xs4all.nl> <Pine.GSO.4.10.10008021402041.20425-100000@sundial>
Message-ID: <20000802132147.L13365@xs4all.nl>

On Wed, Aug 02, 2000 at 02:05:06PM +0300, Moshe Zadka wrote:
> On Wed, 2 Aug 2000, Thomas Wouters wrote:
> > On Wed, Aug 02, 2000 at 01:16:40PM +0300, Moshe Zadka wrote:

> > > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me

> > You can close bugs now, right, Moshe?

> I can, but to tell the truth, after what Tim posted here about closing
> bugs, I'd appreciate a few more eyeballs before I close them.

That's why I forward the message to the original submitter. The list of bugs
is now so insanely large that it's pretty unlikely a large number of
eyeballs will caress them. Marking them closed (or at least marking them
*something*, like moving them to the right category) and forwarding the
summary to the submitter is likely to have them re-check the bug.

Tim was talking about 'closing it without reason', without knowing why it
should be closed. 'Works for me' is a valid reason to close the bug, if you
have the same (kind of) platform, can't reproduce the bug and have a strong
suspicion it's already been fixed. (Which is pretty likely, if the bugreport
is old.)

> BTW: Does anyone know if SF has an e-mail notification of bugs, similar
> to that of patches? If so, enabling it to send mail to a mailing list
> similar to patches at python.org would be cool -- it would enable much more
> peer review.

I think not, but I'm not sure. It's probably up to the project admins to set
that, but I think if they did, they'd have set it before. (Then again, I'm
not sure if it's a good idea to set it, yet... I bet the current list is
going to be quickly cut down in size, and I'm not sure if I want to see all
the notifications! :) But once it's running, it would be swell.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From Vladimir.Marangozov at inrialpes.fr  Wed Aug  2 14:13:41 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Wed, 2 Aug 2000 14:13:41 +0200 (CEST)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021157040.20425-100000@sundial> from "Moshe Zadka" at Aug 02, 2000 01:16:40 PM
Message-ID: <200008021213.OAA06073@python.inrialpes.fr>

Moshe Zadka wrote:
> 
> Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me

You get a compiled SRE object, right? But SRE is the new 're' and the old
're' is 'pre'. Try the example with the old module: import pre;
pre.compile('[\\200-\\400]') and I suspect you'll get the segfault (I did).

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252
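[For the record: the pattern is malformed because octal \400 is 256, one past
the largest value a character class can hold. A modern `re` (the descendant
of SRE) rejects it cleanly with re.error rather than crashing, so only the
SRE side of the comparison can still be demonstrated; `pre` is long gone.]

```python
import re

# '\400' (octal) is 256, outside the 0-0o377 range allowed in a character
# class.  The old 'pre' engine segfaulted on this; SRE and its modern
# descendant handle it without crashing.
try:
    re.compile('[\\200-\\400]')
    outcome = "compiled"
except re.error:
    outcome = "rejected"

print(outcome)
```

On today's Python this prints "rejected" (re.error: octal escape value
out of range); the point of the bug report was simply that no outcome
should be a crash.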



From moshez at math.huji.ac.il  Wed Aug  2 14:17:31 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 15:17:31 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <200008021213.OAA06073@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008021512180.8980-100000@sundial>

On Wed, 2 Aug 2000, Vladimir Marangozov wrote:

> Moshe Zadka wrote:
> > 
> > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me
> 
> You get a compiled SRE object, right?

Nope -- I tested it with pre. 

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From jeremy at beopen.com  Wed Aug  2 14:31:55 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 2 Aug 2000 08:31:55 -0400 (EDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021402041.20425-100000@sundial>
References: <20000802124112.W266@xs4all.nl>
	<Pine.GSO.4.10.10008021402041.20425-100000@sundial>
Message-ID: <14728.5307.820982.137908@bitdiddle.concentric.net>

>>>>> "MZ" == Moshe Zadka <moshez at math.huji.ac.il> writes:

  MZ> Hmmmmm.....OK.  But I guess I'll still wait for a goahead from
  MZ> the PythonLabs team.  BTW: Does anyone know if SF has an e-mail
  MZ> notification of bugs, similar to that of patches? If so,
  MZ> enabling it to send mail to a mailing list similar to
  MZ> patches at python.org would be cool -- it would enable much more
  MZ> peer review.

Go ahead and mark as closed bugs that are currently fixed.  If you can
figure out when they were fixed (e.g. what checkin), that would be
best.  If not, just be sure that it really is fixed -- and write a
test case that would have caught the bug.

SF will send out an email, but sending it to patches at python.org would
be a bad idea, I think.  Isn't that list attached to Jitterbug?

Jeremy
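[Jeremy's "write a test case that would have caught the bug" step can be as
small as asserting that compiling the offending pattern never crashes the
interpreter. A minimal sketch in unittest style; the test and class names
are invented here, not taken from the actual test suite.]

```python
import re
import unittest

class ReCompileRegressionTest(unittest.TestCase):
    # Bug 110651: compiling '[\\200-\\400]' used to segfault the old
    # 're' ('pre') engine.  Any outcome short of a crash is acceptable:
    # either a compiled pattern object or a clean re.error.
    def test_octal_range_does_not_crash(self):
        try:
            pattern = re.compile('[\\200-\\400]')
        except re.error:
            return  # clean rejection is fine
        self.assertTrue(hasattr(pattern, "match"))

if __name__ == "__main__":
    unittest.main()
```

A segfault fails this test by killing the test runner outright, which is
exactly the regression such a test is meant to catch.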



From moshez at math.huji.ac.il  Wed Aug  2 14:30:16 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 15:30:16 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <14728.5307.820982.137908@bitdiddle.concentric.net>
Message-ID: <Pine.GSO.4.10.10008021528410.8980-100000@sundial>

On Wed, 2 Aug 2000, Jeremy Hylton wrote:

> SF will send out an email, but sending it to patches at python.org would
> be a bad idea, I think.

I've no problem with having a separate mailing list I can subscribe to.
Perhaps it should be a mailing list along the lines of Python-Checkins....

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From guido at beopen.com  Wed Aug  2 16:02:00 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 09:02:00 -0500
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: Your message of "Wed, 02 Aug 2000 08:12:54 +0200."
             <20000802081254.V266@xs4all.nl> 
References: <200008012122.OAA22327@slayer.i.sourceforge.net>  
            <20000802081254.V266@xs4all.nl> 
Message-ID: <200008021402.JAA02711@cj20424-a.reston1.va.home.com>

> > My fix to the URL accidentally also switched back to the "pre" module.
> > Undo that!
> 
> This kind of thing is one of the reasons I wish 'cvs commit' would give you
> the entire patch you're about to commit in the log-message-edit screen, as
> CVS: comments, rather than just the modified files. It would also help with
> remembering what the patch was supposed to do ;) Is this possible with CVS,
> other than an 'EDITOR' that does this for you ?

Actually, I have made it a habit to *always* do a cvs diff before I
commit, for exactly this reason.  That's why this doesn't happen more
often.  In this case I specifically remember reviewing the diff and
thinking that it was alright, but not scrolling towards the second
half of the diff. :(

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Wed Aug  2 16:06:00 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 09:06:00 -0500
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: Your message of "Wed, 02 Aug 2000 10:00:40 +0300."
             <Pine.GSO.4.10.10008020958590.20425-100000@sundial> 
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> 
Message-ID: <200008021406.JAA02743@cj20424-a.reston1.va.home.com>

> Do we have a procedure for putting more batteries in the core? I'm
> not talking about stuff like PEP-206, I'm talking about small, useful
> modules like Cookies.py.

Cookie support in the core would be a good thing.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
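[As a concrete illustration of the server-side role under discussion:
Cookie.py parses the incoming HTTP Cookie header and emits Set-Cookie
headers for the response. The sketch below uses the module's SimpleCookie
class; note that in today's Python the module lives at http.cookies, an
import path that postdates this thread.]

```python
# Server-side cookie handling, the job Cookie.py does for CGI scripts.
# (Modern module name: http.cookies; in the 1.6/2.0 era it was Cookie.)
from http.cookies import SimpleCookie

# Parse what the browser sent in the HTTP Cookie request header...
incoming = SimpleCookie("session=abc123; theme=dark")
assert incoming["session"].value == "abc123"

# ...and build the Set-Cookie headers for the response.
outgoing = SimpleCookie()
outgoing["session"] = "abc123"
outgoing["session"]["path"] = "/"
print(outgoing["session"].output())
# e.g. Set-Cookie: session=abc123; Path=/
```

All of this runs inside the server process; nothing here talks to a
browser, which is why the client side is a separate question.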



From fdrake at beopen.com  Wed Aug  2 15:20:52 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 09:20:52 -0400 (EDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <14728.5307.820982.137908@bitdiddle.concentric.net>
References: <20000802124112.W266@xs4all.nl>
	<Pine.GSO.4.10.10008021402041.20425-100000@sundial>
	<14728.5307.820982.137908@bitdiddle.concentric.net>
Message-ID: <14728.8244.745008.301891@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > SF will send out an email, but sending it to patches at python.org would
 > be a bad idea, I think.  Isn't that list attached to Jitterbug?

  No, but Barry is working on getting a new list set up for
SourceForge to send bug messages to.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gvwilson at nevex.com  Wed Aug  2 15:22:01 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Wed, 2 Aug 2000 09:22:01 -0400 (EDT)
Subject: [Python-Dev] CVS headaches / Subversion reminder
Message-ID: <Pine.LNX.4.10.10008020913180.7103-100000@akbar.nevex.com>

Those of you who are having troubles with (or have complaints about) CVS
on SourceForge might want to check out Subversion, a "better CVS" being
developed as part of Tigris:

    subversion.tigris.org

Jason Robbins (project manager, jrobbins at collab.net) told me in Monterey
that they are still interested in feature requests, alternatives, etc.
There may still be room to add features like showing the full patch during
checkin (as per Thomas Wouters' earlier mail).

Greg

p.s. I'd be interested in hearing from anyone who's ever re-written a
medium-sized (40,000 lines) C app in Python --- how did you decide how
much of the structure to keep, and how much to re-think, etc.  Please mail
me directly to conserve bandwidth; I'll post a summary if there's enough
interest.





From fdrake at beopen.com  Wed Aug  2 15:26:28 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 09:26:28 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: <200008021406.JAA02743@cj20424-a.reston1.va.home.com>
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
Message-ID: <14728.8580.460583.760620@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > > Do we have a procedure for putting more batteries in the core? I'm
 > > not talking about stuff like PEP-206, I'm talking about small, useful
 > > modules like Cookies.py.
 > 
 > Cookie support in the core would be a good thing.

  There's also some cookie support in Grail (limited); that uses a
Netscape-style client-side database.
  Note that the Netscape format is insufficient for the most recent
cookie specifications (don't know the RFC #), but I understood from
AMK that browser writers are expecting to actually implement that
(unlike RFC 2109).  If we stick to an in-process database, that
wouldn't matter, but I'm not sure if that solves the problem for
everyone.
  Regardless of the format, there's a little bit of work to do here.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Wed Aug  2 16:32:02 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 09:32:02 -0500
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: Your message of "Wed, 02 Aug 2000 09:26:28 -0400."
             <14728.8580.460583.760620@cj42289-a.reston1.va.home.com> 
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com>  
            <14728.8580.460583.760620@cj42289-a.reston1.va.home.com> 
Message-ID: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>

> Guido van Rossum writes:
>  > > Do we have a procedure for putting more batteries in the core? I'm
>  > > not talking about stuff like PEP-206, I'm talking about small, useful
>  > > modules like Cookies.py.
>  > 
>  > Cookie support in the core would be a good thing.
> 
>   There's also some cookie support in Grail (limited); that uses a
> Netscape-style client-side database.
>   Note that the Netscape format is insufficient for the most recent
> cookie specifications (don't know the RFC #), but I understood from
> AMK that browser writers are expecting to actually implement that
> (unlike RFC 2109).  If we stick to an in-process database, that
> wouldn't matter, but I'm not sure if that solves the problem for
> everyone.
>   Regardless of the format, there's a little bit of work to do here.

I think Cookie.py is for server-side management of cookies, not for
client-side.  Do we need client-side cookies too????

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From moshez at math.huji.ac.il  Wed Aug  2 15:34:29 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 2 Aug 2000 16:34:29 +0300 (IDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor]
 CGI and Python (fwd))
In-Reply-To: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008021632340.13078-100000@sundial>

On Wed, 2 Aug 2000, Guido van Rossum wrote:

> I think Cookie.py is for server-side management of cookies, not for
> client-side.  Do we need client-side cookies too????

Not until we write a high-level interface to urllib which is similar
to the Perlish UserAgent module -- which is something that should
be done if Python wants to be a viable client-side language.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez
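[The UserAgent-style client side asked for here did eventually land in the
standard library: modern Python pairs urllib with a cookie jar. A minimal
sketch under that assumption, using the modern module names
(http.cookiejar, urllib.request), which postdate this thread; no network
request is actually made, the pieces are only wired together.]

```python
# Client-side cookie handling: an opener that stores and resends cookies
# automatically, roughly what Perl's LWP::UserAgent provides.
import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar)
)

# Every request made through `opener` now round-trips cookies, e.g.:
#   opener.open("http://example.com/")  # jar picks up Set-Cookie headers
assert len(jar) == 0  # nothing fetched yet, so the jar starts empty
```

This is the in-process counterpart of a Netscape-style cookie database:
the jar lives only as long as the program unless explicitly saved.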




From fdrake at beopen.com  Wed Aug  2 15:37:50 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 09:37:50 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>
References: <200008021432.JAA02937@cj20424-a.reston1.va.home.com>
	<Pine.GSO.4.10.10008021632340.13078-100000@sundial>
	<Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
	<14728.8580.460583.760620@cj42289-a.reston1.va.home.com>
Message-ID: <14728.9262.635980.220234@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > I think Cookie.py is for server-side management of cookies, not for
 > client-side.  Do we need client-side cookies too????

  I think this would be highly desirable; we've seen enough requests
for it on c.l.py.

Moshe Zadka writes:
 > Not until we write a high-level interface to urllib which is similar
 > to the Perlish UserAgent module -- which is something that should
 > be done if Python wants to be a viable client-side language.

  Exactly!  It has become very difficult to get anything done on the
Web without enabling cookies, and simple "screen scraping" tools need
to have this support as well.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Wed Aug  2 16:05:41 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 10:05:41 -0400 (EDT)
Subject: [Python-Dev] test_parser.py
Message-ID: <14728.10933.534904.378463@cj42289-a.reston1.va.home.com>

  At some point I received a message/bug report referring to
test_parser.py, which doesn't exist in the CVS repository (and never
has as far as I know).  If someone has a regression test for the
parser module hidden away, I'd love to add it to the CVS repository!
It's time to update the parser module, and a good time to cover it in
the regression test!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Wed Aug  2 17:11:20 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 10:11:20 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
In-Reply-To: Your message of "Wed, 02 Aug 2000 11:12:01 +0200."
             <3987E5E1.A2B20241@lemburg.com> 
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>  
            <3987E5E1.A2B20241@lemburg.com> 
Message-ID: <200008021511.KAA03049@cj20424-a.reston1.va.home.com>

> Is the license on 2.0 going to look the same ? I mean we now
> already have two separate licenses, and if BeOpen adds another
> two or three paragraphs we'll end up with a license two pages
> long.

Good question.  We can't really keep the license the same because the
old license is very specific to CNRI.  I would personally be in favor
of using the BSD license for 2.0.

> Oh, how I loved the old CWI license...

Ditto!

> Some comments on the new version:

> > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > license to reproduce, analyze, test, perform and/or display publicly,
> > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > alone or in any derivative version, provided, however, that CNRI's
> > License Agreement is retained in Python 1.6b1, alone or in any
> > derivative version prepared by Licensee.
> 
> I don't think the latter (retaining the CNRI license alone) is
> possible: you always have to include the CWI license.

Wow.  I hadn't even noticed this!  It seems you can prepare a
derivative version of the license.  Well, maybe.

> > Alternately, in lieu of CNRI's License Agreement, Licensee may
> > substitute the following text (omitting the quotes): "Python 1.6, beta
> > 1, is made available subject to the terms and conditions in CNRI's
> > License Agreement.  This Agreement may be located on the Internet
> > using the following unique, persistent identifier (known as a handle):
> > 1895.22/1011.  This Agreement may also be obtained from a proxy server
> > on the Internet using the URL:http://hdl.handle.net/1895.22/1011".
> 
> Do we really need this in the license text ? It's nice to have
> the text available on the Internet, but why add long descriptions
> about where to get it from to the license text itself ?

I'm not happy with this either, but CNRI can put anything they like in
their license, and they seem very fond of this particular bit of
advertising for their handle system.  I've never managed to convince
them that it was unnecessary.

> > 3. In the event Licensee prepares a derivative work that is based on
> > or incorporates Python 1.6b1 or any part thereof, and wants to make the
> > derivative work available to the public as provided herein, then
> > Licensee hereby agrees to indicate in any such work the nature of the
> > modifications made to Python 1.6b1.
> 
> In what way would those indications have to be made ? A patch
> or just text describing the new features ?

Just text.  Bob Kahn told me that the list of "what's new" that I
always add to a release would be fine.

> What does "make available to the public" mean ? If I embed
> Python in an application and make this application available
> on the Internet for download would this fit the meaning ?

Yes, that's why he doesn't use the word "publish" -- such an action
would not be considered publication in the sense of the copyright law
(at least not in the US, and probably not according to the Berne
Convention), but it is clearly making it available to the public.

> What about derived work that only uses the Python language
> reference as basis for its task, e.g. new interpreters
> or compilers which can read and execute Python programs ?

The language definition is not covered by the license at all.  Only
this particular code base.

> > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > INFRINGE ANY THIRD PARTY RIGHTS.
> > 
> > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> 
> I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> Germany the above text would only be valid after an initial
> 6 month period after installation, AFAIK (this period is
> called "Gewährleistung"). Licenses from other vendors usually
> add some extra license text to limit the liability in this period
> to the carrier on which the software was received by the licensee,
> e.g. the diskettes or CDs.

I'll mention this to Kahn.

> > 6. This License Agreement will automatically terminate upon a material
> > breach of its terms and conditions.
> 
> Immediately ? Other licenses usually include a 30-60 day period
> which allows the licensee to take actions. With the above text,
> the license will put the Python copy in question into an illegal
> state *prior* to having even been identified as conflicting with the
> license.

Believe it or not, this is necessary to ensure GPL compatibility!  An
earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
incompatible.  There's an easy workaround though: you fix your
compliance and download a new copy, which gives you all the same
rights again.

> > 7. This License Agreement shall be governed by and interpreted in all
> > respects by the law of the State of Virginia, excluding conflict of
> > law provisions.  Nothing in this License Agreement shall be deemed to
> > create any relationship of agency, partnership, or joint venture
> > between CNRI and Licensee.  This License Agreement does not grant
> > permission to use CNRI trademarks or trade name in a trademark sense
> > to endorse or promote products or services of Licensee, or any third
> > party.
> 
> Would the name "Python" be considered a trademark in the above
> sense ?

No, Python is not a CNRI trademark.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From trentm at ActiveState.com  Wed Aug  2 17:04:17 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 2 Aug 2000 08:04:17 -0700
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: <20000802081254.V266@xs4all.nl>; from thomas@xs4all.net on Wed, Aug 02, 2000 at 08:12:54AM +0200
References: <200008012122.OAA22327@slayer.i.sourceforge.net> <20000802081254.V266@xs4all.nl>
Message-ID: <20000802080417.A16446@ActiveState.com>

On Wed, Aug 02, 2000 at 08:12:54AM +0200, Thomas Wouters wrote:
> On Tue, Aug 01, 2000 at 02:22:20PM -0700, Guido van Rossum wrote:
> > Update of /cvsroot/python/python/dist/src/Lib
> > In directory slayer.i.sourceforge.net:/tmp/cvs-serv22316
> 
> > Modified Files:
> > 	re.py 
> > Log Message:
> > My fix to the URL accidentally also switched back to the "pre" module.
> > Undo that!
> 
> This kind of thing is one of the reasons I wish 'cvs commit' would give you
> the entire patch you're about to commit in the log-message-edit screen, as
> CVS: comments, rather than just the modified files. It would also help with
> remembering what the patch was supposed to do ;) Is this possible with CVS,
> other than an 'EDITOR' that does this for you ?
> 
As Guido said, it is probably preferred that one does a cvs diff prior to
checking in. But to answer your question *unauthoritatively*: I know that CVS
allows you to change the checkin template, and I *think* that it offers a
script hook to generate it (not sure). If so, one could use
that script hook to put in the (commented) patch.
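A minimal sketch of the EDITOR-wrapper variant Thomas mentioned (a hypothetical script, shown in present-day Python for brevity): it relies on the fact that CVS strips lines beginning with "CVS:" from the stored log message, so the pending patch is visible while you edit but never recorded.

```python
import subprocess
import sys

def comment_out(diff_text, prefix="CVS: "):
    """Prefix each diff line so CVS drops it from the stored log message."""
    return "".join(prefix + line + "\n" for line in diff_text.splitlines())

def main():
    # CVS invokes $EDITOR with the log-message template file as its argument.
    template = sys.argv[1]
    # Append the pending patch, commented out, to the template.
    diff = subprocess.run(["cvs", "diff", "-u"],
                          capture_output=True, text=True).stdout
    with open(template, "a") as f:
        f.write(comment_out(diff))
    # Hand off to the real editor (vi assumed here).
    subprocess.call(["vi", template])

if __name__ == "__main__" and len(sys.argv) > 1:
    main()
```

Pointing EDITOR (or CVSEDITOR) at a script like this shows the full patch while you write the message, without it ending up in the history.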

Trent



-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Wed Aug  2 17:14:16 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 2 Aug 2000 08:14:16 -0700
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <14728.5307.820982.137908@bitdiddle.concentric.net>; from jeremy@beopen.com on Wed, Aug 02, 2000 at 08:31:55AM -0400
References: <20000802124112.W266@xs4all.nl> <Pine.GSO.4.10.10008021402041.20425-100000@sundial> <14728.5307.820982.137908@bitdiddle.concentric.net>
Message-ID: <20000802081416.B16446@ActiveState.com>

On Wed, Aug 02, 2000 at 08:31:55AM -0400, Jeremy Hylton wrote:
> >>>>> "MZ" == Moshe Zadka <moshez at math.huji.ac.il> writes:
> 
>   MZ> Hmmmmm.....OK.  But I guess I'll still wait for a goahead from
>   MZ> the PythonLabs team.  BTW: Does anyone know if SF has an e-mail
>   MZ> notification of bugs, similar to that of patches? If so,
>   MZ> enabling it to send mail to a mailing list similar to
>   MZ> patches at python.org would be cool -- it would enable much more
>   MZ> peer review.
> 
> Go ahead and mark as closed bugs that are currently fixed.  If you can
> figure out when they were fixed (e.g. what checkin), that would be
> best.  If not, just be sure that it really is fixed -- and write a
> test case that would have caught the bug.

I think that unless

(1) you submitted the bug or can be sure that "works for me"
    is with the exact same configuration as the person who did; or
(2) you can identify where in the code the bug was and what checkin (or where
    in the code) fixed it

then you cannot close the bug.

This is the ideal case; with incomplete bug reports and extremely stale
ones, these strict requirements are probably not always practical.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From jack at oratrix.nl  Wed Aug  2 17:16:06 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 02 Aug 2000 17:16:06 +0200
Subject: [Python-Dev] Still no new license -- but draft text available 
In-Reply-To: Message by Guido van Rossum <guido@beopen.com> ,
	     Wed, 02 Aug 2000 10:11:20 -0500 , <200008021511.KAA03049@cj20424-a.reston1.va.home.com> 
Message-ID: <20000802151606.753EF303181@snelboot.oratrix.nl>

I'm not sure I'm entirely happy with point 3. Depending on how you define 
"derivative work" and "make available" it could cause serious problems.

I assume that this clause is meant so that it is clear that MacPython and 
PythonWin and other such versions may be based on CNRI Python but are not the 
same. However, if you're building a commercial application that uses Python as 
its implementation language this "indication of modifications" becomes rather 
a long list. Just imagine that a C library came with such a license ("Anyone 
incorporating this C library or part thereof in their application should 
indicate the differences between their application and this C library":-).

Point 2 has the same problem to a lesser extent: the sentence starting with 
"Python ... is made available subject to the terms and conditions..." is fine 
for a product that is still clearly recognizable as Python, but would look 
silly if Python is just used as the implementation language.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From thomas at xs4all.net  Wed Aug  2 17:39:40 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 2 Aug 2000 17:39:40 +0200
Subject: [Python-Dev] CVS feature wish ? :)
In-Reply-To: <200008021402.JAA02711@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 02, 2000 at 09:02:00AM -0500
References: <200008012122.OAA22327@slayer.i.sourceforge.net> <20000802081254.V266@xs4all.nl> <200008021402.JAA02711@cj20424-a.reston1.va.home.com>
Message-ID: <20000802173940.X266@xs4all.nl>

On Wed, Aug 02, 2000 at 09:02:00AM -0500, Guido van Rossum wrote:
> > > My fix to the URL accidentally also switched back to the "pre" module.
> > > Undo that!

> > This kind of thing is one of the reasons I wish 'cvs commit' would give you
> > the entire patch you're about to commit in the log-message-edit screen, as
> > CVS: comments, rather than just the modified files. It would also help with
> > remembering what the patch was supposed to do ;) Is this possible with CVS,
> > other than an 'EDITOR' that does this for you ?

> Actually, I have made it a habit to *always* do a cvs diff before I
> commit, for exactly this reason.

Well, so do I, but none the less I'd like it if the patch was included in
the comment :-) I occasionally forget what I was doing (17 xterms, two of
which are running 20-session screens (6 of which are dedicated to Python,
and 3 to Mailman :), two irc channels with people asking for work-related
help or assistance, one telephone with a 'group' number of same, and enough
room around me for 5 or 6 people to stand around and ask questions... :)
Also, I sometimes wonder about the patch while I'm writing the comment. (Did
I do that right ? Didn't I forget about this ? etc.) Having it included as a
comment would be perfect, for me.

I guess I'll look at the hook thing Trent mailed about, and Subversion, if I
find the time for it :P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Wed Aug  2 19:22:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 02 Aug 2000 19:22:06 +0200
Subject: [Python-Dev] Still no new license -- but draft text available
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com>  
	            <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com>
Message-ID: <398858BE.15928F47@lemburg.com>

Guido van Rossum wrote:
> 
> > Is the license on 2.0 going to look the same ? I mean we now
> > already have two separate licenses, and if BeOpen adds another
> > two or three paragraphs we'll end up with a license two pages
> > long.
> 
> Good question.  We can't really keep the license the same because the
> old license is very specific to CNRI.  I would personally be in favor
> of using the BSD license for 2.0.

If that's possible, I don't think we have to argue about the
1.6 license text at all ;-) ... but then: I seriously doubt that
CNRI is going to let you put 2.0 under a different license text :-( ...

> > Some comments on the new version:
> 
> > > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > > license to reproduce, analyze, test, perform and/or display publicly,
> > > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > > alone or in any derivative version, provided, however, that CNRI's
> > > License Agreement is retained in Python 1.6b1, alone or in any
> > > derivative version prepared by Licensee.
> >
> > I don't think the latter (retaining the CNRI license alone) is 
> > possible: you always have to include the CWI license.
> 
> Wow.  I hadn't even noticed this!  It seems you can prepare a
> derivative version of the license.  Well, maybe.

I think they mean "derivative version of Python 1.6b1", but in
court, the above wording could cause serious trouble for CNRI
... it seems 2.0 can reuse the CWI license after all ;-)
 
> > > Alternately, in lieu of CNRI's License Agreement, Licensee may
> > > substitute the following text (omitting the quotes): "Python 1.6, beta
> > > 1, is made available subject to the terms and conditions in CNRI's
> > > License Agreement.  This Agreement may be located on the Internet
> > > using the following unique, persistent identifier (known as a handle):
> > > 1895.22/1011.  This Agreement may also be obtained from a proxy server
> > > on the Internet using the URL:http://hdl.handle.net/1895.22/1011".
> >
> > Do we really need this in the license text ? It's nice to have
> > the text available on the Internet, but why add long descriptions
> > about where to get it from to the license text itself ?
> 
> I'm not happy with this either, but CNRI can put anything they like in
> their license, and they seem very fond of this particular bit of
> advertising for their handle system.  I've never managed to
> convince them that it was unnecessary.

Oh well... the above paragraph sure looks scary to a casual
license reader.

Also, I'm not sure about the usefulness of this paragraph, since
the mapping of a URL to content cannot be considered legally
binding. They would at least have to add a cryptographic
signature of the license text to make verification of its
origin possible.
 
> > > 3. In the event Licensee prepares a derivative work that is based on
> > > or incorporates Python 1.6b1 or any part thereof, and wants to make the
> > > derivative work available to the public as provided herein, then
> > > Licensee hereby agrees to indicate in any such work the nature of the
> > > modifications made to Python 1.6b1.
> >
> > In what way would those indications have to be made ? A patch
> > or just text describing the new features ?
> 
> Just text.  Bob Kahn told me that the list of "what's new" that I
> always add to a release would be fine.

Ok, should be made explicit in the license though...
 
> > What does "make available to the public" mean ? If I embed
> > Python in an application and make this application available
> > on the Internet for download would this fit the meaning ?
> 
> Yes, that's why he doesn't use the word "publish" -- such an action
> would not be considered publication in the sense of the copyright law
> (at least not in the US, and probably not according to the Berne
> Convention) but it is clearly making it available to the public.

Ouch. That would mean I'd have to describe all additions,
i.e. the embedding application, in considerable detail in order not to
breach the terms of the CNRI license.
 
> > What about derived work that only uses the Python language
> > reference as basis for its task, e.g. new interpreters
> > or compilers which can read and execute Python programs ?
> 
> The language definition is not covered by the license at all.  Only
> this particular code base.

Ok.
 
> > > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > > INFRINGE ANY THIRD PARTY RIGHTS.
> > >
> > > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> >
> > I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> > Germany the above text would only be valid after an initial
> > 6 month period after installation, AFAIK (this period is
> > called "Gewährleistung"). Licenses from other vendors usually
> > add some extra license text to limit the liability in this period
> > to the carrier on which the software was received by the licensee,
> > e.g. the diskettes or CDs.
> 
> I'll mention this to Kahn.
> 
> > > 6. This License Agreement will automatically terminate upon a material
> > > breach of its terms and conditions.
> >
> > Immediately ? Other licenses usually include a 30-60 day period
> > which allows the licensee to take actions. With the above text,
> > the license will put the Python copy in question into an illegal
> > state *prior* to having even been identified as conflicting with the
> > license.
> 
> Believe it or not, this is necessary to ensure GPL compatibility!  An
> earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
> incompatible.  There's an easy workaround though: you fix your
> compliance and download a new copy, which gives you all the same
> rights again.

Hmm, but what about the 100,000 copies of the embedding application
that have already been downloaded -- I would have to force them
to redownload the application (or even just a demo of it) in
order to reestablish the lawfulness of the copy action.

Not that I want to violate the license in any way, but there
seem to be quite a few pitfalls in the present text, some of
which are not clear at all (e.g. the paragraph 3).

> > > 7. This License Agreement shall be governed by and interpreted in all
> > > respects by the law of the State of Virginia, excluding conflict of
> > > law provisions.  Nothing in this License Agreement shall be deemed to
> > > create any relationship of agency, partnership, or joint venture
> > > between CNRI and Licensee.  This License Agreement does not grant
> > > permission to use CNRI trademarks or trade name in a trademark sense
> > > to endorse or promote products or services of Licensee, or any third
> > > party.
> >
> > Would the name "Python" be considered a trademark in the above
> > sense ?
> 
> No, Python is not a CNRI trademark.

I think you or BeOpen on behalf of you should consider
registering the mark before someone else does it. There are
quite a few "PYTHON" marks registered, yet all refer to non-
computer business.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From akuchlin at mems-exchange.org  Wed Aug  2 21:57:09 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 2 Aug 2000 15:57:09 -0400
Subject: [Python-Dev] Python HOWTO project created
Message-ID: <20000802155709.D28691@kronos.cnri.reston.va.us>

[CC'ed to python-dev and doc-sig -- followups set to doc-sig]

I've created a py-howto project on SourceForge to hold the Python
HOWTO documents.  

http://sourceforge.net/projects/py-howto/

Currently me, Fred, Moshe, and ESR are listed as developers and have
write access to CVS; if you want write access, drop me a note.  Web
pages and a py-howto-checkins mailing list will be coming soon, after
a bit more administrative fiddling around on my part.

Should I also create a py-howto-discuss list for discussing revisions,
or is the doc-sig OK?  Fred, what's your ruling about this?

--amk



From guido at beopen.com  Wed Aug  2 23:54:47 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 16:54:47 -0500
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: Your message of "Wed, 02 Aug 2000 03:30:30 -0400."
             <3987CE16.DB3E72B8@prescod.net> 
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au> <3986794E.ADBB938C@prescod.net> <200008011820.NAA30284@cj20424-a.reston1.va.home.com>  
            <3987CE16.DB3E72B8@prescod.net> 
Message-ID: <200008022154.QAA04109@cj20424-a.reston1.va.home.com>

OK.  Fine.  You say your module is great.  The Windows weenies here
don't want to touch it with a ten-foot pole.  I'm not going to be able
to dig all the way to the truth here -- I don't understand the
Registry API at all.

I propose that you and Mark Hammond go off-line and deal with Mark's
criticism one-on-one, and come back with a compromise that you are
both happy with.  I don't care what the compromise is, but both of you
must accept it.

If you *can't* agree, or if I haven't heard from you by the time I'm
ready to release 2.0b1 (say, end of August), winreg.py bites the dust.

I realize that this gives Mark Hammond veto power over the module, but
he's a pretty reasonable guy, *and* he knows the Registry API better
than anyone.  It should be possible for one of you to convince the
other.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From fdrake at beopen.com  Wed Aug  2 23:05:20 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 17:05:20 -0400 (EDT)
Subject: [Python-Dev] Re: [Doc-SIG] Python HOWTO project created
In-Reply-To: <20000802155709.D28691@kronos.cnri.reston.va.us>
References: <20000802155709.D28691@kronos.cnri.reston.va.us>
Message-ID: <14728.36112.584563.516268@cj42289-a.reston1.va.home.com>

Andrew Kuchling writes:
 > Should I also create a py-howto-discuss list for discussing revisions,
 > or is the doc-sig OK?  Fred, what's your ruling about this?

  It's your project, your choice.  ;)  I've no problem with using the
Doc-SIG for this if you like, but a separate list may be a good thing
since it would have fewer distractions!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Thu Aug  3 00:18:26 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 17:18:26 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
In-Reply-To: Your message of "Wed, 02 Aug 2000 19:22:06 +0200."
             <398858BE.15928F47@lemburg.com> 
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com> <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com>  
            <398858BE.15928F47@lemburg.com> 
Message-ID: <200008022218.RAA04178@cj20424-a.reston1.va.home.com>

[MAL]
> > > Is the license on 2.0 going to look the same ? I mean we now
> > > already have two separate licenses, and if BeOpen adds another
> > > two or three paragraphs we'll end up with a license two pages
> > > long.

[GvR]
> > Good question.  We can't really keep the license the same because the
> > old license is very specific to CNRI.  I would personally be in favor
> > of using the BSD license for 2.0.

[MAL]
> If that's possible, I don't think we have to argue about the
> 1.6 license text at all ;-) ... but then: I seriously doubt that
> CNRI is going to let you put 2.0 under a different license text :-( ...

What will happen is that the licenses in effect all get concatenated
in the LICENSE file.  It's a drag.

> > > Some comments on the new version:
> > 
> > > > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > > > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > > > license to reproduce, analyze, test, perform and/or display publicly,
> > > > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > > > alone or in any derivative version, provided, however, that CNRI's
> > > > License Agreement is retained in Python 1.6b1, alone or in any
> > > > derivative version prepared by Licensee.
> > >
> > > I don't think the latter (retaining the CNRI license alone) is 
> > > possible: you always have to include the CWI license.
> > 
> > Wow.  I hadn't even noticed this!  It seems you can prepare a
> > derivative version of the license.  Well, maybe.
> 
> I think they mean "derivative version of Python 1.6b1", but in
> court, the above wording could cause serious trouble for CNRI

You're right of course, I misunderstood you *and* the license.  Kahn
explains it this way:

[Kahn]
| Ok. I take the point being made. The way english works with ellipsis or 
| anaphoric references is to link back to the last anchor point. In the above 
| case, the last referent is Python 1.6b1.
| 
| Thus, the last phrase refers to a derivative version of Python1.6b1 
| prepared by Licensee. There is no permission given to make a derivative 
| version of the License.

> ... it seems 2.0 can reuse the CWI license after all ;-)

I'm not sure why you think that: 2.0 is a derivative version and is
thus bound by the CNRI license as well as by the license that BeOpen
adds.

> > > > Alternately, in lieu of CNRI's License Agreement, Licensee may
> > > > substitute the following text (omitting the quotes): "Python 1.6, beta
> > > > 1, is made available subject to the terms and conditions in CNRI's
> > > > License Agreement.  This Agreement may be located on the Internet
> > > > using the following unique, persistent identifier (known as a handle):
> > > > 1895.22/1011.  This Agreement may also be obtained from a proxy server
> > > > on the Internet using the URL:http://hdl.handle.net/1895.22/1011".
> > >
> > > Do we really need this in the license text ? It's nice to have
> > > the text available on the Internet, but why add long descriptions
> > > about where to get it from to the license text itself ?
> > 
> > I'm not happy with this either, but CNRI can put anything they like in
> > their license, and they seem very fond of this particular bit of
> > advertising for their handle system.  I've never managed to
> > convince them that it was unnecessary.
> 
> Oh well... the above paragraph sure looks scary to a casual
> license reader.

But it's really harmless.

> Also, I'm not sure about the usefulness of this paragraph, since
> the mapping of a URL to content cannot be considered legally
> binding. They would at least have to add a cryptographic
> signature of the license text to make verification of its
> origin possible.

Sure.  Just don't worry about it.  Kahn again:

| They always have the option of using the full text in that case.

So clearly he isn't interested in taking it out.  I'd let it go.

> > > > 3. In the event Licensee prepares a derivative work that is based on
> > > > or incorporates Python 1.6b1 or any part thereof, and wants to make the
> > > > derivative work available to the public as provided herein, then
> > > > Licensee hereby agrees to indicate in any such work the nature of the
> > > > modifications made to Python 1.6b1.
> > >
> > > In what way would those indications have to be made ? A patch
> > > or just text describing the new features ?
> > 
> > Just text.  Bob Kahn told me that the list of "what's new" that I
> > always add to a release would be fine.
> 
> Ok, should be made explicit in the license though...

It's hard to specify this precisely -- in fact, the more precisely you
specify it, the scarier it looks and the likelier they are to
find fault with the details of how you do it.  In this case, I
believe (and so do lawyers) that vague is good!  If you write "ported
to the Macintosh" and that's what you did, they can hardly argue with
you, can they?

> > > What does "make available to the public" mean ? If I embed
> > > Python in an application and make this application available
> > > on the Internet for download would this fit the meaning ?
> > 
> > Yes, that's why he doesn't use the word "publish" -- such an action
> > would not be considered publication in the sense of the copyright law
> > (at least not in the US, and probably not according to the Berne
> > Convention) but it is clearly making it available to the public.
> 
> Ouch. That would mean I'd have to describe all additions,
> i.e. the embedding application, in considerable detail in order not to
> breach the terms of the CNRI license.

No, additional modules aren't modifications to CNRI's work.  A change
to the syntax to support curly braces is.

> > > > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > > > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > > > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > > > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > > > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > > > INFRINGE ANY THIRD PARTY RIGHTS.
> > > >
> > > > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > > > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > > > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > > > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > > > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > > > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> > >
> > > I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> > > Germany the above text would only be valid after an initial
> > > 6 month period after installation, AFAIK (this period is
> > > called "Gewährleistung"). Licenses from other vendors usually
> > > add some extra license text to limit the liability in this period
> > > to the carrier on which the software was received by the licensee,
> > > e.g. the diskettes or CDs.
> > 
> > I'll mention this to Kahn.

His response:

| Guido, Im not willing to do a study of international law here. If you
| can have the person identify one country other than the US that does
| not allow the above limitation or exclusion of liability and provide a
| copy of the section of their law, ill be happy to change this to read
| ".... SOME STATES OR COUNTRIES MAY NOT ALLOW ...." Otherwise, id just
| leave it alone (i.e. as is) for now.

Please mail this info directly to Kahn at CNRI.Reston.Va.US if you
believe you have the right information.  (You may CC me.)  Personally,
I wouldn't worry.  If the German law says that part of a license is
illegal, it doesn't make it any more or less illegal whether the
license warns you about this fact.

I believe that in the US, as a form of consumer protection, some
states not only disallow general disclaimers, but also require that
licenses containing such disclaimers notify the reader that the
disclaimer is not valid in their state, so that's where the language
comes from.  I don't know about German law.

> > > > 6. This License Agreement will automatically terminate upon a material
> > > > breach of its terms and conditions.
> > >
> > > Immediately ? Other licenses usually include a 30-60 day period
> > > which allows the licensee to take actions. With the above text,
> > > the license will put the Python copy in question into an illegal
> > > state *prior* to having even been identified as conflicting with the
> > > license.
> > 
> > Believe it or not, this is necessary to ensure GPL compatibility!  An
> > earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
> > incompatible.  There's an easy workaround though: you fix your
> > compliance and download a new copy, which gives you all the same
> > rights again.
> 
> Hmm, but what about the 100,000 copies of the embedding application
> that have already been downloaded -- I would have to force them
> to redownload the application (or even just a demo of it) in
> order to reestablish the lawfulness of the copy action.

It's better not to violate the license.  But do you really think that
they would go after you immediately if you show good intentions to
rectify?

> Not that I want to violate the license in any way, but there
> seem to be quite a few pitfalls in the present text, some of
> which are not clear at all (e.g. the paragraph 3).

I've warned Kahn about this effect of making the license bigger, but
he simply disagrees (and we agree to disagree).  I don't know what
else I could do about it, apart from putting a FAQ about the license
on python.org -- which I intend to do.

> > > > 7. This License Agreement shall be governed by and interpreted in all
> > > > respects by the law of the State of Virginia, excluding conflict of
> > > > law provisions.  Nothing in this License Agreement shall be deemed to
> > > > create any relationship of agency, partnership, or joint venture
> > > > between CNRI and Licensee.  This License Agreement does not grant
> > > > permission to use CNRI trademarks or trade name in a trademark sense
> > > > to endorse or promote products or services of Licensee, or any third
> > > > party.
> > >
> > > Would the name "Python" be considered a trademark in the above
> > > sense ?
> > 
> > No, Python is not a CNRI trademark.
> 
> I think you or BeOpen on behalf of you should consider
> registering the mark before someone else does it. There are
> quite a few "PYTHON" marks registered, yet all refer to non-
> computer business.

Yes, I do intend to do this.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Wed Aug  2 23:37:52 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 2 Aug 2000 23:37:52 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
Message-ID: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>

Guido asked me to update my old SRE benchmarks, and
post them to python-dev.

Summary:

-- SRE is usually faster than the old RE module (PRE).

-- SRE is faster than REGEX on anything but very trivial
   patterns and short target strings.  And in some cases,
   it's even faster than a corresponding string.find...

-- on real-life benchmarks like XML parsing and Python
   tokenizing, SRE is 2-3 times faster than PRE.

-- using Unicode strings instead of 8-bit strings doesn't hurt
   performance (for some tests, the Unicode version is 30-40%
   faster on my machine.  Go figure...)

-- PRE is still faster for some patterns, especially when using
   long target strings.  I know why, and I plan to fix that before
   2.0 final.

enjoy /F

--------------------------------------------------------------------
These tests were made on a P3/233 MHz running Windows 95,
using a local build of the 0.9.8 release (this will go into 1.6b1,
I suppose).

--------------------------------------------------------------------
parsing xml:

running xmllib.py on hamlet.xml (280k):

sre8             7.14 seconds
sre16            7.82 seconds
pre             17.17 seconds

(for the sre16 test, the xml file was converted to unicode before
it was fed to the unmodified parser).

for comparison, here are the results for a couple of fast pure-Python
parsers:

rex/pre          2.44 seconds
rex/sre          0.59 seconds
srex/sre         0.16 seconds

(rex is a shallow XML parser, based on code by Robert Cameron.  srex
is an even simpler shallow parser, using sre's template mode).

--------------------------------------------------------------------
parsing python:

running tokenize.py on Tkinter.py (156k):

sre8             3.23 seconds
pre              7.57 seconds

--------------------------------------------------------------------
searching for literal text:

searching for "spam" in a string padded with "spaz" (1000 bytes on
each side of the target):

string.find     0.112 ms
sre8.search     0.059
pre.search      0.122

unicode.find    0.130
sre16.search    0.065

(yes, regular expressions can run faster than optimized C code -- as
long as we don't take compilation time into account ;-)
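For readers who want to reproduce the flavor of this micro-benchmark, here's a rough sketch redone with today's `re` module and `timeit`. The padding sizes follow the text above; the iteration count and variable names are this sketch's assumptions, not the original harness.

```python
# Rough sketch of the literal-text benchmark: "spam" surrounded by
# 1000 bytes of near-misses ("spaz") on each side, comparing a plain
# string search against a compiled regular expression.
import re
import timeit

text = "spaz" * 250 + "spam" + "spaz" * 250
pattern = re.compile("spam")

# Total seconds for 1000 calls of each approach.
find_time = timeit.timeit(lambda: text.find("spam"), number=1000)
search_time = timeit.timeit(lambda: pattern.search(text), number=1000)

print("string.find : %.3f ms total" % (find_time * 1000))
print("re.search   : %.3f ms total" % (search_time * 1000))
```

Absolute numbers will of course differ wildly from the 1.6-era figures above; the point is only the relative comparison.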

same test, without any false matches:

string.find     0.035 ms
sre8.search     0.050
pre.search      0.116

unicode.find    0.031
sre16.search    0.055

--------------------------------------------------------------------
compiling regular expressions

compiling the 480 tests in the standard test suite:

sre             1.22 seconds
pre             0.05 seconds

or in other words, pre (using a compiler written in C) can
compile just under 10,000 patterns per second.  sre can only
compile about 400 patterns per second.  do we care? ;-)

(footnote: sre's pattern cache stores 100 patterns.  pre's
cache holds 20 patterns, iirc).
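A patterns-per-second figure like the one above can be derived with a loop like this sketch. The patterns themselves are stand-ins (assumed), generated distinct so that the module's compiled-pattern cache doesn't short-circuit the measurement:

```python
# Sketch of the compilation benchmark: compile a batch of distinct
# patterns and report throughput.  480 matches the test-suite count
# mentioned above; the pattern bodies are illustrative only.
import re
import time

patterns = ["Python%d|Perl%d" % (i, i) for i in range(480)]

start = time.perf_counter()
for p in patterns:
    re.compile(p)
elapsed = time.perf_counter() - start

print("compiled %d patterns in %.3f s (%.0f patterns/s)"
      % (len(patterns), elapsed, len(patterns) / elapsed))
```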

--------------------------------------------------------------------
benchmark suite

to round off this report, here's a couple of "micro benchmarks".
all times are in milliseconds.

n=        0     5    50   250  1000  5000
----- ----- ----- ----- ----- ----- -----

pattern 'Python|Perl', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.013 0.013 0.016 0.027 0.079
sre16 0.014 0.014 0.015 0.018 0.025 0.076
pre   0.107 0.109 0.114 0.116 0.135 0.259
regex 0.011 0.011 0.012 0.016 0.033 0.122

pattern 'Python|Perl', string 'P'*n+'Perl'+'P'*n
sre8  0.013 0.016 0.030 0.100 0.358 1.716
sre16 0.014 0.015 0.030 0.094 0.347 1.649
pre   0.115 0.112 0.158 0.351 1.085 5.002
regex 0.010 0.016 0.060 0.271 1.022 5.162

(false matches cause problems for pre and regex)

pattern '(Python|Perl)', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.016 0.030 0.099 0.362 1.684
sre16 0.015 0.016 0.030 0.094 0.340 1.623
pre   0.110 0.111 0.112 0.119 0.143 0.267
regex 0.012 0.012 0.013 0.017 0.034 0.124

(in 0.9.8, sre's optimizer doesn't grok named groups, and
it doesn't realize that this pattern has to start with a "P")

pattern '(?:Python|Perl)', string '-'*n+'Perl'+'-'*n
sre8  0.013 0.013 0.014 0.016 0.027 0.079
sre16 0.015 0.014 0.016 0.018 0.026 0.075
pre   0.108 0.135 0.113 0.137 0.140 0.275
regex skip

(anonymous groups work better)

pattern 'Python', string '-'*n+'Python'+'-'*n
sre8  0.013 0.013 0.014 0.019 0.039 0.148
sre16 0.013 0.013 0.014 0.020 0.043 0.187
pre   0.129 0.105 0.109 0.117 0.191 0.277
regex 0.011 0.025 0.018 0.016 0.037 0.127

pattern 'Python', string 'P'*n+'Python'+'P'*n
sre8  0.040 0.012 0.021 0.026 0.080 0.248
sre16 0.012 0.013 0.015 0.025 0.061 0.283
pre   0.110 0.148 0.153 0.338 0.925 4.355
regex 0.013 0.013 0.041 0.155 0.535 2.628

(as we saw in the string.find test, sre is very fast when
there are lots of false matches)

pattern '.*Python', string '-'*n+'Python'+'-'*n
sre8  0.016 0.017 0.026 0.067 0.217 1.039
sre16 0.016 0.017 0.026 0.067 0.218 1.076
pre   0.111 0.112 0.124 0.180 0.386 1.494
regex 0.015 0.022 0.073 0.408 1.669 8.489

pattern '.*Python.*', string '-'*n+'Python'+'-'*n
sre8  0.016 0.017 0.030 0.089 0.315 1.499
sre16 0.016 0.018 0.032 0.090 0.314 1.537
pre   0.112 0.113 0.129 0.186 0.413 1.605
regex 0.016 0.023 0.076 0.387 1.674 8.519

pattern '.*(Python)', string '-'*n+'Python'+'-'*n
sre8  0.020 0.021 0.044 0.147 0.542 2.630
sre16 0.019 0.021 0.044 0.154 0.541 2.681
pre   0.115 0.117 0.141 0.245 0.636 2.690
regex 0.019 0.026 0.097 0.467 2.007 10.264

pattern '.*(?:Python)', string '-'*n+'Python'+'-'*n
sre8  0.016 0.017 0.027 0.065 0.220 1.037
sre16 0.016 0.017 0.026 0.070 0.221 1.066
pre   0.112 0.119 0.136 0.223 0.566 2.377
regex skip

pattern 'Python|Perl|Tcl', string '-'*n+'Perl'+'-'*n
sre8  0.013 0.015 0.034 0.114 0.407 1.985
sre16 0.014 0.016 0.034 0.109 0.392 1.915
pre   0.107 0.108 0.117 0.124 0.167 0.393
regex 0.012 0.012 0.013 0.017 0.033 0.123

(here's another sre compiler problem: it fails to realize
that this pattern starts with characters from a given set
[PT].  pre and regex both use bitmaps...)

pattern 'Python|Perl|Tcl', string 'P'*n+'Perl'+'P'*n
sre8  0.013 0.018 0.055 0.228 0.847 4.165
sre16 0.015 0.027 0.055 0.218 0.821 4.061
pre   0.111 0.116 0.172 0.415 1.354 6.302
regex 0.011 0.019 0.085 0.374 1.467 7.261

(but when there are lots of false matches, sre is faster
anyway.  interesting...)

pattern '(Python|Perl|Tcl)', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.018 0.042 0.152 0.575 2.798
sre16 0.015 0.019 0.042 0.148 0.556 2.715
pre   0.112 0.111 0.116 0.129 0.172 0.408
regex 0.012 0.013 0.014 0.018 0.035 0.124

pattern '(?:Python|Perl|Tcl)', string '-'*n+'Perl'+'-'*n
sre8  0.014 0.016 0.034 0.113 0.405 1.987
sre16 0.016 0.016 0.033 0.112 0.393 1.918
pre   0.109 0.109 0.112 0.128 0.177 0.397
regex skip

pattern '(Python)\\1', string '-'*n+'PythonPython'+'-'*n
sre8  0.014 0.018 0.030 0.096 0.342 1.673
sre16 0.015 0.016 0.031 0.094 0.330 1.625
pre   0.112 0.111 0.112 0.119 0.141 0.268
regex 0.011 0.012 0.013 0.017 0.033 0.123

pattern '(Python)\\1', string 'P'*n+'PythonPython'+'P'*n
sre8  0.013 0.016 0.035 0.111 0.411 1.976
sre16 0.015 0.016 0.034 0.112 0.416 1.992
pre   0.110 0.116 0.160 0.355 1.051 4.797
regex 0.011 0.017 0.047 0.200 0.737 3.680

pattern '([0a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.084 0.091 0.143 0.371 1.160 6.165
sre16 0.086 0.090 0.142 0.470 1.258 7.827
pre   0.155 0.140 0.185 0.200 0.280 0.523
regex 0.018 0.018 0.020 0.024 0.137 0.240

(again, sre's lack of "fastmap" is rather costly)

pattern '(?:[0a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.028 0.033 0.077 0.303 1.433 7.140
sre16 0.021 0.027 0.073 0.277 1.031 5.053
pre   0.131 0.131 0.174 0.183 0.227 0.461
regex skip

pattern '([a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.032 0.038 0.083 0.288 1.109 5.404
sre16 0.033 0.038 0.083 0.292 1.035 5.802
pre   0.195 0.135 0.176 0.187 0.233 0.468
regex 0.018 0.018 0.019 0.023 0.041 0.131

pattern '(?:[a-z][a-z0-9]*,)+', string '-'*n+'a5,b7,c9,'+'-'*n
sre8  0.022 0.025 0.067 0.302 1.011 8.245
sre16 0.021 0.026 0.066 0.302 1.103 5.372
pre   0.262 0.397 0.178 0.193 0.250 0.817
regex skip

pattern '.*P.*y.*t.*h.*o.*n.*', string '-'*n+'Python'+'-'*n
sre8  0.021 0.084 0.118 0.251 0.965 5.414
sre16 0.021 0.025 0.063 0.366 1.192 4.639
pre   0.123 0.147 0.225 0.568 1.899 9.336
regex 0.028 0.060 0.258 1.269 5.497 28.334

--------------------------------------------------------------------




From bwarsaw at beopen.com  Wed Aug  2 23:40:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 2 Aug 2000 17:40:59 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
Message-ID: <14728.38251.289986.857417@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    >> Do we have a procedure for putting more batteries in the core?
    >> I'm not talking about stuff like PEP-206, I'm talking about
    >> small, useful modules like Cookies.py.

    GvR> Cookie support in the core would be a good thing.

I use Tim O'Malley's LGPL'd version (not as contagious as GPL'd) in
Mailman with one important patch.  I've uploaded it to SF as patch
#101055.  If you like it, I'm happy to check it in.

-Barry



From bwarsaw at beopen.com  Wed Aug  2 23:42:26 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 2 Aug 2000 17:42:26 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
	<14728.8580.460583.760620@cj42289-a.reston1.va.home.com>
	<200008021432.JAA02937@cj20424-a.reston1.va.home.com>
Message-ID: <14728.38338.92481.102493@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    GvR> I think Cookie.py is for server-side management of cookies,
    GvR> not for client-side.  Do we need client-side cookies too????

Ah.  AFAIK, Tim's Cookie.py is server side only.  Still very useful --
and already written!
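For context, server-side use of a Cookie.py-style module looks roughly like this sketch. It uses `http.cookies`, the modern stdlib descendant of Tim's module; the cookie names and attribute values here are made up for illustration:

```python
# Minimal server-side cookie handling in the Cookie.py style:
# parse an incoming Cookie: header, and build a Set-Cookie: header
# with per-key attributes.  Names/values are illustrative only.
from http.cookies import SimpleCookie

# Parse the Cookie: header sent by the client.
incoming = SimpleCookie("session=abc123; theme=dark")
print(incoming["session"].value)   # the raw session value

# Build a Set-Cookie: header to send back, setting attributes
# such as Max-Age and Secure on a per-key basis.
outgoing = SimpleCookie()
outgoing["session"] = "abc123"
outgoing["session"]["max-age"] = 3600
outgoing["session"]["secure"] = True
print(outgoing.output())
```

The per-key attribute setting is exactly the "key,value granularity" that Tim's module header (quoted later in this thread) says the old nscookie.py lacked.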

-Barry



From guido at beopen.com  Thu Aug  3 00:44:03 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 17:44:03 -0500
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: Your message of "Wed, 02 Aug 2000 23:37:52 +0200."
             <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> 
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> 
Message-ID: <200008022244.RAA04388@cj20424-a.reston1.va.home.com>

> Guido asked me to update my old SRE benchmarks, and
> post them to python-dev.

Thanks, Fredrik!  This (plus the fact that SRE now passes all PRE
tests) makes me very happy with using SRE as the regular expression
engine of choice for 1.6 and 2.0.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug  3 00:46:35 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 17:46:35 -0500
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: Your message of "Wed, 02 Aug 2000 17:40:59 -0400."
             <14728.38251.289986.857417@anthem.concentric.net> 
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com>  
            <14728.38251.289986.857417@anthem.concentric.net> 
Message-ID: <200008022246.RAA04405@cj20424-a.reston1.va.home.com>

>     GvR> Cookie support in the core would be a good thing.

[Barry]
> I use Tim O'Malley's LGPL'd version (not as contagious as GPL'd) in
> Mailman with one important patch.  I've uploaded it to SF as patch
> #101055.  If you like it, I'm happy to check it in.

I don't have the time to judge this code myself, but hope that others
in this group do.

Are you sure it's a good thing to add LGPL'ed code to the Python
standard library though?  AFAIK it is still more restrictive than the
old CWI license and probably also more restrictive than the new CNRI
license; so it could come under scrutiny and prevent closed,
proprietary software development using Python...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From barry at scottb.demon.co.uk  Wed Aug  2 23:50:43 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Wed, 2 Aug 2000 22:50:43 +0100
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBIENBDCAA.MarkH@ActiveState.com>
Message-ID: <020901bffccb$b4bf4da0$060210ac@private>



> -----Original Message-----
> From: Mark Hammond [mailto:MarkH at activestate.com]
> Sent: 02 August 2000 00:14
> To: Barry Scott; python-dev at python.org
> Subject: RE: [Python-Dev] Preventing 1.5 extensions crashing under
> 1.6/2.0 Python
> 
> 
> > If someone in the core of Python thinks a patch implementing
> > what I've outlined is useful please let me know and I will
> > generate the patch.
> 
> Umm - I'm afraid that I don't keep my python-dev emails for that long, and
> right now I'm too lazy/busy to dig around the archives.
> 
> Exactly what did you outline?  I know it went around a few times, and I
> can't remember who said what.  For my money, I liked Fredrik's solution
> best (check Py_IsInitialized() in Py_InitModule4()), but as mentioned that
> only solves it for the next version of Python; it doesn't solve the fact
> that 1.5 modules will crash under 1.6/2.0

	This is not a good way to solve the problem as it only works in a
	limited number of cases. 

	Attached is my proposal which works for all new and old python
	and all old and new extensions.

> 
> It would definitely be excellent to get _something_ in the CNRI 1.6
> release, so the BeOpen 2.0 release can see the results.

> But-I-doubt-anyone-will-release-extension-modules-for-1.6-anyway ly,

	Yes indeed, once the story of 1.6 and 2.0 is out I expect folks
	will skip 1.6. For example, if your win32 stuff is not ported
	then Python 1.6 is not usable on Windows/NT.
	
> 
> Mark.

		Barry
-------------- next part --------------
An embedded message was scrubbed...
From: "Barry Scott" <barry at scottb.demon.co.uk>
Subject: RE: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
Date: Tue, 18 Jul 2000 23:36:15 +0100
Size: 2085
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000802/743a2eaf/attachment-0001.eml>

From akuchlin at mems-exchange.org  Wed Aug  2 23:55:53 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 2 Aug 2000 17:55:53 -0400
Subject: [Python-Dev] Cookies.py in the core 
In-Reply-To: <200008022246.RAA04405@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 02, 2000 at 05:46:35PM -0500
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com>
Message-ID: <20000802175553.A30340@kronos.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 05:46:35PM -0500, Guido van Rossum wrote:
>Are you sure it's a good thing to add LGPL'ed code to the Python
>standard library though?  AFAIK ... it could come under scrutiny and
>prevent closed, proprietary software development using Python...

Licence discussions are a conversational black hole...  Why not just
ask Tim O'Malley to change the licence in return for getting it added
to the core?

--amk



From akuchlin at mems-exchange.org  Thu Aug  3 00:00:59 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 2 Aug 2000 18:00:59 -0400
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>; from effbot@telia.com on Wed, Aug 02, 2000 at 11:37:52PM +0200
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>
Message-ID: <20000802180059.B30340@kronos.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 11:37:52PM +0200, Fredrik Lundh wrote:
>-- SRE is usually faster than the old RE module (PRE).

Once the compiler is translated to C, it might be worth considering
making SRE available as a standalone library for use outside of
Python.  Most other regex libraries either don't do Perl's extensions,
or they don't do Unicode.  Bonus points if you can get the Perl6 team
interested in it.

Hmm... here's an old problem that's returned (recursion on repeated
group matches, I expect):

>>> p=re.compile('(x)*')
>>> p
<SRE_Pattern object at 0x8127048>
>>> p.match(500000*'x')
Segmentation fault (core dumped)

--amk



From guido at beopen.com  Thu Aug  3 01:10:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 18:10:33 -0500
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: Your message of "Wed, 02 Aug 2000 17:55:53 -0400."
             <20000802175553.A30340@kronos.cnri.reston.va.us> 
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com>  
            <20000802175553.A30340@kronos.cnri.reston.va.us> 
Message-ID: <200008022310.SAA04518@cj20424-a.reston1.va.home.com>

> Licence discussions are a conversational black hole...  Why not just
> ask Tim O'Malley to change the licence in return for getting it added
> to the core?

Excellent idea.  Go for it!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug  3 01:11:39 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 18:11:39 -0500
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: Your message of "Wed, 02 Aug 2000 18:00:59 -0400."
             <20000802180059.B30340@kronos.cnri.reston.va.us> 
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>  
            <20000802180059.B30340@kronos.cnri.reston.va.us> 
Message-ID: <200008022311.SAA04529@cj20424-a.reston1.va.home.com>

> Hmm... here's an old problem that's returned (recursion on repeated
> group matches, I expect):
> 
> >>> p=re.compile('(x)*')
> >>> p
> <SRE_Pattern object at 0x8127048>
> >>> p.match(500000*'x')
> Segmentation fault (core dumped)

Ouch.

Andrew, would you mind adding a test case for that to the re test
suite?  It's important that this doesn't come back!
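Such a regression test can be as short as the repro itself; on a fixed engine the pattern must return a match rather than blowing the C stack. The input length is taken from Andrew's report (today's re module handles this fine):

```python
# Regression sketch for the '(x)*' crash: matching a repeated group
# against a very long string must succeed, not segfault.
import re

p = re.compile('(x)*')
m = p.match(500000 * 'x')
assert m is not None
assert m.end() == 500000
print("ok: matched", m.end(), "characters")
```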

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug  3 01:18:04 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 18:18:04 -0500
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: Your message of "Wed, 02 Aug 2000 22:50:43 +0100."
             <020901bffccb$b4bf4da0$060210ac@private> 
References: <020901bffccb$b4bf4da0$060210ac@private> 
Message-ID: <200008022318.SAA04558@cj20424-a.reston1.va.home.com>

> > But-I-doubt-anyone-will-release-extension-modules-for-1.6-anyway ly,
> 
> 	Yes indeed once the story of 1.6 and 2.0 is out I expect folks
> 	will skip 1.6. For example, if your win32 stuff is not ported
> 	then Python 1.6 is not usable on Windows/NT.

I expect to be releasing a 1.6 Windows installer -- but I can't
control Mark Hammond.  Yet, it shouldn't be hard for him to create a
1.6 version of win32all, should it?

> Change the init function name to a new name, say PythonExtensionInit_.
> Pass in the API version for the extension writer to check. If the
> version is bad for this extension, return without calling any Python
> functions. Add a return code that is true if compatible, false if not.
> If compatible, the extension can use Python functions and report any
> problems it wishes.
> 
> int PythonExtensionInit_XXX( int invoking_python_api_version )
> 	{
> 	if( invoking_python_api_version != PYTHON_API_VERSION )
> 		{
> 		/* python will report that the module is incompatible */
> 		return 0;
> 		}
> 
> 	/* setup module for XXX ... */
> 
> 	/* say this extension is compatible with the invoking python */
> 	return 1;
> 	}
> 
> All 1.5 extensions fail to load on later python 2.0 and later.
> All 2.0 extensions fail to load on python 1.5.
> 
> All new extensions work only with python of the same API version.
> 
> Document that failure to setup a module could mean the extension is
> incompatible with this version of python.
> 
> Small code change in python core. But need to tell extension writers
> what the new interface is and update all extensions within the python
> CVS tree.

I sort-of like this idea -- at least at the +0 level.

I would choose a shorter name: PyExtInit_XXX().

Could you (or someone else) prepare a patch that changes this?  It
would be great if the patch were relative to the 1.6 branch of the
source tree; unfortunately this is different because of the
ANSIfication.

Unfortunately we only have two days to get this done for 1.6 -- I plan
to release 1.6b1 this Friday!  If you don't get to it, preparing a patch
for 2.0 would be the next best thing.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Thu Aug  3 01:13:30 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 01:13:30 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <020901bffccb$b4bf4da0$060210ac@private>  <200008022318.SAA04558@cj20424-a.reston1.va.home.com>
Message-ID: <01d701bffcd7$46a74a00$f2a6b5d4@hagrid>

> Yes indeed once the story of 1.6 and 2.0 is out I expect folks
> will skip 1.6.   For example, if your win32 stuff is not ported then
> Python 1.6 is not usable on Windows/NT.

"not usable"?

guess you haven't done much cross-platform development lately...

> Change the init function name to a new name PythonExtensionInit_ say.
> Pass in the API version for the extension writer to check. If the
> version is bad for this extension, return without calling any python

huh?  are you seriously proposing to break every single C extension
ever written -- on each and every platform -- just to trap an error
message caused by extensions linked against 1.5.2 on your favourite
platform?

> Small code change in python core. But need to tell extension writers
> what the new interface is and update all extensions within the python
> CVS tree.

you mean "update the source code for all extensions ever written."

-1




From DavidA at ActiveState.com  Thu Aug  3 02:33:02 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Wed, 2 Aug 2000 17:33:02 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Fork on Win32 - was (test_fork1 failing...)
In-Reply-To: <013901bff821$55dd02e0$8119fea9@neil>
Message-ID: <Pine.WNT.4.21.0008021732140.980-100000@loom>

>    IIRC ActiveState contributed to Perl a version of fork that works on
> Win32. Has anyone looked at this? Could it be grabbed for Python? This would
> help heal one of the more difficult platform rifts. Emulating fork for Win32
> looks quite difficult to me but if its already done...

I've talked to Sarathy about it, and it's messy, as Perl manages PIDs
above and beyond what Windows does, among other things.  If anyone is
interested in doing that work, I can make the introduction.

--david




From DavidA at ActiveState.com  Thu Aug  3 02:35:01 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Wed, 2 Aug 2000 17:35:01 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Fork on Win32 - was (test_fork1 failing...)
In-Reply-To: <013901bff821$55dd02e0$8119fea9@neil>
Message-ID: <Pine.WNT.4.21.0008021734040.980-100000@loom>

>    IIRC ActiveState contributed to Perl a version of fork that works on
> Win32. Has anyone looked at this? Could it be grabbed for Python? This would
> help heal one of the more difficult platform rifts. Emulating fork for Win32
> looks quite difficult to me but if its already done...

Sigh. Me tired.

The message I posted a few minutes ago was actually referring to the
system() work, not the fork() work.  I agree that the fork() emulation
isn't Pythonic.

--david




From skip at mojam.com  Thu Aug  3 04:32:29 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 2 Aug 2000 21:32:29 -0500 (CDT)
Subject: [Python-Dev] METH_VARARGS
Message-ID: <14728.55741.477399.196240@beluga.mojam.com>

I noticed Andrew Kuchling's METH_VARARGS submission:

    Use METH_VARARGS instead of numeric constant 1 in method def. tables

While METH_VARARGS is obviously a lot better than a hardcoded 1, shouldn't
METH_VARARGS be something like Py_METH_VARARGS or PY_METH_VARARGS to avoid
potential conflicts with other packages?

Skip



From akuchlin at cnri.reston.va.us  Thu Aug  3 04:41:02 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Wed, 2 Aug 2000 22:41:02 -0400
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <200008022310.SAA04518@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 02, 2000 at 06:10:33PM -0500
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com>
Message-ID: <20000802224102.A25837@newcnri.cnri.reston.va.us>

On Wed, Aug 02, 2000 at 06:10:33PM -0500, Guido van Rossum wrote:
>> Why not just
>> ask Tim O'Malley to change the licence in return for getting it added
>> to the core?
>Excellent idea.  Go for it!

Mail to timo at bbn.com bounces; does anyone have a more recent e-mail
address?  What do we do if he can't be located?  Add the module anyway,
abandon the idea, or write a new version?

--amk



From fdrake at beopen.com  Thu Aug  3 04:51:23 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 2 Aug 2000 22:51:23 -0400 (EDT)
Subject: [Python-Dev] METH_VARARGS
In-Reply-To: <14728.55741.477399.196240@beluga.mojam.com>
References: <14728.55741.477399.196240@beluga.mojam.com>
Message-ID: <14728.56875.996310.790872@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:
 > While METH_VARARGS is obviously a lot better than a hardcoded 1, shouldn't
 > METH_VARARGS be something like Py_METH_VARARGS or PY_METH_VARARGS to avoid
 > potential conflicts with other packages?

  I think so, but there are too many third-party extension modules
that would break if we didn't also offer the old symbols.  I see two
options: leave things as they are, or provide both versions of the
symbols through at least Python 2.1.  For the latter, all examples in
the code and documentation would need to be changed and the non-PY_
versions strongly labelled as deprecated and going away in Python
version 2.2 (or whatever version it would be).
  It would *not* hurt to provide both symbols and change all the
examples, at any rate.  Aside from deleting all the checkin email,
that is!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gstein at lyra.org  Thu Aug  3 05:06:20 2000
From: gstein at lyra.org (Greg Stein)
Date: Wed, 2 Aug 2000 20:06:20 -0700
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <20000802224102.A25837@newcnri.cnri.reston.va.us>; from akuchlin@cnri.reston.va.us on Wed, Aug 02, 2000 at 10:41:02PM -0400
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com> <20000802224102.A25837@newcnri.cnri.reston.va.us>
Message-ID: <20000802200620.G19525@lyra.org>

On Wed, Aug 02, 2000 at 10:41:02PM -0400, Andrew Kuchling wrote:
> On Wed, Aug 02, 2000 at 06:10:33PM -0500, Guido van Rossum wrote:
> >> Why not just
> >> ask Tim O'Malley to change the licence in return for getting it added
> >> to the core?
> >Excellent idea.  Go for it!
> 
> Mail to timo at bbn.com bounces; does anyone have a more recent e-mail
> address?  What do we do if he can't be located?  Add the module anyway,
> abandon the idea, or write a new version?

If we can't contact him, then I'd be quite happy to assist in designing and
writing a new one under a BSD-ish or Public Domain license. I was
considering doing exactly that just last week :-)

[ I want to start using cookies in ViewCVS; while the LGPL is "fine" for me,
  it would be nice if the whole ViewCVS package was BSD-ish ]


Of course, I'd much rather get a hold of Tim.

Cheers,
-g


-- 
Greg Stein, http://www.lyra.org/



From bwarsaw at beopen.com  Thu Aug  3 06:11:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:11:59 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
	<14728.38251.289986.857417@anthem.concentric.net>
	<200008022246.RAA04405@cj20424-a.reston1.va.home.com>
Message-ID: <14728.61711.859894.972939@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    GvR> Are you sure it's a good thing to add LGPL'ed code to the
    GvR> Python standard library though?  AFAIK it is still more
    GvR> restrictive than the old CWI license and probably also more
    GvR> restrictive than the new CNRI license; so it could come under
    GvR> scrutiny and prevent closed, proprietary software development
    GvR> using Python...

I don't know, however I have a version of the file with essentially no
license on it:

# Id: Cookie.py,v 2.4 1998/02/13 16:42:30 timo Exp
#  by  Timothy O'Malley <timo at bbn.com> Date: 1998/02/13 16:42:30
#
#  Cookie.py is an update for the old nscookie.py module.
#    Under the old module, it was not possible to set attributes,
#    such as "secure" or "Max-Age" on key,value granularity.  This
#    shortcoming has been addressed in Cookie.py but has come at
#    the cost of a slightly changed interface.  Cookie.py also
#    requires Python-1.5, for the re and cPickle modules.
#
#  The original idea to treat Cookies as a dictionary came from
#  Dave Mitchel (davem at magnet.com) in 1995, when he released the
#  first version of nscookie.py.

Is that better or worse? <wink>.  Back in '98, I actually asked him to
send me an LGPL'd copy because that worked better for Mailman.  We
could start with Tim's pre-LGPL'd version and backport the minor mods
I've made.

BTW, I've recently tried to contact Tim, but the address in the file
bounces.

-Barry



From guido at beopen.com  Thu Aug  3 06:39:47 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 02 Aug 2000 23:39:47 -0500
Subject: [Python-Dev] METH_VARARGS
In-Reply-To: Your message of "Wed, 02 Aug 2000 21:32:29 EST."
             <14728.55741.477399.196240@beluga.mojam.com> 
References: <14728.55741.477399.196240@beluga.mojam.com> 
Message-ID: <200008030439.XAA05445@cj20424-a.reston1.va.home.com>

> While METH_VARARGS is obviously a lot better than a hardcoded 1, shouldn't
> METH_VARARGS be something like Py_METH_VARARGS or PY_METH_VARARGS to avoid
> potential conflicts with other packages?

Unless someone knows of a *real* conflict, I'd leave this one alone.
Yes, it should be Py_*, but no, it's not worth the effort of changing
all that.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From gstein at lyra.org  Thu Aug  3 06:26:48 2000
From: gstein at lyra.org (Greg Stein)
Date: Wed, 2 Aug 2000 21:26:48 -0700
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
In-Reply-To: <14728.61711.859894.972939@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 03, 2000 at 12:11:59AM -0400
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <14728.61711.859894.972939@anthem.concentric.net>
Message-ID: <20000802212648.J19525@lyra.org>

On Thu, Aug 03, 2000 at 12:11:59AM -0400, Barry A. Warsaw wrote:
>...
> I don't know, however I have a version of the file with essentially no
> license on it:

That implies "no license" which means "no rights to redistribute, use, or
whatever." Very incompatible :-)

>...
> Is that better or worse? <wink>.  Back in '98, I actually asked him to
> send me an LGPL'd copy because that worked better for Mailman.  We
> could start with Tim's pre-LGPL'd version and backport the minor mods
> I've made.

Wouldn't help. We need a relicensed version, to use the LGPL'd version, or
to rebuild it from scratch.

> BTW, I've recently tried to contact Tim, but the address in the file
> bounces.

I just sent mail to Andrew Smith who has been working with Tim for several
years on various projects (RSVP, RAP, etc). Hopefully, he has a current
email address for Tim. I'll report back when I hear something from Andrew.
Of course, if somebody else can track him down faster...

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From bwarsaw at beopen.com  Thu Aug  3 06:41:14 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:41:14 -0400 (EDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
Message-ID: <14728.63466.263123.434708@anthem.concentric.net>

>>>>> "GM" == Gareth McCaughan <Gareth.McCaughan at pobox.com> writes:

    GM> Consider the following piece of code, which takes a file
    GM> and prepares a concordance saying on which lines each word
    GM> in the file appears. (For real use it would need to be
    GM> made more sophisticated.)

    |     line_number = 0
    |     for line in open(filename).readlines():
    |       line_number = line_number+1
    |       for word in map(string.lower, string.split(line)):
    |         existing_lines = word2lines.get(word, [])   |
    |         existing_lines.append(line_number)          | ugh!
    |         word2lines[word] = existing_lines           |

I've run into this same situation many times myself.  I agree it's
annoying.  Annoying enough to warrant a change?  Maybe -- I'm not
sure.

    GM> I suggest a minor change: another optional argument to
    GM> "get" so that

    GM>     dict.get(item,default,flag)

Good idea, not so good solution.  Let's make it more explicit by
adding a new method instead of a flag.  I'll use `put' here since this
seems (in a sense) opposite of get() and my sleep addled brain can't
think of anything more clever.  Let's not argue about the name of this
method though -- if Guido likes the extension, he'll pick a good name
and I go on record as agreeing with his name choice, just to avoid a
protracted war.

A trivial patch to UserDict (see below) will let you play with this.

>>> d = UserDict()
>>> word = 'hello'
>>> d.get(word, [])
[]
>>> d.put(word, []).append('world')
>>> d.get(word)
['world']
>>> d.put(word, []).append('gareth')
>>> d.get(word)
['world', 'gareth']

Shouldn't be too hard to add equivalent C code to the dictionary
object.

-Barry

-------------------- snip snip --------------------
Index: UserDict.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/UserDict.py,v
retrieving revision 1.7
diff -u -r1.7 UserDict.py
--- UserDict.py	2000/02/02 15:10:14	1.7
+++ UserDict.py	2000/08/03 04:35:11
@@ -34,3 +34,7 @@
                 self.data[k] = v
     def get(self, key, failobj=None):
         return self.data.get(key, failobj)
+    def put(self, key, failobj=None):
+        if not self.data.has_key(key):
+            self.data[key] = failobj
+        return self.data[key]
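For readers following along, the proposed semantics can be sketched in plain
Python (the name `put' is only the placeholder from the mail above, not a
settled API; later Python spells this dict.setdefault):

```python
# Sketch of the proposed put() semantics: return the value stored under
# key, inserting the given default first if the key is absent.  The
# helper name and the free-function form are illustrative only.
def put(d, key, default=None):
    if key not in d:
        d[key] = default
    return d[key]

word2lines = {}
put(word2lines, "hello", []).append(1)
put(word2lines, "hello", []).append(2)
print(word2lines)  # {'hello': [1, 2]}
```

Note that the default is only stored on the first call; later calls return
the existing value and ignore the default, which is what makes the
`.append(...)` chaining work.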



From bwarsaw at beopen.com  Thu Aug  3 06:45:33 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:45:33 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
	<14728.38251.289986.857417@anthem.concentric.net>
	<200008022246.RAA04405@cj20424-a.reston1.va.home.com>
	<20000802175553.A30340@kronos.cnri.reston.va.us>
	<200008022310.SAA04518@cj20424-a.reston1.va.home.com>
	<20000802224102.A25837@newcnri.cnri.reston.va.us>
	<20000802200620.G19525@lyra.org>
Message-ID: <14728.63725.390053.65213@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> If we can't contact him, then I'd be quite happy to assist in
    GS> designing and writing a new one under a BSD-ish or Public
    GS> Domain license. I was considering doing exactly that just last
    GS> week :-)

I don't think that's necessary; see my other post.  We should still
try to contact him if possible though.

My request for an LGPL'd copy was necessary because Mailman is GPL'd
(and Stallman suggested this as an acceptable solution).  It would be
just as fine for Mailman if an un-LGPL'd Cookie.py were part of the
standard Python distribution.

-Barry



From bwarsaw at beopen.com  Thu Aug  3 06:50:51 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 00:50:51 -0400 (EDT)
Subject: [Python-Dev] Cookies.py in the core (was Tangent to Re: [Tutor] CGI and Python (fwd))
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial>
	<200008021406.JAA02743@cj20424-a.reston1.va.home.com>
	<14728.38251.289986.857417@anthem.concentric.net>
	<200008022246.RAA04405@cj20424-a.reston1.va.home.com>
	<14728.61711.859894.972939@anthem.concentric.net>
	<20000802212648.J19525@lyra.org>
Message-ID: <14728.64043.155392.32408@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> Wouldn't help. We need a relicensed version, to use the LGPL'd
    GS> version, or to rebuild it from scratch.

    >> BTW, I've recently tried to contact Tim, but the address in the
    >> file bounces.

    GS> I just sent mail to Andrew Smith who has been working with Tim
    GS> for several years on various projects (RSVP, RAP,
    GS> etc). Hopefully, he has a current email address for Tim. I'll
    GS> report back when I hear something from Andrew.  Of course, if
    GS> somebody else can track him down faster...

Cool.  Tim was exceedingly helpful in giving me a version of the file
I could use.  I have no doubt that if we can contact him, he'll
relicense it in a way that makes sense for the standard distro.  That
would be the best outcome.

-Barry



From tim_one at email.msn.com  Thu Aug  3 06:52:07 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 00:52:07 -0400
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <14728.63725.390053.65213@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELIGNAA.tim_one@email.msn.com>

Guys, these are cookies, not brain surgery!  If people like this API,
couldn't someone have done a clean-room reimplementation of it in less time
than we've spent jockeying over the freaking license?

tolerance-for-license-discussions-at-an-all-time-low-ly y'rs  - tim





From effbot at telia.com  Thu Aug  3 10:03:53 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 10:03:53 +0200
Subject: [Python-Dev] Cookies.py in the core
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com> <20000802224102.A25837@newcnri.cnri.reston.va.us>
Message-ID: <002601bffd21$5f23e800$f2a6b5d4@hagrid>

andrew wrote:

> Mail to timo at bbn.com bounces; does anyone have a more recent e-mail
> address?  What do we do if he can't be located?  Add the module anyway,
> abandon the idea, or write a new version?

readers of the daily URL might have noticed that he posted
a socket timeout wrapper a few days ago:

    Timothy O'Malley <timo at alum.mit.edu>

it's the same name and the same signature, so I assume it's
the same guy ;-)

</F>




From wesc at alpha.ece.ucsb.edu  Thu Aug  3 09:56:59 2000
From: wesc at alpha.ece.ucsb.edu (Wesley J. Chun)
Date: Thu, 3 Aug 2000 00:56:59 -0700 (PDT)
Subject: [Python-Dev] Re: Bookstand at LA Python conference
Message-ID: <200008030756.AAA23434@alpha.ece.ucsb.edu>

    > From: Guido van Rossum <guido at python.org>
    > Date: Sat, 29 Jul 2000 12:39:01 -0500
    > 
    > The next Python conference will be in Long Beach (Los Angeles).  We're
    > looking for a bookstore to set up a bookstand like we had at the last
    > conference.  Does anybody have a suggestion?


the most well-known big independent technical bookstore that also
does mail order and has been around for about 20 years is OpAmp:

OpAmp Technical Books
1033 N. Sycamore Ave
Los Angeles, CA  90038
800-468-4322
http://www.opamp.com

there really isn't a "2nd place" since OpAmp owns the market,
but if there were a #2, it would be Technical Book Company:

Technical Book Company
2056 Westwood Blvd
Los Angeles, CA  90025
800-233-5150


the above 2 stores are listed in the misc.books.technical FAQ:

http://www.faqs.org/faqs/books/technical/

there's a smaller bookstore that's also known to have a good
technical book selection:

Scholar's Bookstore 
El Segundo, CA  90245
310-322-3161

(and of course, the standbys are always the university bookstores
for UCLA, CalTech, UC Irvine, Cal State Long Beach, etc.)

as to be expected, someone has collated a list of bookstores
in the LA area:

http://www.geocities.com/Athens/4824/na-la.htm

hope this helps!!

-wesley

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

"Core Python Programming", Prentice Hall PTR, TBP Summer/Fall 2000
    http://www.phptr.com/ptrbooks/ptr_0130260363.html

Python Books:   http://www.softpro.com/languages-python.html

wesley.j.chun :: wesc at alpha.ece.ucsb.edu
cyberweb.consulting :: silicon.valley, ca
http://www.roadkill.com/~wesc/cyberweb/



From tim_one at email.msn.com  Thu Aug  3 10:05:31 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 04:05:31 -0400
Subject: [Python-Dev] Go \x yourself
Message-ID: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>

Offline, Guido and /F and I had a mighty battle about the meaning of \x
escapes in Python.  In the end we agreed to change the meaning of \x in a
backward-*in*compatible way.  Here's the scoop:

In 1.5.2 and before, the Reference Manual implies that an \x escape takes
two or more hex digits following, and has the value of the last byte.  In
reality it also accepted just one hex digit, or even none:

>>> "\x123465"  # same as "\x65"
'e'
>>> "\x65"
'e'
>>> "\x1"
'\001'
>>> "\x\x"
'\\x\\x'
>>>

I found no instances of the 0- or 1-digit forms in the CVS tree or in any of
the Python packages on my laptop.  Do you have any in your code?

And, apart from some deliberate abuse in the test suite, I found no
instances of more-than-two-hex-digits \x escapes either.  Similarly, do you
have any?  As Guido said and all agreed, it's probably a bug if you do.

The new rule is the same as Perl uses for \x escapes in -w mode, except that
Python will raise ValueError at compile-time for an invalid \x escape:  an
\x escape is of the form

    \xhh

where h is a hex digit.  That's it.  Guido reports that the O'Reilly books
(probably due to their Perl editing heritage!) already say Python works this
way.  It's the same rule for 8-bit and Unicode strings (in Perl too, at
least wrt the syntax).  In a Unicode string \xij has the same meaning as
\u00ij, i.e. it's the obvious Latin-1 character.  Playing back the above
pretending the new rule is in place:

>>> "\x123465" # \x12 -> \022, "3465" left alone
'\0223465'
>>> "\x65"
'e'
>>> "\x1"
ValueError
>>> "\x\x"
ValueError
>>>

We all support this:  the open-ended gobbling \x used to do lost information
without warning, and had no benefit whatsoever.  While there was some
attraction to generalizing \x in Unicode strings, \u1234 is already
perfectly adequate for specifying Unicode characters in hex form, and the
new rule for \x at least makes consistent Unicode sense now (and in a way
JPython should be able to adopt easily too).  The new rule gets rid of the
unPythonic TMTOWTDI introduced by generalizing Unicode \x to "the last 4
bytes".  That generalization also didn't make sense in light of the desire
to add \U12345678 escapes too (i.e., so then how many trailing hex digits
should a generalized \x suck up?  2?  4?  8?).  The only actual use for \x
in 8-bit strings (i.e., a way to specify a byte in hex) is still supported
with the same meaning as in 1.5.2, and \x in a Unicode string means
something as close to that as is possible.
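The new rule is simple enough to sketch in a few lines of Python (a
hand-rolled illustration of the "exactly two hex digits" check, not the
actual tokenizer code):

```python
# Illustration of the new \xhh rule: exactly two hex digits must follow
# the \x, otherwise the escape is invalid and compilation should fail
# with ValueError.  This is a sketch, not the real compiler logic.
import string

def parse_x_escape(s, i):
    """Given s and the index i just past a backslash-x, return
    (byte_value, new_index) or raise ValueError."""
    digits = s[i:i + 2]
    if len(digits) != 2 or not all(c in string.hexdigits for c in digits):
        raise ValueError("invalid \\x escape")
    return int(digits, 16), i + 2

print(parse_x_escape("123465", 0))  # (18, 2) -- 0x12; "3465" is left alone
```

Under this sketch, "65" parses to byte 0x65, while "1" (one digit) and
"\x" (no digits) both raise ValueError, matching the examples above.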

Sure feels right to me.  Gripe quick if it doesn't to you.

as-simple-as-possible-is-a-nice-place-to-rest-ly y'rs  - tim





From gstein at lyra.org  Thu Aug  3 10:16:37 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 01:16:37 -0700
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELIGNAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 03, 2000 at 12:52:07AM -0400
References: <14728.63725.390053.65213@anthem.concentric.net> <LNBBLJKPBEHFEDALKOLCCELIGNAA.tim_one@email.msn.com>
Message-ID: <20000803011637.K19525@lyra.org>

On Thu, Aug 03, 2000 at 12:52:07AM -0400, Tim Peters wrote:
> Guys, these are cookies, not brain surgery!  If people like this API,
> couldn't someone have done a clean-room reimplementation of it in less time
> than we've spent jockeying over the freaking license?

No.


-- 
Greg Stein, http://www.lyra.org/



From gstein at lyra.org  Thu Aug  3 10:18:38 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 01:18:38 -0700
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 03, 2000 at 04:05:31AM -0400
References: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <20000803011838.L19525@lyra.org>

On Thu, Aug 03, 2000 at 04:05:31AM -0400, Tim Peters wrote:
>...
> Sure feels right to me.  Gripe quick if it doesn't to you.

+1

-- 
Greg Stein, http://www.lyra.org/



From effbot at telia.com  Thu Aug  3 10:27:39 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 10:27:39 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us>
Message-ID: <006601bffd24$e25a9360$f2a6b5d4@hagrid>

andrew wrote:
>
> >-- SRE is usually faster than the old RE module (PRE).
> 
> Once the compiler is translated to C, it might be worth considering
> making SRE available as a standalone library for use outside of
> Python.

if it will ever be translated, that is...

> Hmm... here's an old problem that's returned (recursion on repeated
> group matches, I expect):
> 
> >>> p=re.compile('(x)*')
> >>> p
> <SRE_Pattern object at 0x8127048>
> >>> p.match(500000*'x')
> Segmentation fault (core dumped)

fwiw, that pattern isn't portable:

$ jpython test.py
File "test.py", line 3, in ?
java.lang.StackOverflowError

and neither is:

def nest(level):
    if level:
        nest(level-1)
nest(500000)

...but sure, I will fix that in 0.9.9 (SRE, not Python -- Christian
has already taken care of the other one ;-).  but 0.9.9 won't be
out before the 1.6b1 release...

(and to avoid scaring the hell out of the beta testers, it's probably
better to leave the test out of the regression suite until the bug is
fixed...)

</F>




From Vladimir.Marangozov at inrialpes.fr  Thu Aug  3 10:44:58 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 3 Aug 2000 10:44:58 +0200 (CEST)
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <20000803011637.K19525@lyra.org> from "Greg Stein" at Aug 03, 2000 01:16:37 AM
Message-ID: <200008030844.KAA12666@python.inrialpes.fr>

Greg Stein wrote:
> 
> On Thu, Aug 03, 2000 at 12:52:07AM -0400, Tim Peters wrote:
> > Guys, these are cookies, not brain surgery!  If people like this API,
> > couldn't someone have done a clean-room reimplementation of it in less time
> > than we've spent jockeying over the freaking license?
> 
> No.


Sorry for asking this, but what does "cookies in the core" mean to you
in the first place?  A library module (.py), C code, or both?


PS: I can hardly accept the idea that cookies are necessary for normal
Web usage. I'm not against them, though. IMO, it is important to retain
control over whether they're enabled or disabled.
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From moshez at math.huji.ac.il  Thu Aug  3 10:43:04 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 3 Aug 2000 11:43:04 +0300 (IDT)
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <200008030844.KAA12666@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008031140300.7196-100000@sundial>

On Thu, 3 Aug 2000, Vladimir Marangozov wrote:

> Sorry for asking this, but what "cookies in the core" means to you in
> the first place?  A library module.py, C code or both?

I think Python is good enough for that. (Python is a great language!)
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                        Moshe preaching to the choir

> PS: I can hardly accept the idea that cookies are necessary for normal
> Web usage. I'm not against them, though. IMO, it is important to keep
> control on whether they're enabled or disabled.

Yes, but that all happens client-side -- we were talking server-side
cookies. Cookies are a state-management mechanism for loosely-coupled
protocols, and are almost essential in today's web. Not giving support
means that Python is not as good a server-side language as it can be.
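Concretely, "server-side cookie support" boils down to building Set-Cookie
headers and parsing Cookie headers.  A minimal sketch (the attribute names
follow the Netscape cookie spec; the real Cookie.py API is richer than
this):

```python
# Minimal sketch of server-side cookie handling.  make_set_cookie builds
# a Set-Cookie header line; parse_cookie_header turns an incoming Cookie
# header into a dict.  No quoting/escaping is handled here.
def make_set_cookie(name, value, **attrs):
    parts = ["%s=%s" % (name, value)]
    for k, v in attrs.items():
        # keyword args can't contain "-", so Max_Age stands in for Max-Age
        parts.append("%s=%s" % (k.replace("_", "-"), v))
    return "Set-Cookie: " + "; ".join(parts)

def parse_cookie_header(header):
    pairs = [p.strip().split("=", 1) for p in header.split(";")]
    return dict(pairs)

print(make_set_cookie("session", "abc123", Max_Age=3600))
print(parse_cookie_header("session=abc123; lang=en"))
```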

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From Vladimir.Marangozov at inrialpes.fr  Thu Aug  3 11:11:36 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 3 Aug 2000 11:11:36 +0200 (CEST)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <Pine.GSO.4.10.10008021512180.8980-100000@sundial> from "Moshe Zadka" at Aug 02, 2000 03:17:31 PM
Message-ID: <200008030911.LAA12747@python.inrialpes.fr>

Moshe Zadka wrote:
> 
> On Wed, 2 Aug 2000, Vladimir Marangozov wrote:
> 
> > Moshe Zadka wrote:
> > > 
> > > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me
> > 
> > You get a compiled SRE object, right?
> 
> Nope -- I tested it with pre. 

As of yesterday's CVS (I saw AMK checking in an escape patch since then):

~/python/dev>python
Python 2.0b1 (#1, Aug  3 2000, 09:01:35)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> import pre
>>> pre.compile('[\\200-\\400]')
Segmentation fault (core dumped)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From moshez at math.huji.ac.il  Thu Aug  3 11:06:23 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 3 Aug 2000 12:06:23 +0300 (IDT)
Subject: [Python-Dev] More Non-Bugs
In-Reply-To: <200008030911.LAA12747@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008031206000.7196-100000@sundial>

On Thu, 3 Aug 2000, Vladimir Marangozov wrote:

> Moshe Zadka wrote:
> > 
> > On Wed, 2 Aug 2000, Vladimir Marangozov wrote:
> > 
> > > Moshe Zadka wrote:
> > > > 
> > > > Bug 110651 -- re.compile('[\\200-\\400]') segfaults -- it doesn't for me
> > > 
> > > You get a compiled SRE object, right?
> > 
> > Nope -- I tested it with pre. 
> 
> As of yesterday's CVS (I saw AMK checking in an escape patch since then):

Hmmmmm....I ought to be more careful then.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Thu Aug  3 11:14:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 11:14:24 +0200
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 03, 2000 at 04:05:31AM -0400
References: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <20000803111424.Z266@xs4all.nl>

On Thu, Aug 03, 2000 at 04:05:31AM -0400, Tim Peters wrote:

> Sure feels right to me.  Gripe quick if it doesn't to you.

+1 if it's a compile-time error, +0 if it isn't and won't be made one. The
compile-time error makes it a lot easier to track down the issues, if any.
(Okay, so everyone should have proper unit testing -- not everyone actually
has it ;)

I suspect it would be a compile-time error, but I haven't looked at
compiling string literals yet ;P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Thu Aug  3 11:17:46 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 11:17:46 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid>
Message-ID: <398938BA.CA54A98E@lemburg.com>

> 
> searching for literal text:
> 
> searching for "spam" in a string padded with "spaz" (1000 bytes on
> each side of the target):
> 
> string.find     0.112 ms
> sre8.search     0.059
> pre.search      0.122
> 
> unicode.find    0.130
> sre16.search    0.065
> 
> (yes, regular expressions can run faster than optimized C code -- as
> long as we don't take compilation time into account ;-)
> 
> same test, without any false matches:
> 
> string.find     0.035 ms
> sre8.search     0.050
> pre.search      0.116
> 
> unicode.find    0.031
> sre16.search    0.055

Those results are probably due to the fact that string.find
does a brute force search. If it would do a last match char
first search or even Boyer-Moore (this only pays off for long
search targets) then it should be a lot faster than [s|p]re.
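The "last match char first" idea can be sketched as a Boyer-Moore-Horspool
search (an illustration only; this is not what string.find did in 1.5.2,
which is exactly the point):

```python
# Boyer-Moore-Horspool: precompute, for each character of the pattern
# (except the last), how far the window can be shifted when that
# character is the one under the pattern's last position.  Comparisons
# run right-to-left, so a rare last character skips most of the text.
def horspool_find(text, pat):
    m = len(pat)
    if m == 0:
        return 0
    shift = {c: m - i - 1 for i, c in enumerate(pat[:-1])}
    i = m - 1
    while i < len(text):
        j = 0
        while j < m and text[i - j] == pat[m - 1 - j]:
            j += 1
        if j == m:
            return i - m + 1  # full match ending at i
        i += shift.get(text[i], m)
    return -1

print(horspool_find("spazspazspamspaz", "spam"))  # 8
```

On the "spaz"-padded benchmark above, the mismatching 'z'/'m' comparison
at the pattern's last position is what lets such a search skip ahead by
the full pattern length most of the time.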

Just for compares: would you mind running the search 
routines in mxTextTools on the same machine ?

import TextTools
TextTools.find(text, what)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Thu Aug  3 11:55:57 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 11:55:57 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <020901bffccb$b4bf4da0$060210ac@private> <200008022318.SAA04558@cj20424-a.reston1.va.home.com>
Message-ID: <398941AD.F47CA1C1@lemburg.com>

Guido van Rossum wrote:
> 
> > Change the init function name to a new name PythonExtensionInit_ say.
> > Pass in the API version for the extension writer to check. If the
> > version is bad for this extension returns without calling any python
> > functions. Add a return code that is true if compatible, false if not.
> > If compatible the extension can use python functions and report and
> > problems it wishes.
> >
> > int PythonExtensionInit_XXX( int invoking_python_api_version )
> >       {
> >       if( invoking_python_api_version != PYTHON_API_VERSION )
> >               {
> >               /* python will report that the module is incompatible */
> >               return 0;
> >               }
> >
> >       /* setup module for XXX ... */
> >
> >       /* say this extension is compatible with the invoking python */
> >       return 1;
> >       }
> >
> > All 1.5 extensions fail to load on later python 2.0 and later.
> > All 2.0 extensions fail to load on python 1.5.
> >
> > All new extensions work only with python of the same API version.
> >
> > Document that failure to setup a module could mean the extension is
> > incompatible with this version of python.
> >
> > Small code change in python core. But need to tell extension writers
> > what the new interface is and update all extensions within the python
> > CVS tree.
> 
> I sort-of like this idea -- at least at the +0 level.

I sort of dislike the idea ;-)

It introduces needless work for hundreds of extension writers
and effectively prevents binary compatibility for future
versions of Python: not all platforms have the problems of the
Windows platform and extensions which were compiled against a
different API version may very well still work with the
new Python version -- e.g. the dynamic loader on Linux is
very well capable of linking the new Python version against
an extension compiled for the previous Python version.

If all this is really necessary, I'd at least suggest adding macros
emulating the old Py_InitModule() APIs, so that extension writers
don't have to edit their code just to get it recompiled.

BTW, the subject line doesn't have anything to do with the
proposed solutions in this thread... they all crash Python
or the extensions in some way, some nicer, some not so nice ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Thu Aug  3 11:57:22 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 05:57:22 -0400
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <20000803111424.Z266@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMBGNAA.tim_one@email.msn.com>

[Thomas Wouters]
> +1 if it's a compile-time error, +0 if it isn't and won't be
> made one. ...

Quoting back from the original msg:

> ... will raise ValueError at compile-time for an invalid \x escape
                            ^^^^^^^^^^^^^^^

The pseudo-example was taken from a pseudo interactive prompt, and just as
in a real example at a real interactive prompt, each (pseudo)input line was
(pseudo)compiled one at a time <wink>.





From mal at lemburg.com  Thu Aug  3 12:04:53 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 12:04:53 +0200
Subject: [Python-Dev] Fork on Win32 - was (test_fork1 failing...)
References: <Pine.WNT.4.21.0008021734040.980-100000@loom>
Message-ID: <398943C4.AFECEE36@lemburg.com>

David Ascher wrote:
> 
> >    IIRC ActiveState contributed to Perl a version of fork that works on
> > Win32. Has anyone looked at this? Could it be grabbed for Python? This would
> > help heal one of the more difficult platform rifts. Emulating fork for Win32
> > looks quite difficult to me but if its already done...
> 
> Sigh. Me tired.
> 
> The message I posted a few minutes ago was actually referring to the
> system() work, not the fork() work.  I agree that the fork() emulation
> isn't Pythonic.

What about porting os.kill() to Windows (see my other post
with changed subject line in this thread) ? Wouldn't that
make sense ? (the os.spawn() APIs do return PIDs of spawned
processes, so calling os.kill() to send signals to these
seems like a feasible way to control them)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Thu Aug  3 12:11:24 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 12:11:24 +0200
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" 
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net>
Message-ID: <3989454C.5C9EF39B@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "GM" == Gareth McCaughan <Gareth.McCaughan at pobox.com> writes:
> 
>     GM> Consider the following piece of code, which takes a file
>     GM> and prepares a concordance saying on which lines each word
>     GM> in the file appears. (For real use it would need to be
>     GM> made more sophisticated.)
> 
>     |     line_number = 0
>     |     for line in open(filename).readlines():
>     |       line_number = line_number+1
>     |       for word in map(string.lower, string.split(line)):
>     |         existing_lines = word2lines.get(word, [])   |
>     |         existing_lines.append(line_number)          | ugh!
>     |         word2lines[word] = existing_lines           |
> 
> I've run into this same situation many times myself.  I agree it's
> annoying.  Annoying enough to warrant a change?  Maybe -- I'm not
> sure.
> 
>     GM> I suggest a minor change: another optional argument to
>     GM> "get" so that
> 
>     GM>     dict.get(item,default,flag)
> 
> Good idea, not so good solution.  Let's make it more explicit by
> adding a new method instead of a flag.  I'll use `put' here since this
> seems (in a sense) opposite of get() and my sleep addled brain can't
> think of anything more clever.  Let's not argue about the name of this
> method though -- if Guido likes the extension, he'll pick a good name
> and I go on record as agreeing with his name choice, just to avoid a
> protracted war.
> 
> A trivial patch to UserDict (see below) will let you play with this.
> 
> >>> d = UserDict()
> >>> word = 'hello'
> >>> d.get(word, [])
> []
> >>> d.put(word, []).append('world')
> >>> d.get(word)
> ['world']
> >>> d.put(word, []).append('gareth')
> >>> d.get(word)
> ['world', 'gareth']
> 
> Shouldn't be too hard to add equivalent C code to the dictionary
> object.

The following one-liner already does what you want:

	d[word] = d.get(word, []) + ['world']

... and it's in no way more readable than your proposed
.put() line ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From m.favas at per.dem.csiro.au  Thu Aug  3 12:54:05 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 03 Aug 2000 18:54:05 +0800
Subject: [Python-Dev] (s)re crashing in regrtest (was SRE 0.9.8 benchmarks)
Message-ID: <39894F4D.FB11F098@per.dem.csiro.au>

[Guido]
>> Hmm... here's an old problem that's returned (recursion on repeated
>> group matches, I expect):
>> 
>> >>> p=re.compile('(x)*')
>> >>> p
>> <SRE_Pattern object at 0x8127048>
>> >>> p.match(500000*'x')
>> Segmentation fault (core dumped)
>
>Ouch.
>
>Andrew, would you mind adding a test case for that to the re test
>suite?  It's important that this doesn't come back!

In fact, on my machine with the default stacksize of 2048kb, test_re.py
already exercises this bug. (Goes away if I do an "unlimit", of course.)
So testing for this deterministically is always going to be dependent on
the platform. How large do you want to go (reasonably)? - although I
guess core dumps should be avoided...

Mark

-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913



From effbot at telia.com  Thu Aug  3 13:10:24 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 13:10:24 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <398938BA.CA54A98E@lemburg.com>
Message-ID: <00eb01bffd3b$8324fb80$f2a6b5d4@hagrid>

mal wrote:

> Just for compares: would you mind running the search 
> routines in mxTextTools on the same machine ?

> > searching for "spam" in a string padded with "spaz" (1000 bytes on
> > each side of the target):
> > 
> > string.find     0.112 ms

texttools.find    0.080 ms

> > sre8.search     0.059
> > pre.search      0.122
> > 
> > unicode.find    0.130
> > sre16.search    0.065
> > 
> > same test, without any false matches (padded with "-"):
> > 
> > string.find     0.035 ms

texttools.find    0.083 ms

> > sre8.search     0.050
> > pre.search      0.116
> > 
> > unicode.find    0.031
> > sre16.search    0.055
> 
> Those results are probably due to the fact that string.find
> does a brute force search. If it would do a last match char
> first search or even Boyer-Moore (this only pays off for long
> search targets) then it should be a lot faster than [s|p]re.

does the TextTools algorithm work with arbitrary character
set sizes, btw?

</F>




From effbot at telia.com  Thu Aug  3 13:25:45 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 13:25:45 +0200
Subject: [Python-Dev] (s)re crashing in regrtest (was SRE 0.9.8 benchmarks)
References: <39894F4D.FB11F098@per.dem.csiro.au>
Message-ID: <00fc01bffd3d$91a36460$f2a6b5d4@hagrid>

mark favas wrote:
> >> >>> p.match(500000*'x')
> >> Segmentation fault (core dumped)
> >
> >Andrew, would you mind adding a test case for that to the re test
> >suite?  It's important that this doesn't come back!
> 
> In fact, on my machine with the default stacksize of 2048kb, test_re.py
> already exercises this bug. (Goes away if I do an "unlimit", of course.)
> So testing for this deterministically is always going to be dependent on
> the platform. How large do you want to go (reasonably)? - although I
> guess core dumps should be avoided...

afaik, there was no test in the standard test suite that
included run-away recursion...

what test is causing this error?

(adding a print statement to sre._compile should help you
figure that out...)

</F>




From MarkH at ActiveState.com  Thu Aug  3 13:19:50 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 3 Aug 2000 21:19:50 +1000
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1 failing...)
In-Reply-To: <398943C4.AFECEE36@lemburg.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBAEBGDDAA.MarkH@ActiveState.com>

> What about porting os.kill() to Windows (see my other post
> with changed subject line in this thread) ? Wouldn't that
> make sense ? (the os.spawn() APIs do return PIDs of spawned
> processes, so calling os.kill() to send signals to these
> seems like a feasible way to control them)

Signals are a bit of a problem on Windows.  We can terminate the thread
mid-execution, but a clean way of terminating a thread isn't obvious.

I admit I didn't really read the long manpage when you posted it, but is a
terminate-without-prejudice option any good?

Mark.




From MarkH at ActiveState.com  Thu Aug  3 13:34:09 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 3 Aug 2000 21:34:09 +1000
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1 failing...)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBAEBGDDAA.MarkH@ActiveState.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEBGDDAA.MarkH@ActiveState.com>

eek - a bit quick off the mark here ;-]

> Signals are a bit of a problem on Windows.  We can terminate the thread
> mid-execution, but a clean way of terminating a thread isn't obvious.

thread = process - you get the idea!

> terminate-without-prejudice option any good?

really should say

> terminate-without-prejudice only version any good?

Mark.




From m.favas at per.dem.csiro.au  Thu Aug  3 13:35:48 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 03 Aug 2000 19:35:48 +0800
Subject: [Python-Dev] (s)re crashing in regrtest (was SRE 0.9.8 benchmarks)
References: <39894F4D.FB11F098@per.dem.csiro.au> <00fc01bffd3d$91a36460$f2a6b5d4@hagrid>
Message-ID: <39895914.133D52A4@per.dem.csiro.au>

Fredrik Lundh wrote:
> 
> mark favas wrote:
> > In fact, on my machine with the default stacksize of 2048kb, test_re.py
> > already exercises this bug.
> 
> afaik, there was no test in the standard test suite that
> included run-away recursion...
> 
> what test is causing this error?
> 
> (adding a print statement to sre._compile should help you
> figure that out...)
> 
> </F>

The stack overflow is caused by the test (in test_re.py):

# Try nasty case that overflows the straightforward recursive
# implementation of repeated groups.
assert re.match('(x)*', 50000*'x').span() == (0, 50000)

(changing 50000 to 18000 works, 19000 overflows...)

-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913



From guido at beopen.com  Thu Aug  3 14:56:38 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 07:56:38 -0500
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: Your message of "Thu, 03 Aug 2000 12:11:24 +0200."
             <3989454C.5C9EF39B@lemburg.com> 
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net>  
            <3989454C.5C9EF39B@lemburg.com> 
Message-ID: <200008031256.HAA06107@cj20424-a.reston1.va.home.com>

> "Barry A. Warsaw" wrote:
> > Good idea, not so good solution.  Let's make it more explicit by
> > adding a new method instead of a flag.

You're learning to channel me. :-)

> > I'll use `put' here since this
> > seems (in a sense) opposite of get() and my sleep addled brain can't
> > think of anything more clever.  Let's not argue about the name of this
> > method though -- if Guido likes the extension, he'll pick a good name
> > and I go on record as agreeing with his name choice, just to avoid a
> > protracted war.

But I'll need input.  My own idea was dict.getput(), but that's ugly
as hell; dict.put() doesn't suggest that it also returns the value.

Protocol: if you have a suggestion for a name for this function, mail
it to me.  DON'T MAIL THE LIST.  (If you mail it to the list, that
name is disqualified.)  Don't explain to me why the name is good -- if
it's good, I'll know, if it needs an explanation, it's not good.  From
the suggestions I'll pick one if I can, and the first person to
suggest it gets a special mention in the implementation.  If I can't
decide, I'll ask the PythonLabs folks to help.

Marc-Andre writes:
> The following one-liner already does what you want:
> 
> 	d[word] = d.get(word, []).append('world')

Are you using a patch to the list object so that append() returns the
list itself?  Or was it just late?  For me, this makes d[word] = None.
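The failure mode here is that `list.append` mutates in place and returns None, so the one-liner rebinds the key to None. A quick demonstration, followed by the behavior the proposal wants (spelled here with `dict.setdefault`, the name this thread's proposal eventually got):

```python
d = {}
word = 'hello'
# The one-liner: append() returns None, so None is stored.
d[word] = d.get(word, []).append('world')
assert d[word] is None

# What was actually wanted: fetch-or-insert, then mutate the
# value that is really stored in the dictionary.
d2 = {}
d2.setdefault(word, []).append('world')
d2.setdefault(word, []).append('gareth')
print(d2[word])  # -> ['world', 'gareth']
```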

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From akuchlin at cnri.reston.va.us  Thu Aug  3 14:06:49 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 3 Aug 2000 08:06:49 -0400
Subject: [Python-Dev] Cookies.py in the core
In-Reply-To: <002601bffd21$5f23e800$f2a6b5d4@hagrid>; from effbot@telia.com on Thu, Aug 03, 2000 at 10:03:53AM +0200
References: <Pine.GSO.4.10.10008020958590.20425-100000@sundial> <200008021406.JAA02743@cj20424-a.reston1.va.home.com> <14728.38251.289986.857417@anthem.concentric.net> <200008022246.RAA04405@cj20424-a.reston1.va.home.com> <20000802175553.A30340@kronos.cnri.reston.va.us> <200008022310.SAA04518@cj20424-a.reston1.va.home.com> <20000802224102.A25837@newcnri.cnri.reston.va.us> <002601bffd21$5f23e800$f2a6b5d4@hagrid>
Message-ID: <20000803080649.A27333@newcnri.cnri.reston.va.us>

On Thu, Aug 03, 2000 at 10:03:53AM +0200, Fredrik Lundh wrote:
>readers of the daily URL might have noticed that he posted
>a socket timeout wrapper a few days ago:

Noted; thanks!  I've sent him an e-mail...

--amk



From akuchlin at cnri.reston.va.us  Thu Aug  3 14:14:56 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 3 Aug 2000 08:14:56 -0400
Subject: [Python-Dev] (s)re crashing in regrtest
In-Reply-To: <39895914.133D52A4@per.dem.csiro.au>; from m.favas@per.dem.csiro.au on Thu, Aug 03, 2000 at 07:35:48PM +0800
References: <39894F4D.FB11F098@per.dem.csiro.au> <00fc01bffd3d$91a36460$f2a6b5d4@hagrid> <39895914.133D52A4@per.dem.csiro.au>
Message-ID: <20000803081456.B27333@newcnri.cnri.reston.va.us>

On Thu, Aug 03, 2000 at 07:35:48PM +0800, Mark Favas wrote:
>The stack overflow is caused by the test (in test_re.py):
># Try nasty case that overflows the straightforward recursive
># implementation of repeated groups.

That would be the test I added last night to trip this problem, per
GvR's instructions.  I'll comment out the test for now, so that it can
be restored once the bug is fixed.

--amk



From mal at lemburg.com  Thu Aug  3 14:14:55 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 14:14:55 +0200
Subject: [Python-Dev] Still no new license -- but draft text available
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com> <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com>  
	            <398858BE.15928F47@lemburg.com> <200008022218.RAA04178@cj20424-a.reston1.va.home.com>
Message-ID: <3989623F.2AB4C00C@lemburg.com>

Guido van Rossum wrote:
>
> [...]
>
> > > > Some comments on the new version:
> > >
> > > > > 2. Subject to the terms and conditions of this License Agreement, CNRI
> > > > > hereby grants Licensee a nonexclusive, royalty-free, world-wide
> > > > > license to reproduce, analyze, test, perform and/or display publicly,
> > > > > prepare derivative works, distribute, and otherwise use Python 1.6b1
> > > > > alone or in any derivative version, provided, however, that CNRI's
> > > > > License Agreement is retained in Python 1.6b1, alone or in any
> > > > > derivative version prepared by Licensee.
> > > >
> > > > I don't think the latter (retaining the CNRI license alone) is
> > > > possible: you always have to include the CWI license.
> > >
> > > Wow.  I hadn't even noticed this!  It seems you can prepare a
> > > derivative version of the license.  Well, maybe.
> >
> > I think they mean "derivative version of Python 1.6b1", but in
> > court, the above wording could cause serious trouble for CNRI
> 
> You're right of course, I misunderstood you *and* the license.  Kahn
> explains it this way:
> 
> [Kahn]
> | Ok. I take the point being made. The way english works with ellipsis or
> | anaphoric references is to link back to the last anchor point. In the above
> | case, the last referent is Python 1.6b1.
> |
> | Thus, the last phrase refers to a derivative version of Python1.6b1
> | prepared by Licensee. There is no permission given to make a derivative
> | version of the License.
>
> > ... it seems 2.0 can reuse the CWI license after all ;-)
> 
> I'm not sure why you think that: 2.0 is a derivative version and is
> thus bound by the CNRI license as well as by the license that BeOpen
> adds.

If you interpret the above wording in the sense of "preparing
a derivative version of the License Agreement", BeOpen (or
anyone else) could just remove the CNRI License text. I
understand that this is not intended (that's why I put the smiley
there ;-).

> [...] 
>
> > > > > 3. In the event Licensee prepares a derivative work that is based on
> > > > > or incorporates Python 1.6b1or any part thereof, and wants to make the
> > > > > derivative work available to the public as provided herein, then
> > > > > Licensee hereby agrees to indicate in any such work the nature of the
> > > > > modifications made to Python 1.6b1.
> > > >
> > > > In what way would those indications have to be made ? A patch
> > > > or just text describing the new features ?
> > >
> > > Just text.  Bob Kahn told me that the list of "what's new" that I
> > > always add to a release would be fine.
> >
> > Ok, should be made explicit in the license though...
> 
> It's hard to specify this precisely -- in fact, the more precise you
> specify it the more scary it looks and the more likely they are to be
> able to find fault with the details of how you do it.  In this case, I
> believe (and so do lawyers) that vague is good!  If you write "ported
> to the Macintosh" and that's what you did, they can hardly argue with
> you, can they?

True.
 
> > > > What does "make available to the public" mean ? If I embed
> > > > Python in an application and make this application available
> > > > on the Internet for download would this fit the meaning ?
> > >
> > > Yes, that's why he doesn't use the word "publish" -- such an action
> > > would not be considered publication in the sense of the copyright law
> > > (at least not in the US, and probably not according to the Bern
> > > convention) but it is clearly making it available to the public.
> >
> > Ouch. That would mean I'd have to describe all additions,
> > i.e. the embedding application, in most details in order not to
> > breach the terms of the CNRI license.
> 
> No, additional modules aren't modifications to CNRI's work.  A change
> to the syntax to support curly braces is.

Ok, thanks for clarifying this.

(I guess the "vague is good" argument fits here as well.)
 
> > > > > 4. CNRI is making Python 1.6b1 available to Licensee on an "AS IS"
> > > > > basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
> > > > > IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
> > > > > DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
> > > > > FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6b1 WILL NOT
> > > > > INFRINGE ANY THIRD PARTY RIGHTS.
> > > > >
> > > > > 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
> > > > > SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
> > > > > AS A RESULT OF USING, MODIFYING OR DISTRIBUTING PYTHON 1.6b1, OR ANY
> > > > > DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.  SOME
> > > > > STATES DO NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY SO THE
> > > > > ABOVE DISCLAIMER MAY NOT APPLY TO LICENSEE.
> > > >
> > > > I would make this "...SOME STATES AND COUNTRIES...". E.g. in
> > > > Germany the above text would only be valid after an initial
> > > > 6 month period after installation, AFAIK (this period is
> > > > called "Gewährleistung"). Licenses from other vendors usually
> > > > add some extra license text to limit the liability in this period
> > > > to the carrier on which the software was received by the licensee,
> > > > e.g. the diskettes or CDs.
> > >
> > > I'll mention this to Kahn.
> 
> His response:
> 
> | Guido, Im not willing to do a study of international law here. If you
> | can have the person identify one country other than the US that does
> | not allow the above limitation or exclusion of liability and provide a
> | copy of the section of their law, ill be happy to change this to read
> | ".... SOME STATES OR COUNTRIES MAY NOT ALLOW ...." Otherwise, id just
> | leave it alone (i.e. as is) for now.
> 
> Please mail this info directly to Kahn at CNRI.Reston.Va.US if you
> believe you have the right information.  (You may CC me.)  Personally,
> I wouldn't worry.  If the German law says that part of a license is
> illegal, it doesn't make it any more or less illegal whether the
> license warns you about this fact.
> 
> I believe that in the US, as a form of consumer protection, some
> states not only disallow general disclaimers, but also require that
> licenses containing such disclaimers notify the reader that the
> disclaimer is not valid in their state, so that's where the language
> comes from.  I don't know about German law.

I haven't found an English version of the German law text,
but this is the title of the law which handles German
business conditions:

"Gesetz zur Regelung des Rechts der Allgemeinen Geschäftsbedingungen
(AGBG) - Act Governing Standard Business Conditions"
 
The relevant paragraph is no. 11 (10).

I'm not a lawyer, but from what I know:
terms generally excluding liability are invalid; liability
may be limited during the first 6 months after license
agreement and excluded after this initial period.

Anyway, you're right in that the notice about the paragraph
not necessarily applying to the licensee only has informational
character and that it doesn't do any harm otherwise.

> > > > > 6. This License Agreement will automatically terminate upon a material
> > > > > breach of its terms and conditions.
> > > >
> > > > Immediately ? Other licenses usually include a 30-60 day period
> > > > which allows the licensee to take actions. With the above text,
> > > > the license will put the Python copy in question into an illegal
> > > > state *prior* to having even been identified as conflicting with the
> > > > license.
> > >
> > > Believe it or not, this is necessary to ensure GPL compatibility!  An
> > > earlier draft had 30-60 days.  But the GPL doesn't, so this was deemed
> > > incompatible.  There's an easy workaround though: you fix your
> > > compliance and download a new copy, which gives you all the same
> > > rights again.
> >
> > Hmm, but what about the 100.000 copies of the embedding application
> > that have already been downloaded -- I would have to force them
> > to redownload the application (or even just a demo of it) in
> > order to reestablish the lawfulness of the copy action.
> 
> It's better not to violate the license.  But do you really think that
> they would go after you immediately if you show good intentions to
> rectify?

I don't intend to violate the license, but customers of 
an application embedding Python will have to agree to the
Python license to be able to legally use the Python engine
embedded in the application -- that is: if the application
unintentionally fails to meet the CNRI license terms
then the application as a whole would immediately become
unusable by the customer.

Now just think of an eCommerce application which produces
some $100k USD revenue each day... such a customer wouldn't
like these license terms at all :-(

BTW, I think that section 6. can be removed altogether, if
it doesn't include any reference to such a 30-60 day period:
the permissions set forth in a license are only valid in case
the license terms are adhered to whether it includes such
a section or not.

> > Not that I want to violate the license in any way, but there
> > seem to be quite a few pitfalls in the present text, some of
> > which are not clear at all (e.g. the paragraph 3).
> 
> I've warned Kahn about this effect of making the license bigger, but
> he simply disagrees (and we agree to disagree).  I don't know what
> else I could do about it, apart from putting a FAQ about the license
> on python.org -- which I intend to do.

Good (or bad ? :-()
 
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From akuchlin at cnri.reston.va.us  Thu Aug  3 14:22:44 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 3 Aug 2000 08:22:44 -0400
Subject: [Python-Dev] SRE 0.9.8 benchmarks
In-Reply-To: <006601bffd24$e25a9360$f2a6b5d4@hagrid>; from effbot@telia.com on Thu, Aug 03, 2000 at 10:27:39AM +0200
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us> <006601bffd24$e25a9360$f2a6b5d4@hagrid>
Message-ID: <20000803082244.C27333@newcnri.cnri.reston.va.us>

On Thu, Aug 03, 2000 at 10:27:39AM +0200, Fredrik Lundh wrote:
>if it will ever be translated, that is...

I'll agree to take a shot at it (which carries no implication of
actually finishing :) ) post-2.0.  It's silly for all of Tcl, Python,
Perl to grow their own implementations, when a common implementation
could benefit from having 3x the number of eyes looking at it and
optimizing it.

>fwiw, that pattern isn't portable:

No, it isn't; the straightforward implementation of repeated groups is
recursive, and fixing this requires jumping through hoops to make it
nonrecursive (or adopting Python's solution and only recursing up to
some upper limit).  re had to get this right because regex didn't
crash on this pattern, and neither do recent Perls.  The vast bulk of
my patches to PCRE were to fix this problem.
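The recursion-to-iteration rewrite described above can be illustrated with a toy matcher for the pattern `(x)*` -- an explanatory sketch, not SRE's or PCRE's actual engine:

```python
def match_repeat(s, ch):
    """Toy matcher for the pattern '(ch)*' anchored at position 0.

    A naive engine makes one C-level recursive call per repetition,
    which overflows the C stack on inputs like 500000*'x'.  Here each
    loop iteration stands in for one such call: the pending positions
    live on an explicit Python list, so depth is bounded by heap
    memory rather than stack size."""
    frames = [0]  # explicit stack of match positions
    while True:
        pos = frames[-1]
        if pos < len(s) and s[pos] == ch:
            frames.append(pos + 1)
        else:
            break
    # '(x)*' matches the empty string, so matching always succeeds.
    return (0, frames[-1])

print(match_repeat(500000 * 'x', 'x'))  # -> (0, 500000), no crash
```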

--amk



From guido at beopen.com  Thu Aug  3 15:31:16 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 08:31:16 -0500
Subject: [Python-Dev] New SRE core dump (was: SRE 0.9.8 benchmarks)
In-Reply-To: Your message of "Thu, 03 Aug 2000 10:27:39 +0200."
             <006601bffd24$e25a9360$f2a6b5d4@hagrid> 
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us>  
            <006601bffd24$e25a9360$f2a6b5d4@hagrid> 
Message-ID: <200008031331.IAA06319@cj20424-a.reston1.va.home.com>

> andrew wrote:
> 
> > Hmm... here's an old problem that's returned (recursion on repeated
> > group matches, I expect):
> > 
> > >>> p=re.compile('(x)*')
> > >>> p
> > <SRE_Pattern object at 0x8127048>
> > >>> p.match(500000*'x')
> > Segmentation fault (core dumped)

Effbot:
> fwiw, that pattern isn't portable:

Who cares -- it shouldn't dump core!

> ...but sure, I will fix that in 0.9.9 (SRE, not Python -- Christian
> has already taken care of the other one ;-).  but 0.9.9 won't be
> out before the 1.6b1 release...

I assume you are planning to put the backtracking stack back in, as
you mentioned in the checkin message?

> (and to avoid scaring the hell out of the beta testers, it's probably
> better to leave the test out of the regression suite until the bug is
> fixed...)

Even better, is it possible to put a limit on the recursion level
before 1.6b1 is released (tomorrow if we get final agreement on the
license) so at least it won't dump core?  Otherwise you'll get reports
of this from people who write this by accident...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Thu Aug  3 14:57:14 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 14:57:14 +0200
Subject: [Python-Dev] Buglist
Message-ID: <20000803145714.B266@xs4all.nl>

Just a little FYI and 'is this okay' message; I've been browsing the buglist
the last few days, doing a quick mark & message sweep over the bugs that I
can understand. I've mostly been closing bugs that look closed, and
assigning them when it's very obvious who it should be assigned to.

Should I be doing this already ? Is the bug-importing 'done', or is Jeremy
still busy with importing and fixing bug status (stati ?) and such ? Is
there something better to use as a guideline than my 'best judgement' ? I
think it's a good idea to 'resolve' most of the bugs on the list, because a
lot of them are really non-issues or no-longer-issues, and the sheer size of
the list prohibits a proper overview of the real issues :P However, it's
entirely possible we're going to miss out on a few bugs this way. I'm trying
my best to be careful, but I think overlooking a few bugs is better than
overlooking all of them because of the size of the list :P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Thu Aug  3 15:05:07 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Thu, 3 Aug 2000 09:05:07 -0400
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <1246814587-81974994@hypernet.com>

[Tim sez]
> The new rule is ...
>...  an \x escape is of the form
> 
>     \xhh
> 
> where h is a hex digit.  That's it.  

> >>> "\x123465" # \x12 -> \022, "3465" left alone
> '\0223465'

Hooray! I got bit often enough by that one ('e') that I forced 
myself to always use the wholly unnatural octal.
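Under the new rule, exactly two hex digits are consumed and everything after them is ordinary characters -- a quick check (this matches the behavior of later Pythons as well):

```python
s = "\x123465"            # \x12 is one character; "3465" is literal
assert len(s) == 5
assert s[0] == chr(0x12)
assert s == chr(0x12) + "3465"
```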

god-gave-us-sixteen-fingers-for-a-reason-ly y'rs


- Gordon



From fdrake at beopen.com  Thu Aug  3 15:06:51 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 09:06:51 -0400 (EDT)
Subject: [Python-Dev] printing xrange objects
Message-ID: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>

  At various points, there have been comments that xrange objects
should not print as lists but as xrange objects.  Taking a look at the
implementation, I noticed that if you call repr() (by name or by
backtick syntax), you get "the right thing"; the list representation
comes up when you print the object on a real file object.  The
tp_print slot of the xrange type produces the list syntax.  There is
no tp_str handler, so str(xrange(...)) is the same as
repr(xrange(...)).
  I propose ripping out the tp_print handler completely.  (And I've
already tested my patch. ;)
  Comments?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From moshez at math.huji.ac.il  Thu Aug  3 15:09:40 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 3 Aug 2000 16:09:40 +0300 (IDT)
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008031609130.26290-100000@sundial>

On Thu, 3 Aug 2000, Fred L. Drake, Jr. wrote:

>   I propose ripping out the tp_print handler completely.  (And I've
> already tested my patch. ;)
>   Comments?

+1. Like I always say: less code, less bugs.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From mal at lemburg.com  Thu Aug  3 15:31:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 15:31:34 +0200
Subject: [Python-Dev] SRE 0.9.8 benchmarks
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <398938BA.CA54A98E@lemburg.com> <00eb01bffd3b$8324fb80$f2a6b5d4@hagrid>
Message-ID: <39897436.E42F1C3C@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> 
> > Just for compares: would you mind running the search
> > routines in mxTextTools on the same machine ?
> 
> > > searching for "spam" in a string padded with "spaz" (1000 bytes on
> > > each side of the target):
> > >
> > > string.find     0.112 ms
> 
> texttools.find    0.080 ms
> 
> > > sre8.search     0.059
> > > pre.search      0.122
> > >
> > > unicode.find    0.130
> > > sre16.search    0.065
> > >
> > > same test, without any false matches (padded with "-"):
> > >
> > > string.find     0.035 ms
> 
> texttools.find    0.083 ms
> 
> > > sre8.search     0.050
> > > pre.search      0.116
> > >
> > > unicode.find    0.031
> > > sre16.search    0.055
> >
> > Those results are probably due to the fact that string.find
> > does a brute force search. If it would do a last match char
> > first search or even Boyer-Moore (this only pays off for long
> > search targets) then it should be a lot faster than [s|p]re.
> 
> does the TextTools algorithm work with arbitrary character
> set sizes, btw?

The find function creates a Boyer-Moore search object
for the search string (on every call). It compares 1-1
or using a translation table which is applied
to the searched text prior to comparing it to the search
string (this enables things like case insensitive
search and character sets, but is about 45% slower). Real-life
usage would be to create the search objects once per process
and then reuse them. The Boyer-Moore table calculation takes
some time...

But to answer your question: mxTextTools is 8-bit throughout.
A Unicode aware version will follow by the end of this year.
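The reusable-search-object idea can be sketched with a Boyer-Moore-Horspool skip table (a common simplification of full Boyer-Moore; this is illustrative, not mxTextTools' actual code):

```python
class BMHSearch:
    """Precompute the bad-character skip table once, then reuse the
    object for many searches -- a sketch of the 'create once per
    process, reuse' pattern described above."""

    def __init__(self, pattern):
        self.pattern = pattern
        m = len(pattern)
        # Shift distance for each character of the pattern except the
        # last; any other character shifts by the full pattern length.
        self.skip = {pattern[i]: m - 1 - i for i in range(m - 1)}

    def find(self, text):
        p, m = self.pattern, len(self.pattern)
        i = m - 1  # index of the window's last character
        while i < len(text):
            if text[i - m + 1:i + 1] == p:
                return i - m + 1
            i += self.skip.get(text[i], m)
        return -1

searcher = BMHSearch('spam')
print(searcher.find('spaz' * 250 + 'spam' + 'spaz' * 250))  # -> 1000
```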

Thanks for checking,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Thu Aug  3 15:40:05 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 15:40:05 +0200
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1 
 failing...)
References: <ECEPKNMJLHAPFFJHDOJBMEBGDDAA.MarkH@ActiveState.com>
Message-ID: <39897635.6C9FB82D@lemburg.com>

Mark Hammond wrote:
> 
> eek - a bit quick off the mark here ;-]
> 
> > Signals are a bit of a problem on Windows.  We can terminate the thread
> > mid-execution, but a clean way of terminating a thread isn't obvious.
> 
> thread = process - you get the idea!
> 
> > terminate-without-prejudice option any good?
> 
> really should say
> 
> > terminate-without-prejudice only version any good?

Well for one you can use signals for many other things than
just terminating a process (e.g. to have it reload its configuration
files). That's why os.kill() allows you to specify a signal.

The usual way of terminating a process on Unix from the outside
is to send it a SIGTERM (and if that doesn't work a SIGKILL).
I use this strategy a lot to control runaway client processes
and safely shut them down:

On Unix you can install a signal
handler in the Python program which then translates the SIGTERM
signal into a normal Python exception. Sending the signal then
causes the same as e.g. hitting Ctrl-C in a program: an
exception is raised asynchronously, but it can be handled
properly by the Python exception clauses to enable safe
shutdown of the process.

For background: the client processes in my application server
can execute arbitrary Python scripts written by users, i.e.
potentially buggy code which could effectively hose the server.
To control this, I use client processes which do the actual
exec code and watch them using a watchdog process. If the processes
don't return anything useful within a certain timeout limit,
the watchdog process sends them a SIGTERM and restarts a new
client.

Threads would not support this type of strategy, so I'm looking
for something similar on Windows, Win2k to be more specific.
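The Unix-side technique described above (translating SIGTERM into an ordinary exception, the way SIGINT becomes KeyboardInterrupt) can be sketched like this; `os.kill` on the process's own pid stands in for the watchdog, and this only works on Unix, which is exactly the gap being discussed:

```python
import os
import signal

class TerminateRequest(Exception):
    """SIGTERM translated into a normal Python exception."""

def _on_sigterm(signum, frame):
    # Runs asynchronously in the main thread when SIGTERM arrives.
    raise TerminateRequest

signal.signal(signal.SIGTERM, _on_sigterm)

shutdown_ran = False
try:
    os.kill(os.getpid(), signal.SIGTERM)  # watchdog sends SIGTERM
    # ... the client's normal work would continue here ...
except TerminateRequest:
    shutdown_ran = True  # safe-shutdown path: close files, etc.
```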

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Thu Aug  3 16:50:26 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 09:50:26 -0500
Subject: [Python-Dev] Still no new license -- but draft text available
In-Reply-To: Your message of "Thu, 03 Aug 2000 14:14:55 +0200."
             <3989623F.2AB4C00C@lemburg.com> 
References: <200008020409.XAA01355@cj20424-a.reston1.va.home.com> <3987E5E1.A2B20241@lemburg.com> <200008021511.KAA03049@cj20424-a.reston1.va.home.com> <398858BE.15928F47@lemburg.com> <200008022218.RAA04178@cj20424-a.reston1.va.home.com>  
            <3989623F.2AB4C00C@lemburg.com> 
Message-ID: <200008031450.JAA06505@cj20424-a.reston1.va.home.com>

> > > ... it seems 2.0 can reuse the CWI license after all ;-)
> > 
> > I'm not sure why you think that: 2.0 is a derivative version and is
> > thus bound by the CNRI license as well as by the license that BeOpen
> > adds.
> 
> If you interpret the above wording in the sense of "preparing
> a derivative version of the License Agreement", BeOpen (or
> anyone else) could just remove the CNRI License text. I
> understand that this is not intended (that's why I put the smiley
> there ;-).

Please forget this interpretation! :-)

> I haven't found an English version of the German law text,
> but this is the title of the law which handles German
> business conditions:
> 
> "Gesetz zur Regelung des Rechts der Allgemeinen Geschäftsbedingungen
> (AGBG) - Act Governing Standard Business Conditions"
>  
> The relevant paragraph is no. 11 (10).
> 
> I'm not a lawyer, but from what I know:
> terms generally excluding liability are invalid; liability
> may be limited during the first 6 months after license
> agreement and excluded after this initial period.
> 
> Anyway, you're right in that the notice about the paragraph
> not necessarily applying to the licensee is only informational
> and that it doesn't do any harm otherwise.

OK, we'll just let this go.

> > It's better not to violate the license.  But do you really think that
> > they would go after you immediately if you show good intentions to
> > rectify?
> 
> I don't intend to violate the license, but customers of 
> an application embedding Python will have to agree to the
> Python license to be able to legally use the Python engine
> embedded in the application -- that is: if the application
> unintentionally fails to meet the CNRI license terms
> then the application as a whole would immediately become
> unusable by the customer.
> 
> Now just think of an eCommerce application which produces
> some $100k USD revenue each day... such a customer wouldn't
> like these license terms at all :-(

That depends.  Unintentional failure to meet the license terms seems
unlikely to me considering that the license doesn't impose a lot of
requirements.  It's vague in its definitions, but I think that works to
your advantage.

> BTW, I think that section 6. can be removed altogether, if
> it doesn't include any reference to such a 30-60 day period:
> the permissions set forth in a license are only valid in case
> the license terms are adhered to whether it includes such
> a section or not.

Try to explain that to a lawyer. :)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Thu Aug  3 15:55:28 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 15:55:28 +0200
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" 
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net>  
	            <3989454C.5C9EF39B@lemburg.com> <200008031256.HAA06107@cj20424-a.reston1.va.home.com>
Message-ID: <398979D0.5AF80126@lemburg.com>

Guido van Rossum wrote:
> 
> Marc-Andre writes:
> > The following one-liner already does what you want:
> >
> >       d[word] = d.get(word, []).append('world')
> 
> Are you using a patch to the list object so that append() returns the
> list itself?  Or was it just late?  For me, this makes d[word] = None.

Ouch... looks like I haven't had enough coffee today. I'll
fix that immediately ;-)

How about making this a method:

def inplace(dict, key, default):
    value = dict.get(key, default)
    dict[key] = value
    return value

>>> d = {}
>>> inplace(d, 'hello', []).append('world')
>>> d
{'hello': ['world']}
>>> inplace(d, 'hello', []).append('world')
>>> d
{'hello': ['world', 'world']}
>>> inplace(d, 'hello', []).append('world')
>>> d
{'hello': ['world', 'world', 'world']}

(Hope I got it right this time ;-)
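
An aside on this exchange: the original one-liner fails because
list.append returns None, so None gets stored back into the dictionary.
The update-in-place pattern proposed here can be written with
dict.setdefault (a later addition to the dictionary API, not available at
the time of this thread), which stores the default only when the key is
missing and returns the stored value:

```python
# Equivalent of the inplace() helper via dict.setdefault: the default
# is stored when the key is absent, and the stored list is returned,
# so the appends land inside the dictionary.
d = {}
d.setdefault('hello', []).append('world')
d.setdefault('hello', []).append('world')
print(d)  # → {'hello': ['world', 'world']}
```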

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Thu Aug  3 16:14:13 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 10:14:13 -0400 (EDT)
Subject: [Python-Dev] Buglist
In-Reply-To: <20000803145714.B266@xs4all.nl>
References: <20000803145714.B266@xs4all.nl>
Message-ID: <14729.32309.807363.345594@bitdiddle.concentric.net>

I am done moving old bugs from Jitterbug to SF.  There are still some
new bugs being submitted to Jitterbug, which I'll need to move one at
a time.

In principle, it's okay to mark bugs as closed, as long as you are
*sure* that the bug has been fixed.  If you try to reproduce a bug on
your system and can't, it's not clear that it has been fixed.  It
might be a platform-specific bug, for example.  I would prefer it if
you only closed bugs where you can point to the CVS checkin that fixed
it.

Whenever you fix a bug, you should add a test case to the regression
test that would have caught the bug.  Have you done that for any of
the bugs you've marked as closed?

You should also add a comment to any bug you're closing, explaining why
it is closed.

It is good to assign bugs to people -- probably even if we end up
playing hot potato for a while.  If a bug is assigned to you, you
should either try to fix it, diagnose it, or assign it to someone
else.

> I think overlooking a few bugs is better than overlooking all of
> them because of the size of the list :P 

You seem to be arguing that the sheer number of bug reports bothers
you and that it's better to have a shorter list of bugs regardless of
whether they're actually fixed.  Come on! I don't want to overlook any
bugs.

Jeremy



From bwarsaw at beopen.com  Thu Aug  3 16:25:20 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 10:25:20 -0400 (EDT)
Subject: [Python-Dev] Go \x yourself
References: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <14729.32976.819777.292096@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> The new rule is the same as Perl uses for \x escapes in -w
    TP> mode, except that Python will raise ValueError at compile-time
    TP> for an invalid \x escape: an \x escape is of the form

    TP>     \xhh

    TP> where h is a hex digit.  That's it.

+1
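
To make the rule concrete (a hedged sketch against a modern interpreter;
the 1.6-era proposal was ValueError, and later interpreters surface the
rejection as a SyntaxError): \x takes exactly two hex digits, and anything
else is rejected when the literal is compiled:

```python
# \x followed by exactly two hex digits denotes a single character...
assert '\x41' == 'A'

# ...and an invalid \x escape is rejected at compile time.
try:
    compile(r"'\xg1'", '<test>', 'eval')
    outcome = 'accepted'
except (ValueError, SyntaxError):
    outcome = 'rejected at compile time'
print(outcome)  # → rejected at compile time
```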



From bwarsaw at beopen.com  Thu Aug  3 16:41:10 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 10:41:10 -0400 (EDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" 
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
	<14728.63466.263123.434708@anthem.concentric.net>
	<3989454C.5C9EF39B@lemburg.com>
Message-ID: <14729.33926.145263.296629@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> The following one-liner already does what you want:

    M> 	d[word] = d.get(word, []).append('world')

    M> ... and it's in no way more readable than your proposed
    M> .put() line ;-)

Does that mean it's less readable?  :)

-Barry



From mal at lemburg.com  Thu Aug  3 16:49:01 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2000 16:49:01 +0200
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" 
 method
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
		<14728.63466.263123.434708@anthem.concentric.net>
		<3989454C.5C9EF39B@lemburg.com> <14729.33926.145263.296629@anthem.concentric.net>
Message-ID: <3989865D.A52964D6@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     M> The following one-liner already does what you want:
> 
>     M>  d[word] = d.get(word, []).append('world')
> 
>     M> ... and it's in no way more readable than your proposed
>     M> .put() line ;-)
> 
> Does that mean it's less readable?  :)

I find these .go_home().get_some_cheese().and_eat()...
constructions rather obscure.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Thu Aug  3 16:49:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 16:49:49 +0200
Subject: [Python-Dev] Buglist
In-Reply-To: <14729.32309.807363.345594@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 03, 2000 at 10:14:13AM -0400
References: <20000803145714.B266@xs4all.nl> <14729.32309.807363.345594@bitdiddle.concentric.net>
Message-ID: <20000803164949.D13365@xs4all.nl>

On Thu, Aug 03, 2000 at 10:14:13AM -0400, Jeremy Hylton wrote:

> In principle, it's okay to mark bugs as closed, as long as you are
> *sure* that the bug has been fixed.  If you try to reproduce a bug on
> your system and can't, it's not clear that it has been fixed.  It
> might be a platform-specific bug, for example.  I would prefer it if
> you only closed bugs where you can point to the CVS checkin that fixed
> it.

This is tricky for some bug reports, as they don't say *anything* about the
platform in question. However, I have been conservative, and haven't done
anything if I didn't either have the same platform as mentioned and could
reproduce the bug with 1.6a2 and/or Python 1.5.2 (very handy to have them
lying around) but not with current CVS, OR could find the CVS checkin that
fixed them. For instance, the incorrect usage of PyMem_Del() in some modules
(bug #110638) *seems* to be fixed, but I can't really test it and the CVS
checkin(s) that seem to fix it don't even mention the bug or the reason for
the change.

> Whenever you fix a bug, you should add a test case to the regression
> test that would have caught the bug.  Have you done that for any of
> the bugs you've marked as closed?

No, because all the bugs I've closed so far are 'obviously fixed', by
someone other than me. I would write one if I fixed the bug myself, I guess.
Also, most of these are more 'issues' than 'bugs', like someone
complaining about installing Python without Tcl/Tk and Tkinter not working,
threads misbehaving on some systems (didn't close that one, just added a
remark), etc.

> You should also add a comment at any bug you're closing explaining why
> it is closed.

Of course. I also forward the SF excerpt to the original submitter, since
they are not likely to browse the SF buglist and spot their own bug.

> It is good to assign bugs to people -- probably even if we end up
> playing hot potato for a while.  If a bug is assigned to you, you
> should either try to fix it, diagnose it, or assign it to someone
> else.

Hm, I did that for a few, but it's not very easy to find the right person,
in some cases. Bugs in the 're' module, should they go to amk or to /F ? XML
stuff, should it go to Paul Prescod or some of the other people who seem to
be doing something with XML ? A 'capabilities' list would be pretty neat!

> > I think overlooking a few bugs is better than overlooking all of
> > them because of the size of the list :P 

> You seem to be arguing that the sheer number of bug reports bothers
> you and that it's better to have a shorter list of bugs regardless of
> whether they're actually fixed.  Come on! I don't want to overlook any
> bugs.

No, that wasn't what I meant :P It's just that some bugs are vague, and
*seem* fixed, but are still an issue on some combination of compiler,
libraries, OS, etc. Also, there is the question of whether something is a
bug or a feature, or an artifact of compiler, library or design. A quick
pass over the bugs will either have to draw a firm line somewhere, or keep
most of the bugs and hope someone will look at them.

Having 9 out of 10 bugs waiting in the buglist without anyone looking at
them, because they're too vague and everyone thinks they're not 'their'
field of expertise and expects someone else to look at them, defeats the
purpose of the buglist. But closing those bugreports, explaining the
problem and even forwarding the excerpt to the submitter *might* result in
the original submitter, who still has the bug, forgetting about explaining
it further, whereas a couple of hours spent trying to duplicate the bug
might locate it. I personally just wouldn't want to be the one doing all
that effort ;)

Just-trying-to-help-you-do-your-job---not-taking-it-over-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Thu Aug  3 17:00:03 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 17:00:03 +0200
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Aug 03, 2000 at 09:06:51AM -0400
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
Message-ID: <20000803170002.C266@xs4all.nl>

On Thu, Aug 03, 2000 at 09:06:51AM -0400, Fred L. Drake, Jr. wrote:

> There is no tp_str handler, so str(xrange(...)) is the same as
> repr(xrange(...)).
>   I propose ripping out the tp_print handler completely.  (And I've
> already tested my patch. ;)
>   Comments?

+0... I would say 'swap str and repr', because str(xrange) does what
repr(xrange) should do, and the other way 'round:

>>> x = xrange(1000)
>>> repr(x)
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
... ... ... 
... 998, 999)

>>> str(x)
'(xrange(0, 1000, 1) * 1)'

But I don't really care either way.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Thu Aug  3 17:14:57 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 11:14:57 -0400 (EDT)
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <20000803170002.C266@xs4all.nl>
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
	<20000803170002.C266@xs4all.nl>
Message-ID: <14729.35953.19610.61905@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > >>> x = xrange(1000)
 > >>> repr(x)
 > (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
 > ... ... ... 
 > ... 998, 999)
 > 
 > >>> str(x)
 > '(xrange(0, 1000, 1) * 1)'

  What version is this with?  1.5.2 gives me:

Python 1.5.2 (#1, May  9 2000, 15:05:56)  [GCC 2.95.3 19991030 (prerelease)] on linux-i386
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> x = xrange(2)
>>> str(x)
'(xrange(0, 2, 1) * 1)'
>>> repr(x)
'(xrange(0, 2, 1) * 1)'
>>> x
(0, 1)

  The 1.6b1 that's getting itself ready says this:

Python 1.6b1 (#19, Aug  2 2000, 01:11:29)  [GCC 2.95.3 19991030 (prerelease)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
Module readline not available.
>>> x = xrange(2)
>>> str(x)
'(xrange(0, 2, 1) * 1)'
>>> repr(x)
'(xrange(0, 2, 1) * 1)'
>>> x
(0, 1)

  What I'm proposing is:

Python 2.0b1 (#116, Aug  2 2000, 15:35:35)  [GCC 2.95.3 19991030 (prerelease)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> x = xrange(2)
>>> str(x)
'xrange(0, 2, 1)'
>>> repr(x)
'xrange(0, 2, 1)'
>>> x
xrange(0, 2, 1)

  (Where the outer (... * n) is added only when n != 1, 'cause I think
that's just ugly.)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Thu Aug  3 17:30:23 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 3 Aug 2000 17:30:23 +0200
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <14729.35953.19610.61905@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Aug 03, 2000 at 11:14:57AM -0400
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com> <20000803170002.C266@xs4all.nl> <14729.35953.19610.61905@cj42289-a.reston1.va.home.com>
Message-ID: <20000803173023.D266@xs4all.nl>

On Thu, Aug 03, 2000 at 11:14:57AM -0400, Fred L. Drake, Jr. wrote:

> Thomas Wouters writes:
>  > >>> x = xrange(1000)
>  > >>> repr(x)
>  > (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
>  > ... ... ... 
>  > ... 998, 999)
>  > 
>  > >>> str(x)
>  > '(xrange(0, 1000, 1) * 1)'

>   What version is this with?  1.5.2 gives me:
> 
> Python 1.5.2 (#1, May  9 2000, 15:05:56)  [GCC 2.95.3 19991030 (prerelease)] on linux-i386
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> >>> x = xrange(2)
> >>> str(x)
> '(xrange(0, 2, 1) * 1)'
> >>> repr(x)
> '(xrange(0, 2, 1) * 1)'
> >>> x
> (0, 1)

Sorry, my bad. I just did 'x', and assumed it called repr(). I guess my
newbiehood shows in that I thought 'print x' always called 'str(x)'. Like I
replied to Tim this morning, after he caught me in the same kind of
embarrassing thinko:

Sigh, that's what I get for getting up when my GF had to and being at the
office at 8am. Don't mind my postings today, they're likely 99% brainfart.

Seeing as how 'print "range: %s" % x' already used the 'str' and 'repr'
output, I see no reason not to make 'print x' do the same. So +1.

> >>> x
> xrange(0, 2, 1)
> 
>   (Where the outer (... * n) is added only when n != 1, 'cause I think
> that's just ugly.)

Why not remove the first and last argument, if they are respectively 0 and 1?

>>> xrange(100)
xrange(100)
>>> xrange(10,100)
xrange(10, 100)
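
The suggested output can be pinned down with a small formatting helper
(hypothetical code, just to make the rule explicit: drop the step when it
is 1, and drop the start as well when it is 0):

```python
def xrange_repr(start, stop, step=1):
    # Hypothetical formatter for the proposed repr: omit default
    # arguments so the output round-trips as the shortest valid call.
    if step != 1:
        return "xrange(%d, %d, %d)" % (start, stop, step)
    if start != 0:
        return "xrange(%d, %d)" % (start, stop)
    return "xrange(%d)" % stop

print(xrange_repr(0, 100))     # → xrange(100)
print(xrange_repr(10, 100))    # → xrange(10, 100)
print(xrange_repr(10, 4, -1))  # → xrange(10, 4, -1)
```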

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Thu Aug  3 17:48:28 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 11:48:28 -0400 (EDT)
Subject: [Python-Dev] printing xrange objects
In-Reply-To: <20000803173023.D266@xs4all.nl>
References: <14729.28267.517936.331801@cj42289-a.reston1.va.home.com>
	<20000803170002.C266@xs4all.nl>
	<14729.35953.19610.61905@cj42289-a.reston1.va.home.com>
	<20000803173023.D266@xs4all.nl>
Message-ID: <14729.37964.46818.653202@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Sorry, my bad. I just did 'x', and assumed it called repr(). I guess my
 > newbiehood shows in that I thought 'print x' always called 'str(x)'. Like I

  That's the evil beauty of tp_print -- nobody really expects it
because most types don't implement it (and don't need to); I seem to
recall Guido saying it was a performance issue for certain types, but
don't recall the specifics.

 > Why not remove the first and last argument, if they are respectively 0 and 1?

  I agree!  In fact, always removing the last argument if it == 1 is a
good idea as well.  Here's the output from the current patch:

>>> xrange(2)
xrange(2)
>>> xrange(2, 4)
xrange(2, 4)
>>> x = xrange(10, 4, -1)
>>> x
xrange(10, 4, -1)
>>> x.tolist()
[10, 9, 8, 7, 6, 5]
>>> x*3
(xrange(10, 4, -1) * 3)



  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From jeremy at beopen.com  Thu Aug  3 18:26:51 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 12:26:51 -0400 (EDT)
Subject: [Python-Dev] Buglist
In-Reply-To: <20000803164949.D13365@xs4all.nl>
References: <20000803145714.B266@xs4all.nl>
	<14729.32309.807363.345594@bitdiddle.concentric.net>
	<20000803164949.D13365@xs4all.nl>
Message-ID: <14729.40267.557470.612144@bitdiddle.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

  >> It is good to assign bugs to people -- probably even if we end up
  >> playing hot potato for a while.  If a bug is assigned to you, you
  >> should either try to fix it, diagnose it, or assign it to someone
  >> else.

  TW> Hm, I did that for a few, but it's not very easy to find the
  TW> right person, in some cases. Bugs in the 're' module, should
  TW> they go to amk or to /F ? XML stuff, should it go to Paul
  TW> Prescod or some of the other people who seem to be doing
  TW> something with XML ? A 'capabilities' list would be pretty neat!

I had the same problem when I was trying to assign bugs.  It is seldom
clear who should be assigned a bug.  I have used two rules when
processing open, uncategorized bugs:

    * If you have a reasonable guess about who to assign a bug to,
    it's better to assign to the wrong person than not to assign at
    all.  If the wrong person gets it, she can assign it to someone
    else. 

    * If you don't know who to assign it to, at least give it a
    category.  That allows someone who feels expert in a category
    (e.g. a Tkinter guru), to easily scan all the unassigned bugs in
    that category.

  >> You seem to be arguing that the sheer number of bug reports
  >> bothers you and that it's better to have a shorter list of bugs
  >> regardless of whether they're actually fixed.  Come on! I don't
  >> want to overlook any bugs.

  TW> No, that wasn't what I meant :P 

Sorry.  I didn't believe you really meant that, but you came off
sounding like you did :-).

  TW> Having 9 out of 10 bugs waiting in the buglist without anyone
  TW> looking at them because it's too vague and everyone thinks not
  TW> 'their' field of expertise and expect someone else to look at
  TW> them, defeats the purpose of the buglist. 

I still don't agree here.  If you're not fairly certain about the bug,
keep it on the list.  I don't see too much harm in having vague, open
bugs on the list.  

  TW>                                           But closing those
  TW> bugreports, explaining the problem and even forwarding the
  TW> excerpt to the submittor *might* result in the original
  TW> submittor, who still has the bug, to forget about explaining it
  TW> further, whereas a couple of hours trying to duplicate the bug
  TW> might locate it. I personally just wouldn't want to be the one
  TW> doing all that effort ;)

You can send mail to the person who reported the bug and ask her for
more details without closing it.

  TW> Just-trying-to-help-you-do-your-job---not-taking-it-over-ly

And I appreciate the help!! The more bugs we have categorized or
assigned, the better.

of-course-actually-fixing-real-bugs-is-good-too-ly y'rs,
Jeremy





From moshez at math.huji.ac.il  Thu Aug  3 18:44:28 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 3 Aug 2000 19:44:28 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
Message-ID: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>

Suppose I'm fixing a bug in the library. I want peer review for my fix,
but I need none for my new "would have caught" test cases. Is it
considered alright to check-in right away the test case, breaking the test
suite, and to upload a patch to SF to fix it? Or should the patch include
the new test cases? 

The XP answer would be "hey, you have to checkin the breaking test case
right away", and I'm inclined to agree.

I really want to break the standard library, just because I'm a sadist --
but seriously, we need tests that break more often, so bugs will be easier
to fix.

waiting-for-fellow-sadists-ly y'rs, Z.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From guido at beopen.com  Thu Aug  3 19:54:55 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 12:54:55 -0500
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Your message of "Thu, 03 Aug 2000 19:44:28 +0300."
             <Pine.GSO.4.10.10008031940420.2575-100000@sundial> 
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial> 
Message-ID: <200008031754.MAA08812@cj20424-a.reston1.va.home.com>

> Suppose I'm fixing a bug in the library. I want peer review for my fix,
> but I need none for my new "would have caught" test cases. Is it
> considered alright to check-in right away the test case, breaking the test
> suite, and to upload a patch to SF to fix it? Or should the patch include
> the new test cases? 
> 
> The XP answer would be "hey, you have to checkin the breaking test case
> right away", and I'm inclined to agree.
> 
> I really want to break the standard library, just because I'm a sadist --
> but seriously, we need tests that break more often, so bugs will be easier
> to fix.

In theory I'm with you.  In practice, each time the test suite breaks,
we get worried mail from people who aren't following the list closely,
did a checkout, and suddenly find that the test suite breaks.  That
just adds noise to the list.  So I'm against it.

-1

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From moshez at math.huji.ac.il  Thu Aug  3 18:55:41 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 3 Aug 2000 19:55:41 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008031754.MAA08812@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008031954110.2575-100000@sundial>

On Thu, 3 Aug 2000, Guido van Rossum wrote:

> In theory I'm with you.  In practice, each time the test suite breaks,
> we get worried mail from people who aren't following the list closely,
> did a checkout, and suddenly find that the test suite breaks.  That
> just adds noise to the list.  So I'm against it.
> 
> -1

In theory, theory and practice shouldn't differ. In practice, they do.
Guido, you're way too much of a realist <1.6 wink>
Oh, well.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From gstein at lyra.org  Thu Aug  3 19:04:01 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 10:04:01 -0700
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 03, 2000 at 07:44:28PM +0300
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
Message-ID: <20000803100401.T19525@lyra.org>

On Thu, Aug 03, 2000 at 07:44:28PM +0300, Moshe Zadka wrote:
> Suppose I'm fixing a bug in the library. I want peer review for my fix,
> but I need none for my new "would have caught" test cases. Is it
> considered alright to check-in right away the test case, breaking the test
> suite, and to upload a patch to SF to fix it? Or should the patch include
> the new test cases?

If you're fixing a bug, then check in *both* pieces and call explicitly for
a peer reviewer (plus the people watching -checkins). If you don't quite fix
the bug, then a second checkin can smooth things out.

Let's not get too caught up in "process", to the exclusion of being
productive about bug fixing.

> The XP answer would be "hey, you have to checkin the breaking test case
> right away", and I'm inclined to agree.
> 
> I really want to break the standard library, just because I'm a sadist --
> but seriously, we need tests that break more often, so bugs will be easier
> to fix.

I really want to see less process and discussion, and more code.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From effbot at telia.com  Thu Aug  3 19:19:03 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 19:19:03 +0200
Subject: [Python-Dev] New SRE core dump (was: SRE 0.9.8 benchmarks)
References: <015a01bffcc9$ed942bc0$f2a6b5d4@hagrid> <20000802180059.B30340@kronos.cnri.reston.va.us>              <006601bffd24$e25a9360$f2a6b5d4@hagrid>  <200008031331.IAA06319@cj20424-a.reston1.va.home.com>
Message-ID: <007401bffd6e$ed9bbde0$f2a6b5d4@hagrid>

guido wrote:
> > ...but sure, I will fix that in 0.9.9 (SRE, not Python -- Christian
> > has already taken care of the other one ;-).  but 0.9.9 won't be
> > out before the 1.6b1 release...
> 
> I assume you are planning to put the backtracking stack back in, as
> you mentioned in the checkin message?

yup -- but that'll have to wait a few more days...

> > (and to avoid scaring the hell out of the beta testers, it's probably
> > better to leave the test out of the regression suite until the bug is
> > fixed...)
> 
> Even better, is it possible to put a limit on the recursion level
> before 1.6b1 is released (tomorrow if we get final agreement on the
> license) so at least it won't dump core?

shouldn't be too hard, given that I added a "recursion level
counter" in _sre.c revision 2.30.  I just added the necessary
if-statement.

</F>




From gstein at lyra.org  Thu Aug  3 20:39:08 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 3 Aug 2000 11:39:08 -0700
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008031754.MAA08812@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 03, 2000 at 12:54:55PM -0500
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial> <200008031754.MAA08812@cj20424-a.reston1.va.home.com>
Message-ID: <20000803113908.X19525@lyra.org>

On Thu, Aug 03, 2000 at 12:54:55PM -0500, Guido van Rossum wrote:
> > Suppose I'm fixing a bug in the library. I want peer review for my fix,
> > but I need none for my new "would have caught" test cases. Is it
> > considered alright to check-in right away the test case, breaking the test
> > suite, and to upload a patch to SF to fix it? Or should the patch include
> > the new test cases? 
> > 
> > The XP answer would be "hey, you have to checkin the breaking test case
> > right away", and I'm inclined to agree.
> > 
> > I really want to break the standard library, just because I'm a sadist --
> > but seriously, we need tests that break more often, so bugs will be easier
> > to fix.
> 
> In theory I'm with you.  In practice, each time the test suite breaks,
> we get worried mail from people who aren't following the list closely,
> did a checkout, and suddenly find that the test suite breaks.  That
> just adds noise to the list.  So I'm against it.

Tell those people to chill out for a few days and not be so jumpy. You're
talking about behavior that can easily be remedied.

It is a simple statement about the CVS repository: "CVS builds but may not
pass the test suite in certain cases" rather than "CVS is perfect".

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From tim_one at email.msn.com  Thu Aug  3 20:49:02 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 14:49:02 -0400
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>

[Moshe Zadka]
> Suppose I'm fixing a bug in the library. I want peer review
> for my fix, but I need none for my new "would have caught"
> test cases. Is it considered alright to check-in right away
> the test case, breaking the test suite, and to upload a patch
> to SF to fix it? Or should the patch include the new test cases?
>
> The XP answer would be "hey, you have to checkin the breaking
> test case right away", and I'm inclined to agree.

It's abhorrent to me to ever leave the tree in a state where a test is
"expected to fail".  If it's left in a failing state for a brief period, at
best other developers will waste time wondering whether it's due to
something they did.  If it's left in a failing state longer than that,
people quickly stop paying attention to failures at all (the difference
between "all passed" and "something failed" is huge, the differences among 1
or 2 or 3 or ... failures get overlooked, and we've seen over and over that
when 1 failure is allowed to persist, others soon join it).

You can check in an anti-test right away, though:  a test that passes so
long as the code remains broken <wink>.
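
Tim's "anti-test" has a modern counterpart (an aside; the unittest
framework and its expectedFailure decorator postdate this thread): mark
the test as an expected failure, so the suite stays green while the bug is
open and reports an "unexpected success" once it is fixed:

```python
import unittest

def buggy():   # hypothetical code with a known, open bug
    return 0   # should return 42

class KnownBug(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        # Fails while the bug persists, which is recorded as an
        # *expected* failure rather than a suite failure.
        self.assertEqual(buggy(), 42)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(KnownBug)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(len(result.expectedFailures), result.wasSuccessful())  # → 1 True
```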





From jeremy at beopen.com  Thu Aug  3 20:58:15 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 14:58:15 -0400 (EDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
	<LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
Message-ID: <14729.49351.574550.48521@bitdiddle.concentric.net>

I'm Tim on this issue.  As officially appointed release manager for
2.0, I set some guidelines for checking in code.  One is that no
checkin should cause the regression test to fail.  If it does, I'll
back it out.

If you didn't review the contribution guidelines when they were posted
on this list, please look at PEP 200 now.

Jeremy



From jeremy at beopen.com  Thu Aug  3 21:00:23 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 15:00:23 -0400 (EDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <14729.49351.574550.48521@bitdiddle.concentric.net>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
	<LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
	<14729.49351.574550.48521@bitdiddle.concentric.net>
Message-ID: <14729.49479.677157.957162@bitdiddle.concentric.net>

>>>>> "JH" == Jeremy Hylton <jeremy at beopen.com> writes:

  JH> I'm Tim on this issue.

Make that "I'm with Tim on this issue."  I'm sure it would be fun to
channel Tim, but I don't have the skills for that.

Jeremy



From guido at beopen.com  Thu Aug  3 22:02:07 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 15:02:07 -0500
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Your message of "Thu, 03 Aug 2000 11:39:08 MST."
             <20000803113908.X19525@lyra.org> 
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial> <200008031754.MAA08812@cj20424-a.reston1.va.home.com>  
            <20000803113908.X19525@lyra.org> 
Message-ID: <200008032002.PAA17349@cj20424-a.reston1.va.home.com>

> Tell those people to chill out for a few days and not be so jumpy. You're
> talking about behavior that can easily be remedied.
> 
> It is a simple statement about the CVS repository: "CVS builds but may not
> pass the test suite in certain cases" rather than "CVS is perfect"

I would agree if it was only the python-dev crowd -- they are easily
trained.  But there are lots of others who check out the tree, so it
would be a continuing education process.  I don't see what good it does.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Thu Aug  3 21:13:08 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 3 Aug 2000 21:13:08 +0200
Subject: [Python-Dev] Breaking Test Cases on Purpose
References: <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
Message-ID: <00d501bffd7e$deb6ece0$f2a6b5d4@hagrid>

moshe:
> > The XP answer would be "hey, you have to checkin the breaking
> > test case right away", and I'm inclined to agree.

tim:
> It's abhorrent to me to ever leave the tree in a state where a test is
> "expected to fail".  If it's left in a failing state for a brief period, at
> best other developers will waste time wondering whether it's due to
> something they did

note that we've just seen this in action, in the SRE crash thread.

Andrew checked in a test that caused the test suite to bomb, and
sent me and Mark F. looking for a non-existent portability bug...

> You can check in an anti-test right away, though:  a test that passes so
> long as the code remains broken <wink>.

which is what the new SRE test script does -- the day SRE supports
unlimited recursion (soon), the test script will complain...

</F>




From tim_one at email.msn.com  Thu Aug  3 21:06:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 15:06:49 -0400
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <14729.49351.574550.48521@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCGENNGNAA.tim_one@email.msn.com>

[Jeremy Hylton]
> I'm Tim on this issue.

Then I'm Jeremy too.  Wow!  I needed a vacation <wink>.





From guido at beopen.com  Thu Aug  3 22:15:26 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 15:15:26 -0500
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Your message of "Thu, 03 Aug 2000 15:00:23 -0400."
             <14729.49479.677157.957162@bitdiddle.concentric.net> 
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial> <LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com> <14729.49351.574550.48521@bitdiddle.concentric.net>  
            <14729.49479.677157.957162@bitdiddle.concentric.net> 
Message-ID: <200008032015.PAA17571@cj20424-a.reston1.va.home.com>

>   JH> I'm Tim on this issue.
> 
> Make that "I'm with Tim on this issue."  I'm sure it would be fun to
> channel Tim, but I don't have the skills for that.

Actually, in my attic there's a small door that leads to a portal into
Tim's brain.  Maybe we could get Tim to enter the portal -- it would
be fun to see him lying on a piano in a dress reciting a famous aria.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Thu Aug  3 21:19:18 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 15:19:18 -0400 (EDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008032015.PAA17571@cj20424-a.reston1.va.home.com>
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
	<LNBBLJKPBEHFEDALKOLCCENKGNAA.tim_one@email.msn.com>
	<14729.49351.574550.48521@bitdiddle.concentric.net>
	<14729.49479.677157.957162@bitdiddle.concentric.net>
	<200008032015.PAA17571@cj20424-a.reston1.va.home.com>
Message-ID: <14729.50614.806442.190962@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

  JH> I'm Tim on this issue.
  >>  Make that "I'm with Tim on this issue."  I'm sure it would be
  >> fun to channel Tim, but I don't have the skills for that.

  GvR> Actually, in my attic there's a small door that leads to a
  GvR> portal into Tim's brain.  Maybe we could get Tim to enter the
  GvR> portal -- it would be fun to see him lying on a piano in a
  GvR> dress reciting a famous aria.

You should have been on the ride from Monterey to the San Jose airport
a couple of weeks ago.  There was no piano, but it was pretty close.

Jeremy



From jeremy at beopen.com  Thu Aug  3 21:31:50 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 3 Aug 2000 15:31:50 -0400 (EDT)
Subject: [Python-Dev] tests for standard library modules
Message-ID: <14729.51366.391122.131492@bitdiddle.concentric.net>

Most of the standard library is untested.

There are 148 top-level Python modules in the standard library, plus a
few packages that contain 50 or 60 more modules.  When we run the
regression test, we only touch 48 of those modules.  Only 18 of the
modules have their own test suite.  The other 30 modules at least get
imported, though sometimes none of the code gets executed.  (The
traceback module is an example.)

I would like to see much better test coverage in Python 2.0.  I would
welcome any test case submissions that improve the coverage of the
standard library.

Skip's trace.py code coverage tool is now available in Tools/scripts.
You can use it to examine how much of a particular module is covered
by existing tests.

Jeremy



From guido at beopen.com  Thu Aug  3 22:39:44 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 03 Aug 2000 15:39:44 -0500
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: Your message of "Thu, 03 Aug 2000 15:31:50 -0400."
             <14729.51366.391122.131492@bitdiddle.concentric.net> 
References: <14729.51366.391122.131492@bitdiddle.concentric.net> 
Message-ID: <200008032039.PAA17852@cj20424-a.reston1.va.home.com>

> Most of the standard library is untested.

Indeed.  I would suggest looking at the Tcl test suite.  It's very
thorough!  When I look at many of the test modules we *do* have, I
cringe at how little of the module the test actually covers.  Many
tests (not the good ones!) seem to be content with checking that all
functions in a module *exist*.  Much of this dates back to one
particular period in 1996-1997 when we (at CNRI) tried to write test
suites for all modules -- clearly we were in a hurry! :-(

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Aug  4 00:25:38 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 4 Aug 2000 00:25:38 +0200 (CEST)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <14729.51366.391122.131492@bitdiddle.concentric.net> from "Jeremy Hylton" at Aug 03, 2000 03:31:50 PM
Message-ID: <200008032225.AAA27154@python.inrialpes.fr>

Jeremy Hylton wrote:
> 
> Skip's trace.py code coverage tool is now available in Tools/scripts.
> You can use it to examine how much of a particular module is covered
> by existing tests.

Hmm. Glancing quickly at trace.py, I see that half of it is guessing
line numbers. The same SET_LINENO problem again. This is unfortunate.
But fortunately <wink>, here's another piece of code, modeled after
its C counterpart, that comes to Skip's rescue and that works with -O.

Example:

>>> import codeutil
>>> co = codeutil.PyCode_Line2Addr.func_code   # some code object
>>> codeutil.PyCode_GetExecLines(co)
[20, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]
>>> codeutil.PyCode_Line2Addr(co, 29)
173
>>> codeutil.PyCode_Addr2Line(co, 173)
29
>>> codeutil.PyCode_Line2Addr(co, 10)
Traceback (innermost last):
  File "<stdin>", line 1, in ?
  File "codeutil.py", line 26, in PyCode_Line2Addr
    raise IndexError, "line must be in range [%d,%d]" % (line, lastlineno)
IndexError: line must be in range [20,36]

etc...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252

------------------------------[ codeutil.py ]-------------------------
import types

def PyCode_Addr2Line(co, addrq):
    assert type(co) == types.CodeType, \
           "1st arg must be a code object, %s given" % type(co).__name__
    if addrq < 0 or addrq > len(co.co_code):
        raise IndexError, "address must be in range [0,%d]" % len(co.co_code)
    addr = 0
    line = co.co_firstlineno
    lnotab = co.co_lnotab
    for i in range(0, len(lnotab), 2):
        addr_incr = ord(lnotab[i])
        line_incr = ord(lnotab[i+1])
        addr = addr + addr_incr
        if (addr > addrq):
            break
        line = line + line_incr
    return line

def PyCode_Line2Addr(co, lineq):
    assert type(co) == types.CodeType, \
           "1st arg must be a code object, %s given" % type(co).__name__
    line = co.co_firstlineno
    lastlineno = PyCode_Addr2Line(co, len(co.co_code))
    if lineq < line or lineq > lastlineno:
        raise IndexError, "line must be in range [%d,%d]" % (line, lastlineno)
    addr = 0
    lnotab = co.co_lnotab
    for i in range(0, len(lnotab), 2):
        if line >= lineq:
            break
        addr_incr = ord(lnotab[i])
        line_incr = ord(lnotab[i+1])
        addr = addr + addr_incr
        line = line + line_incr
    return addr

def PyCode_GetExecLines(co):
    assert type(co) == types.CodeType, \
           "arg must be a code object, %s given" % type(co).__name__
    lastlineno = PyCode_Addr2Line(co, len(co.co_code))
    lines = range(co.co_firstlineno, lastlineno + 1)
    # remove void lines (w/o opcodes): comments, blank/escaped lines
    i = len(lines) - 1
    while i >= 0:
        if lines[i] != PyCode_Addr2Line(co, PyCode_Line2Addr(co, lines[i])):
            lines.pop(i)
        i = i - 1
    return lines



From mwh21 at cam.ac.uk  Fri Aug  4 00:19:51 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 03 Aug 2000 23:19:51 +0100
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: Moshe Zadka's message of "Thu, 3 Aug 2000 19:44:28 +0300 (IDT)"
References: <Pine.GSO.4.10.10008031940420.2575-100000@sundial>
Message-ID: <m31z063s3c.fsf@atrus.jesus.cam.ac.uk>

Moshe Zadka <moshez at math.huji.ac.il> writes:

> Suppose I'm fixing a bug in the library. I want peer review for my fix,
> but I need none for my new "would have caught" test cases. Is it
> considered alright to check-in right away the test case, breaking the test
> suite, and to upload a patch to SF to fix it? Or should the patch include
> the new test cases? 
> 
> The XP answer would be "hey, you have to checkin the breaking test case
> right away", and I'm inclined to agree.

I'm not so sure.  I can't find the bit I'm looking for in Beck's
book[1], but ISTR that you have two sorts of test, unit tests and
functional tests.  Unit tests always work, functional tests are more
what you want to work in the future, but may not now.  What goes in
Lib/test is definitely more of the unit test variety, and so if
something in there breaks it's a cause for alarm.  Checking in a test
you know will break just raises blood pressure for no good reason.

Also what if you're hacking on some bit of Python, run the test suite
and it fails?  You worry that you've broken something, when in fact
it's nothing to do with you.

-1. (like everyone else...)

Cheers,
M.

[1] Found it; p. 118 of "Extreme Programming Explained"

-- 
  I'm okay with intellegent buildings, I'm okay with non-sentient
  buildings. I have serious reservations about stupid buildings.
     -- Dan Sheppard, ucam.chat (from Owen Dunn's summary of the year)




From skip at mojam.com  Fri Aug  4 00:21:04 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 3 Aug 2000 17:21:04 -0500 (CDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" 
 method
In-Reply-To: <398979D0.5AF80126@lemburg.com>
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
	<14728.63466.263123.434708@anthem.concentric.net>
	<3989454C.5C9EF39B@lemburg.com>
	<200008031256.HAA06107@cj20424-a.reston1.va.home.com>
	<398979D0.5AF80126@lemburg.com>
Message-ID: <14729.61520.11958.530601@beluga.mojam.com>

    >> How about making this a method:

    >> def inplace(dict, key, default):

    >>     value = dict.get(key, default)
    >>     dict[key] = value
    >>     return value

eh... I don't like these do-two-things-at-once kinds of methods.  I see
nothing wrong with

    >>> dict = {}
    >>> dict['hello'] = dict.get('hello', [])
    >>> dict['hello'].append('world')
    >>> print dict
    {'hello': ['world']}

or

    >>> d = dict['hello'] = dict.get('hello', [])
    >>> d.insert(0, 'cruel')
    >>> print dict
    {'hello': ['cruel', 'world']}

for the obsessively efficiency-minded folks.
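For reference, the method quoted at the top amounts to this small standalone helper (a sketch only; the name `inplace` comes from the quoted proposal and is not an existing dict method):

```python
def inplace(d, key, default):
    # Look up key, falling back to default, and store the result back
    # into the dict so callers can mutate it in place.
    value = d.get(key, default)
    d[key] = value
    return value

d = {}
inplace(d, 'hello', []).append('world')
# d is now {'hello': ['world']}
```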

Also, we're talking about a method that would generally only be useful when
dictionaries have values which are mutable objects.  Regardless of how
useful instances and lists are, I still find that my predominant day-to-day
use of dictionaries is with strings as keys and values.  Perhaps that's just
the nature of my work.

In short, I don't think anything needs to be changed.

-1 (don't like the concept, so I don't care about the implementation)

Skip



From mal at lemburg.com  Fri Aug  4 00:36:33 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 04 Aug 2000 00:36:33 +0200
Subject: [Python-Dev] Line number tools (tests for standard library modules)
References: <200008032225.AAA27154@python.inrialpes.fr>
Message-ID: <3989F3F1.162A9766@lemburg.com>

Vladimir Marangozov wrote:
> 
> Jeremy Hylton wrote:
> >
> > Skip's trace.py code coverage tool is now available in Tools/scripts.
> > You can use it to examine how much of a particular module is covered
> > by existing tests.
> 
> Hmm. Glancing quickly at trace.py, I see that half of it is guessing
> line numbers. The same SET_LINENO problem again. This is unfortunate.
> But fortunately <wink>, here's another piece of code, modeled after
> its C counterpart, that comes to Skip's rescue and that works with -O.
> 
> Example:
> 
> >>> import codeutil
> >>> co = codeutil.PyCode_Line2Addr.func_code   # some code object
> >>> codeutil.PyCode_GetExecLines(co)
> [20, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]
> >>> codeutil.PyCode_Line2Addr(co, 29)
> 173
> >>> codeutil.PyCode_Addr2Line(co, 173)
> 29
> >>> codeutil.PyCode_Line2Addr(co, 10)
> Traceback (innermost last):
>   File "<stdin>", line 1, in ?
>   File "codeutil.py", line 26, in PyCode_Line2Addr
>     raise IndexError, "line must be in range [%d,%d]" % (line, lastlineno)
> IndexError: line must be in range [20,36]
> 
> etc...

Cool. 

With proper Python style names these utilities
would be nice additions for e.g. codeop.py or code.py.

BTW, I wonder why code.py includes Python console emulations:
there seems to be a naming bug there... I would have
named the module PythonConsole.py and left code.py what
it was previously: a collection of tools dealing with Python
code objects.

--
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From skip at mojam.com  Fri Aug  4 00:53:58 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 3 Aug 2000 17:53:58 -0500 (CDT)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <14729.51366.391122.131492@bitdiddle.concentric.net>
References: <14729.51366.391122.131492@bitdiddle.concentric.net>
Message-ID: <14729.63494.544079.516429@beluga.mojam.com>

    Jeremy> Skip's trace.py code coverage tool is now available in
    Jeremy> Tools/scripts.  You can use it to examine how much of a
    Jeremy> particular module is covered by existing tests.

Yes, though note that in the summary stuff on my web site there are obvious
bugs that I haven't had time to look at.  Sometimes modules are counted
twice.  Other times a module is listed as untested when right above it there
is a test coverage line...  

Skip



From skip at mojam.com  Fri Aug  4 00:59:34 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 3 Aug 2000 17:59:34 -0500 (CDT)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <200008032225.AAA27154@python.inrialpes.fr>
References: <14729.51366.391122.131492@bitdiddle.concentric.net>
	<200008032225.AAA27154@python.inrialpes.fr>
Message-ID: <14729.63830.894657.930184@beluga.mojam.com>

    Vlad> Hmm. Glancing quickly at trace.py, I see that half of it is
    Vlad> guessing line numbers. The same SET_LINENO problem again. This is
    Vlad> unfortunate.  But fortunately <wink>, here's another piece of
    Vlad> code, modeled after its C counterpart, that comes to Skip's rescue
    Vlad> and that works with -O.

Go ahead and check in any changes you see that need doing.  I haven't
fiddled with trace.py much in the past couple of years, so there are some
places that clearly do things differently from currently accepted practice.

(I am going to be up to my ass in alligators pretty much from now through
Labor Day (early September for the furriners among us), so things I thought
I would get to probably will remain undone.  The most important thing is to
fix the list comprehensions patch to force expression tuples to be
parenthesized.  Guido says it's an easy fix, and the grammar changes seem
trivial, but fixing compile.c is beyond my rusty knowledge at the moment.
Someone want to pick this up?)

Skip



From MarkH at ActiveState.com  Fri Aug  4 01:13:06 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 09:13:06 +1000
Subject: [Python-Dev] (os.kill (was Fork) on Win32 - was (test_fork1 failing...)
In-Reply-To: <39897635.6C9FB82D@lemburg.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCEDODDAA.MarkH@ActiveState.com>

[Marc writes]
> On Unix you can install a signal
> handler in the Python program which then translates the SIGTERM
> signal into a normal Python exception. Sending the signal then
> causes the same as e.g. hitting Ctrl-C in a program: an
> exception is raised asynchronously, but it can be handled
> properly by the Python exception clauses to enable safe
> shutdown of the process.

I understand this.  This is why I was skeptical that a
"terminate-without-prejudice" only version would be useful.

I _think_ this fairly large email is agreeing that it isn't of much use.
If so, then I am afraid you are on your own :-(

Mark.




From Vladimir.Marangozov at inrialpes.fr  Fri Aug  4 01:27:39 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 4 Aug 2000 01:27:39 +0200 (CEST)
Subject: [Python-Dev] Removing the 16 bit arg limit
Message-ID: <200008032327.BAA27362@python.inrialpes.fr>

I've looked at this and the best compromise solution I ended up with
(before Py3K) is sketched here:

opcode.h:
#define EXTENDED_ARG	135	/* 16 higher bits of the next opcode arg */

ceval.c:
		case EXTENDED_ARG:
			do {
				oparg <<= 16;
				op = NEXTOP();
				oparg += NEXTARG();
			} while (op == EXTENDED_ARG);
			goto dispatch_opcode;

compile.c:
static void
com_addoparg(struct compiling *c, int op, int arg)
{
	if (arg < 0) {
		com_error(c, PyExc_SystemError,
			  "com_addoparg: argument out of range");
	}
	if (op == SET_LINENO) {
		com_set_lineno(c, arg);
		if (Py_OptimizeFlag)
			return;
	}
	if (arg > 0xffff) {
		/* Emit the high bits first, prefixed with EXTENDED_ARG,
		   so the ceval loop above can shift them in before it
		   reads the real opcode's low 16 bits; one prefix
		   suffices for a 32-bit int argument. */
		com_addbyte(c, EXTENDED_ARG);
		com_addint(c, arg >> 16);
	}
	com_addbyte(c, op);
	com_addint(c, arg & 0xffff);
}
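The round trip implied by the ceval sketch can be illustrated in Python (a sketch only, not CPython source; the opcode value 135 is taken from the opcode.h line above, everything else is hypothetical):

```python
EXTENDED_ARG = 135  # opcode number from the sketch above

def encode_oparg(op, arg):
    """Encode `arg` as a list of (opcode, 16-bit chunk) pairs: all but
    the lowest chunk are emitted highest-first as EXTENDED_ARG prefixes,
    so the eval loop can rebuild the value by shifting."""
    chunks = []
    while True:
        chunks.append(arg & 0xFFFF)
        arg >>= 16
        if not arg:
            break
    pairs = [(EXTENDED_ARG, c) for c in reversed(chunks[1:])]
    pairs.append((op, chunks[0]))
    return pairs

def decode_oparg(pairs):
    """Mirror of the ceval do-while sketched above."""
    oparg = 0
    for opcode, chunk in pairs:
        oparg = (oparg << 16) | chunk
    return oparg

assert decode_oparg(encode_oparg(100, 0x12345)) == 0x12345
```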


But this is only difficulty level 0.

Difficulty level 1 is the jumps and their forward refs & backpatching in
compile.c.

There's no tricky solution to this (due to the absolute jumps). The only
reasonable, long-term useful solution I can think of is to build a table
of all anchors (delimiting the basic blocks of the code), then make a final
pass over the serialized basic blocks and update the anchors (with or
without EXTENDED_ARG jumps depending on the need).

However, I won't even think about it anymore without BDFL & Tim's
approval and strong encouragement <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gward at python.net  Fri Aug  4 03:24:44 2000
From: gward at python.net (Greg Ward)
Date: Thu, 3 Aug 2000 21:24:44 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
Message-ID: <20000803212444.A1237@beelzebub>

Hi all --

for building extensions with non-MS compilers, it sounds like a small
change to PC/config.h is needed.  Rene Liebscher suggests changing

  #ifndef USE_DL_EXPORT
  /* So nobody needs to specify the .lib in their Makefile any more */
  #ifdef _DEBUG
  #pragma comment(lib,"python20_d.lib")
  #else
  #pragma comment(lib,"python20.lib")
  #endif
  #endif /* USE_DL_EXPORT */

to

  #if !defined(USE_DL_EXPORT) && defined(_MSC_VER)
  ...

That way, the convenience pragma will still be there for MSVC users, but
it won't break building extensions with Borland C++.  (As I understand
it, Borland C++ understands the pragma, but then tries to use Python's
standard python20.lib, which of course is only for MSVC.)  Non-MSVC
users will have to explicitly supply the library, but that's OK: the
Distutils does it for them.  (Even with MSVC, because it's too much
bother *not* to specify python20.lib explicitly.)

Does this look like the right change to everyone?  I can check it in
(and on the 1.6 branch too) if it looks OK.

While I have your attention, Rene also suggests the convention of
"bcpp_python20.lib" for the Borland-format lib file, with other
compilers (library formats) supported in future similarly.  Works for me 
-- anyone have any problems with that?

        Greg
-- 
Greg Ward - programmer-at-big                           gward at python.net
http://starship.python.net/~gward/
Know thyself.  If you need help, call the CIA.



From moshez at math.huji.ac.il  Fri Aug  4 03:38:32 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 04:38:32 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <14729.49351.574550.48521@bitdiddle.concentric.net>
Message-ID: <Pine.GSO.4.10.10008040437320.9544-100000@sundial>

On Thu, 3 Aug 2000, Jeremy Hylton wrote:

> I'm Tim on this issue.  As officially appointed release manager for
> 2.0, I set some guidelines for checking in code.  One is that no
> checkin should cause the regression test to fail.  If it does, I'll
> back it out.
> 
> If you didn't review the contribution guidelines when they were posted
> on this list, please look at PEP 200 now.

Actually, I did. The thing is, it seems to me there's a huge difference
between breaking code and demonstrating that the code is wrong.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From moshez at math.huji.ac.il  Fri Aug  4 03:41:12 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 04:41:12 +0300 (IDT)
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <200008032015.PAA17571@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008040440110.9544-100000@sundial>

On Thu, 3 Aug 2000, Guido van Rossum wrote:

> >   JH> I'm Tim on this issue.
> > 
> > Make that "I'm with Tim on this issue."  I'm sure it would be fun to
> > channel Tim, but I don't have the skills for that.
> 
> Actually, in my attic there's a small door that leads to a portal into
> Tim's brain.  Maybe we could get Tim to enter the portal -- it would
> be fun to see him lying on a piano in a dress reciting a famous aria.

I think I need to get out more often. I just realized I think it would
be fun to. Does anybody there have a video camera?
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From moshez at math.huji.ac.il  Fri Aug  4 03:45:52 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 04:45:52 +0300 (IDT)
Subject: [Python-Dev] tests for standard library modules
In-Reply-To: <200008032039.PAA17852@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008040444020.9544-100000@sundial>

On Thu, 3 Aug 2000, Guido van Rossum wrote:

> > Most of the standard library is untested.
> 
> Indeed.  I would suggest looking at the Tcl test suite.  It's very
> thorough!  When I look at many of the test modules we *do* have, I
> cringe at how little of the module the test actually covers.  Many
> tests (not the good ones!) seem to be content with checking that all
> functions in a module *exist*.  Much of this dates back to one
> particular period in 1996-1997 when we (at CNRI) tried to write test
> suites for all modules -- clearly we were in a hurry! :-(

Here's a suggestion for easily getting hints about what test suites to
write: go through the list of open bugs, and write a "would have caught"
test. At worst, we will actually have to fix some bugs <wink>.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Fri Aug  4 04:23:59 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 22:23:59 -0400
Subject: [Python-Dev] snprintf breaks build
Message-ID: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>

Fred checked in a new rangeobject.c with 3 calls to snprintf.  That isn't a
std C function, and the lack of it breaks the build at least under Windows.
In the absence of a checkin supplying snprintf on all platforms within the
next hour, I'll just replace the snprintf calls with something that's
portable.





From MarkH at ActiveState.com  Fri Aug  4 04:27:32 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 12:27:32 +1000
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000803212444.A1237@beelzebub>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>

> Does this look like the right change to everyone?  I can check it in
> (and on the 1.6 branch too) if it looks OK.

I have no problems with this (but am a little confused - see below)

> While I have your attention, Rene also suggests the convention of
> "bcpp_python20.lib" for the Borland-format lib file, with other
> compilers (library formats) supported in future similarly.  Works for me
> -- anyone have any problems with that?

I would prefer python20_bcpp.lib, but that is not an issue.

I am a little confused by the intention, tho.  Wouldn't it make sense to
have Borland builds of the core create a Python20.lib, then we could keep
the pragma in too?

If people want to use Borland for extensions, can't we ask them to use that
same compiler to build the core too?  That would seem to make lots of the
problems go away?

But assuming there are good reasons, I am happy.  It wont bother me for
some time yet ;-) <just deleted a rant about the fact that anyone on
Windows who values their time in more than cents-per-hour would use MSVC,
but deleted it ;->

Sometimes-the-best-things-in-life-arent-free ly,

Mark.




From MarkH at ActiveState.com  Fri Aug  4 04:30:22 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 12:30:22 +1000
Subject: [Python-Dev] Breaking Test Cases on Purpose
In-Reply-To: <Pine.GSO.4.10.10008040440110.9544-100000@sundial>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEEKDDAA.MarkH@ActiveState.com>

> Anybody there has a video camera?

Eeeuuugghhh - the concept of Tim's last threatened photo-essay turning into
a video-essay has made me physically ill ;-)

Just-dont-go-there ly,

Mark.




From fdrake at beopen.com  Fri Aug  4 04:34:34 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 3 Aug 2000 22:34:34 -0400 (EDT)
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>
Message-ID: <14730.11194.599976.438416@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Fred checked in a new rangeobject.c with 3 calls to snprintf.  That isn't a
 > std C function, and the lack of it breaks the build at least under Windows.
 > In the absence of a checkin supplying snprintf on all platforms within the
 > next hour, I'll just replace the snprintf calls with something that's
 > portable.

  Hmm.  I think the issue with known existing snprintf()
implementations with Open Source licenses was that they were at least
somewhat contaminating.  I'll switch back to sprintf() until we have a
portable snprintf() implementation.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Fri Aug  4 04:49:32 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 22:49:32 -0400
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <14730.11194.599976.438416@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEPEGNAA.tim_one@email.msn.com>

[Fred]
>   Hmm.  I think the issue with known existing snprintf()
> implementations with Open Source licenses was that they were at
> least somewhat contaminating.  I'll switch back to sprintf()
> until we have a portable snprintf() implementation.

Please don't bother!  Clearly, I've already fixed it on my machine so I
could make progress.  I'll simply check it in.  I didn't like the excessive
cleverness with the fmt vrbl anyway (your compiler may not complain that you
can end up passing more s[n]printf args than the format has specifiers to
convert, but it's a no-no anyway) ....





From tim_one at email.msn.com  Fri Aug  4 04:55:47 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 22:55:47 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000803212444.A1237@beelzebub>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEPEGNAA.tim_one@email.msn.com>

[Greg Ward]
> for building extensions with non-MS compilers, it sounds like a small
> change to PC/config.h is needed.  Rene Liebscher suggests changing
>
>   #ifndef USE_DL_EXPORT
>   /* So nobody needs to specify the .lib in their Makefile any more */
>   #ifdef _DEBUG
>   #pragma comment(lib,"python20_d.lib")
>   #else
>   #pragma comment(lib,"python20.lib")
>   #endif
>   #endif /* USE_DL_EXPORT */
>
> to
>
>   #if !defined(USE_DL_EXPORT) && defined(_MSC_VER)
>   ...
>
> That way, the convenience pragma will still be there for MSVC users, but
> it won't break building extensions with Borland C++.

OK by me.

> ...
> While I have your attention,

You're pushing your luck, Greg <wink>.

> Rene also suggests the convention of "bcpp_python20.lib" for
> the Borland-format lib file, with other compilers (library
> formats) supported in future similarly.  Works for me -- anyone
> have any problems with that?

Nope, but I don't understand anything about how Borland differs from the
real <0.5 wink> Windows compiler, so don't know squat about the issues or
the goals.  If it works for Rene, I give up without a whimper.





From nhodgson at bigpond.net.au  Fri Aug  4 05:36:12 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Fri, 4 Aug 2000 13:36:12 +1000
Subject: [Python-Dev] Library pragma in PC/config.h
References: <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>
Message-ID: <00cf01bffdc5$246867f0$8119fea9@neil>

> But assuming there are good reasons, I am happy.  It wont bother me for
> some time yet ;-) <just deleted a rant about the fact that anyone on
> Windows who values their time in more than cents-per-hour would use MSVC,
> but deleted it ;->

   OK. Better cut my rates. Some people will be pleased ;)

   Borland C++ isn't that bad. With an optimiser and a decent debugger it'd
even be usable as my main compiler. What is good about Borland is that it
produces lots of meaningful warnings.

   I've never regretted ensuring that Scintilla/SciTE build on Windows with
each of MSVC, Borland and GCC. It wasn't much work and real problems have
been found by the extra checks done by Borland.

   You-should-try-it-sometime-ly y'rs,

   Neil




From bwarsaw at beopen.com  Fri Aug  4 05:46:02 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 3 Aug 2000 23:46:02 -0400 (EDT)
Subject: [Python-Dev] snprintf breaks build
References: <LNBBLJKPBEHFEDALKOLCCEPDGNAA.tim_one@email.msn.com>
	<14730.11194.599976.438416@cj42289-a.reston1.va.home.com>
Message-ID: <14730.15482.216054.249627@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake at beopen.com> writes:

    Fred>   Hmm.  I think the issue with known existing snprintf()
    Fred> implementations with Open Source licenses was that they were
    Fred> at least somewhat contaminating.  I'll switch back to
    Fred> sprintf() until we have a portable snprintf()
    Fred> implementation.

In Mailman, I used the one from GNU screen, which is obviously GPL'd.
But Apache also comes with an snprintf implementation which doesn't
have the infectious license.  I don't feel like searching the
archives, but I'd be surprised if Greg Stein /didn't/ suggest this a
while back.

-Barry



From tim_one at email.msn.com  Fri Aug  4 05:54:47 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 3 Aug 2000 23:54:47 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <00cf01bffdc5$246867f0$8119fea9@neil>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEPHGNAA.tim_one@email.msn.com>

[Neil Hodgson]
> ...
>    I've never regretted ensuring that Scintilla/SciTE build on
> Windows with each of MSVC, Borland and GCC. It wasn't much work
> and real problems have been found by the extra checks done by
> Borland.
>
>    You-should-try-it-sometime-ly y'rs,

Indeed, the more compilers the better.  I've long wished that Guido would
leave CNRI, and find some situation in which he could hire people to work on
Python full-time.  If that ever happens, and he hires me, I'd like to do
serious work to free the Windows build config from such total dependence on
MSVC.





From MarkH at ActiveState.com  Fri Aug  4 05:52:58 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 13:52:58 +1000
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <00cf01bffdc5$246867f0$8119fea9@neil>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEEMDDAA.MarkH@ActiveState.com>

>    Borland C++ isn't that bad. With an optimiser and a decent
> debugger it'd even be usable as my main compiler.

>    You-should-try-it-sometime-ly y'rs,

OK - let me know when it has an optimiser and a decent debugger, and is
usable as a main compiler, and I will be happy to ;-)

Only-need-one-main-anything ly,

Mark.




From moshez at math.huji.ac.il  Fri Aug  4 06:30:59 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 07:30:59 +0300 (IDT)
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <14730.11194.599976.438416@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008040728150.10236-100000@sundial>

On Thu, 3 Aug 2000, Fred L. Drake, Jr. wrote:

>   Hmm.  I think the issue with known existing snprintf()
> implementations with Open Source licenses was that they were at least
> somewhat contaminating.  I'll switch back to sprintf() until we have a
> portable snprintf() implementation.

Fred -- in your case, there is no need for sprintf -- a few sizeof(long)s
along the way would make sure that your buffers are large enough.  (For
extreme future-proofing, you might also sizeof() the messages you print)

(Tidbit: since sizeof(long) measures in bytes, and %d prints in decimals,
then a buffer of length sizeof(long) is enough to hold a decimal
representation of a long).

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From greg at cosc.canterbury.ac.nz  Fri Aug  4 06:38:16 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 04 Aug 2000 16:38:16 +1200 (NZST)
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <Pine.GSO.4.10.10008040728150.10236-100000@sundial>
Message-ID: <200008040438.QAA11982@s454.cosc.canterbury.ac.nz>

Moshe Zadka:

> (Tidbit: since sizeof(long) measures in bytes, and %d prints in decimals,
> then a buffer of length sizeof(long) is enough to hold a decimal
> representation of a long).

Pardon? I think you're a bit out in your calculation there!

3*sizeof(long) should be enough, though (unless some weird C
implementation measures sizes in units of more than 8 bits).

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Fri Aug  4 07:22:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 4 Aug 2000 01:22:23 -0400
Subject: [Python-Dev] snprintf breaks build
In-Reply-To: <200008040438.QAA11982@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEPJGNAA.tim_one@email.msn.com>

[Moshe Zadka]
> (Tidbit: since sizeof(long) measures in bytes, and %d prints in
> decimals, then a buffer of length sizeof(long) is enough to hold
> a decimal representation of a long).

[Greg Ewing]
> Pardon? I think you're a bit out in your calculation there!
>
> 3*sizeof(long) should be enough, though (unless some weird C
> implementation measures sizes in units of more than 8 bits).

Getting closer, but the sign bit can consume a character all by itself, so
3*sizeof(long) still isn't enough.  To do this correctly and minimally
requires that we implement an arbitrary-precision log10 function, use the
platform MIN/MAX #define's for longs and chars, and malloc the buffers at
runtime.

Note that instead I boosted the buffer sizes in the module from 80 to 250.
That's obviously way more than enough for 64-bit platforms, and "obviously
way more" is the correct thing to do for programmers <wink>.  If one of the
principled alternatives is ever checked in (be it an snprintf or /F's custom
solution (which I like better)), we can go back and use those instead.
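The digit-count arithmetic in this thread is easy to sanity-check. A quick
sketch in Python (the 4- and 8-byte cases stand in for common sizeof(long)
values, assuming 8-bit chars):

```python
# Since 256**n < 1000**n = 10**(3*n), the magnitude of an n-byte signed
# integer always fits in 3*n decimal digits.  Add one character for a
# possible '-' sign and one for C's trailing NUL terminator, and
# 3*n + 2 is a safe sprintf buffer size for "%ld".
for nbytes in (4, 8):
    most_negative = -(2 ** (nbytes * 8 - 1))   # LONG_MIN analogue
    digits = len(str(abs(most_negative)))      # digits in |LONG_MIN|
    assert digits <= 3 * nbytes
    print(nbytes, digits, 3 * nbytes + 2)      # 4 -> 10 digits, 8 -> 19
```

So Moshe's sizeof(long) bound is off by roughly a factor of three, Greg's
3*sizeof(long) covers the digits, and a couple of extra bytes take care of
the sign and the terminator without any log10 heroics.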





From MarkH at ActiveState.com  Fri Aug  4 07:58:52 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 15:58:52 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <200008022318.SAA04558@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com>

[Re forcing all extensions to use PythonExtensionInit_XXX]

> I sort-of like this idea -- at least at the +0 level.

Since this email there have been some strong objections to this.  I too
would weigh in at -1 for this, simply for the amount of work it would cost
me personally!


> Unfortunately we only have two days to get this done for 1.6 -- I plan
> to release 1.6b1 this Friday!  If you don't get to it, prepare a patch
> for 2.0 would be the next best thing.

It is now Friday afternoon for me.  Regardless of the outcome of this, the
patch Fredrik posted recently would still seem reasonable, and not have too
much impact on performance (ie, after locating and loading a .dll/.so, one
function call isn't too bad!):

I've even left his trailing comment, which I agree with too?

Shall this be checked in to the 1.6 and 2.0 trees?

Mark.

Index: Python/modsupport.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/modsupport.c,v
retrieving revision 2.48
diff -u -r2.48 modsupport.c
--- Python/modsupport.c 2000/07/09 03:09:56     2.48
+++ Python/modsupport.c 2000/07/18 07:55:03
@@ -51,6 +51,8 @@
 {
        PyObject *m, *d, *v;
        PyMethodDef *ml;
+       if (!Py_IsInitialized())
+               Py_FatalError("Interpreter not initialized (version
mismatch?)");
        if (module_api_version != PYTHON_API_VERSION)
                fprintf(stderr, api_version_warning,
                        name, PYTHON_API_VERSION, name,
module_api_version);

"Fatal Python error: Interpreter not initialized" might not be too helpful,
but it's surely better than "PyThreadState_Get: no current thread"...





From tim_one at email.msn.com  Fri Aug  4 09:06:21 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 4 Aug 2000 03:06:21 -0400
Subject: [Python-Dev] FW: submitting patches against 1.6a2
Message-ID: <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com>

Anyone competent with urllib care to check out this fellow's complaint?
Thanks!

-----Original Message-----
From: python-list-admin at python.org
[mailto:python-list-admin at python.org]On Behalf Of Paul Schreiber
Sent: Friday, August 04, 2000 2:20 AM
To: python-list at python.org
Subject: submitting patches against 1.6a2


I patched a number of bugs in urllib.py way back when -- in June, I
think. That was before the BeOpen announcement.

I emailed the patch to patches at python.org. I included the disclaimer. I
made the patch into a context diff.

I didn't hear back from anyone.

Should I resubmit? Where should I send the patch to?



Paul
--
http://www.python.org/mailman/listinfo/python-list





From esr at snark.thyrsus.com  Fri Aug  4 09:47:34 2000
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Fri, 4 Aug 2000 03:47:34 -0400
Subject: [Python-Dev] curses progress
Message-ID: <200008040747.DAA02323@snark.thyrsus.com>

OK, I've added docs for curses.textpad and curses.wrapper.  Did we
ever settle on a final location in the distribution tree for the
curses HOWTO?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

According to the National Crime Survey administered by the Bureau of
the Census and the National Institute of Justice, it was found that
only 12 percent of those who use a gun to resist assault are injured,
as are 17 percent of those who use a gun to resist robbery. These
percentages are 27 and 25 percent, respectively, if they passively
comply with the felon's demands. Three times as many were injured if
they used other means of resistance.
        -- G. Kleck, "Policy Lessons from Recent Gun Control Research,"



From pf at artcom-gmbh.de  Fri Aug  4 09:47:17 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Fri, 4 Aug 2000 09:47:17 +0200 (MEST)
Subject: Vladimir's codeutil.py (was Re: [Python-Dev] tests for standard library modules)
In-Reply-To: <200008032225.AAA27154@python.inrialpes.fr> from Vladimir Marangozov at "Aug 4, 2000  0:25:38 am"
Message-ID: <m13KcCL-000DieC@artcom0.artcom-gmbh.de>

Hi,

Vladimir Marangozov:
> But fortunately <wink>, here's another piece of code, modeled after
> its C counterpart, that comes to Skip's rescue and that works with -O.
[...]
> ------------------------------[ codeutil.py ]-------------------------
[...]

Neat!  This seems to be very useful.
I think this could be added to standard library if it were documented.

Regards, Peter



From thomas at xs4all.net  Fri Aug  4 10:14:56 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 10:14:56 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 04, 2000 at 03:58:52PM +1000
References: <200008022318.SAA04558@cj20424-a.reston1.va.home.com> <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com>
Message-ID: <20000804101456.H266@xs4all.nl>

On Fri, Aug 04, 2000 at 03:58:52PM +1000, Mark Hammond wrote:

> It is now Friday afternoon for me.  Regardless of the outcome of this, the
> patch Fredrik posted recently would still seem reasonable, and not have too
> much impact on performance (ie, after locating and loading a .dll/.so, one
> function call isn't too bad!):

> +       if (!Py_IsInitialized())
> +               Py_FatalError("Interpreter not initialized (version

Wasn't there a problem with this, because the 'Py_FatalError()' would be the
one in the uninitialized library and thus result in the same tstate error ?
Perhaps it needs a separate error message, that avoids the usual Python
cleanup and trickery and just prints the error message and exits ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From MarkH at ActiveState.com  Fri Aug  4 10:20:04 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 4 Aug 2000 18:20:04 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <20000804101456.H266@xs4all.nl>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEFHDDAA.MarkH@ActiveState.com>

> Wasn't there a problem with this, because the 'Py_FatalError()'
> would be the
> one in the uninitialized library and thus result in the same
> tstate error ?
> Perhaps it needs a separate error message, that avoids the usual Python
> cleanup and trickery and just prints the error message and exits ?

I would obviously need to test this, but a cursory look at Py_FatalError()
implies it does not touch the thread lock - simply an fprintf, and an
abort() (and for debug builds on Windows, an offer to break into the
debugger)

Regardless, I'm looking for a comment on the concept, and I will make sure
that whatever I do actually works ;-)

Mark.




From effbot at telia.com  Fri Aug  4 10:30:25 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 4 Aug 2000 10:30:25 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <200008022318.SAA04558@cj20424-a.reston1.va.home.com> <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com> <20000804101456.H266@xs4all.nl>
Message-ID: <012b01bffdee$3dadb020$f2a6b5d4@hagrid>

thomas wrote:
> > +       if (!Py_IsInitialized())
> > +               Py_FatalError("Interpreter not initialized (version
> 
> Wasn't there a problem with this, because the 'Py_FatalError()' would be the
> one in the uninitialized library and thus result in the same tstate error ?

you mean this one:

  Py_FatalError("PyThreadState_Get: no current thread");

> Perhaps it needs a separate error message, that avoids the usual Python
> cleanup and trickery and just prints the error message and exits ?

void
Py_FatalError(char *msg)
{
 fprintf(stderr, "Fatal Python error: %s\n", msg);
#ifdef macintosh
 for (;;);
#endif
#ifdef MS_WIN32
 OutputDebugString("Fatal Python error: ");
 OutputDebugString(msg);
 OutputDebugString("\n");
#ifdef _DEBUG
 DebugBreak();
#endif
#endif /* MS_WIN32 */
 abort();
}

</F>




From ping at lfw.org  Fri Aug  4 10:38:12 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 4 Aug 2000 01:38:12 -0700 (PDT)
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELNGNAA.tim_one@email.msn.com>
Message-ID: <Pine.LNX.4.10.10008040136490.5008-100000@localhost>

On Thu, 3 Aug 2000, Tim Peters wrote:
> 
> >>> "\x123465" # \x12 -> \022, "3456" left alone
> '\0223456'
> >>> "\x65"
> 'e'
> >>> "\x1"
> ValueError
> >>> "\x\x"
> ValueError
> >>>

I'm quite certain that this should be a SyntaxError, not a ValueError:

    >>> "\x1"
    SyntaxError: two hex digits are required after \x
    >>> "\x\x"
    SyntaxError: two hex digits are required after \x

Otherwise, +1.  Sounds great.


-- ?!ng




From tim_one at email.msn.com  Fri Aug  4 11:26:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 4 Aug 2000 05:26:29 -0400
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <Pine.LNX.4.10.10008040136490.5008-100000@localhost>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEPPGNAA.tim_one@email.msn.com>

[Tim Peters]
> >>> "\x123465" # \x12 -> \022, "3456" left alone
> '\0223456'
> >>> "\x65"
> 'e'
> >>> "\x1"
> ValueError
> >>> "\x\x"
> ValueError
> >>>

[?!ng]
> I'm quite certain that this should be a SyntaxError, not a
> ValueError:
>
>     >>> "\x1"
>     SyntaxError: two hex digits are required after \x
>     >>> "\x\x"
>     SyntaxError: two hex digits are required after \x
>
> Otherwise, +1.  Sounds great.

SyntaxError was my original pick too.  Guido picked ValueError instead
because the corresponding "not enough hex digits" error in Unicode strings
for damaged \u1234 escapes raises UnicodeError today, which is a subclass of
ValueError.

I couldn't care less, and remain +1 either way.  On the chance that the BDFL
may have changed his mind, I've copied him on this msg.  This is your one &
only chance to prevail <wink>.

just-so-long-as-it's-not-XEscapeError-ly y'rs  - tim
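The rule under discussion is easy to pin down in executable form.  A rough
Python sketch (decode_x_escapes is a hypothetical helper, not the real
tokenizer; substitute whichever of SyntaxError/ValueError wins the argument):

```python
import string

def decode_x_escapes(s, error=ValueError):
    # \x must be followed by exactly two hex digits; any hex digits
    # beyond the first two are left alone, fewer than two is an error.
    out = []
    i = 0
    while i < len(s):
        if s[i] == '\\' and s[i + 1:i + 2] == 'x':
            hexpart = s[i + 2:i + 4]
            if len(hexpart) < 2 or not all(c in string.hexdigits
                                           for c in hexpart):
                raise error(r"two hex digits are required after \x")
            out.append(chr(int(hexpart, 16)))
            i += 4
        else:
            out.append(s[i])
            i += 1
    return ''.join(out)

print(repr(decode_x_escapes(r"\x123465")))   # \x12 consumed, "3465" kept
print(repr(decode_x_escapes(r"\x65")))       # 'e'
```

With error=ValueError this matches Guido's pick; passing error=SyntaxError
gives Ping's preferred behavior with no other change.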





From mal at lemburg.com  Fri Aug  4 12:03:49 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 04 Aug 2000 12:03:49 +0200
Subject: [Python-Dev] Go \x yourself
References: <LNBBLJKPBEHFEDALKOLCIEPPGNAA.tim_one@email.msn.com>
Message-ID: <398A9505.A88D8F93@lemburg.com>

[Wow, 5:26 in the morning and still (or already) up and running...]

Tim Peters wrote:
> 
> [Tim Peters]
> > >>> "\x123465" # \x12 -> \022, "3456" left alone
> > '\0223456'
> > >>> "\x65"
> > 'e'
> > >>> "\x1"
> > ValueError
> > >>> "\x\x"
> > ValueError
> > >>>
> 
> [?!ng]
> > I'm quite certain that this should be a SyntaxError, not a
> > ValueError:
> >
> >     >>> "\x1"
> >     SyntaxError: two hex digits are required after \x
> >     >>> "\x\x"
> >     SyntaxError: two hex digits are required after \x
> >
> > Otherwise, +1.  Sounds great.
> 
> SyntaxError was my original pick too.  Guido picked ValueError instead
> because the corresponding "not enough hex digits" error in Unicode strings
> for damaged \u1234 escapes raises UnicodeError today, which is a subclass of
> ValueError.
> 
> I couldn't care less, and remain +1 either way.  On the chance that the BDFL
> may have changed his mind, I've copied him on this msg,  This is your one &
> only chance to prevail <wink>.

The reason for Unicode raising a UnicodeError is that the
string is passed through a codec in order to be converted to
Unicode. Codecs raise ValueErrors for encoding errors.

The "\x..." errors should probably be handled in the same
way to assure forward compatibility (they might be passed through
codecs as well in some future Python version in order to
implement source code encodings).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From akuchlin at cnri.reston.va.us  Fri Aug  4 14:45:06 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Fri, 4 Aug 2000 08:45:06 -0400
Subject: [Python-Dev] curses progress
In-Reply-To: <200008040747.DAA02323@snark.thyrsus.com>; from esr@snark.thyrsus.com on Fri, Aug 04, 2000 at 03:47:34AM -0400
References: <200008040747.DAA02323@snark.thyrsus.com>
Message-ID: <20000804084506.B5870@newcnri.cnri.reston.va.us>

On Fri, Aug 04, 2000 at 03:47:34AM -0400, Eric S. Raymond wrote:
>OK, I've added docs for curses.textpad and curses.wrapper.  Did we
>ever settle on a final location in the distribution tree for the
>curses HOWTO?

Fred and GvR thought a separate SF project would be better, so I
created http://sourceforge.net/projects/py-howto .  You've already
been added as a developer, as have Moshe and Fred.  Just check out the
CVS tree (directory, really) and put it in the Doc/ subdirectory of
the Python CVS tree.  Preparations for a checkin mailing list are
progressing, but still not complete.

--amk



From thomas at xs4all.net  Fri Aug  4 15:01:35 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 15:01:35 +0200
Subject: [Python-Dev] PEP 203 Augmented Assignment
In-Reply-To: <200007270559.AAA04753@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Jul 27, 2000 at 12:59:15AM -0500
References: <20000725230322.N266@xs4all.nl> <200007270559.AAA04753@cj20424-a.reston1.va.home.com>
Message-ID: <20000804150134.J266@xs4all.nl>

[Don't be scared, I'm revisiting this thread for a purpose -- this isn't a
time jump ;-)]

On Thu, Jul 27, 2000 at 12:59:15AM -0500, Guido van Rossum wrote:

> I'm making up opcodes -- the different variants of LOAD and STORE
> don't matter.  On the right I'm displaying the stack contents after
> execution of the opcode (push appends to the end).  I'm writing
> 'result' to indicate the result of the += operator.

>   a[i] += b
> 
>       LOAD a			[a]
>       DUP			[a, a]
>       LOAD i			[a, a, i]
>       DUP			[a, a, i, i]
>       ROT3			[a, i, a, i]
>       GETITEM			[a, i, a[i]]
>       LOAD b			[a, i, a[i], b]
>       AUGADD			[a, i, result]
>       SETITEM			[]
> 
> I'm leaving the slice variant out; I'll get to that in a minute.

[ And later you gave an example of slicing using slice objects, rather than
the *SLICE+x opcodes ]

I have two tiny problems with making augmented assignment use the current
LOAD/STORE opcodes in the way Guido pointed out, above. One has to do with
the order of the arguments, and the other with ROT_FOUR. And they're closely
related, too :P

The question is in what order the expression

x += y

is evaluated. 

x = y

evaluates 'y' first, then 'x', but 

x + y

evaluates 'x' first, and then 'y'. 

x = x + y

Would thus evaluate 'x', then 'y', and then 'x' (for storing the result.)
(The problem isn't with single-variable expressions like these examples, of
course, but with expressions with sideeffects.)

I think it makes sense to make '+=' like '+', in that it evaluates the lhs
first. However, '+=' is as much '=' as it is '+', so it also makes sense to
evaluate the rhs first. There are plenty of arguments both ways, and both
sides of my brain have been beating each other with spiked clubs for the
better part of a day now ;) On the other hand, how important is this issue ?
Does Python say anything about the order of argument evaluation ? Does it
need to ?

After making up your mind about the above issue, there's another problem,
and that's the generated bytecode.

If '+=' should be as '=' and evaluate the rhs first, here's what the
bytecode would have to look like for the most complicated case (two-argument
slicing.)

a[b:c] += i

LOAD i			[i]
LOAD a			[i, a]
DUP_TOP			[i, a, a]
LOAD b			[i, a, a, b]
DUP_TOP			[i, a, a, b, b]
ROT_THREE		[i, a, b, a, b]
LOAD c			[i, a, b, a, b, c]
DUP_TOP			[i, a, b, a, b, c, c]
ROT_FOUR		[i, a, b, c, a, b, c]
SLICE+3			[i, a, b, c, a[b:c]]
ROT_FIVE		[a[b:c], i, a, b, c]
ROT_FIVE		[c, a[b:c], i, a, b]
ROT_FIVE		[b, c, a[b:c], i, a]
ROT_FIVE		[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
STORE_SLICE+3		[]

So, *two* new bytecodes, 'ROT_FOUR' and 'ROT_FIVE', just to get the right
operands in the right place.

On the other hand, if the *left* hand side is evaluated first, it would look
like this:

a[b:c] += i

LOAD a			[a]
DUP_TOP			[a, a]
LOAD b			[a, a, b]
DUP_TOP			[a, a, b, b]
ROT_THREE		[a, b, a, b]
LOAD c			[a, b, a, b, c]
DUP_TOP			[a, b, a, b, c, c]
ROT_FOUR		[a, b, c, a, b, c]
SLICE+3			[a, b, c, a[b:c]]
LOAD i			[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
STORE_SLICE+3		[]

A lot shorter, and it only needs ROT_FOUR, not ROT_FIVE. An alternative
solution is to drop ROT_FOUR too, and instead use a DUP_TOPX argument-opcode
that duplicates the top 'x' items:

LOAD a			[a]
LOAD b			[a, b]
LOAD c			[a, b, c]
DUP_TOPX 3		[a, b, c, a, b, c]
SLICE+3			[a, b, c, a[b:c]]
LOAD i			[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
STORE_SLICE+3		[]

I think 'DUP_TOPX' makes more sense than ROT_FOUR, as DUP_TOPX could be used
in the bytecode for 'a[b] += i' and 'a.b += i' as well. (Guido's example
would become something like this:

a[b] += i

LOAD a			[a]
LOAD b			[a, b]
DUP_TOPX 2		[a, b, a, b]
BINARY_SUBSCR		[a, b, a[b]]
LOAD i			[a, b, a[b], i]
INPLACE_ADD		[a, b, result]
STORE_SUBSCR		[]

So, *bytecode* wise, evaluating the lhs of '+=' first is easiest. It
requires a lot more hacking of compile.c, but I think I can manage that.
However, one part of me is still yelling that '+=' should evaluate its
arguments like '=', not '+'. Which part should I lobotomize ? :)
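For what it's worth, the question can be probed empirically: the CPython
that eventually shipped augmented assignment evaluates the lhs first, as in
the second bytecode listing above.  A small probe (Tracker and f are
hypothetical helper names, not anything from the stdlib):

```python
log = []

def f(name, value):
    """Record the evaluation order of subexpressions."""
    log.append(name)
    return value

class Tracker(dict):
    def __getitem__(self, key):
        log.append('getitem')
        return dict.__getitem__(self, key)
    def __setitem__(self, key, value):
        log.append('setitem')
        dict.__setitem__(self, key, value)

a = Tracker({1: 10})
a[f('index', 1)] += f('rhs', 5)

# lhs first: the index and the old a[i] are fetched before the rhs runs.
print(log)   # ['index', 'getitem', 'rhs', 'setitem']
```

That matches the cheaper DUP_TOPX-style code: target expressions evaluate
left to right, the old value is fetched, then the rhs, then the store.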

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug  4 15:08:58 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 15:08:58 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test test_linuxaudiodev.py,1.1,1.2
In-Reply-To: <200008041259.FAA24651@slayer.i.sourceforge.net>; from moshez@users.sourceforge.net on Fri, Aug 04, 2000 at 05:59:43AM -0700
References: <200008041259.FAA24651@slayer.i.sourceforge.net>
Message-ID: <20000804150858.K266@xs4all.nl>

On Fri, Aug 04, 2000 at 05:59:43AM -0700, Moshe Zadka wrote:

> Log Message:
> The only error the test suite skips is currently ImportError -- so that's
> what we raise. If you see a problem with this patch, say so and I'll
> retract.

test_support creates a class 'TestSkipped', which has a docstring that
suggests it can be used in the same way as ImportError. However, it doesn't
work ! Is that intentional ? The easiest fix to make it work is probably
making TestSkipped a subclass of ImportError, rather than Error (which it
is, now.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Fri Aug  4 15:11:38 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 16:11:38 +0300 (IDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test
 test_linuxaudiodev.py,1.1,1.2
In-Reply-To: <20000804150858.K266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008041610180.16446-100000@sundial>

On Fri, 4 Aug 2000, Thomas Wouters wrote:

> On Fri, Aug 04, 2000 at 05:59:43AM -0700, Moshe Zadka wrote:
> 
> > Log Message:
> > The only error the test suite skips is currently ImportError -- so that's
> > what we raise. If you see a problem with this patch, say so and I'll
> > retract.
> 
> test_support creates a class 'TestSkipped', which has a docstring that
> suggests it can be used in the same way as ImportError. However, it doesn't
> work ! Is that intentional ? The easiest fix to make it work is probably
> making TestSkipped a subclass of ImportError, rather than Error (which it
> is, now.)

Thanks for the tip, Thomas! I didn't know about it -- but I just read
the regrtest.py code, and it seemed to be the only exception it catches.
Why not just add test_support.TestSkipped to the exception it catches
when it catches the ImportError?

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Fri Aug  4 15:19:31 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 4 Aug 2000 15:19:31 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test test_linuxaudiodev.py,1.1,1.2
In-Reply-To: <Pine.GSO.4.10.10008041610180.16446-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 04, 2000 at 04:11:38PM +0300
References: <20000804150858.K266@xs4all.nl> <Pine.GSO.4.10.10008041610180.16446-100000@sundial>
Message-ID: <20000804151931.L266@xs4all.nl>

On Fri, Aug 04, 2000 at 04:11:38PM +0300, Moshe Zadka wrote:

> > test_support creates a class 'TestSkipped', which has a docstring that
> > suggests it can be used in the same way as ImportError. However, it doesn't
> > work ! Is that intentional ? The easiest fix to make it work is probably
> > making TestSkipped a subclass of ImportError, rather than Error (which it
> > is, now.)

> Thanks for the tip, Thomas! I didn't know about it -- but I just read
> the regrtest.py code, and it seemed to be the only exception it catches.
> Why not just add test_support.TestSkipped to the exception it catches
> when it catches the ImportError?

Right. Done. Now to update all those tests that raise ImportError when they
*mean* 'TestSkipped' :)
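The mechanics of the fix can be sketched like this (modern syntax;
run_test and needs_audio are hypothetical stand-ins for regrtest's driver
loop and a skipping test, and the class layout mirrors the test_support of
the era):

```python
class Error(Exception):
    """Base class for regrtest errors, as in test_support."""

class TestSkipped(Error):
    """Raised by a test to say: skip me, but I'm not broken."""

def run_test(test_func):
    # The fix under discussion: catch TestSkipped right alongside
    # ImportError, so tests can skip without abusing ImportError.
    try:
        test_func()
    except (ImportError, TestSkipped) as err:
        return 'skipped: %s' % err
    return 'ok'

def needs_audio():
    raise TestSkipped('no audio device')

print(run_test(needs_audio))      # skipped: no audio device
print(run_test(lambda: None))     # ok
```

Raising TestSkipped also reads better in the test source than a bogus
ImportError, which is exactly the cleanup Thomas is signing up for.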

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug  4 15:26:53 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 4 Aug 2000 09:26:53 -0400 (EDT)
Subject: [Python-Dev] curses progress
In-Reply-To: <200008040747.DAA02323@snark.thyrsus.com>
References: <200008040747.DAA02323@snark.thyrsus.com>
Message-ID: <14730.50333.391218.736370@cj42289-a.reston1.va.home.com>

Eric S. Raymond writes:
 > OK, I've added docs for curses.textpad and curses.wrapper.  Did we
 > ever settle on a final location in the distribution tree for the
 > curses HOWTO?

  Andrew is creating a new project on SourceForge.
  I think this is the right thing to do.  We may want to discuss
packaging, to make it easier for users to get to the documentation
they need; this will have to wait until after 1.6.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Fri Aug  4 16:59:35 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 09:59:35 -0500
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: Your message of "Fri, 04 Aug 2000 15:58:52 +1000."
             <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com> 
References: <ECEPKNMJLHAPFFJHDOJBKEFBDDAA.MarkH@ActiveState.com> 
Message-ID: <200008041459.JAA01621@cj20424-a.reston1.va.home.com>

> [Re forcing all extensions to use PythonExtensionInit_XXX]

[GvR]
> > I sort-of like this idea -- at least at the +0 level.

[MH]
> Since this email there have been some strong objections to this.  I too
> would weigh in at -1 for this, simply for the amount of work it would cost
> me personally!

OK.  Dead it is.  -1.

> Shall this be checked in to the 1.6 and 2.0 trees?

Yes, I'll do so.

> "Fatal Python error: Interpreter not initialized" might not be too helpful,
> but it's surely better than "PyThreadState_Get: no current thread"...

Yes.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Aug  4 17:06:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:06:33 -0500
Subject: [Python-Dev] FW: submitting patches against 1.6a2
In-Reply-To: Your message of "Fri, 04 Aug 2000 03:06:21 -0400."
             <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com> 
Message-ID: <200008041506.KAA01874@cj20424-a.reston1.va.home.com>

> Anyone competent with urllib care to check out this fellow's complaint?

It arrived on June 14, so I probably ignored it -- with 1000s of other
messages received while I was on vacation.  This was before we started
using the SF PM.

But I still have his email.  Someone else please look at this!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)


Subject: [Patches] urllib.py patch
From: Paul Schreiber <paul at commerceflow.com>
To: patches at python.org
Date: Wed, 14 Jun 2000 16:52:02 -0700
Content-Type: multipart/mixed;
 boundary="------------3EE36A3787159ED881FD3EC3"

This is a multi-part message in MIME format.
--------------3EE36A3787159ED881FD3EC3
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

I confirm that, to the best of my knowledge and belief, this
contribution is free of any claims of third parties under
copyright, patent or other rights or interests ("claims").  To
the extent that I have any such claims, I hereby grant to CNRI a
nonexclusive, irrevocable, royalty-free, worldwide license to
reproduce, distribute, perform and/or display publicly, prepare
derivative versions, and otherwise use this contribution as part
of the Python software and its related documentation, or any
derivative versions thereof, at no cost to CNRI or its licensed
users, and to authorize others to do so.

I acknowledge that CNRI may, at its sole discretion, decide
whether or not to incorporate this contribution in the Python
software and its related documentation.  I further grant CNRI
permission to use my name and other identifying information
provided to CNRI by me for use in connection with the Python
software and its related documentation.

Patch description
-----------------
This addresses four issues:

(1) usernames and passwords in urls with special characters are now
decoded properly. i.e. http://foo%2C:bar at www.whatever.com/

(2) Basic Auth support has been added to HTTPS, like it was in HTTP.

(3) Version 1.92 sent the POSTed data, but did not deal with errors
(HTTP responses other than 200) properly. HTTPS now behaves the same way
HTTP does.

(4) made URL-checking behave the same way with HTTPS as it does with
HTTP (changed == to !=).
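[Editorial note: a sketch of what change (1) accomplishes. The helper below is a stand-alone copy of urllib's splituser with the patched line applied (in modern Python, unquote lives in urllib.parse rather than urllib); the example URL is the one from the description.]

```python
import re
from urllib.parse import unquote  # urllib.unquote in 1.5-era Python

_userprog = re.compile('^([^@]*)@(.*)$')

def splituser(host):
    """Split 'user:pass@host' into (credentials, host), as patched."""
    match = _userprog.match(host)
    if match:
        # The patched line: map unquote over both groups, so
        # percent-escapes like %2C in the credentials are decoded.
        return tuple(map(unquote, match.group(1, 2)))
    return None, host

print(splituser('foo%2C:bar@www.whatever.com'))
# -> ('foo,:bar', 'www.whatever.com')
```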


Paul Schreiber
--------------3EE36A3787159ED881FD3EC3
Content-Type: text/plain; charset=us-ascii;
 name="urllib-diff-2"
Content-Disposition: inline;
 filename="urllib-diff-2"
Content-Transfer-Encoding: 7bit

*** urllib.old	Tue Jun 13 18:27:02 2000
--- urllib.py	Tue Jun 13 18:33:27 2000
***************
*** 302,316 ****
          def open_https(self, url, data=None):
              """Use HTTPS protocol."""
              import httplib
              if type(url) is type(""):
                  host, selector = splithost(url)
!                 user_passwd, host = splituser(host)
              else:
                  host, selector = url
                  urltype, rest = splittype(selector)
!                 if string.lower(urltype) == 'https':
                      realhost, rest = splithost(rest)
!                     user_passwd, realhost = splituser(realhost)
                      if user_passwd:
                          selector = "%s://%s%s" % (urltype, realhost, rest)
                  #print "proxy via https:", host, selector
--- 302,325 ----
          def open_https(self, url, data=None):
              """Use HTTPS protocol."""
              import httplib
+             user_passwd = None
              if type(url) is type(""):
                  host, selector = splithost(url)
!                 if host:
!                     user_passwd, host = splituser(host)
!                     host = unquote(host)
!                 realhost = host
              else:
                  host, selector = url
                  urltype, rest = splittype(selector)
!                 url = rest
!                 user_passwd = None
!                 if string.lower(urltype) != 'https':
!                     realhost = None
!                 else:
                      realhost, rest = splithost(rest)
!                     if realhost:
!                         user_passwd, realhost = splituser(realhost)
                      if user_passwd:
                          selector = "%s://%s%s" % (urltype, realhost, rest)
                  #print "proxy via https:", host, selector
***************
*** 331,336 ****
--- 340,346 ----
              else:
                  h.putrequest('GET', selector)
              if auth: h.putheader('Authorization: Basic %s' % auth)
+             if realhost: h.putheader('Host', realhost)
              for args in self.addheaders: apply(h.putheader, args)
              h.endheaders()
              if data is not None:
***************
*** 340,347 ****
              if errcode == 200:
                  return addinfourl(fp, headers, url)
              else:
!                 return self.http_error(url, fp, errcode, errmsg, headers)
!   
      def open_gopher(self, url):
          """Use Gopher protocol."""
          import gopherlib
--- 350,360 ----
              if errcode == 200:
                  return addinfourl(fp, headers, url)
              else:
!                 if data is None:
!                     return self.http_error(url, fp, errcode, errmsg, headers)
!                 else:
!                     return self.http_error(url, fp, errcode, errmsg, headers, data)
! 
      def open_gopher(self, url):
          """Use Gopher protocol."""
          import gopherlib
***************
*** 872,878 ****
          _userprog = re.compile('^([^@]*)@(.*)$')
  
      match = _userprog.match(host)
!     if match: return match.group(1, 2)
      return None, host
  
  _passwdprog = None
--- 885,891 ----
          _userprog = re.compile('^([^@]*)@(.*)$')
  
      match = _userprog.match(host)
!     if match: return map(unquote, match.group(1, 2))
      return None, host
  
  _passwdprog = None


--------------3EE36A3787159ED881FD3EC3--


_______________________________________________
Patches mailing list
Patches at python.org
http://www.python.org/mailman/listinfo/patches



From guido at beopen.com  Fri Aug  4 17:11:03 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:11:03 -0500
Subject: [Python-Dev] Go \x yourself
In-Reply-To: Your message of "Fri, 04 Aug 2000 01:38:12 MST."
             <Pine.LNX.4.10.10008040136490.5008-100000@localhost> 
References: <Pine.LNX.4.10.10008040136490.5008-100000@localhost> 
Message-ID: <200008041511.KAA01925@cj20424-a.reston1.va.home.com>

> I'm quite certain that this should be a SyntaxError, not a ValueError:
> 
>     >>> "\x1"
>     SyntaxError: two hex digits are required after \x
>     >>> "\x\x"
>     SyntaxError: two hex digits are required after \x
> 
> Otherwise, +1.  Sounds great.

No, problems with literal interpretations traditionally raise
"runtime" exceptions rather than syntax errors.  E.g.

>>> 111111111111111111111111111111111111
OverflowError: integer literal too large
>>> u'\u123'
UnicodeError: Unicode-Escape decoding error: truncated \uXXXX
>>>

Note that UnicodeError is a subclass of ValueError.
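[Editorial note: for the record, later CPython went the other way on this particular case; as of Python 3, a truncated \x escape in a string literal surfaces as a SyntaxError at compile time, essentially Tim's preference. A small check, assuming a modern CPython 3:]

```python
# Compile a source fragment containing the literal "\x1".
# In CPython 3 this raises SyntaxError (wrapping the decoding
# error) rather than a runtime ValueError.
try:
    compile(r'"\x1"', '<test>', 'eval')
    outcome = 'compiled'
except SyntaxError:
    outcome = 'SyntaxError'
print(outcome)  # -> SyntaxError
```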

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From moshez at math.huji.ac.il  Fri Aug  4 16:11:00 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 4 Aug 2000 17:11:00 +0300 (IDT)
Subject: [Python-Dev] FW: submitting patches against 1.6a2
In-Reply-To: <200008041506.KAA01874@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008041709450.16446-100000@sundial>

On Fri, 4 Aug 2000, Guido van Rossum wrote:

> > Anyone competent with urllib care to check out this fellow's complaint?
> 
> It arrived on June 14, so I probably ignored it -- with 1000s of other
> messages received while I was on vacation.  This was before we started
> using the SF PM.
> 
> But I still have his email.  Someone else please look at this!

AFAIK, those are the two urllib patches assigned to Jeremy.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From akuchlin at mems-exchange.org  Fri Aug  4 16:13:05 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 4 Aug 2000 10:13:05 -0400
Subject: [Python-Dev] FW: submitting patches against 1.6a2
In-Reply-To: <200008041506.KAA01874@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 04, 2000 at 10:06:33AM -0500
References: <LNBBLJKPBEHFEDALKOLCIEPMGNAA.tim_one@email.msn.com> <200008041506.KAA01874@cj20424-a.reston1.va.home.com>
Message-ID: <20000804101305.A11929@kronos.cnri.reston.va.us>

On Fri, Aug 04, 2000 at 10:06:33AM -0500, Guido van Rossum wrote:
>It arrived on June 14, so I probably ignored it -- with 1000s of other
>messages received while I was on vacation.  This was before we started
>using the SF PM.

I think this is SF patch#100880 -- I entered it so it wouldn't get lost.

--amk



From guido at beopen.com  Fri Aug  4 17:26:45 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:26:45 -0500
Subject: [Python-Dev] PEP 203 Augmented Assignment
In-Reply-To: Your message of "Fri, 04 Aug 2000 15:01:35 +0200."
             <20000804150134.J266@xs4all.nl> 
References: <20000725230322.N266@xs4all.nl> <200007270559.AAA04753@cj20424-a.reston1.va.home.com>  
            <20000804150134.J266@xs4all.nl> 
Message-ID: <200008041526.KAA02071@cj20424-a.reston1.va.home.com>

[Thomas]
> The question is in what order the expression
> 
> x += y
> 
> is evaluated. 
> 
> x = y
> 
> evaluates 'y' first, then 'x', but 
> 
> x + y
> 
> evaluates 'x' first, and then 'y'. 
> 
> x = x + y
> 
> Would thus evaluate 'x', then 'y', and then 'x' (for storing the result.)
> (The problem isn't with single-variable expressions like these examples, of
> course, but with expressions with sideeffects.)

Yes.  And note that the Python reference manual specifies the
execution order (or at least tries to) -- I figured that in a
user-friendly interpreted language, predictability is more important
than some optimizer being able to speed your code up a tiny bit by
rearranging evaluation order.

> I think it makes sense to make '+=' like '+', in that it evaluates the lhs
> first. However, '+=' is as much '=' as it is '+', so it also makes sense to
> evaluate the rhs first. There are plenty of arguments both ways, and both
> sides of my brain have been beating each other with spiked clubs for the
> better part of a day now ;) On the other hand, how important is this issue ?
> Does Python say anything about the order of argument evaluation ? Does it
> need to ?

I say that in x += y, x should be evaluated before y.

> After making up your mind about the above issue, there's another problem,
> and that's the generated bytecode.
[...]
> A lot shorter, and it only needs ROT_FOUR, not ROT_FIVE. An alternative
> solution is to drop ROT_FOUR too, and instead use a DUP_TOPX argument-opcode
> that duplicates the top 'x' items:

Sure.

> However, one part of me is still yelling that '+=' should evaluate its
> arguments like '=', not '+'. Which part should I lobotomize ? :)

That part.  If you see x+=y as shorthand for x=x+y, x gets evaluated
before y anyway!  We're saving the second evaluation of x, not the
first one!
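[Editorial note: the order Guido specifies becomes observable once the target involves side effects; a minimal sketch of CPython's behaviour:]

```python
# In `a[f()] += g()`, the target (a and its index f()) is evaluated
# before the right-hand side g(), and the subscript is computed only
# once -- the "saved second evaluation" from the message above.
calls = []

def f():
    calls.append('f')   # locate the target
    return 0

def g():
    calls.append('g')   # compute the right-hand side
    return 10

a = [1]
a[f()] += g()
print(calls, a)  # -> ['f', 'g'] [11]
```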

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Aug  4 17:46:57 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 10:46:57 -0500
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: Your message of "Thu, 03 Aug 2000 17:21:04 EST."
             <14729.61520.11958.530601@beluga.mojam.com> 
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local> <14728.63466.263123.434708@anthem.concentric.net> <3989454C.5C9EF39B@lemburg.com> <200008031256.HAA06107@cj20424-a.reston1.va.home.com> <398979D0.5AF80126@lemburg.com>  
            <14729.61520.11958.530601@beluga.mojam.com> 
Message-ID: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>

[Skip]
> eh... I don't like these do two things at once kind of methods.  I see
> nothing wrong with
> 
>     >>> dict = {}
>     >>> dict['hello'] = dict.get('hello', [])
>     >>> dict['hello'].append('world')
>     >>> print dict
>     {'hello': ['world']}
> 
> or
> 
>     >>> d = dict['hello'] = dict.get('hello', [])
>     >>> d.insert(0, 'cruel')
>     >>> print dict
>     {'hello': ['cruel', 'world']}
> 
> for the obsessively efficiency-minded folks.

Good!  Two lines instead of three, and only two dict lookups in the
latter one.

> Also, we're talking about a method that would generally only be useful when
> dictionaries have values which were mutable objects.  Irregardless of how
> useful instances and lists are, I still find that my predominant day-to-day
> use of dictionaries is with strings as keys and values.  Perhaps that's just
> the nature of my work.

Must be.  I have used the above two idioms many times -- a dict of
lists is pretty common.  I believe that the fact that you don't need
it is the reason why you don't like it.

I believe that as long as we agree that

  dict['hello'] += 1

is clearer (less strain on the reader's brain) than

  dict['hello'] = dict['hello'] + 1

we might as well look for a clearer way to spell the above idiom.

My current proposal (violating my own embargo against posting proposed
names to the list :-) would be

  dict.default('hello', []).append('hello')
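[Editorial note: historically, this proposal did land shortly afterwards under the name dict.setdefault() in Python 2.0, which collapses each of Skip's idioms to a single line:]

```python
d = {}
d.setdefault('hello', []).append('world')    # get-or-insert, then mutate
d.setdefault('hello', []).insert(0, 'cruel')
print(d)  # -> {'hello': ['cruel', 'world']}
```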

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From paul at prescod.net  Fri Aug  4 17:52:11 2000
From: paul at prescod.net (Paul Prescod)
Date: Fri, 04 Aug 2000 11:52:11 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <ECEPKNMJLHAPFFJHDOJBIEECDCAA.mhammond@skippinet.com.au>              <3986794E.ADBB938C@prescod.net>  <200008011820.NAA30284@cj20424-a.reston1.va.home.com> <004d01bffc50$522fa2a0$f2a6b5d4@hagrid>
Message-ID: <398AE6AB.9D8F943B@prescod.net>

Fredrik Lundh wrote:
> 
> ...
> 
> how about letting _winreg export all functions with their
> win32 names, and adding a winreg.py which looks some-
> thing like this:
> 
>     from _winreg import *
> 
>     class Key:
>         ....
> 
>     HKEY_CLASSES_ROOT = Key(...)
>     ...

To me, that would defeat the purpose. Have you looked at the "*"
exported by _winreg? The whole point is to impose some organization on
something that is totally disorganized (because that's how the C module
is).

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From skip at mojam.com  Fri Aug  4 20:07:28 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 4 Aug 2000 13:07:28 -0500 (CDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>
References: <slrn8oh81f.1m8e.Gareth.McCaughan@g.local>
	<14728.63466.263123.434708@anthem.concentric.net>
	<3989454C.5C9EF39B@lemburg.com>
	<200008031256.HAA06107@cj20424-a.reston1.va.home.com>
	<398979D0.5AF80126@lemburg.com>
	<14729.61520.11958.530601@beluga.mojam.com>
	<200008041546.KAA02168@cj20424-a.reston1.va.home.com>
Message-ID: <14731.1632.44037.499807@beluga.mojam.com>

    >> Also, we're talking about a method that would generally only be
    >> useful when dictionaries have values which were mutable objects.
    >> Irregardless of how useful instances and lists are, I still find that
    >> my predominant day-to-day use of dictionaries is with strings as keys
    >> and values.  Perhaps that's just the nature of my work.

    Guido> Must be.  I have used the above two idioms many times -- a dict
    Guido> of lists is pretty common.  I believe that the fact that you
    Guido> don't need it is the reason why you don't like it.

I do use lists in dicts as well, it's just that it seems to me that using
strings as values (especially because I use bsddb a lot and often want to
map dictionaries to files) dominates.  The two examples I posted are what
I've used for a long time.  I guess I just don't find them to be big
limitations.

Skip



From barry at scottb.demon.co.uk  Sat Aug  5 01:19:52 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Sat, 5 Aug 2000 00:19:52 +0100
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <01d701bffcd7$46a74a00$f2a6b5d4@hagrid>
Message-ID: <000d01bffe6a$7e4bab60$060210ac@private>

> > Yes indeed once the story of 1.6 and 2.0 is out I expect folks
> > will skip 1.6.   For example, if your win32 stuff is not ported then
> > Python 1.6 is not usable on Windows/NT.
> 
> "not usable"?
> 
> guess you haven't done much cross-platform development lately...

	True. On Unix I have an ISDN status monitor, it depends on
	FreeBSD interfaces and PIL. On Windows I have an SCM
	solution that depends on COM to drive SourceSafe.

	Without Mark's COM support I cannot run any of my code on
	Windows.

> > Change the init function name to a new name PythonExtensionInit_ say.
> > Pass in the API version for the extension writer to check. If the
> > version is bad for this extension returns without calling any python
> 
> huh?  are you seriously proposing to break every single C extension
> ever written -- on each and every platform -- just to trap an error
> message caused by extensions linked against 1.5.2 on your favourite
> platform?

	What makes you think that a crash will not happen under Unix
	when you change the API? You just don't get the Windows crash.

	As this thread has pointed out you have no intention of checking
	for binary compatibility on the API as you move up versions.
 
> > Small code change in python core. But need to tell extension writers
> > what the new interface is and update all extensions within the python
> > CVS tree.
>
> you mean "update the source code for all extensions ever written."

	Yes, I'm aware of the impact.

> -1
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 



From gward at python.net  Sat Aug  5 02:53:09 2000
From: gward at python.net (Greg Ward)
Date: Fri, 4 Aug 2000 20:53:09 -0400
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 04, 2000 at 12:27:32PM +1000
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com>
Message-ID: <20000804205309.A1013@beelzebub>

On 04 August 2000, Mark Hammond said:
> I would prefer python20_bcpp.lib, but that is not an issue.

Good suggestion: the contents of the library are more important than the 
format.  Rene, can you make this change and include it in your next
patch?  Or did you have some hidden, subtle reason for "bcpp_python20" as 
opposed to "python20_bcpp"?

> I am a little confused by the intention, tho.  Wouldnt it make sense to
> have Borland builds of the core create a Python20.lib, then we could keep
> the pragma in too?
> 
> If people want to use Borland for extensions, can't we ask them to use that
> same compiler to build the core too?  That would seem to make lots of the
> problems go away?

But that requires people to build all of Python from source, which I'm
guessing is a bit more bothersome than building an extension or two from 
source.  Especially since Python is already distributed as a very
easy-to-use binary installer for Windows, but most extensions are not.

Rest assured that we probably won't be making things *completely*
painless for those who do not toe Chairman Bill's party line and insist
on using "non-standard" Windows compilers.  They'll probably have to get
python20_bcpp.lib (or python20_gcc.lib, or python20_lcc.lib) on their
own -- whether downloaded or generated, I don't know.  But the
alternative is to include 3 or 4 python20_xxx.lib files in the standard
Windows distribution, which I think is silly.

> But assuming there are good reasons, I am happy.  It wont bother me for
> some time yet ;-) <just deleted a rant about the fact that anyone on
> Windows who values their time in more than cents-per-hour would use MSVC,
> but deleted it ;->

Then I won't even write my "it's not just about money, it's not even
about features, it's about the freedom to use the software you want to
use no matter what it says in Chairman Bill's book of wisdom" rant.

Windows: the Cultural Revolution of the 90s.  ;-)

        Greg
-- 
Greg Ward - geek-at-large                               gward at python.net
http://starship.python.net/~gward/
What happens if you touch these two wires tog--



From guido at beopen.com  Sat Aug  5 04:27:59 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 04 Aug 2000 21:27:59 -0500
Subject: [Python-Dev] Python 1.6b1 is released!
Message-ID: <200008050227.VAA11161@cj20424-a.reston1.va.home.com>

Python 1.6b1, with the new CNRI open source license, is released today
from the python.org website.  Read all about it:

    http://www.python.org/1.6/

Here's a little background on the new license (also posted on
www.pythonlabs.com):

CNRI has funded Python development for five years and held copyright,
but never placed a CNRI-specific license on the software.  In order to
clarify the licensing, BeOpen.com has been working with CNRI to
produce a new CNRI license.  The result of these discussions (which
included Eric Raymond, Bruce Perens, Richard Stallman and Python
Consortium members) has produced the CNRI Open Source License, under
which Python 1.6b1 has been released.

Bob Weiner, CTO of BeOpen.com, on the result of the licensing
discussions: "Bob Kahn [CNRI's President] worked with us to understand
the particular needs of the Open Source community and Python users.
The result is a very open license."

The new CNRI license was approved by the Python Consortium members, at
a meeting of the Python Consortium on Friday, July 21, 2000 in
Monterey, California.

Eric Raymond, President of the Open Source Initiative (OSI), reports
that OSI's Board of Directors voted to certify the new CNRI license
[modulo minor editing] as fully Open Source compliant.

Richard Stallman, founder of the Free Software Foundation, is in
discussion with CNRI about the new license's compatibility with the
GPL.  We are hopeful that the remaining issues will be resolved in
favor of GPL compatibility before the release of Python 1.6 final.

We would like to thank all who graciously volunteered their time to
help make these results possible: Bob Kahn for traveling out west to
discuss these issues in person; Eric Raymond and Bruce Perens for
their useful contributions to the discussions; Bob Weiner for taking
care of the bulk of the negotiations; Richard Stallman for GNU; and
the Python Consortium representatives for making the consortium
meeting a success!

(And I would personally like to thank Tim Peters for keeping the
newsgroup informed and for significant editing of the text above.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From amk1 at erols.com  Sat Aug  5 06:15:22 2000
From: amk1 at erols.com (A.M. Kuchling)
Date: Sat, 5 Aug 2000 00:15:22 -0400
Subject: [Python-Dev] python-dev summary posted
Message-ID: <200008050415.AAA00811@207-172-146-87.s87.tnt3.ann.va.dialup.rcn.com>

I've posted the python-dev summary for July 16-31 to
comp.lang.python/python-list; interested people can go check it out.

--amk



From just at letterror.com  Sat Aug  5 10:03:33 2000
From: just at letterror.com (Just van Rossum)
Date: Sat, 05 Aug 2000 09:03:33 +0100
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <fc8mosgajj5db74oijjb8e1vbrrvgf0mi5@4ax.com> <bld7joah8z.fsf@bitdiddle.concentric.net> <9b13RLA800i5EwLY@jessikat.fsnet.co.uk> <8mg3au$rtb$1@nnrp1.deja.com>
Message-ID: <398BCA4F.17E23309@letterror.com>

[ CC-d to python-dev from c.l.py ]

Jeremy Hylton wrote:
> It is a conservative response.  JPython is an implementation of Python,
> and compatibility between Python and JPython is important.  It's not
> required for every language feature, of course; you can't load a Java
> class file in C Python.

Jeremy, have you ever *looked* at stackless? Even though it requires
extensive patches in the eval loop, all additional semantics are nicely
hidden in an extension module. The Java argument is a *very* poor one
because of this. No, you can't load a Java class in CPython, and yes,
"import continuation" fails under JPython. So what?

> I'm not sure what you mean by distinguishing between the semantics of
> continuations and the implementation of Stackless Python.  They are
> both issues!  In the second half of my earlier message, I observed that
> we would never add continuations without a PEP detailing their exact
> semantics.  I do not believe such a specification currently exists for
> stackless Python.

That's completely unfair. Stackless has been around *much* longer than
those silly PEPs. It seems stackless isn't in the same league as, say,
"adding @ to the print statement for something that is almost as
conveniently done with a function". I mean, jeez.

> The PEP would also need to document the C interface and how it affects
> people writing extensions and doing embedded work.  Python is a glue
> language and the effects on the glue interface are also important.

The stackless API is 100% b/w compatible. There are (or could/should be)
additional calls for extension writers and embedders that would like
to take advantage of stackless features, but full compatibility is
*there*. To illustrate this: for windows as well as MacOS, there are
DLLs for stackless that you just put in place of the original
Python core DLLs, and *everything* just works. 

Christian has done an amazing piece of work, and he's gotten much
praise from the community. I mean, if you *are* looking for a killer
feature to distinguish 1.6 from 2.0, I'd know where to look...

Just



From mal at lemburg.com  Sat Aug  5 11:35:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 05 Aug 2000 11:35:06 +0200
Subject: [Python-Dev] Python 1.6b1 out ?!
Message-ID: <398BDFCA.4D5A262D@lemburg.com>

Strange: either I missed it or Guido chose to release 1.6b1 
in silence, but I haven't seen any official announcement of the
release available from http://www.python.org/1.6/.

BTW, nice holiday, Guido ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Sun Aug  6 01:34:43 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 5 Aug 2000 19:34:43 -0400
Subject: [Python-Dev] Python 1.6b1 out ?!
In-Reply-To: <398BDFCA.4D5A262D@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCECEGOAA.tim_one@email.msn.com>

[M.-A. Lemburg]
> Strange: either I missed it or Guido chose to release 1.6b1
> in silence, but I haven't seen any official announcement of the
> release available from http://www.python.org/1.6/.
>
> BTW, nice holiday, Guido ;-)

There's an announcement at the top of http://www.python.org/, though, and an
announcement about the new license at http://www.pythonlabs.com/.  Guido
also posted to comp.lang.python.  You probably haven't seen the latter if
you use the mailing list gateway, because many mailing lists at python.org
coincidentally got hosed at the same time due to a full disk.  Else your
news server simply hasn't gotten it yet (I saw it come across on
netnews.msn.com, but then Microsoft customers get everything first <wink>).





From thomas at xs4all.net  Sat Aug  5 17:18:30 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 5 Aug 2000 17:18:30 +0200
Subject: [Python-Dev] UNPACK_LIST & UNPACK_TUPLE
Message-ID: <20000805171829.N266@xs4all.nl>

I'm a tad confused about the 'UNPACK_LIST' and 'UNPACK_TUPLE' opcodes. There
doesn't seem to be a difference between the two, yet the way they are
compiled is slightly different (but not much.) I can list all the
differences I can see, but I just don't understand them, and because of that
I'm not sure how to handle them in augmented assignment. UNPACK_LIST just
seems so redundant :)

Wouldn't it make sense to remove the difference between the two, or better
yet, remove UNPACK_LIST (and possibly rename UNPACK_TUPLE to UNPACK_SEQ ?)
We already lost bytecode compatibility anyway!

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Sun Aug  6 01:46:00 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 5 Aug 2000 19:46:00 -0400
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <398BCA4F.17E23309@letterror.com>; from just@letterror.com on Sat, Aug 05, 2000 at 09:03:33AM +0100
References: <fc8mosgajj5db74oijjb8e1vbrrvgf0mi5@4ax.com> <bld7joah8z.fsf@bitdiddle.concentric.net> <9b13RLA800i5EwLY@jessikat.fsnet.co.uk> <8mg3au$rtb$1@nnrp1.deja.com> <398BCA4F.17E23309@letterror.com>
Message-ID: <20000805194600.A7242@thyrsus.com>

Just van Rossum <just at letterror.com>:
> Christian has done an amazing piece of work, and he's gotten much
> praise from the community. I mean, if you *are* looking for a killer
> feature to distinguish 1.6 from 2.0, I'd know where to look...

I must say I agree.  Something pretty similar to Stackless Python is
going to have to happen anyway for the language to make its next major
advance in capability -- generators, co-routining, and continuations.

I also agree that this is a more important debate, and a harder set of
decisions, than the PEPs.  Which means we should start paying attention
to it *now*.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

I don't like the idea that the police department seems bent on keeping
a pool of unarmed victims available for the predations of the criminal
class.
         -- David Mohler, 1989, on being denied a carry permit in NYC



From bwarsaw at beopen.com  Sun Aug  6 01:50:04 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sat, 5 Aug 2000 19:50:04 -0400 (EDT)
Subject: [Python-Dev] Python 1.6b1 out ?!
References: <398BDFCA.4D5A262D@lemburg.com>
	<LNBBLJKPBEHFEDALKOLCCECEGOAA.tim_one@email.msn.com>
Message-ID: <14732.43052.91330.426211@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> There's an announcement at the top of http://www.python.org/,
    TP> though, and an announcement about the new license at
    TP> http://www.pythonlabs.com/.  Guido also posted to
    TP> comp.lang.python.  You probably haven't seen the latter if you
    TP> use the mailing list gateway, because many mailing lists at
    TP> python.org coincidentally got hosed at the same time due to a
    TP> full disk.  Else your news server simply hasn't gotten it yet
    TP> (I saw it come across on netnews.msn.com, but then Microsoft
    TP> customers get everything first <wink>).

And you should soon see the announcement if you haven't already.  All
the mailing lists on py.org should be back on line now.  It'll take a
while to clear out the queue though.

-Barry



From bwarsaw at beopen.com  Sun Aug  6 01:52:05 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sat, 5 Aug 2000 19:52:05 -0400 (EDT)
Subject: [Python-Dev] UNPACK_LIST & UNPACK_TUPLE
References: <20000805171829.N266@xs4all.nl>
Message-ID: <14732.43173.634118.381282@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> I'm a tad confused about the 'UNPACK_LIST' and 'UNPACK_TUPLE'
    TW> opcodes. There doesn't seem to be a difference between the
    TW> two, yet the way they are compiled is slightly different (but
    TW> not much.) I can list all the differences I can see, but I
    TW> just don't understand them, and because of that I'm not sure
    TW> how to handle them in augmented assignment. UNPACK_LIST just
    TW> seems so redundant :)

    TW> Wouldn't it make sense to remove the difference between the
    TW> two, or better yet, remove UNPACK_LIST (and possibly rename
    TW> UNPACK_TUPLE to UNPACK_SEQ ?)  We already lost bytecode
    TW> compatibility anyway!

This is a historical artifact.  I don't remember what version it was,
but at one point there was a difference between

    a, b, c = gimme_a_tuple()

and

    [a, b, c] = gimme_a_list()

That difference was removed, and support was added for any sequence
unpacking.  If changing the bytecode is okay, then there doesn't seem
to be any reason to retain the differences.
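[Editorial note: this is how it eventually went; the two opcodes were merged, and in modern CPython both spellings compile to a single UNPACK_SEQUENCE opcode, more or less the name Thomas suggested. A quick check with dis:]

```python
import dis

def tuple_style(seq):
    a, b, c = seq
    return a, b, c

def list_style(seq):
    [a, b, c] = seq
    return a, b, c

ops_tuple = {ins.opname for ins in dis.get_instructions(tuple_style)}
ops_list = {ins.opname for ins in dis.get_instructions(list_style)}
# Both forms use the same unpack opcode; UNPACK_LIST is gone.
print('UNPACK_SEQUENCE' in ops_tuple, 'UNPACK_SEQUENCE' in ops_list)
```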

-Barry



From jack at oratrix.nl  Sat Aug  5 23:14:08 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Sat, 05 Aug 2000 23:14:08 +0200
Subject: [Python-Dev] New SRE core dump (was: SRE 0.9.8 benchmarks) 
In-Reply-To: Message by "Fredrik Lundh" <effbot@telia.com> ,
	     Thu, 3 Aug 2000 19:19:03 +0200 , <007401bffd6e$ed9bbde0$f2a6b5d4@hagrid> 
Message-ID: <20000805211413.E1224E2670@oratrix.oratrix.nl>

Fredrik,
could you add a PyOS_CheckStack() call to the recursive part of sre
(within #ifdef USE_STACKCHECK, of course)?
I'm getting really really nasty crashes on the Mac if I run the
regression tests...
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 



From jack at oratrix.nl  Sat Aug  5 23:41:15 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Sat, 05 Aug 2000 23:41:15 +0200
Subject: [Python-Dev] strftime()
Message-ID: <20000805214120.A55EEE2670@oratrix.oratrix.nl>

The test_strftime regression test has been failing on the Mac for
ages, and I finally got round to investigating the problem: the
MetroWerks library returns the strings "am" and "pm" for %p but the
regression test expects "AM" and "PM". According to the comments in
the source of the library (long live vendors who provide it! Yeah!)
this is C9X compatibility.

I can of course move the %p to the nonstandard expectations, but maybe 
someone has a better idea?
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From bwarsaw at beopen.com  Sun Aug  6 02:12:58 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sat, 5 Aug 2000 20:12:58 -0400 (EDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <fc8mosgajj5db74oijjb8e1vbrrvgf0mi5@4ax.com>
	<bld7joah8z.fsf@bitdiddle.concentric.net>
	<9b13RLA800i5EwLY@jessikat.fsnet.co.uk>
	<8mg3au$rtb$1@nnrp1.deja.com>
	<398BCA4F.17E23309@letterror.com>
	<20000805194600.A7242@thyrsus.com>
Message-ID: <14732.44426.201651.690336@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr at thyrsus.com> writes:

    ESR> I must say I agree.  Something pretty similar to Stackless
    ESR> Python is going to have to happen anyway for the language to
    ESR> make its next major advance in capability -- generators,
    ESR> co-routining, and continuations.

Stackless definitely appeals to me for its coolness factor, though I
don't know how much I'd use the new capabilities it allows.
The ability to embed Python on hardware where it would otherwise not
be possible is also an interesting thing to explore.

    ESR> I also agree that this is a more important debate, and a
    ESR> harder set of decisions, than the PEPs.  Which means we
    ESR> should start paying attention to it *now*.

Maybe a PEP isn't the right venue, but the semantics and externally
visible effects of Stackless need to be documented.  What if JPython
or Python .NET wanted to adopt those same semantics, either by doing
their implementation's equivalent of Stackless or by some other means?
We can't even think about doing that without a clear and complete
specification.

Personally, I don't see Stackless making it into 2.0 and possibly not
2.x.  But I agree it is something to seriously consider for Py3K.

-Barry



From tim_one at email.msn.com  Sun Aug  6 07:07:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 01:07:27 -0400
Subject: [Python-Dev] strftime()
In-Reply-To: <20000805214120.A55EEE2670@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEDOGOAA.tim_one@email.msn.com>

[Jack Jansen]
> The test_strftime regression test has been failing on the Mac for
> ages, and I finally got round to investigating the problem: the
> MetroWerks library returns the strings "am" and "pm" for %p but the
> regression test expects "AM" and "PM". According to the comments in
> the source of the library (long live vendors who provide it! Yeah!)
> this is C9X compatibility.

My copy of a draft C99 std agrees (7.23.3.5) with MetroWerks on this point
(i.e., that %p in the "C" locale becomes "am" or "pm").

> I can of course move the %p to the nonstandard expectations, but maybe
> someone has a better idea?

Not really.  If Python thinks this function is valuable, it "should" offer a
platform-independent implementation, but as nobody has time for that ...





From MarkH at ActiveState.com  Sun Aug  6 07:08:46 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Sun, 6 Aug 2000 15:08:46 +1000
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
In-Reply-To: <000d01bffe6a$7e4bab60$060210ac@private>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEHPDDAA.MarkH@ActiveState.com>

[/F]
> > huh?  are you seriously proposing to break every single C extension
> > ever written -- on each and every platform -- just to trap an error
> > message caused by extensions linked against 1.5.2 on your favourite
> > platform?

[Barry]
> 	What makes you think that a crash will not happen under Unix
> 	when you change the API? You just don't get the Windows crash.
>
> 	As this thread has pointed out you have no intention of checking
> 	for binary compatibility on the API as you move up versions.

I intimated the following but did not spell it out, so I will clarify
it here.

I was -1 on Barry's solution getting into 1.6, given the time frame.  I
hinted that the solution Guido recently checked in "if
(!Py_IsInitialized()) ..." would not be too great an impact even if Barry's
solution, or one like it, was eventually adopted.

So I think that the adoption of our half-solution (ie, we are really only
forcing a better error message - not even getting a traceback to indicate
_which_ module fails) need not preclude a better solution when we have more
time to implement it...

Mark.




From moshez at math.huji.ac.il  Sun Aug  6 08:23:48 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 6 Aug 2000 09:23:48 +0300 (IDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <20000805194600.A7242@thyrsus.com>
Message-ID: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>

On Sat, 5 Aug 2000, Eric S. Raymond wrote:

> I must say I agree.  Something pretty similar to Stackless Python is
> going to have to happen anyway for the language to make its next major
> advance in capability -- generators, co-routining, and continuations.
> 
> I also agree that this is a more important debate, and a harder set of
> decisions, than the PEPs.  Which means we should start paying attention
> to it *now*.

I tend to disagree. For a while now I've been keeping an eye on the guile
interpreter's development (a very cool project, but unfortunately limping
along. It will probably be the .NET of free software, though). In guile,
they were able to implement continuations *without* what we call
stacklessness. Sure, it might look inefficient, but for most applications
(like co-routines) it's actually quite all right. What all that goes to
show is that we should treat stackless exactly like what it is -- an
implementation detail. Now, that's not putting down Christian's work -- on
the contrary, I think the Python implementation is very important. But
that alone should indicate there's no need for a PEP. I, for one, am for
it, because I happen to think it's a much better implementation. If it
also has the effect of making continuationmodule.c easier to write, well,
that's not an issue in this discussion as far as I'm concerned.

brain-dumping-ly y'rs, Z.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From mal at lemburg.com  Sun Aug  6 10:55:55 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sun, 06 Aug 2000 10:55:55 +0200
Subject: [Python-Dev] Python 1.6b1 out ?!
References: <LNBBLJKPBEHFEDALKOLCCECEGOAA.tim_one@email.msn.com>
Message-ID: <398D281B.E7F118C0@lemburg.com>

Tim Peters wrote:
> 
> [M.-A. Lemburg]
> > Strange: either I missed it or Guido chose to release 1.6b1
> > in silence, but I haven't seen any official announcement of the
> > release available from http://www.python.org/1.6/.
> >
> > BTW, nice holiday, Guido ;-)
> 
> There's an announcement at the top of http://www.python.org/, though, and an
> announcement about the new license at http://www.pythonlabs.com/.  Guido
> also posted to comp.lang.python.  You probably haven't seen the latter if
> you use the mailing list gateway, because many mailing lists at python.org
> coincidentally got hosed at the same time due to a full disk.  Else your
> news server simply hasn't gotten it yet (I saw it come across on
> netnews.msn.com, but then Microsoft customers get everything first <wink>).

I saw the announcement on www.python.org, thanks. (I already
started to miss the usual 100+ Python messages I get into my mailbox
every day ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Sun Aug  6 14:20:56 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sun, 06 Aug 2000 14:20:56 +0200
Subject: [Python-Dev] Pickling using XML as output format
Message-ID: <398D5827.EE8938DD@lemburg.com>

Before starting to reinvent the wheel:

I need a pickle.py compatible module which essentially works
just like pickle.py, but uses XML as output format. I've already
looked at xml_pickle.py (see Parnassus), but this doesn't seem
to handle object references at all. Also, it depends on 
xml.dom which I'd rather avoid.

My idea was to rewrite the format used by pickle in an
XML syntax and then hard-code the DTD into a subclass
of the parser in xmllib.py.

Now, I'm very new to XML, so I may be missing something here...
would this be doable in a fairly sensible way (I'm thinking
of closely sticking to the pickle stream format) ?
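For what it's worth, a toy sketch of the reference-tracking part (the element names and the memo scheme are invented here, and this is nowhere near pickle-compatible): objects already written get a numeric id, and later occurrences are emitted as empty <ref/> elements, which is exactly the part xml_pickle.py appears to be missing.

```python
from xml.sax.saxutils import escape

def xml_dump(obj, memo=None):
    """Serialize a few basic types to a pickle-like XML stream.

    Objects already seen are emitted as <ref/> elements, so shared
    (and cyclic) references survive.  Purely illustrative.
    """
    if memo is None:
        memo = {}
    oid = id(obj)
    if oid in memo:
        return '<ref to="%d"/>' % memo[oid]
    if isinstance(obj, int):
        return '<int>%d</int>' % obj
    if isinstance(obj, str):
        return '<string>%s</string>' % escape(obj)
    if isinstance(obj, list):
        memo[oid] = len(memo)          # register before recursing (cycles)
        body = "".join(xml_dump(item, memo) for item in obj)
        return '<list id="%d">%s</list>' % (memo[oid], body)
    raise TypeError("unsupported type: %r" % type(obj))

shared = ["spam"]
doc = xml_dump([shared, shared])
# The second occurrence of `shared` becomes a <ref/>, not a copy.
```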

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From moshez at math.huji.ac.il  Sun Aug  6 14:46:09 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 6 Aug 2000 15:46:09 +0300 (IDT)
Subject: [Python-Dev] Pickling using XML as output format
In-Reply-To: <398D5827.EE8938DD@lemburg.com>
Message-ID: <Pine.GSO.4.10.10008061544180.20069-100000@sundial>

On Sun, 6 Aug 2000, M.-A. Lemburg wrote:

> Before starting to reinvent the wheel:

Ummmm......I'd wait for some DC guy to chime in: I think Zope had
something like that. You might want to ask around on the Zope lists
or search zope.org.

I'm not sure what it has and what it doesn't have, though.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez





From moshez at math.huji.ac.il  Sun Aug  6 15:22:09 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 6 Aug 2000 16:22:09 +0300 (IDT)
Subject: [Python-Dev] Warnings on gcc -Wall
Message-ID: <Pine.GSO.4.10.10008061612490.20069-100000@sundial>

As those of you with a firm eye on python-checkins have noticed, I've been
trying to clear the source files (as many of them as I could get to
compile on my setup) of warnings. This is only with gcc -Wall; a future
project of mine is to enable many more warnings and try to clean those up too.

There are currently two places where warnings still remain:

 -- readline.c -- readline/history.h is included only on BeOS, and
otherwise prototypes are declared by hand. Does anyone remember why? 

-- ceval.c, in ceval() gcc -Wall (wrongly) complains about opcode and
oparg which might be used before initialized. I've had a look at that
code, and I'm certain gcc's flow analysis is simply not good enough.
However, I would like to silence the warning, so I can get used to
building with -Wall -Werror and make sure to mind any warnings. Does
anyone see any problem with putting opcode=0 and oparg=0 near the top?

Any comments welcome, of course.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Sun Aug  6 16:00:26 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 6 Aug 2000 16:00:26 +0200
Subject: [Python-Dev] Warnings on gcc -Wall
In-Reply-To: <Pine.GSO.4.10.10008061612490.20069-100000@sundial>; from moshez@math.huji.ac.il on Sun, Aug 06, 2000 at 04:22:09PM +0300
References: <Pine.GSO.4.10.10008061612490.20069-100000@sundial>
Message-ID: <20000806160025.P266@xs4all.nl>

On Sun, Aug 06, 2000 at 04:22:09PM +0300, Moshe Zadka wrote:

>  -- readline.c -- readline/history.h is included only on BeOS, and
> otherwise prototypes are declared by hand. Does anyone remember why? 

Possibly because old versions of readline don't have history.h ?

> -- ceval.c, in ceval() gcc -Wall (wrongly) complains about opcode and
> oparg which might be used before initialized. I've had a look at that
> code, and I'm certain gcc's flow analysis is simply not good enough.
> However, I would like to silence the warning, so I can get used to
> building with -Wall -Werror and make sure to mind any warnings. Does
> anyone see any problem with putting opcode=0 and oparg=0 near the top?

Actually, I don't think this is true. 'opcode' and 'oparg' get filled inside
the permanent for-loop, but after the check on pending signals and
exceptions. I think it's theoretically possible to have 'things_to_do' set
the first time through the loop and end up in an exception, thereby
causing the jump to on_error and entering the branch on WHY_EXCEPTION, which
uses oparg and opcode. I'm not sure if initializing opcode/oparg is the
right thing to do, though, but I'm not sure what is, either :-)

As for the checkins, I haven't seen some of the pending checkin-mails pass
by (I did some cleaning up of configure.in last night, for instance, after
the re-indent and grammar change in compile.c that *did* come through.)
Barry (or someone else ;) are those still waiting in the queue, or should we
consider them 'lost' ? 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Sun Aug  6 16:13:10 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 6 Aug 2000 17:13:10 +0300 (IDT)
Subject: [Python-Dev] Warnings on gcc -Wall
In-Reply-To: <20000806160025.P266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008061703040.20069-100000@sundial>

On Sun, 6 Aug 2000, Thomas Wouters wrote:

> On Sun, Aug 06, 2000 at 04:22:09PM +0300, Moshe Zadka wrote:
> 
> >  -- readline.c -- readline/history.h is included only on BeOS, and
> > otherwise prototypes are declared by hand. Does anyone remember why? 
> 
> Possibly because old versions of readline don't have history.h ?

And did it have the history functions? If so, maybe we can include
<readline/history.h> unconditionally and switch on the readline version.
If not, I'd just declare support for earlier versions of readline
nonexistent and be done with it.

> 'opcode' and 'oparg' get filled inside
> the permanent for-loop, but after the check on pending signals and
> exceptions. I think it's theoretically possible to have 'things_to_do' on
> the first time through the loop, which end up in an exception, thereby
> causing the jump to on_error, entering the branch on WHY_EXCEPTION, which
> uses oparg and opcode. I'm not sure if initializing opcode/oparg is the
> right thing to do, though, but I'm not sure what is, either :-)

Probably initializing them to some dummy value before the "goto on_error",
then checking for this dummy value in the relevant place. You're right,
of course; I hadn't noticed the goto.

> As for the checkins, I haven't seen some of the pending checkin-mails pass
> by (I did some cleaning up of configure.in last night, for instance, after
> the re-indent and grammar change in compile.c that *did* come through.)
> Barry (or someone else ;) are those still waiting in the queue, or should we
> consider them 'lost' ? 

I got a reject on two e-mails, but I didn't think of saving
them....oooops..well, no matter, most of them were trivial stuff.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tismer at appliedbiometrics.com  Sun Aug  6 16:47:26 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Sun, 06 Aug 2000 16:47:26 +0200
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>
Message-ID: <398D7A7E.2AB1BDF3@appliedbiometrics.com>


Moshe Zadka wrote:
> 
> On Sat, 5 Aug 2000, Eric S. Raymond wrote:
> 
> > I must say I agree.  Something pretty similar to Stackless Python is
> > going to have to happen anyway for the language to make its next major
> > advance in capability -- generators, co-routining, and continuations.
> >
> > I also agree that this is a more important debate, and a harder set of
> > decisions, than the PEPs.  Which means we should start paying attention
> > to it *now*.
> 
> I tend to disagree. For a while now I've been keeping an eye on the guile
> interpreter's development (a very cool project, but unfortunately limping
> along. It will probably be the .NET of free software, though). In guile,
> they were able to implement continuations *without* what we call
> stacklessness. Sure, it might look inefficient, but for most applications
> (like co-routines) it's actually quite all right.

Despite the fact that I consider the Guile implementation a pile
of junk code that I would never dig into the way I did with Python*),
you are probably right. Stackless goes a bit too far, in the sense
that it implies abilities for other implementations which are
hard to achieve.

There are in fact other ways to implement coroutines and uthreads.
Stackless happens to achieve all of that and a lot more, and to
be very efficient. Therefore it would be a waste to go back to
a restricted implementation when this one already exists. If Stackless
hadn't gone so far, it would probably have been successfully
integrated already. I wanted it all, and luckily I got it all.

On the other hand, there is no need to force every Python
implementation to do the full continuation support. In CPython,
continuationmodule.c can be used for such purposes, and it can
be used as a basis for coroutine and generator implementations.
Using Guile's way to implement these would be a possible path
for JPython.
The point is to use only parts of the possibilities and not to
enforce everything in every environment. There is just no point
in shrinking the current implementation down; not even a subset
would be helpful in JPython.

> What all that goes to
> show is that we should treat stackless exactly like what it is -- an
> implementation detail. Now, that's not putting down Christian's work -- on
> the contrary, I think the Python implementation is very important. But
> that alone should indicate there's no need for a PEP. I, for one, am for
> it, because I happen to think it's a much better implementation. If it
> also has the effect of making continuationmodule.c easier to write, well,
> that's not an issue in this discussion as far as I'm concerned.

A possible proposal could be this:

- incorporate Stackless into CPython, but don't demand it
  for every implementation
- implement coroutines and others with Stackless for CPython
  try alternative implementations for JPython if there are users
- do *not* make continuations a standard language feature until
  there is a portable way to get it everywhere

Still, I can't see the point with Java. There are enough
C extensions which are not available for JPython, but it is
still allowed to use them. Same with the continuation module:
why does it need to exist for Java in order to be allowed for
CPython?
This is like not implementing new browser features until
they can be implemented on my WAP phone. Nonsense.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com

*) sorry, feel free to disagree, but this was my impression when
   I read the whole code half a year ago.
   This is exactly what I do not want :-)



From moshez at math.huji.ac.il  Sun Aug  6 17:11:21 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 6 Aug 2000 18:11:21 +0300 (IDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <398D7A7E.2AB1BDF3@appliedbiometrics.com>
Message-ID: <Pine.GSO.4.10.10008061807230.20069-100000@sundial>

On Sun, 6 Aug 2000, Christian Tismer wrote:

> On the other hand, there is no need to enforce every Python
> implementation to do the full continuation support. In CPython,
> continuationmodule.c can be used for such purposes, and it can
> be used as a basement for coroutine and generator implementations.
> Using Guile's way to implement these would be a possible path
> for JPython.

Actually, you can't use Guile's way for JPython -- the guile folks
are doing some low-level semi-portable stuff in C...

> - incorporate Stackless into CPython, but don't demand it
>   for every implementation

Again, I want to say I don't think there's a meaning for "for every
implementation" -- Stackless is not part of the language definition,
it's part of the implementation. The whole Java/.NET issue is a red herring.

> - implement coroutines and others with Stackless for CPython

I think that should be done in a third-party module. But hey, if Guido
wants to maintain another module...

> - do *not* make continuations a standard language feature until
>   there is a portable way to get it everywhere

I'd go further and say "do *not* make continuations a standard language
feature" <wink>

> Still, I can't see the point with Java. There are enough
> C extension which are not available for JPython, but it is
> allowed to use them. Same with the continuationmodule, why
> does it need to exist for Java, in order to allow it for
> CPython?

My point exactly.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tismer at appliedbiometrics.com  Sun Aug  6 17:22:39 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Sun, 06 Aug 2000 17:22:39 +0200
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <Pine.GSO.4.10.10008061807230.20069-100000@sundial>
Message-ID: <398D82BF.85D0E5AB@appliedbiometrics.com>


Moshe Zadka wrote:

...
> > - implement coroutines and others with Stackless for CPython
> 
> I think that should be done in a third-party module. But hey, if Guido
> wants to maintain another module...

Right, like now. CPython has the necessary stackless hooks, nuts
and bolts, but nothing else, and no speed impact.

Then it just happens to be *possible* to write such an extension,
and it will be written, but this is not a language feature.

> > - do *not* make continuations a standard language feature until
> >   there is a portable way to get it everywhere
> 
> I'd got further and say "do *not* make continuations a standard language
> feature" <wink>

This was my sentence in the first place, but when reviewing
the message I could not resist plugging that in again <1.5 wink>

As discussed in a private thread with Just, some continuation
features can only be made "nice" if they are supported by
some language extension. I want to use Python in CS classes
to teach them continuations, therefore I need a backdoor :-)

and-there-will-always-be-a-version-on-my-site-that-goes-
   -beyond-the-standard - ly y'rs  - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com



From bwarsaw at beopen.com  Sun Aug  6 17:49:07 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sun, 6 Aug 2000 11:49:07 -0400 (EDT)
Subject: [Python-Dev] Re: Python 2.0 and Stackless
References: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>
	<398D7A7E.2AB1BDF3@appliedbiometrics.com>
Message-ID: <14733.35059.53619.98300@anthem.concentric.net>

>>>>> "CT" == Christian Tismer <tismer at appliedbiometrics.com> writes:

    CT> Still, I can't see the point with Java. There are enough C
    CT> extensions which are not available for JPython, but it is
    CT> still allowed to use them. Same with the continuation module:
    CT> why does it need to exist for Java in order to be allowed for
    CT> CPython?  This is like not implementing new browser features
    CT> until they can be implemented on my WAP phone. Nonsense.

It's okay if there are some peripheral modules that are available to
CPython but not JPython (include Python .NET here too), and vice
versa.  That'll just be the nature of things.  But whatever basic
language features Stackless allows one to do /from Python/ must be
documented.  That's the only way we'll be able to one of these things:

- support the feature a different way in a different implementation
- agree that the feature is part of the Python language definition,
  but possibly not (yet) supported by all implementations.
- define the feature as implementation dependent (so people writing
  portable code know to avoid those features).

-Barry



From guido at beopen.com  Sun Aug  6 19:23:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 12:23:52 -0500
Subject: [Python-Dev] Warnings on gcc -Wall
In-Reply-To: Your message of "Sun, 06 Aug 2000 16:22:09 +0300."
             <Pine.GSO.4.10.10008061612490.20069-100000@sundial> 
References: <Pine.GSO.4.10.10008061612490.20069-100000@sundial> 
Message-ID: <200008061723.MAA14418@cj20424-a.reston1.va.home.com>

>  -- readline.c -- readline/history.h is included only on BeOS, and
> otherwise prototypes are declared by hand. Does anyone remember why? 

Please don't touch that module.  GNU readline is wacky.

> -- ceval.c, in ceval() gcc -Wall (wrongly) complains about opcode and
> oparg which might be used before initialized. I've had a look at that
> code, and I'm certain gcc's flow analysis is simply not good enough.
> However, I would like to silence the warning, so I can get used to
> building with -Wall -Werror and make sure to mind any warnings. Does
> anyone see any problem with putting opcode=0 and oparg=0 near the top?

No problem.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sun Aug  6 19:34:34 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 12:34:34 -0500
Subject: [Python-Dev] strftime()
In-Reply-To: Your message of "Sun, 06 Aug 2000 01:07:27 -0400."
             <LNBBLJKPBEHFEDALKOLCAEDOGOAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCAEDOGOAA.tim_one@email.msn.com> 
Message-ID: <200008061734.MAA14488@cj20424-a.reston1.va.home.com>

> [Jack Jansen]
> > The test_strftime regression test has been failing on the Mac for
> > ages, and I finally got round to investigating the problem: the
> > MetroWerks library returns the strings "am" and "pm" for %p but the
> > regression test expects "AM" and "PM". According to the comments in
> > the source of the library (long live vendors who provide it! Yeah!)
> > this is C9X compatibility.
> 
> My copy of a draft C99 std agrees (7.23.3.5) with MetroWerks on this point
> (i.e., that %p in the "C" locale becomes "am" or "pm").
> 
> > I can of course move the %p to the nonstandard expectations, but maybe
> > someone has a better idea?
> 
> Not really.  If Python thinks this function is valuable, it "should" offer a
> platform-independent implementation, but as nobody has time for that ...

No.  The test is too strict.  It should be fixed.
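One way to loosen it, as a sketch (the helper name is made up): compare the %p output case-insensitively, so both the C89-style "AM"/"PM" and the C99-style "am"/"pm" libraries pass.

```python
import time

def meridian_matches(value, expected):
    """Accept 'AM'/'PM' and 'am'/'pm' alike (C89 vs. C99 %p output)."""
    return value.upper() == expected.upper()

# 9:00 on 6 Aug 2000; the struct_time fields here are illustrative.
morning = time.struct_time((2000, 8, 6, 9, 0, 0, 6, 219, -1))
stamp = time.strftime("%p", morning)
```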

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From just at letterror.com  Sun Aug  6 19:59:42 2000
From: just at letterror.com (Just van Rossum)
Date: Sun, 6 Aug 2000 18:59:42 +0100
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <14733.35059.53619.98300@anthem.concentric.net>
References: <Pine.GSO.4.10.10008060917310.9221-100000@sundial>
 <398D7A7E.2AB1BDF3@appliedbiometrics.com>
Message-ID: <l03102800b5b354bd9114@[193.78.237.132]>

At 11:49 AM -0400 06-08-2000, Barry A. Warsaw wrote:
>It's okay if there are some peripheral modules that are available to
>CPython but not JPython (include Python .NET here too), and vice
>versa.  That'll just be the nature of things.  But whatever basic
>language features Stackless allows one to do /from Python/ must be
>documented.

The things stackless offers are no different from:

- os.open()
- os.popen()
- os.system()
- os.fork()
- threading (!!!)

These things are all doable /from Python/, yet their non-portability seems
hardly an issue for the Python Standard Library.

>That's the only way we'll be able to one of these things:
>
>- support the feature a different way in a different implementation
>- agree that the feature is part of the Python language definition,
>  but possibly not (yet) supported by all implementations.

Honest (but possibly stupid) question: are extension modules part of the
language definition?

>- define the feature as implementation dependent (so people writing
>  portable code know to avoid those features).

It's an optional extension module, so this should be obvious. (As it
happens, it depends on a new and improved implementation of ceval.c, but
this is really beside the point.)

Just

PS: thanks to everybody who kept CC-ing me in this thread; it's much
appreciated as I'm not on python-dev.





From jeremy at alum.mit.edu  Sun Aug  6 20:54:56 2000
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Sun, 6 Aug 2000 14:54:56 -0400
Subject: [Python-Dev] Re: Python 2.0 and Stackless
In-Reply-To: <20000805194600.A7242@thyrsus.com>
Message-ID: <AJEAKILOCCJMDILAPGJNOEFJCBAA.jeremy@alum.mit.edu>

Eric S. Raymond <esr at thyrsus.com> writes:
>Just van Rossum <just at letterror.com>:
>> Christian has done an amazing piece of work, and he's gotten much
>> praise from the community. I mean, if you *are* looking for a killer
>> feature to distinguish 1.6 from 2.0, I'd know where to look...
>
>I must say I agree.  Something pretty similar to Stackless Python is
>going to have to happen anyway for the language to make its next major
>advance in capability -- generators, co-routining, and continuations.
>
>I also agree that this is a more important debate, and a harder set of
>decisions, than the PEPs.  Which means we should start paying attention
>to it *now*.

The PEPs exist as a way to formalize important debates and hard decisions.
Without a PEP that offers a formal description of the changes, it is hard to
have a reasonable debate.  I would not be comfortable with the implementation
itself serving as the specification for any feature from Stackless.

Given the current release schedule for Python 2.0, I don't see any
possibility of getting a PEP accepted in time.  The schedule, from PEP 200,
is:

    Tentative Release Schedule
        Aug. 14: All 2.0 PEPs finished / feature freeze
        Aug. 28: 2.0 beta 1
        Sep. 29: 2.0 final

Jeremy





From guido at beopen.com  Sun Aug  6 23:17:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 16:17:33 -0500
Subject: [Python-Dev] math.rint bites the dust
Message-ID: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>

After a brief consult with Tim, I've decided to drop math.rint() --
it's not standard C, can't be implemented in portable C, and its
naive (non-IEEE-754-aware) effect can easily be had in other ways.

If you disagree, speak up now!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Sun Aug  6 22:25:03 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 6 Aug 2000 22:25:03 +0200
Subject: [Python-Dev] math.rint bites the dust
In-Reply-To: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Aug 06, 2000 at 04:17:33PM -0500
References: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>
Message-ID: <20000806222502.S266@xs4all.nl>

On Sun, Aug 06, 2000 at 04:17:33PM -0500, Guido van Rossum wrote:

> After a brief consult with Tim, I've decided to drop math.rint() --
> it's not standard C, can't be implemented in portable C, and its
> naive (non-IEEE-754-aware) effect can easily be had in other ways.

I don't particularly disagree, since I hardly do anything with floating
point numbers, but how can something both not be implementable in portable C
*and* have its effect easily be had in other ways?

I also recall someone who was implementing rint() on platforms that didn't
have it... Or did that idea get trashed because it wasn't portable enough?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Mon Aug  7 00:49:06 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Sun, 06 Aug 2000 22:49:06 +0000
Subject: [Python-Dev] bug-fixes in cnri-16-start branch
Message-ID: <398DEB62.789B4C9C@nowonder.de>

I have a question on the right procedure for fixing a simple
bug in the 1.6 release branch.

Bug #111162 appeared because the tests for math.rint() are
already contained in the cnri-16-start revision of test_math.py
while the "try: ... except AttributeError: ..." construct which
was checked in shortly after was not.

Now the correct bugfix is already known (and has been
applied to the main branch). I have updated the test_math.py
file in my working version with "-r cnri-16-start" and
made the changes.

Now I probably should just commit, close the patch
(with an appropriate follow-up) and be happy.

did-I-get-that-right-or-does-something-else-have-to-be-done-ly y'rs
Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From tim_one at email.msn.com  Sun Aug  6 22:54:02 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 16:54:02 -0400
Subject: [Python-Dev] math.rint bites the dust
In-Reply-To: <20000806222502.S266@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEFJGOAA.tim_one@email.msn.com>

[Guido]
> After a brief consult with Tim, I've decided to drop math.rint() --
> it's not standard C, can't be implemented in portable C, and its
> naive (non-IEEE-754-aware) effect can easily be had in other ways.

[Thomas Wouters]
> I don't particularly disagree, since I hardly do anything with floating
> point numbers, but how can something both not be implementable in
> portable C *and* have its effect easily be had in other ways?

Can't.  rint is not in standard C (C89), but it is in C99, where a conforming
implementation requires paying attention to all the details of the 754 fp
model.  It's a *non* 754-aware version of rint that can be easily had in
other ways (e.g., you can easily write a rint in Python that always rounds to
nearest/even, by building on math.floor and checking the sign bit, but
ignoring the possibilities of infinities, NaNs, current 754 rounding mode,
and correct treatment of (at least) the 754 inexact and underflow flags --
Python gives no way to get at any of those now, neither does current C, and
a correct rint from the C99 point of view has to deal with all of them).
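A minimal sketch of the naive Python version described above, under those
stated limitations (the halfway test here stands in for the sign-bit check,
and the name naive_rint is made up):

```python
import math

def naive_rint(x):
    # Round to nearest integer, ties to even, built on math.floor.
    # Deliberately ignores infinities, NaNs, the current 754 rounding
    # mode, and the inexact/underflow flags -- a sketch, not C99 rint.
    f = math.floor(x)
    frac = x - f
    if frac < 0.5:
        return f
    if frac > 0.5:
        return f + 1.0
    # exactly halfway: pick the even neighbor
    return f if math.fmod(f, 2.0) == 0.0 else f + 1.0
```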

This is a case where I'm unwilling to support a function at all before it
can be supported correctly; I see no value in exposing current platforms'
divergent and incorrect implementations of rint, at least not in the math module.
Code that uses them will fail to work at all on some platforms (since rint
is not in today's C, some platforms don't have it), and change meaning over
time as the other platforms move toward C99 compliance.

> I also recall someone who was implementing rint() on platforms
> that didnt have it... Or did that idea get trashed because it wasn't
> portable enough ?

Bingo.

everyone's-welcome-to-right-their-own-incorrect-version<wink>-ly
    y'rs  - tim





From jack at oratrix.nl  Sun Aug  6 22:56:48 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Sun, 06 Aug 2000 22:56:48 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
Message-ID: <20000806205653.B0341E2670@oratrix.oratrix.nl>

Could the defenders of Stackless Python please explain _why_ this is
such a great idea? Just and Christian seem to swear by it, but I'd
like to hear of some simple examples of programming tasks that will be 
programmable in 50% less code with it (or 50% more understandable
code, for that matter).

And, similarly, could the detractors of Stackless Python explain why
it is such a bad idea. A lot of core-pythoneers seem to have
misgivings about it, even though issues of compatibility and
efficiency have been countered many times here by its champions (at
least, it seems that way to a clueless bystander like myself). I'd
really like to be able to take a firm standpoint myself, that's part
of my personality, but I really don't know which firm standpoint at
the moment:-)
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From tim_one at email.msn.com  Sun Aug  6 23:03:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 17:03:23 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFLGOAA.tim_one@email.msn.com>

[Jack Jansen]
> Could the defenders of Stackless Python please explain _why_ this is
> such a great idea? ...

But they already have, and many times.  That's why it needs a PEP:  so we
don't have to endure <wink> the exact same heated discussions multiple times
every year for eternity.

> ...
> And, similarly, could the detractors of Stackless Python explain why
> it is such a bad idea.

Ditto.

if-anyone-hasn't-yet-noticed-98%-of-advocacy-posts-go-straight-
    into-a-black-hole-ly y'rs  - tim





From thomas at xs4all.net  Sun Aug  6 23:05:45 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 6 Aug 2000 23:05:45 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>; from jack@oratrix.nl on Sun, Aug 06, 2000 at 10:56:48PM +0200
References: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <20000806230545.T266@xs4all.nl>

On Sun, Aug 06, 2000 at 10:56:48PM +0200, Jack Jansen wrote:

> Could the defenders of Stackless Python please explain _why_ this is
> such a great idea? Just and Christian seem to swear by it, but I'd
> like to hear of some simple examples of programming tasks that will be 
> programmable in 50% less code with it (or 50% more understandable
> code, for that matter).

That's *continuations*, not Stackless. Stackless itself is just a way of
implementing the Python bytecode eval loop with minimized use of the C
stack. It doesn't change any functionality except the internal dependence on
the C stack (which is limited on some platforms.) Stackless also makes a
number of things possible, like continuations.

Continuations can certainly reduce code, if used properly, and they can make
it a lot more readable if the choice is between continuations or threaded
spaghetti-code. It can, however, make code a lot less readable too, if used
improperly, or when viewed by someone who doesn't grok continuations.

I'm +1 on Stackless, +0 on continuations. (Continuations are cool, and
Pythonic in one sense (stackframes become even firster class citizens ;) but
not easy to learn or get used to.)

> And, similarly, could the detractors of Stackless Python explain why
> it is such a bad idea.

I think my main reservation towards Stackless is the change to ceval.c,
which is likely to be involved (I haven't looked at it, yet) -- but ceval.c
isn't a children's book now, and I think the added complexity (if any) is
worth the loss of some of the dependencies on the C stack.

fl.0,02-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Mon Aug  7 01:18:22 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Sun, 06 Aug 2000 23:18:22 +0000
Subject: [Python-Dev] math.rint bites the dust
References: <200008062117.QAA15501@cj20424-a.reston1.va.home.com>
Message-ID: <398DF23E.D1DDE196@nowonder.de>

Guido van Rossum wrote:
> 
> After a brief consult with Tim, I've decided to drop math.rint() --
> it's not standard C, can't be implemented in portable C, and its
> naive (non-IEEE-754-aware) effect can easily be had in other ways.

If this is because of Bug #111162, things can be fixed easily.
(as I said in another post just some minutes ago, I just
need to recommit the changes made after cnri-16-start.)

I wouldn't be terribly concerned about its (maybe temporary)
death, though. After I learned more about it I am sure I
want to use round() rather than math.rint().

floating-disap-point-ed-ly y'rs
Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From esr at thyrsus.com  Sun Aug  6 23:59:35 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 17:59:35 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>; from jack@oratrix.nl on Sun, Aug 06, 2000 at 10:56:48PM +0200
References: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <20000806175935.A14138@thyrsus.com>

Jack Jansen <jack at oratrix.nl>:
> Could the defenders of Stackless Python please explain _why_ this is
> such a great idea? Just and Christian seem to swear by it, but I'd
> like to hear of some simple examples of programming tasks that will be 
> programmable in 50% less code with it (or 50% more understandable
> code, for that matter).

My interest in Stackless is that I want to see Icon-style generators,
co-routining, and first-class continuations in Python.  Generators and
co-routining are wrappers around continuations.  Something
functionally equivalent to the Stackless mods is needed to get us
there, because using the processor stack makes it very hard to do
continuations properly.

In their full generality, first-class continuations are hard to think
about and to explain clearly, and I'm not going to try here.  A large
part of Guido's reluctance to introduce them is precisely because they
are so hard to think about; he thinks it's a recipe for trouble to stuff
into the language things that *he* has trouble understanding, let alone
other people.

He has a point, and part of the debate going on in the group that has
been tracking this stuff (Guido, Barry Warsaw, Jeremy Hylton, Fred
Drake, Eric Tiedemann and myself) is whether Python should expose
support for first-class continuations or only "safer" packagings such
as coroutining and generators.  So for the moment just think of
continuations as the necessary primitive to implement coroutining and
generators.

You can think of a generator as a function that, internally, is coded 
as a special kind of loop.  Let's say, for example, that you want a function
that returns successive entries in the list "squares of integers".  In 
Python-with-generators, that would look something like this.

def square_generator():
    i = 1
    while 1:
        yield i**2
        i = i + 1

Calling this function five times in succession would return 1, 4, 9,
16, 25.  Now what would be going on under the hood is that the new primitive
"yield" says "return the given value, and save a continuation of this
function to be run next time the function is called".  The continuation 
saves the program counter and the state of automatic variables (the stack)
to be restored on the next call -- thus, execution effectively resumes just
after the yield statement.

This example probably does not look very interesting.  It's a very trivial
use of the facility.  But now suppose you had an analogous function 
implemented by a code loop that gets an X event and yields the event data!

Suddenly, X programs don't have to look like a monster loop with all the
rest of the code hanging off of them.  Instead, any function in the program
that needs to do stateful input parsing can just say "give me the next event"
and get it.  

In general, what generators let you do is invert control hierarchies
based on stateful loops or recursions.  This is extremely nice for
things like state machines and tree traversals -- you can bundle the
control loop away in a generator, interrupt it, and restart it
without losing your place.
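The tree-traversal case above can be sketched with the proposed yield
semantics (written here in Python's eventual generator syntax; the
tuple-shaped tree is just an illustration):

```python
def inorder(node):
    # The traversal's control loop lives inside the generator: it is
    # interrupted at each yield and restarted without losing its place.
    # A tree here is (left, value, right) or None.
    if node is not None:
        left, value, right = node
        for v in inorder(left):
            yield v
        yield value
        for v in inorder(right):
            yield v

tree = ((None, 1, None), 2, ((None, 3, None), 4, None))
values = list(inorder(tree))
```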

I want this feature a lot.  Guido has agreed in principle that we ought
to have generators, but there is not yet a well-defined path forward to
them.  Stackless may be the most promising route.

I was going to explain coroutines separately, but I realized while writing
this that the semantics of "yield" proposed above actually gives full
coroutining.  Two functions can ping-pong control back and forth among
themselves while retaining their individual stack states as a pair of
continuations.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"This country, with its institutions, belongs to the people who
inhabit it. Whenever they shall grow weary of the existing government,
they can exercise their constitutional right of amending it or their
revolutionary right to dismember it or overthrow it."
	-- Abraham Lincoln, 4 April 1861



From tim_one at email.msn.com  Mon Aug  7 00:07:45 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 18:07:45 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806175935.A14138@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEGAGOAA.tim_one@email.msn.com>

[ Eric S. Raymond]
> ...
> I want this feature [generators] a lot.  Guido has agreed in principle
> that we ought to have generators, but there is not yet a well-defined
> path forward to them.  Stackless may be the most promising route.

Actually, if we had a PEP <wink>, it would have recorded for all time that
Guido gave a detailed description of how to implement generators with minor
changes to the current code.  It would also record that Steven Majewski had
already done so some 5 or 6 years ago.  IMO, the real reason we don't have
generators already is that they keep getting hijacked by continuations
(indeed, Steven gave up on his patches as soon as he realized he couldn't
extend his approach to continuations).

> I was going to explain coroutines separately, but I realized while
> writing this that the semantics of "yield" proposed above actually
> gives full coroutining.

Well, the Icon semantics for "suspend"-- which are sufficient for Icon's
generators --are not sufficient for Icon's coroutines.  It's for that very
reason that Icon supports generators on all platforms (including JCon, their
moral equivalent of JPython), but supports coroutines only on platforms that
have the magical blob of platform-dependent machine-language cruft needed to
trick out the C stack at coroutine context switches (excepting JCon, where
coroutines are implemented as Java threads).

Coroutines are plain harder.  Generators are just semi-coroutines
(suspend/yield *always* return "to the caller", and that makes life 100x
easier in a conventional eval loop like Python's -- it's still "stack-like",
and the only novel thing needed is a way to resume a suspended frame but
still in call-like fashion).
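A sketch of that semi-coroutine shape, in later generator syntax: each
yield returns "to the caller", so a trivial caller-side scheduler has to
fake the ping-pong (the names here are invented for illustration):

```python
def player(name, log, turns):
    # A resumable frame -- but every yield returns control "to the
    # caller", i.e. the scheduler below, never directly to the peer.
    for _ in range(turns):
        log.append(name)
        yield

def run_alternating(a, b):
    # The caller-side loop that resumes each suspended frame in turn.
    current, other = a, b
    while True:
        try:
            next(current)
        except StopIteration:
            return
        current, other = other, current

log = []
run_alternating(player('ping', log, 2), player('pong', log, 2))
```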

and-if-we-had-a-pep-every-word-of-this-reply-would-have-been-
    in-it-too<wink>-ly y'rs  - tim





From esr at thyrsus.com  Mon Aug  7 00:51:59 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 18:51:59 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEGAGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sun, Aug 06, 2000 at 06:07:45PM -0400
References: <20000806175935.A14138@thyrsus.com> <LNBBLJKPBEHFEDALKOLCCEGAGOAA.tim_one@email.msn.com>
Message-ID: <20000806185159.A14259@thyrsus.com>

Tim Peters <tim_one at email.msn.com>:
> [ Eric S. Raymond]
> > ...
> > I want this feature [generators] a lot.  Guido has agreed in principle
> > that we ought to have generators, but there is not yet a well-defined
> > path forward to them.  Stackless may be the most promising route.
> 
> Actually, if we had a PEP <wink>, it would have recorded for all time that
> Guido gave a detailed description of how to implement generators with minor
> changes to the current code.  It would also record that Steven Majewski had
> already done so some 5 or 6 years ago. 

Christian Tismer, over to you.  I am not going to presume to initiate
the continuations PEP when there's someone with a Python
implementation and extensive usage experience on the list.  However, I
will help with editing and critiques based on my experience with other
languages that have similar features, if you want.

>                                     IMO, the real reason we don't have
> generators already is that they keep getting hijacked by continuations
> (indeed, Steven gave up on his patches as soon as he realized he couldn't
> extend his approach to continuations).

This report of repeated "hijacking" doesn't surprise me a bit.  In fact,
if I'd thought about it I'd have *expected* it.  We know from experience
with other languages (notably Scheme) that call-with-current-continuation
is the simplest orthogonal primitive that this whole cluster of concepts can
be based on.  Implementors with good design taste are going to keep finding
their way back to it, and they're going to feel incompleteness and pressure
if they can't get there.

This is why I'm holding out for continuation objects and 
call-with-continuation to be an explicit Python builtin. We're going to get
there anyway; best to do it cleanly right away.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"Taking my gun away because I might shoot someone is like cutting my tongue
out because I might yell `Fire!' in a crowded theater."
        -- Peter Venetoklis



From esr at snark.thyrsus.com  Mon Aug  7 01:18:35 2000
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 19:18:35 -0400
Subject: [Python-Dev] Adding a new class to the library?
Message-ID: <200008062318.TAA14335@snark.thyrsus.com>

I have a candidate for admission to the Python class library.  It's a
framework class for writing things like menu trees and object
browsers.  What's the correct approval procedure for such things?

In more detail, it supports manipulating a stack of sequence objects.
Each sequence object has an associated selection point (the currently
selected sequence member) and an associated viewport around it (a
range of indices or sequence members that are considered `visible').

There are methods to manipulate the object stack.  More importantly,
there are functions which move the selection point in the current
object around, and drag the viewport with it.  (This sort of
thing sounds simple, but is tricky for the same reason BitBlt is
tricky -- lots of funky boundary cases.)

I've used this as the framework for implementing the curses menu
interface for CML2.  It is well-tested and stable.  It might also
be useful for implementing other kinds of data browsers in any
situation where the concept of limited visibility around a selection
point makes sense.  Symbolic debuggers are an example that leaps to mind.

I am, of course, willing to fully document it.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"One of the ordinary modes, by which tyrants accomplish their purposes
without resistance, is, by disarming the people, and making it an
offense to keep arms."
        -- Constitutional scholar and Supreme Court Justice Joseph Story, 1840



From gmcm at hypernet.com  Mon Aug  7 01:34:44 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Sun, 6 Aug 2000 19:34:44 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <1246517606-99838203@hypernet.com>

Jack Jansen wrote:

> Could the defenders of Stackless Python please explain _why_ this
> is such a great idea? Just and Christian seem to swear by it, but
> I'd like to hear of some simple examples of programming tasks
> that will be programmable in 50% less code with it (or 50% more
> understandable code, for that matter).

Here's the complete code for the download of a file (the data 
connection of an FTP server):

    def _doDnStream(self, binary=0):
        mode = 'r'
        if binary:
            mode = mode + 'b'
        f = open(self.cmdconn.filename, mode)
        if self.cmdconn.filepos:
            #XXX check length of file
            f.seek(self.cmdconn.filepos, 0)
        while 1:
            if self.abort:
                break
            data = f.read(8192)
            sz = len(data)
            if sz:
                if not binary:
                    data = '\r\n'.join(data.split('\n'))
                self.write(data)
            if sz < 8192:
                break

[from the base class]
    def write(self, msg):
        while msg:
            sent = self.dispatcher.write(self.sock, msg)
            if sent == 0:
                raise IOError, "unexpected EOF"
            msg = msg[sent:]

Looks like blocking sockets, right? Wrong. That's a fully 
multiplexed socket. About a dozen lines of code (hidden in 
that dispatcher object) mean that I can write async without 
using a state machine. 

stackless-forever-ly y'rs

- Gordon



From guido at beopen.com  Mon Aug  7 03:32:59 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 06 Aug 2000 20:32:59 -0500
Subject: [Python-Dev] Adding a new class to the library?
In-Reply-To: Your message of "Sun, 06 Aug 2000 19:18:35 -0400."
             <200008062318.TAA14335@snark.thyrsus.com> 
References: <200008062318.TAA14335@snark.thyrsus.com> 
Message-ID: <200008070132.UAA16111@cj20424-a.reston1.va.home.com>

> I have a candidate for admission to the Python class library.  It's a
> framework class for writing things like menu trees and object
> browsers.  What's the correct approval procedure for such things?
> 
> In more detail, it supports manipulating a stack of sequence objects.
> Each sequence object has an associated selection point (the currently
> selected sequence member) and an associated viewport around it (a
> range of indices or sequence members that are considered `visible').
> 
> There are methods to manipulate the object stack.  More importantly,
> there are functions which move the selection point in the current
> object around, and drag the viewport with it.  (This sort of
> thing sounds simple, but is tricky for the same reason BitBlt is
> tricky -- lots of funky boundary cases.)
> 
> I've used this as the framework for implementing the curses menu
> interface for CML2.  It is well-tested and stable.  It might also
> be useful for implementing other kinds of data browsers in any
> situation where the concept of limited visibility around a selection
> point makes sense.  Symbolic debuggers are an example that leaps to mind.
> 
> I am, of course, willing to fully document it.

Have a look at the tree widget in IDLE.  That's Tk specific, but I
believe there's a lot of GUI independent concepts in there.  IDLE's
path and object browsers are built on it.  How does this compare?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From esr at thyrsus.com  Mon Aug  7 02:52:53 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sun, 6 Aug 2000 20:52:53 -0400
Subject: [Python-Dev] Adding a new class to the library?
In-Reply-To: <200008070132.UAA16111@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Aug 06, 2000 at 08:32:59PM -0500
References: <200008062318.TAA14335@snark.thyrsus.com> <200008070132.UAA16111@cj20424-a.reston1.va.home.com>
Message-ID: <20000806205253.B14423@thyrsus.com>

Guido van Rossum <guido at beopen.com>:
> Have a look at the tree widget in IDLE.  That's Tk specific, but I
> believe there's a lot of GUI independent concepts in there.  IDLE's
> path and object browsers are built on it.  How does this compare?

Where is this in the CVS tree? I groveled for it but without success.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

To make inexpensive guns impossible to get is to say that you're
putting a money test on getting a gun.  It's racism in its worst form.
        -- Roy Innis, president of the Congress of Racial Equality (CORE), 1988



From greg at cosc.canterbury.ac.nz  Mon Aug  7 03:04:27 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 07 Aug 2000 13:04:27 +1200 (NZST)
Subject: [Python-Dev] Go \x yourself
In-Reply-To: <200008041511.KAA01925@cj20424-a.reston1.va.home.com>
Message-ID: <200008070104.NAA12334@s454.cosc.canterbury.ac.nz>

BDFL:

> No, problems with literal interpretations traditionally raise
> "runtime" exceptions rather than syntax errors.  E.g.

What about using an exception that's a subclass of *both*
ValueError and SyntaxError?
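A minimal sketch of what that would look like (the class name is invented
for illustration):

```python
class LiteralValueError(ValueError, SyntaxError):
    """Hypothetical error for bad literals, e.g. '\\x' with no hex digits."""

# A handler catching either base class would see it:
err = LiteralValueError(r"invalid \x escape")
```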

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Mon Aug  7 03:16:44 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 6 Aug 2000 21:16:44 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806185159.A14259@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEGEGOAA.tim_one@email.msn.com>

[Tim]
> IMO, the real reason we don't have generators already is that
> they keep getting hijacked by continuations (indeed, Steven gave
> up on his patches as soon as he realized he couldn't extend his
> approach to continuations).

[esr]
> This report of repeated "hijacking" doesn't surprise me a bit.  In
> fact, if I'd thought about it I'd have *expected* it.  We know from
> experience with other languages (notably Scheme) that call-with-
> current-continuation is the simplest orthogonal primitive that this
> whole cluster of concepts can be based on.  Implementors with good
> design taste are going to keep finding their way back to it, and
> they're going to feel incompleteness and pressure if they can't get
> there.

On the one hand, I don't think I know of a language *not* based on Scheme
that has call/cc (or a moral equivalent).  REBOL did at first, but after Joe
Marshal left, Carl Sassenrath ripped it out in favor of a more conventional
implementation.  Even the massive Common Lisp declined to adopt call/cc, the
reasons for which Kent Pitman has posted eloquently and often on
comp.lang.lisp (basically summarized by his view that continuations are
"a semantic mess" in the way Scheme exposed them -- btw, people should
look his stuff up, as he has good ideas for cleaning that mess w/o
sacrificing the power (and so the Lisp world splinters yet again?)).  So
call/cc remains "a Scheme thing" to me after all these years, and even there
by far the most common warning in the release notes for a new implementation
is that call/cc doesn't work correctly yet or at all (but, in the meantime,
here are 3 obscure variations that will work in hard-to-explain special
cases ...).  So, ya, we *do* have experience with this stuff, and it sure
ain't all good.

On the other hand, what implementors other than Schemeheads *do* keep
rediscovering is that generators are darned useful and can be implemented
easily without exotic views of the world.  CLU, Icon and Sather all fit in
that box, and their designers wouldn't touch continuations with a 10-foot
thick condom <wink>.

> This is why I'm holding out for continuation objects and
> call-with-continuation to be an explicit Python builtin. We're
> going to get there anyway; best to do it cleanly right away.

This can get sorted out in the PEP.  As I'm sure someone else has screamed
by now (because it's all been screamed before), Stackless and the
continuation module are distinct beasts (although the latter relies on the
former).  It would be a shame if the fact that it makes continuations
*possible* were to be held against Stackless.  It makes all sorts of things
possible, some of which Guido would even like if people stopped throwing
continuations in his face long enough for him to see beyond them <0.5
wink -- but he doesn't like continuations, and probably never will>.





From jeremy at alum.mit.edu  Mon Aug  7 03:39:46 2000
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Sun, 6 Aug 2000 21:39:46 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <20000806205653.B0341E2670@oratrix.oratrix.nl>
Message-ID: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>

If someone is going to write a PEP, I hope they will explain how the
implementation deals with the various Python C API calls that can call back
into Python.

In the stackless implementation, builtin_apply is a thin wrapper around
builtin_apply_nr.  The wrapper checks the return value from builtin_apply_nr
for Py_UnwindToken.  If Py_UnwindToken is found, it calls
PyEval_Frame_Dispatch. In this case, builtin_apply returns whatever
PyEval_Frame_Dispatch returns; the frame dispatcher just executes stack
frames until it is ready to return.
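A toy Python model of that calling convention (not the real C code; the
names only echo the Stackless ones mentioned above):

```python
UNWIND_TOKEN = object()  # stands in for Py_UnwindToken

def frame_dispatch():
    # Stands in for PyEval_Frame_Dispatch: "execute stack frames until
    # ready to return" -- here it just produces a value directly.
    return 'dispatched result'

def apply_nr(unwind):
    # Stands in for builtin_apply_nr: may return the unwind sentinel
    # instead of a real value.
    return UNWIND_TOKEN if unwind else 'normal result'

def builtin_apply(unwind):
    # The thin wrapper: besides the usual error check, it must also
    # check for the unwind sentinel and hand off to the dispatcher.
    result = apply_nr(unwind)
    if result is UNWIND_TOKEN:
        result = frame_dispatch()
    return result
```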

How does this control flow at the C level interact with a Python API call
like PySequence_Tuple or PyObject_Compare that can start executing Python
code again?  Say there is a Python function call which in turn calls
PySequence_Tuple, which in turn calls a __getitem__ method on some Python
object, which in turn uses a continuation to transfer control.  After the
continuation is called, the Python function will never return and the
PySequence_Tuple call is no longer necessary, but there is still a call to
PySequence_Tuple on the C stack.  How does stackless deal with the return
through this function?

I expect that any C function that may cause Python code to be executed must
be wrapped the way apply was wrapped.  So in the example, PySequence_Tuple
may return Py_UnwindToken.  This adds an extra return condition that every
caller of PySequence_Tuple must check.  Currently, the caller must check for
NULL/exception in addition to a normal return.  With stackless, I assume the
caller would also need to check for "unwinding."

Is this analysis correct? Or is there something I'm missing?

I see that the current source release of stackless does not do anything
special to deal with C API calls that execute Python code.  For example,
PyDict_GetItem calls PyObject_Hash, which could in theory lead to a call on
a continuation, but neither caller nor callee does anything special to
account for the possibility.  Is there some other part of the implementation
that prevents this from being a problem?

Jeremy




From greg at cosc.canterbury.ac.nz  Mon Aug  7 03:50:32 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 07 Aug 2000 13:50:32 +1200 (NZST)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get"
 method
In-Reply-To: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>
Message-ID: <200008070150.NAA12345@s454.cosc.canterbury.ac.nz>

> dict.default('hello', []).append('hello')

Is this new method going to apply to dictionaries only,
or is it to be considered part of the standard mapping
interface?

If the latter, I wonder whether it would be better to
provide a builtin function instead. The more methods
are added to the mapping interface, the more complicated
it becomes to implement an object which fully complies
with the mapping interface. Operations which can be
carried out through the basic interface are perhaps
best kept "outside" the object, in a function or
wrapper object.
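Greg's function-based alternative could look something like this minimal sketch (the function name and signature here are hypothetical, not a concrete proposal):

```python
def setdefault(mapping, key, default):
    # A hypothetical builtin that relies only on the basic mapping
    # interface (__getitem__/__setitem__), so it works for any
    # mapping object without requiring a new method.
    try:
        return mapping[key]
    except KeyError:
        mapping[key] = default
        return default

d = {}
setdefault(d, 'hello', []).append('hello')
# d is now {'hello': ['hello']}
```

Because it only indexes and assigns, the same helper would work unchanged on dbm files, shelves, or user-defined mappings.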

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From bwarsaw at beopen.com  Mon Aug  7 04:25:54 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Sun, 6 Aug 2000 22:25:54 -0400 (EDT)
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get"
 method
References: <200008041546.KAA02168@cj20424-a.reston1.va.home.com>
	<200008070150.NAA12345@s454.cosc.canterbury.ac.nz>
Message-ID: <14734.7730.698860.642851@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    >> dict.default('hello', []).append('hello')

    GE> Is this new method going to apply to dictionaries only,
    GE> or is it to be considered part of the standard mapping
    GE> interface?

I think we've settled on setdefault(), which is more descriptive, even
if it's a little longer.  I have posted SF patch #101102 which adds
setdefault() to both the dictionary object and UserDict (along with
the requisite test suite and doco changes).

-Barry



From pf at artcom-gmbh.de  Mon Aug  7 10:32:00 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Mon, 7 Aug 2000 10:32:00 +0200 (MEST)
Subject: [Python-Dev] Who is the author of lib-tk/Tkdnd.py?
Message-ID: <m13LiKG-000DieC@artcom0.artcom-gmbh.de>

Hi,

I've some ideas (already implemented <0.5 wink>) for
generic Drag'n'Drop in Python/Tkinter applications.  
Before bothering the list here I would like to discuss this with 
the original author of Tkdnd.py.

Thank you for your attention, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
After all, Python is a programming language, not a psychic hotline. --Tim Peters



From mal at lemburg.com  Mon Aug  7 10:57:01 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 07 Aug 2000 10:57:01 +0200
Subject: [Python-Dev] Pickling using XML as output format
References: <Pine.GSO.4.10.10008061544180.20069-100000@sundial>
Message-ID: <398E79DD.3EB21D3A@lemburg.com>

Moshe Zadka wrote:
> 
> On Sun, 6 Aug 2000, M.-A. Lemburg wrote:
> 
> > Before starting to reinvent the wheel:
> 
> Ummmm......I'd wait for some DC guy to chime in: I think Zope had
> something like that. You might want to ask around on the Zope lists
> or search zope.org.
> 
> I'm not sure what it has and what it doesn't have, though.

I'll download the latest beta and check this out.

Thanks for the tip,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug  7 11:15:08 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 07 Aug 2000 11:15:08 +0200
Subject: [Python-Dev] Go \x yourself
References: <200008070104.NAA12334@s454.cosc.canterbury.ac.nz>
Message-ID: <398E7E1C.84D28EA5@lemburg.com>

Greg Ewing wrote:
> 
> BDFL:
> 
> > No, problems with literal interpretations traditionally raise
> > "runtime" exceptions rather than syntax errors.  E.g.
> 
> What about using an exception that's a subclass of *both*
> ValueError and SyntaxError?

What would this buy you ?

Note that the contents of a literal string don't really have
anything to do with syntax. The \x escape sequences are
details of the codecs used for converting those literal
strings to Python string objects.

Perhaps we need a CodecError which is a subclass of ValueError,
and then make UnicodeError a subclass of this CodecError ?!
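The proposed hierarchy is easy to sketch (class names are Marc-Andre's suggestion, not an existing API; note that this sketch shadows the builtin UnicodeError):

```python
# Sketch of the proposed exception hierarchy: codec failures are a
# kind of ValueError, and Unicode failures a kind of codec failure.
class CodecError(ValueError):
    """Raised when codec conversion of a literal string fails."""

class UnicodeError(CodecError):  # shadows the builtin in this sketch
    pass

# Existing code that catches ValueError keeps working:
try:
    raise UnicodeError("bad \\x escape")
except ValueError:
    pass
```

The point is that callers catching ValueError today would transparently catch the new, more specific errors as well.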

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From artcom0!pf at artcom-gmbh.de  Mon Aug  7 10:14:54 2000
From: artcom0!pf at artcom-gmbh.de (artcom0!pf at artcom-gmbh.de)
Date: Mon, 7 Aug 2000 10:14:54 +0200 (MEST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <14734.7730.698860.642851@anthem.concentric.net> from "Barry A. Warsaw" at "Aug 6, 2000 10:25:54 pm"
Message-ID: <m13Lj9u-000DieC@artcom0.artcom-gmbh.de>

Hi,

Guido:
>     >> dict.default('hello', []).append('hello')

Greg Ewing <greg at cosc.canterbury.ac.nz>:
>     GE> Is this new method going to apply to dictionaries only,
>     GE> or is it to be considered part of the standard mapping
>     GE> interface?
 
Barry A. Warsaw:
> I think we've settled on setdefault(), which is more descriptive, even
> if it's a little longer.  I have posted SF patch #101102 which adds
> setdefault() to both the dictionary object and UserDict (along with
> the requisite test suite and doco changes).

This didn't answer the question raised by Greg Ewing.  As far as I
have seen, the patch doesn't touch 'dbm', 'shelve' and so on.  So from
the patch the answer is "applies to dictionaries only".

What about the other external mapping types already in the core,
like 'dbm', 'shelve' and so on?

If the patch doesn't add this new method to these other mapping types, 
this fact should at least be documented similar to the methods 'items()' 
and 'values()' that are already unimplemented in 'dbm':
 """Dbm objects behave like mappings (dictionaries), except that 
    keys and values are always strings.  Printing a dbm object 
    doesn't print the keys and values, and the items() and values() 
    methods are not supported."""

I'm still -1 on the name:  Nobody would expect that a method 
called 'setdefault()' will actually return something useful.  Maybe 
it would be better to invent an absolutely obfuscated new name, so 
that everybody is forced to actually *READ* the documentation of this 
method; otherwise nobody will guess what it is supposed to do or, even
worse, how to make clever use of it.

At least it would be a lot more likely that someone becomes curious 
about what a method called 'grezelbatz()' is supposed to do than that
someone will actually look up the documentation of a method called
'setdefault()'.

If the average Python programmer ever starts to use this method 
at all, then I believe it is very likely that we will see him/her
coding:
	dict.setdefault('key', [])
	dict['key'].append('bar')

So I'm also still -1 on the concept.  I'm +0 on Greg's proposal that
it would be better to make this a builtin function that can be applied
to all mapping types.

Maybe it would be even better to delay this until in Python 3000
builtin types may have become real classes, so that this method may
be inherited by all mapping types from an abstract mapping base class.

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)




From mal at lemburg.com  Mon Aug  7 12:07:09 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 07 Aug 2000 12:07:09 +0200
Subject: [Python-Dev] Pickling using XML as output format
References: <Pine.GSO.4.10.10008061544180.20069-100000@sundial> <398E79DD.3EB21D3A@lemburg.com>
Message-ID: <398E8A4D.CAA87E02@lemburg.com>

"M.-A. Lemburg" wrote:
> 
> Moshe Zadka wrote:
> >
> > On Sun, 6 Aug 2000, M.-A. Lemburg wrote:
> >
> > > Before starting to reinvent the wheel:
> >
> > Ummmm......I'd wait for some DC guy to chime in: I think Zope had
> > something like that. You might want to ask around on the Zope lists
> > or search zope.org.
> >
> > I'm not sure what it has and what it doesn't have, though.
> 
> I'll download the latest beta and check this out.

Ok, Zope has something called ppml.py which aims at converting
Python pickles to XML. It doesn't really pickle directly to XML,
though, and e.g. uses the Python encoding for various objects.

I guess I'll start hacking away at my own xmlpickle.py
implementation, with the goal of making Python pickles
editable using an XML editor.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tismer at appliedbiometrics.com  Mon Aug  7 12:48:19 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Mon, 07 Aug 2000 12:48:19 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
References: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>
Message-ID: <398E93F3.374B585A@appliedbiometrics.com>


Jeremy Hylton wrote:
> 
> If someone is going to write a PEP, I hope they will explain how the
> implementation deals with the various Python C API calls that can call back
> into Python.

He will.

> In the stackless implementation, builtin_apply is a thin wrapper around
> builtin_apply_nr.  The wrapper checks the return value from builtin_apply_nr
> for Py_UnwindToken.  If Py_UnwindToken is found, it calls
> PyEval_Frame_Dispatch. In this case, builtin_apply returns whatever
> PyEval_Frame_Dispatch returns; the frame dispatcher just executes stack
> frames until it is ready to return.

Correct.

> How does this control flow at the C level interact with a Python API call
> like PySequence_Tuple or PyObject_Compare that can start executing Python
> code again?  Say there is a Python function call which in turn calls
> PySequence_Tuple, which in turn calls a __getitem__ method on some Python
> object, which in turn uses a continuation to transfer control.  After the
> continuation is called, the Python function will never return and the
> PySequence_Tuple call is no longer necessary, but there is still a call to
> PySequence_Tuple on the C stack.  How does stackless deal with the return
> through this function?

Right. What you see here is the incompleteness of Stackless.
In order to get this "right", I would have to change many
parts of the implementation to allow for continuations
in every (probably even unwanted) place.
I could not do this.

Instead, the situations where these recursions still occur
are handled differently: continuationmodule guarantees that,
in the context of recursive interpreter calls, the given
stack order of execution is obeyed. Violations of this
simply cause an exception.

> I expect that any C function that may cause Python code to be executed must
> be wrapped the way apply was wrapped.  So in the example, PySequence_Tuple
> may return Py_UnwindToken.  This adds an extra return condition that every
> caller of PySequence_Tuple must check.  Currently, the caller must check for
> NULL/exception in addition to a normal return.  With stackless, I assume the
> caller would also need to check for "unwinding."

No, nobody else is allowed to return Py_UnwindToken but the few
functions in the builtins implementation and in ceval. The
continuationmodule may produce it, since it knows the context
where it is called. eval_code is supposed to be the main instance
that checks for this special value.

As said, allowing this in any context would have been a huge
change to the whole implementation, and would probably also
have broken existing extensions which do not expect that
a standard function wants to do a callback.

> Is this analysis correct? Or is there something I'm missing?
> 
> I see that the current source release of stackless does not do anything
> special to deal with C API calls that execute Python code.  For example,
> PyDict_GetItem calls PyObject_Hash, which could in theory lead to a call on
> a continuation, but neither caller nor callee does anything special to
> account for the possibility.  Is there some other part of the implementation
> that prevents this from being a problem?

This is not a problem by itself, since inside the stackless
modification for Python there are no places where unexpected
Py_UnwindTokens or continuations are produced; in that respect
it is a closed system. But with the continuation extension, it
is of course a major problem.

The final solution to the recursive interpreter/continuation
problem was found long after my paper was presented. The idea
is simple, solves everything, and shortened my implementation
substantially:

Whenever a recursive interpreter call takes place, the calling
frame gets a lock flag set. This flag says "this frame is wrapped
in a suspended eval_code call and cannot be a continuation".
continuationmodule always obeys this flag and prevents the
creation of continuations for such frames by raising an
exception. In other words: stack-like behavior is enforced
in situations where the C stack is involved.
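The lock-flag rule can be modelled with a toy sketch (all names below are illustrative stand-ins, not the actual continuationmodule API or the C-level data structures):

```python
# Toy model of Christian's rule: a frame that is wrapped in a
# recursive (C-level) interpreter call gets a lock flag, and asking
# for a continuation of a locked frame raises an exception.
class Frame:
    def __init__(self):
        self.locked = False

def recursive_interpreter_call(frame):
    # Stand-in for a suspended eval_code call wrapping the frame.
    frame.locked = True

def get_continuation(frame):
    if frame.locked:
        raise RuntimeError("frame is held by a recursive "
                           "interpreter call; stack-like "
                           "behavior is enforced")
    return object()  # stand-in for a real continuation

f = Frame()
get_continuation(f)            # fine before the recursive call
recursive_interpreter_call(f)
try:
    get_continuation(f)        # now forbidden
except RuntimeError:
    pass
```

The sketch only illustrates the control rule: once the C stack is involved, the frame can no longer escape via a continuation.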

So, a builtin or an extension *can* call a continuation, but
eventually it will have to come back to the calling point.
If not, one of the locked frames will eventually be touched
in the wrong C stack order. But through reference counting,
this touching will trigger an attempt to create a continuation,
and, as explained above, that will raise an exception.

This is probably the wrong place to explain it in more detail, but
it doesn't apply to the stackless core at all, which is just
responsible for the necessary support machinery.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com



From paul at prescod.net  Mon Aug  7 14:18:43 2000
From: paul at prescod.net (Paul Prescod)
Date: Mon, 07 Aug 2000 08:18:43 -0400
Subject: [Python-Dev] New winreg module really an improvement?
References: <39866F8D.FCFA85CB@prescod.net> <1246975873-72274187@hypernet.com>
Message-ID: <398EA923.E5400D2B@prescod.net>

Gordon McMillan wrote:
> 
> ...
> 
> As a practical matter, it looks to me like winreg (under any but
> the most well-informed usage) may well leak handles. If so,
> that would be a disaster. But I don't have time to check it out.

I would be very surprised if that was the case. Perhaps you can outline
your thinking so that *I* can check it out.

I claim that:

_winreg never leaks Windows handles as long as _winreg handle objects
are destroyed.

winreg is written entirely in Python and destroys _winreg handles as
long as winreg key objects are destroyed.

winreg key objects are destroyed as long as there is no cycle.

winreg does not create cycles.

Therefore, winreg does not leak handles. I'm 98% confident of each
assertion...for a total confidence of 92%.
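Paul's arithmetic checks out under the usual assumption that the four assertions are independent:

```python
# Back-of-the-envelope check of the quoted confidence figure,
# assuming the four assertions fail independently.
per_assertion = 0.98
total = per_assertion ** 4
# 0.98 ** 4 is about 0.922, i.e. the 92% quoted above.
```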
-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From guido at beopen.com  Mon Aug  7 14:38:11 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 07:38:11 -0500
Subject: [Python-Dev] Re: A small proposed change to dictionaries' "get" method
In-Reply-To: Your message of "Mon, 07 Aug 2000 13:50:32 +1200."
             <200008070150.NAA12345@s454.cosc.canterbury.ac.nz> 
References: <200008070150.NAA12345@s454.cosc.canterbury.ac.nz> 
Message-ID: <200008071238.HAA18076@cj20424-a.reston1.va.home.com>

> > dict.default('hello', []).append('hello')
> 
> Is this new method going to apply to dictionaries only,
> or is it to be considered part of the standard mapping
> interface?
> 
> If the latter, I wonder whether it would be better to
> provide a builtin function instead. The more methods
> are added to the mapping interface, the more complicated
> it becomes to implement an object which fully complies
> with the mapping interface. Operations which can be
> carried out through the basic interface are perhaps
> best kept "outside" the object, in a function or
> wrapper object.

The "mapping interface" has no firm definition.  You're free to
implement something without a default() method and call it a mapping.

In Python 3000, where classes and built-in types will be unified, of
course this will be fixed: there will be a "mapping" base class that
implements get() and default() in terms of other, more primitive
operations.
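A minimal sketch of what such a base class might look like, assuming only __getitem__/__setitem__ as the primitive operations (the class and method names here are illustrative, not a design commitment):

```python
# Sketch: get() and setdefault() expressed purely in terms of the
# primitive item access operations, so any mapping that provides
# __getitem__/__setitem__ inherits them for free.
class MappingMixin:
    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

    def setdefault(self, key, default):
        try:
            return self[key]
        except KeyError:
            self[key] = default
            return default

class SimpleMap(MappingMixin):
    def __init__(self):
        self._data = {}
    def __getitem__(self, key):
        return self._data[key]
    def __setitem__(self, key, value):
        self._data[key] = value

m = SimpleMap()
m.setdefault('x', []).append(1)
assert m['x'] == [1]
assert m.get('missing', 'fallback') == 'fallback'
```

This is essentially Greg's "keep it outside the object" idea turned inside out: the derived operations live in a shared base class rather than a builtin function.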

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From moshez at math.huji.ac.il  Mon Aug  7 13:45:45 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Mon, 7 Aug 2000 14:45:45 +0300 (IDT)
Subject: [Python-Dev] Minor compilation problem on HP-UX (1.6b1) (fwd)
Message-ID: <Pine.GSO.4.10.10008071444080.4113-100000@sundial>

I've answered him personally about the first part -- but the second part
is interesting (and even troubling)

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez

---------- Forwarded message ----------
Date: Mon, 7 Aug 2000 08:59:30 +0000 (UTC)
From: Eddy De Greef <degreef at imec.be>
To: python-list at python.org
Newsgroups: comp.lang.python
Subject: Minor compilation problem on HP-UX (1.6b1)

Hi,

when I compile version 1.6b1 on HP-UX-10, I get a few compilation errors 
in Python/getargs.c (undefined UCHAR_MAX etc). The following patch fixes this:

------------------------------------------------------------------------------
*** Python/getargs.c.orig       Mon Aug  7 10:19:55 2000
--- Python/getargs.c    Mon Aug  7 10:20:21 2000
***************
*** 8,13 ****
--- 8,14 ----
  #include "Python.h"
  
  #include <ctype.h>
+ #include <limits.h>
  
  
  int PyArg_Parse Py_PROTO((PyObject *, char *, ...));
------------------------------------------------------------------------------

I also have a suggestion to improve the speed on the HP-UX platform. 
By tuning the memory allocation algorithm (see the patch below), it is 
possible to obtain a speed improvement of up to 22% on non-trivial 
Python scripts, especially when lots of (small) objects have to be created. 
I'm aware that platform-specific features are undesirable for a 
multi-platform application such as Python, but 22% is quite a lot
for such a small modification ...
Maybe similar tricks can be used on other platforms too.

------------------------------------------------------------------------------
*** Modules/main.c.orig Mon Aug  7 10:02:09 2000
--- Modules/main.c      Mon Aug  7 10:02:37 2000
***************
*** 83,88 ****
--- 83,92 ----
        orig_argc = argc;       /* For Py_GetArgcArgv() */
        orig_argv = argv;
  
+ #ifdef __hpux
+       mallopt (M_MXFAST, 512);
+ #endif /* __hpux */
+ 
        if ((p = getenv("PYTHONINSPECT")) && *p != '\0')
                inspect = 1;
        if ((p = getenv("PYTHONUNBUFFERED")) && *p != '\0')
------------------------------------------------------------------------------

Regards,

Eddy
-- 
http://www.python.org/mailman/listinfo/python-list




From gmcm at hypernet.com  Mon Aug  7 14:00:10 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Mon, 7 Aug 2000 08:00:10 -0400
Subject: [Python-Dev] New winreg module really an improvement?
In-Reply-To: <398EA923.E5400D2B@prescod.net>
Message-ID: <1246472883-102528128@hypernet.com>

Paul Prescod wrote:

> Gordon McMillan wrote:
> > 
> > ...
> > 
> > As a practical matter, it looks to me like winreg (under any
> > but the most well-informed usage) may well leak handles. If so,
> > that would be a disaster. But I don't have time to check it
> > out.
> 
> I would be very surprised if that was the case. Perhaps you can
> outline your thinking so that *I* can check it out.

Well, I saw RegKey.close nowhere referenced. I saw the 
method it calls in _winreg not getting triggered elsewhere. I 
missed that _winreg closes them another way on dealloc.

BTW, not all your hive names exist on every Windows 
platform (or build of _winreg).
 


- Gordon



From jack at oratrix.nl  Mon Aug  7 14:27:59 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 07 Aug 2000 14:27:59 +0200
Subject: [Python-Dev] Minor compilation problem on HP-UX (1.6b1) (fwd) 
In-Reply-To: Message by Moshe Zadka <moshez@math.huji.ac.il> ,
	     Mon, 7 Aug 2000 14:45:45 +0300 (IDT) , <Pine.GSO.4.10.10008071444080.4113-100000@sundial> 
Message-ID: <20000807122800.8D0B1303181@snelboot.oratrix.nl>

> + #ifdef __hpux
> +       mallopt (M_MXFAST, 512);
> + #endif /* __hpux */
> + 

After reading this I went off and actually _read_ the mallopt manpage for the 
first time in my life, and it seems there's quite a few parameters there we 
might want to experiment with. Besides the M_MXFAST there's also M_GRAIN, 
M_BLKSIZ, M_MXCHK and M_FREEHD that could have significant impact on Python 
performance. I know that all the tweaks and tricks I did in the MacPython 
malloc implementation resulted in a speedup of 20% or more (including the 
cache-alignment code in dictobject.c).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From Vladimir.Marangozov at inrialpes.fr  Mon Aug  7 14:59:49 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 14:59:49 +0200 (CEST)
Subject: mallopt (Re: [Python-Dev] Minor compilation problem on HP-UX (1.6b1) (fwd))
In-Reply-To: <20000807122800.8D0B1303181@snelboot.oratrix.nl> from "Jack Jansen" at Aug 07, 2000 02:27:59 PM
Message-ID: <200008071259.OAA22446@python.inrialpes.fr>

Jack Jansen wrote:
> 
> 
> > + #ifdef __hpux
> > +       mallopt (M_MXFAST, 512);
> > + #endif /* __hpux */
> > + 
> 
> After reading this I went off and actually _read_ the mallopt manpage for the 
> first time in my life, and it seems there's quite a few parameters there we 
> might want to experiment with. Besides the M_MXFAST there's also M_GRAIN, 
> M_BLKSIZ, M_MXCHK and M_FREEHD that could have significant impact on Python 
> performance. I know that all the tweaks and tricks I did in the MacPython 
> malloc implementation resulted in a speedup of 20% or more (including the 
> cache-aligment code in dictobject.c).

To start with, try the optional object malloc I uploaded yesterday at SF.
[Patch #101104]

Tweaking mallopt and getting 20% speedup for some scripts is no surprise
at all. For me <wink>. It is not portable though.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From jeremy at beopen.com  Mon Aug  7 15:05:20 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 7 Aug 2000 09:05:20 -0400 (EDT)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEGEGOAA.tim_one@email.msn.com>
References: <20000806185159.A14259@thyrsus.com>
	<LNBBLJKPBEHFEDALKOLCIEGEGOAA.tim_one@email.msn.com>
Message-ID: <14734.46096.366920.827786@bitdiddle.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

  TP> On the one hand, I don't think I know of a language *not* based
  TP> on Scheme that has call/cc (or a moral equivalent).

ML also has call/cc, at least the Concurrent ML variant.

Jeremy



From jeremy at beopen.com  Mon Aug  7 15:10:14 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 7 Aug 2000 09:10:14 -0400 (EDT)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <398E93F3.374B585A@appliedbiometrics.com>
References: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>
	<398E93F3.374B585A@appliedbiometrics.com>
Message-ID: <14734.46390.190481.441065@bitdiddle.concentric.net>

>>>>> "CT" == Christian Tismer <tismer at appliedbiometrics.com> writes:

  >> If someone is going to write a PEP, I hope they will explain how
  >> the implementation deals with the various Python C API calls that
  >> can call back into Python.

  CT> He will.

Good!  You'll write a PEP.

  >> How does this control flow at the C level interact with a Python
  >> API call like PySequence_Tuple or PyObject_Compare that can start
  >> executing Python code again?  Say there is a Python function call
  >> which in turn calls PySequence_Tuple, which in turn calls a
  >> __getitem__ method on some Python object, which in turn uses a
  >> continuation to transfer control.  After the continuation is
  >> called, the Python function will never return and the
  >> PySequence_Tuple call is no longer necessary, but there is still a
  >> call to PySequence_Tuple on the C stack.  How does stackless deal
  >> with the return through this function?

  CT> Right. What you see here is the incompleteness of Stackless.  In
  CT> order to get this "right", I would have to change many parts of
  CT> the implementation, in order to allow for continuations in every
  CT> (probably even unwanted) place.  I could not do this.

  CT> Instead, the situation of these still occurring recursions are
  CT> handled differently. continuationmodule guarantees, that in the
  CT> context of recursive interpreter calls, the given stack order of
  CT> execution is obeyed. Violations of this cause simply an
  CT> exception.

Let me make sure I understand: If I invoke a continuation when there
are extra C stack frames between the mainloop invocation that captured
the continuation and the call of the continuation, the interpreter
raises an exception?

If so, continuations don't sound like they would mix well with C
extension modules and callbacks.  I guess they also could not be used
inside methods that implement operator overloading.  Is there a simple
set of rules that describes the situations where they will not work?

Jeremy



From thomas at xs4all.net  Mon Aug  7 15:07:11 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 7 Aug 2000 15:07:11 +0200
Subject: [Python-Dev] augmented assignment
Message-ID: <20000807150711.W266@xs4all.nl>


I 'finished' the new augmented assignment patch yesterday, following the
suggestions made by Guido about using INPLACE_* bytecodes rather than
special GETSET_* opcodes.

I ended up with 13 new opcodes: INPLACE_* opcodes for the 11 binary
operation opcodes, DUP_TOPX which duplicates a number of stack items instead
of just the topmost item, and ROT_FOUR.

I thought I didn't need ROT_FOUR if we had DUP_TOPX, but I hadn't realized
that assignment needs the new value at the bottom of the 'stack', with the
objects used in the assignment above it. So ROT_FOUR is necessary in the
case of slice assignment:

a[b:c] += i

LOAD a			[a]
LOAD b			[a, b]
LOAD c			[a, b, c]
DUP_TOPX 3		[a, b, c, a, b, c]
SLICE+3			[a, b, c, a[b:c]]
LOAD i			[a, b, c, a[b:c], i]
INPLACE_ADD		[a, b, c, result]
ROT_FOUR		[result, a, b, c]
STORE_SLICE+3		[]

When (and if) the *SLICE opcodes are removed, ROT_FOUR can, too :)
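The opcode walkthrough above corresponds to roughly this pure-Python expansion (a sketch of the intended semantics with made-up example values, not the code the compiler emits):

```python
# What `a[b:c] += i` should mean, step by step:
a = [1, 2, 3, 4]
b, c = 1, 3
i = [10, 20]

tmp = a[b:c]    # DUP_TOPX 3 + SLICE+3: fetch a[b:c], keeping a, b, c
tmp += i        # INPLACE_ADD on the fetched value
a[b:c] = tmp    # ROT_FOUR + STORE_SLICE+3: store the result back

assert a == [1, 2, 3, 10, 20, 4]
```

The key point is that a, b and c are evaluated only once, which is exactly what the DUP_TOPX duplication buys.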

The patch is 'done' in my opinion, except for two tiny things:

- PyNumber_InPlacePower() takes just two arguments, not three. Three
argument power() does such 'orrible things to coerce all the arguments, and
you can't do augmented-assignment-three-argument-power anyway. If it's added
it would be for the API only, and I'm not sure if it's worth it :P

- I still don't like the '_ab_' names :) I think __inplace_add__ or __iadd__
  is better, but that's just me.

The PEP is also 'done'. Feedback is more than welcome, including spelling
fixes and the like. I've attached the PEP to this mail, for convenience.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: pep-0203.txt
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000807/c2d02a00/attachment-0001.txt>

From guido at beopen.com  Mon Aug  7 16:11:52 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 09:11:52 -0500
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: Your message of "Mon, 07 Aug 2000 10:14:54 +0200."
             <m13Lj9u-000DieC@artcom0.artcom-gmbh.de> 
References: <m13Lj9u-000DieC@artcom0.artcom-gmbh.de> 
Message-ID: <200008071411.JAA18437@cj20424-a.reston1.va.home.com>

> Guido:
> >     >> dict.default('hello', []).append('hello')
> 
> Greg Ewing <greg at cosc.canterbury.ac.nz>:
> >     GE> Is this new method going to apply to dictionaries only,
> >     GE> or is it to be considered part of the standard mapping
> >     GE> interface?
>  
> Barry A. Warsaw:
> > I think we've settled on setdefault(), which is more descriptive, even
> > if it's a little longer.  I have posted SF patch #101102 which adds
> > setdefault() to both the dictionary object and UserDict (along with
> > the requisite test suite and doco changes).

PF:
> This didn't answer the question raised by Greg Ewing.  As far as I have seen,
> the patch doesn't touch 'dbm', 'shelve' and so on.  So from the patch
> the answer is "applies to dictionaries only".

I replied to Greg Ewing already: it's not part of the required mapping
protocol.

> What is with the other external mapping types already in the core,
> like 'dbm', 'shelve' and so on?
> 
> If the patch doesn't add this new method to these other mapping types, 
> this fact should at least be documented similar to the methods 'items()' 
> and 'values()' that are already unimplemented in 'dbm':
>  """Dbm objects behave like mappings (dictionaries), except that 
>     keys and values are always strings.  Printing a dbm object 
>     doesn't print the keys and values, and the items() and values() 
>     methods are not supported."""

Good point.

> I'm still -1 on the name:  Nobody would expect, that a method 
> called 'setdefault()' will actually return something useful.  Maybe 
> it would be better to invent an absolutely obfuscated new name, so 
> that everybody is forced to actually *READ* the documentation of this 
> method or nobody will guess, what it is supposed to do or even
> worse: how to make clever use of it.

I don't get your point.  Since when is it a requirement for a method
to convey its full meaning by just its name?  As long as the name
doesn't intuitively contradict the actual meaning it should be fine.

If you read code that does:

	dict.setdefault('key', [])
	dict['key'].append('bar')

you will have no problem understanding this.  There's no need for the
reader to know that this is suboptimal.  (Of course, if you're an
experienced Python user doing a code review, you might know that.  But
it's not needed to understand what goes on.)

Likewise, if you read code like this:

	dict.setdefault('key', []).append('bar')

it doesn't seem hard to guess what it does (under the assumption that
you already know the program works).  After all, there are at most
three things that setdefault() could *possibly* return:

1. None		-- but then the append() wouldn't work

2. dict		-- but append() is not a dict method so wouldn't work either

3. dict['key']	-- this is the only one that makes sense
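Option 3 is easy to confirm once the method exists (a small check, assuming setdefault() behaves as Barry's patch #101102 describes):

```python
# setdefault() returns dict['key'], so the appended value lands
# in the dictionary.
d = {}
result = d.setdefault('key', [])
result.append('bar')
assert result is d['key']          # case 3: the stored value is returned
assert d == {'key': ['bar']}

# On a second call the existing value is returned; the default is ignored.
d.setdefault('key', []).append('baz')
assert d['key'] == ['bar', 'baz']
```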

> At least it would be a lot more likely that someone becomes curious 
> about what a method called 'grezelbatz()' is supposed to do, than that
> someone will actually look up the documentation of a method called 'setdefault()'.

Bogus.  This would argue that we should give all methods obscure names.

> If the average Python programmer ever starts to use this method 
> at all, then I believe it is very likely that we will see him/her
> coding:
> 	dict.setdefault('key', [])
> 	dict['key'].append('bar')

And I have no problem with that.  It's still less writing than the
currently common idioms to deal with this!
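For comparison, the common idioms Guido alludes to look roughly like this (a sketch; written with the modern `in` operator rather than the `has_key()` of the time):

```python
d = {}

# idiom 1: look before you leap
if 'key' not in d:
    d['key'] = []
d['key'].append('bar')

# idiom 2: easier to ask forgiveness than permission
try:
    d['key'].append('baz')
except KeyError:
    d['key'] = ['baz']

assert d['key'] == ['bar', 'baz']
```

Either idiom needs a test plus two references to the key; setdefault() collapses the whole thing into one expression.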

> So I'm also still -1 on the concept.  I'm +0 on Gregs proposal, that
> it would be better to make this a builtin function, that can be applied
> to all mapping types.

Yes, and let's also make values(), items(), has_key() and get()
builtins instead of methods.  Come on!  Python is an OO language.

> Maybe it would be even better to delay this until in Python 3000
> builtin types may have become real classes, so that this method may
> be inherited by all mapping types from an abstract mapping base class.

Sure, but that's not an argument for not adding it to the dictionary
type today!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jack at oratrix.nl  Mon Aug  7 15:26:40 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 07 Aug 2000 15:26:40 +0200
Subject: mallopt (Re: [Python-Dev] Minor compilation problem on HP-UX 
 (1.6b1) (fwd))
In-Reply-To: Message by Vladimir.Marangozov@inrialpes.fr (Vladimir Marangozov) 
 ,
	     Mon, 7 Aug 2000 14:59:49 +0200 (CEST) , <200008071259.OAA22446@python.inrialpes.fr> 
Message-ID: <20000807132641.A60E6303181@snelboot.oratrix.nl>

Don't worry, Vladimir, I hadn't forgotten your malloc stuff:-) It's just that 
if mallopt is available in the standard C library this may be a way to squeeze 
out a couple of extra percent of performance that the admin who installs 
Python needn't be aware of. And I don't think your allocator can be dropped in 
to the standard distribution, because it has the potential problem of 
fragmenting the heap due to multiple malloc packages in one address space (at 
least, that was the problem when I last looked at it, which is admittedly more 
than a year ago).

And about mallopt not being portable: right, but I would assume that something 
like
#ifdef M_MXFAST
	mallopt(M_MXFAST, xxxx);
#endif
shouldn't do any harm if we set xxxx to be a size that will cause 80% or so of 
the python objects to fall into the M_MXFAST category 
(sizeof(PyObject)+sizeof(void *), maybe?). This doesn't sound 
platform-dependent...

Similarly, M_FREEHD sounds like it could speed up Python allocation, but this 
would need to be measured. Python allocation patterns shouldn't be influenced 
too much by platform, so again if this is good on one platform it is probably 
good on all.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From mark at per.dem.csiro.au  Mon Aug  7 21:34:42 2000
From: mark at per.dem.csiro.au (Mark Favas)
Date: Mon, 7 Aug 0 21:34:42 WST
Subject: [Python-Dev] mallopt (Was: Minor compilation problem on HP-UX (1.6b1))
Message-ID: <200008071334.VAA15707@demperth.per.dem.csiro.au>

To add to Vladimir Marangozov's comments about mallopt, in terms of both
portability and utility (before too much time is expended)...


From tismer at appliedbiometrics.com  Mon Aug  7 15:47:39 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Mon, 07 Aug 2000 15:47:39 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
References: <AJEAKILOCCJMDILAPGJNGEGGCBAA.jeremy@alum.mit.edu>
		<398E93F3.374B585A@appliedbiometrics.com> <14734.46390.190481.441065@bitdiddle.concentric.net>
Message-ID: <398EBDFB.4ED9FAE7@appliedbiometrics.com>

[about recursion and continuations]

>   CT> Right. What you see here is the incompleteness of Stackless.  In
>   CT> order to get this "right", I would have to change many parts of
>   CT> the implementation, in order to allow for continuations in every
>   CT> (probably even unwanted) place.  I could not do this.
> 
>   CT> Instead, the situation of these still occurring recursions is
>   CT> handled differently. The continuation module guarantees that,
>   CT> in the context of recursive interpreter calls, the given stack
>   CT> order of execution is obeyed. Violations of this simply cause
>   CT> an exception.
> 
> Let me make sure I understand: If I invoke a continuation when there
> are extra C stack frames between the mainloop invocation that captured
> the continuation and the call of the continuation, the interpreter
> raises an exception?

Not always. Frames which are not currently bound by an
interpreter acting on them can always be jump targets.
Only those frames which are currently in the middle of
an opcode are forbidden.

> If so, continuations don't sound like they would mix well with C
> extension modules and callbacks.  I guess it also could not be used
> inside methods that implement operator overloading.  Is there a simple
> set of rules that describe the situations where they will not work?

Right. In order to have good mixing with C callbacks, extra
work is necessary. The C extension module must then play the
same frame dribbling game as the eval loop does. An example
can be found in the stackless map implementation.
If the C extension does not do so, it restricts execution
order in the way I explained. This is not always needed,
and it is not a new requirement for C developers. Only if
they want to support free continuation switching do they
have to implement it.

The simple set of rules for where continuations will not work
at the moment is: generally, they do not work across interpreter
recursions. At the least, these restrictions apply:

- you cannot run an import and jump off to the caller's frame
+ but you can save a continuation in your import and use it
  later, when this recursive interpreter is gone.

- all special class functions are restricted.
+ but you can for instance save a continuation in __init__
  and use it later, when the init recursion has gone.

Reducing all these restrictions is a major task, and there
are situations where it looks impossible without an extra
subinterpreter language. If you look into the implementation
of operators like __add__, you will see that there are
repeated method calls which all may cause other interpreters
to show up. I tried to find a way to roll these functions
out in a restartable way, but it is quite a mess. The
clean way to do it would be to have microcodes, and to allow
for continuations to be caught between them.

this-is-a-stackless-3000-feature - ly y'rs - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com



From Vladimir.Marangozov at inrialpes.fr  Mon Aug  7 16:00:08 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 16:00:08 +0200 (CEST)
Subject: mallopt (Re: [Python-Dev] Minor compilation problem on HP-UX
In-Reply-To: <20000807132641.A60E6303181@snelboot.oratrix.nl> from "Jack Jansen" at Aug 07, 2000 03:26:40 PM
Message-ID: <200008071400.QAA22652@python.inrialpes.fr>

Jack Jansen wrote:
> 
> Don't worry, Vladimir, I hadn't forgotten your malloc stuff:-)

Me? worried about mallocs? :-)

> if mallopt is available in the standard C library this may be a way
> to squeeze out a couple of extra percent of performance that the admin
> who installs Python needn't be aware of.

As long as you're maintaining a Mac-specific port of Python, you can
do this without problems on the Mac port.

> And I don't think your allocator can be dropped in 
> to the standard distribution, because it has the potential problem of 
> fragmenting the heap due to multiple malloc packages in one address
> space (at least, that was the problem when I last looked at it, which
> is admittedly more than a year ago).

Things have changed since then. Mainly on the Python side.
Have a look again.

> 
> And about mallopt not being portable: right, but I would assume that
> something like
> #ifdef M_MXFAST
> 	mallopt(M_MXFAST, xxxx);
> #endif
> shouldn't do any harm if we set xxxx to be a size that will cause 80%
> or so of the python objects to fall into the M_MXFAST category 

Which is exactly what pymalloc does, except that this applies to > 95% of
all allocations.

> (sizeof(PyObject)+sizeof(void *), maybe?). This doesn't sound 
> platform-dependent...

Indeed, I also use this trick to tune automatically the object allocator
for 64-bit platforms. I haven't tested it on such machines as I don't have
access to them, though. But it should work.

> Similarly, M_FREEHD sounds like it could speed up Python allocation,
> but this would need to be measured. Python allocation patterns shouldn't
> be influenced too much by platform, so again if this is good on one
> platform it is probably good on all.

I am against any guesses in this domain. Measures and profiling evidence:
that's it.  Being able to make lazy decisions about Python's mallocs is
our main advantage. Anything else is wild hype <0.3 wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gmcm at hypernet.com  Mon Aug  7 16:20:50 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Mon, 7 Aug 2000 10:20:50 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <14734.46390.190481.441065@bitdiddle.concentric.net>
References: <398E93F3.374B585A@appliedbiometrics.com>
Message-ID: <1246464444-103035791@hypernet.com>

Jeremy wrote:

> >>>>> "CT" == Christian Tismer <tismer at appliedbiometrics.com>
> >>>>> writes:
> 
>   >> If someone is going to write a PEP, I hope they will explain
>   how >> the implementation deals with the various Python C API
>   calls that >> can call back into Python.
> 
>   CT> He will.
> 
> Good!  You'll write a PEP.

Actually, "He" is me. While I speak terrible German, my 
Tismerish is pretty good (Tismerish to English is a *huge* 
jump <wink>).

But I can't figure out what the h*ll is being PEPed. We know 
that continuations / coroutines / generators have great value. 
We know that stackless is not continuations; it's some mods 
(mostly to ceval.c) that enables continuation.c. But the 
questions you're asking (after protesting that you want a 
formal spec, not a reference implementation) are all about 
Christian's implementation of continuation.c. (Well, OK, it's 
whether the stackless mods are enough to allow a perfect 
continuations implementation.)

Assuming that stackless can get along with GC, ceval.c and 
grammar changes (or Christian can make it so), it seems to 
me the PEPable issue is whether the value this can add is 
worth the price of a less linear implementation.

still-a-no-brainer-to-me-ly y'rs

- Gordon



From jack at oratrix.nl  Mon Aug  7 16:23:14 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 07 Aug 2000 16:23:14 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons 
In-Reply-To: Message by Christian Tismer <tismer@appliedbiometrics.com> ,
	     Mon, 07 Aug 2000 15:47:39 +0200 , <398EBDFB.4ED9FAE7@appliedbiometrics.com> 
Message-ID: <20000807142314.C0186303181@snelboot.oratrix.nl>

> > Let me make sure I understand: If I invoke a continuation when there
> > are extra C stack frames between the mainloop invocation that captured
> > the continuation and the call of the continuation, the interpreter
> > raises an exception?
> 
> Not always. Frames which are not currently bound by an
> interpreter acting on them can always be jump targets.
> Only those frames which are currently in the middle of
> an opcode are forbidden.

And how about the reverse? If I'm inside a Python callback from C code, will 
the Python code be able to use continuations? This is important, because there 
are a lot of GUI applications where almost all code is executed within a C 
callback. I'm pretty sure (and otherwise I'll be corrected within 
milliseconds:-) that this is the case for MacPython IDE and PythonWin (don't 
know about Idle).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From jeremy at beopen.com  Mon Aug  7 16:32:35 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 7 Aug 2000 10:32:35 -0400 (EDT)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <1246464444-103035791@hypernet.com>
References: <398E93F3.374B585A@appliedbiometrics.com>
	<1246464444-103035791@hypernet.com>
Message-ID: <14734.51331.820955.54653@bitdiddle.concentric.net>

Gordon,

Thanks for channeling Christian, if that's what writing a PEP on this
entails :-).

I am also a little puzzled about the subject of the PEP.  I think you
should hash it out with Barry "PEPmeister" Warsaw.  There are two
different issues -- the stackless implementation and the new control
structure exposed to programmers (e.g. continuations, coroutines,
iterators, generators, etc.).  It seems plausible to address these in
two different PEPs, possibly in competing PEPs (e.g. coroutines
vs. continuations).

Jeremy



From Vladimir.Marangozov at inrialpes.fr  Mon Aug  7 16:38:32 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 16:38:32 +0200 (CEST)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <1246464444-103035791@hypernet.com> from "Gordon McMillan" at Aug 07, 2000 10:20:50 AM
Message-ID: <200008071438.QAA22748@python.inrialpes.fr>

Gordon McMillan wrote:
> 
> But I can't figure out what the h*ll is being PEPed.
> ...
> Assuming that stackless can get along with GC,

As long as frames are not considered for GC, don't worry about GC.

> ceval.c and grammar changes (or Christian can make it so), it seems to 
> me the PEPable issue is whether the value this can add is 
> worth the price of a less linear implementation.

There's an essay + paper available, slides and an implementation.
What's the problem about formalizing this in a PEP and addressing
the controversial issues + explaining how they are dealt with?

I mean, if you're a convinced long-time Stackless user and everything
is obvious for you, this PEP should try to convince the rest of us -- 
so write it down and ask no more <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tismer at appliedbiometrics.com  Mon Aug  7 16:50:42 2000
From: tismer at appliedbiometrics.com (Christian Tismer)
Date: Mon, 07 Aug 2000 16:50:42 +0200
Subject: [Python-Dev] Stackless Python - Pros and Cons
References: <20000807142314.C0186303181@snelboot.oratrix.nl>
Message-ID: <398ECCC2.957A9F67@appliedbiometrics.com>


Jack Jansen wrote:
> 
> > > Let me make sure I understand: If I invoke a continuation when there
> > > are extra C stack frames between the mainloop invocation that captured
> > > the continuation and the call of the continuation, the interpreter
> > > raises an exception?
> >
> > Not always. Frames which are not currently bound by an
> > interpreter acting on them can always be jump targets.
> > Only those frames which are currently in the middle of
> > an opcode are forbidden.
> 
> And how about the reverse? If I'm inside a Python callback from C code, will
> the Python code be able to use continuations? This is important, because there
> are a lot of GUI applications where almost all code is executed within a C
> callback. I'm pretty sure (and otherwise I'll be corrected within
> milliseconds:-) that this is the case for MacPython IDE and PythonWin (don't
> know about Idle).

Without extra effort, this will be problematic. If C calls back
into Python, not by the trampoline scheme that stackless uses,
but by causing an interpreter recursion, then this interpreter
will be limited. It can jump to any other frame that is not held
by an interpreter on the C stack, but the calling frame of the
C extension for instance is locked. Touching it causes an
exception.
This is not necessarily a problem. Assume you have one or a
couple of frames sitting around, caught as a continuation.
Your Python callback from C jumps to that continuation and does
something. Afterwards, it returns to the C callback.
Performing some cycles of an idle task may be a use of such
a thing.
But as soon as you want to leave the complete calling chain,
be able to modify it, return to a level above your callback
and such, you need to implement your callback in a different
way.
The scheme is rather simple and can be seen in the stackless
map implementation: You need to be able to store your complete
state information in a frame, and you need to provide an
execute function for your frame. Then you return the magic
Py_UnwindToken, and your prepared frame will be scheduled
like any pure Python function frame.

Summary: By default, C extensions are restricted to stackful
behavior. By giving them a stackless interface, you can
enable them completely for all continuation stuff.

cheers - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com



From gmcm at hypernet.com  Mon Aug  7 17:28:01 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Mon, 7 Aug 2000 11:28:01 -0400
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <200008071438.QAA22748@python.inrialpes.fr>
References: <1246464444-103035791@hypernet.com> from "Gordon McMillan" at Aug 07, 2000 10:20:50 AM
Message-ID: <1246460413-103278281@hypernet.com>

Vladimir Marangozov wrote:
> Gordon McMillan wrote:
> > 
> > But I can't figure out what the h*ll is being PEPed.
> > ...
...
> 
> > ceval.c and grammar changes (or Christian can make it so), it
> > seems to me the PEPable issue is whether the value this can add
> > is worth the price of a less linear implementation.
> 
> There's an essay + paper available, slides and an implementation.

Of which the most up to date is the implementation. The 
slides / docs describe an earlier, more complex scheme.

> What's the problem about formalizing this in a PEP and addressing
> the controversial issues + explaining how they are dealt with?

That's sort of what I was asking. As far as I can tell, what's 
controversial is "continuations". That's not in scope. I would 
like to know what controversial issues there are that *are* in 
scope. 
 
> I mean, if you're a convinced long-time Stackless user and
> everything is obvious for you, this PEP should try to convince
> the rest of us -- so write it down and ask no more <wink>.

That's exactly wrong. If that were the case, I would be forced 
to vote -1 on any addition / enhancement to Python that I 
personally didn't plan on using.

- Gordon



From Vladimir.Marangozov at inrialpes.fr  Mon Aug  7 17:53:15 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Mon, 7 Aug 2000 17:53:15 +0200 (CEST)
Subject: [Python-Dev] Stackless Python - Pros and Cons
In-Reply-To: <1246460413-103278281@hypernet.com> from "Gordon McMillan" at Aug 07, 2000 11:28:01 AM
Message-ID: <200008071553.RAA22891@python.inrialpes.fr>

Gordon McMillan wrote:
> 
> > What's the problem about formalizing this in a PEP and addressing
> > the controversial issues + explaining how they are dealt with?
> 
> That's sort of what I was asking. As far as I can tell, what's 
> controversial is "continuations". That's not in scope. I would 
> like to know what controversial issues there are that *are* in 
> scope. 

Here's the context that might help you figure out what I'd
like to see in this PEP.  I wasn't at the last conference, I read the
source and the essay years ago, and I had no idea that the most up to
date thing is the implementation, which I refuse to look at again, btw,
without a clear summary of what this code does to refresh my memory on
the whole subject.

I'd like to see an overview of the changes, their expected impact on
the core, the extensions, and whatever else you judge worthy to write
about.

I'd like to see a summary of the reactions that have been emitted and
what issues are non-issues for you, and which ones are. I'd like to see
a first draft giving me a horizontal view on the subject in its entirety. 
Code examples are welcome, too. I can then start thinking about it
in a more structured way on this basis. I don't have such a basis right
now, because there's no up-to-date document in plain English that
allows me to do that. And without such a document, I won't do it.

it's-simple-<wink>'ly y'rs
-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From sjoerd at oratrix.nl  Mon Aug  7 18:19:59 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Mon, 07 Aug 2000 18:19:59 +0200
Subject: [Python-Dev] SRE incompatibility
In-Reply-To: Your message of Wed, 05 Jul 2000 01:46:07 +0200.
             <002601bfe612$06e90ec0$f2a6b5d4@hagrid> 
References: <20000704095542.8697B31047C@bireme.oratrix.nl> 
            <002601bfe612$06e90ec0$f2a6b5d4@hagrid> 
Message-ID: <20000807162000.5190631047C@bireme.oratrix.nl>

Is this problem ever going to be solved or is it too hard?
If it's too hard, I can fix xmllib to not depend on this.  This
incompatibility is the only reason I'm still not using sre.

In case you don't remember, the regexp that is referred to is
regexp = '(([a-z]+):)?([a-z]+)$'

On Wed, Jul 5 2000 "Fredrik Lundh" wrote:

> sjoerd wrote:
> 
> > >>> re.match(regexp, 'smil').group(0,1,2,3)
> > ('smil', None, 's', 'smil')
> > >>> import pre
> > >>> pre.match(regexp, 'smil').group(0,1,2,3)
> > ('smil', None, None, 'smil')
> > 
> > Needless to say, I am relying on the third value being None...
> 
> I've confirmed this (last night's fix should have solved this,
> but it didn't).  I'll post patches as soon as I have them...
> 
> </F>
> 
> 

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From pf at artcom-gmbh.de  Mon Aug  7 10:14:54 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Mon, 7 Aug 2000 10:14:54 +0200 (MEST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <14734.7730.698860.642851@anthem.concentric.net> from "Barry A. Warsaw" at "Aug 6, 2000 10:25:54 pm"
Message-ID: <m13Li3i-000DieC@artcom0.artcom-gmbh.de>

Hi,

Guido:
>     >> dict.default('hello', []).append('hello')

Greg Ewing <greg at cosc.canterbury.ac.nz>:
>     GE> Is this new method going to apply to dictionaries only,
>     GE> or is it to be considered part of the standard mapping
>     GE> interface?
 
Barry A. Warsaw:
> I think we've settled on setdefault(), which is more descriptive, even
> if it's a little longer.  I have posted SF patch #101102 which adds
> setdefault() to both the dictionary object and UserDict (along with
> the requisite test suite and doco changes).

This didn't answer the question raised by Greg Ewing.  As far as I have seen,
the patch doesn't touch 'dbm', 'shelve' and so on.  So from the patch
the answer is "applies to dictionaries only".

What is with the other external mapping types already in the core,
like 'dbm', 'shelve' and so on?

If the patch doesn't add this new method to these other mapping types, 
this fact should at least be documented, similarly to the methods 'items()' 
and 'values()' that are already unimplemented in 'dbm':
 """Dbm objects behave like mappings (dictionaries), except that 
    keys and values are always strings.  Printing a dbm object 
    doesn't print the keys and values, and the items() and values() 
    methods are not supported."""

I'm still -1 on the name:  Nobody would expect that a method 
called 'setdefault()' will actually return something useful.  Maybe 
it would be better to invent an absolutely obfuscated new name, so 
that everybody is forced to actually *READ* the documentation of this 
method, or nobody will guess what it is supposed to do or, even
worse, how to make clever use of it.

At least it would be a lot more likely that someone becomes curious 
about what a method called 'grezelbatz()' is supposed to do, than that
someone will actually look up the documentation of a method called 'setdefault()'.

If the average Python programmer ever starts to use this method 
at all, then I believe it is very likely that we will see him/her
coding:
	dict.setdefault('key', [])
	dict['key'].append('bar')

So I'm also still -1 on the concept.  I'm +0 on Gregs proposal, that
it would be better to make this a builtin function, that can be applied
to all mapping types.

Maybe it would be even better to delay this until in Python 3000
builtin types may have become real classes, so that this method may
be inherited by all mapping types from an abstract mapping base class.
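Peter's base-class idea can be sketched even without Python 3000: a mixin that implements setdefault() purely in terms of item access, so any mapping type inheriting from it gets the method for free (the class names here are illustrative only, not part of any actual proposal):

```python
class MappingMixin:
    """Provides setdefault() for any class with __getitem__/__setitem__."""
    def setdefault(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            self[key] = default
            return default

class Registry(MappingMixin):
    """A toy mapping type; dbm- or shelve-like types would fit the same mold."""
    def __init__(self):
        self._data = {}
    def __getitem__(self, key):
        return self._data[key]
    def __setitem__(self, key, value):
        self._data[key] = value

r = Registry()
r.setdefault('key', []).append('bar')
assert r['key'] == ['bar']
```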

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)



From tim_one at email.msn.com  Mon Aug  7 23:52:18 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 7 Aug 2000 17:52:18 -0400
Subject: Fun with call/cc (was RE: [Python-Dev] Stackless Python - Pros and Cons)
In-Reply-To: <14734.46096.366920.827786@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEILGOAA.tim_one@email.msn.com>

[Tim]
> On the one hand, I don't think I know of a language *not* based
> on Scheme that has call/cc (or a moral equivalent).

[Jeremy Hylton]
> ML also has call/cc, at least the Concurrent ML variant.

So it does!  I've found 3 language lines that have full-blown call/cc (not
counting the early versions of REBOL, since they took it out later), and at
least one web page claiming "that's all, folks":

1. Scheme + derivatives (but not including most Lisps).

2. Standard ML + derivatives (but almost unique among truly
   functional languages):

   http://cm.bell-labs.com/cm/cs/what/smlnj/doc/SMLofNJ/pages/cont.html

   That page is pretty much incomprehensible on its own.  Besides
   callcc (no "/"), SML-NJ also has related "throw", "isolate",
   "capture" and "escape" functions.  At least some of them *appear*
   to be addressing Kent Pitman's specific complaints about the
   excruciating interactions between call/cc and unwind-protect in
   Scheme.

3. Unlambda.  This one is a hoot!  Don't know why I haven't bumped
   into it before:

   http://www.eleves.ens.fr:8080/home/madore/programs/unlambda/
   "Your Functional Programming Language Nightmares Come True"

   Unlambda is a deliberately obfuscated functional programming
   language, whose only data type is function and whose only
   syntax is function application:  no lambdas (or other "special
   forms"), no integers, no lists, no variables, no if/then/else,
   ...  call/cc is spelled with the single letter "c" in Unlambda,
   and the docs note "expressions including c function calls tend
   to be hopelessly difficult to track down.  This was, of course,
   the reason for including it in the language in the first place".

   Not all frivolous, though!  The page goes on to point out that
   writing an interpreter for Unlambda in something other than Scheme
   exposes many of the difficult issues (like implementing call/cc
   in a language that doesn't have any such notion -- which is,
   after all, almost all languages), in a language that's otherwise
   relentlessly simple-minded so doesn't bog you down with
   accidental complexities.

Doesn't mean call/cc sucks, but language designers *have* been avoiding it
in vast numbers -- despite that the Scheme folks have been pushing it (&
pushing it, & pushing it) in every real language they flee to <wink>.

BTW, lest anyone get the wrong idea, I'm (mostly) in favor of it!  It can't
possibly be sold on any grounds other than that "it works, for real Python
programmers with real programming problems they can't solve in other ways",
though.  Christian has been doing a wonderful (if slow-motion <wink>) job of
building that critical base of real-life users.





From guido at beopen.com  Tue Aug  8 01:03:46 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 18:03:46 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre.py,1.23,1.24 sre_compile.py,1.29,1.30 sre_parse.py,1.29,1.30
In-Reply-To: Your message of "Mon, 07 Aug 2000 13:59:08 MST."
             <200008072059.NAA11904@slayer.i.sourceforge.net> 
References: <200008072059.NAA11904@slayer.i.sourceforge.net> 
Message-ID: <200008072303.SAA31635@cj20424-a.reston1.va.home.com>

> -- reset marks if repeat_one tail doesn't match
>    (this should fix Sjoerd's xmllib problem)

Somebody please add a test case for this to test_re.py !!!
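A test along the lines requested might look like this (a sketch only, not the actual test_re.py addition; the pattern is the one from Sjoerd's report):

```python
import re

# the optional '(([a-z]+):)?' prefix cannot match 'smil' (no colon), so
# after the engine backtracks, groups 1 and 2 must both be None
m = re.match(r'(([a-z]+):)?([a-z]+)$', 'smil')
assert m.group(0, 1, 2, 3) == ('smil', None, None, 'smil')

# sanity check: with a prefix present, all groups are filled in
m = re.match(r'(([a-z]+):)?([a-z]+)$', 'ns:smil')
assert m.group(0, 1, 2, 3) == ('ns:smil', 'ns:', 'ns', 'smil')
```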

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From esr at thyrsus.com  Tue Aug  8 00:13:02 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:13:02 -0400
Subject: [Python-Dev] Adding library modules to the core
Message-ID: <20000807181302.A27463@thyrsus.com>

A few days ago I asked about the procedure for adding a module to the
Python core library.  I have a framework class for things like menu systems
and symbolic debuggers I'd like to add.

Guido asked if this was similar to the TreeWidget class in IDLE.  I 
investigated and discovered that it is not, and told him so.  I am left
with a couple of related questions:

1. Has anybody got a vote on the menubrowser framework facility I described?

2. Do we have a procedure for vetting modules for inclusion in the stock
distribution?  If not, should we institute one?

3. I am willing to do a pass through the Vaults of Parnassus and other
sources for modules that seem both sufficiently useful and sufficiently
mature to be added.  I have in mind things like mimectl, PIL, and Vladimir's
shared-memory module.  

Now, assuming I do 3, would I need to go through the vote process
on each of these, or can I get a ukase from the BDFL authorizing me to
fold in stuff?

I realize I'm raising questions for which there are no easy answers.
But Python is growing.  The Python social machine needs to adapt to
make such decisions in a more timely and less ad-hoc fashion.  I'm not
attached to being the point person in this process, but somebody's gotta be.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Question with boldness even the existence of a God; because, if there
be one, he must more approve the homage of reason, than that of
blindfolded fear.... Do not be frightened from this inquiry from any
fear of its consequences. If it ends in the belief that there is no
God, you will find incitements to virtue in the comfort and
pleasantness you feel in its exercise...
	-- Thomas Jefferson, in a 1787 letter to his nephew



From esr at thyrsus.com  Tue Aug  8 00:24:03 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:24:03 -0400
Subject: Fun with call/cc (was RE: [Python-Dev] Stackless Python - Pros and Cons)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEILGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Aug 07, 2000 at 05:52:18PM -0400
References: <14734.46096.366920.827786@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCCEILGOAA.tim_one@email.msn.com>
Message-ID: <20000807182403.A27485@thyrsus.com>

Tim Peters <tim_one at email.msn.com>:
> Doesn't mean call/cc sucks, but language designers *have* been avoiding it
> in vast numbers -- despite that the Scheme folks have been pushing it (&
> pushing it, & pushing it) in every real language they flee to <wink>.

Yes, we have.  I haven't participated in conspiratorial huggermugger with
other ex-Schemers, but I suspect we'd all answer pretty much the same way.
Lots of people have been avoiding call/cc not because it sucks but because
the whole area is very hard to think about even if you have the right set
of primitives.
 
> BTW, lest anyone get the wrong idea, I'm (mostly) in favor of it!  It can't
> possibly be sold on any grounds other than that "it works, for real Python
> programmers with real programming problems they can't solve in other ways",
> though.  Christian has been doing a wonderful (if slow-motion <wink>) job of
> building that critical base of real-life users.

And it's now Christian's job to do the next step, supplying up-to-date
documentation on his patch and proposal as a PEP.

Suggestion: In order to satisfy the BDFL's conservative instincts, perhaps
it would be better to break the Stackless patch into two pieces -- one 
that de-stack-izes ceval, and one that implements new language features.
That way we can build a firm base for later exploration.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"Government is not reason, it is not eloquence, it is force; like fire, a
troublesome servant and a fearful master. Never for a moment should it be left
to irresponsible action."
	-- George Washington, in a speech of January 7, 1790



From thomas at xs4all.net  Tue Aug  8 00:23:35 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 00:23:35 +0200
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000807181302.A27463@thyrsus.com>; from esr@thyrsus.com on Mon, Aug 07, 2000 at 06:13:02PM -0400
References: <20000807181302.A27463@thyrsus.com>
Message-ID: <20000808002335.A266@xs4all.nl>

On Mon, Aug 07, 2000 at 06:13:02PM -0400, Eric S. Raymond wrote:

[ You didn't ask for votes on all these, but the best thing I can do is
vote :-]

> 1. Has anybody got a vote on the menubrowser framework facility I described?

+0. I don't see any harm in adding it, but I can't envision a use for it,
myself.

> I have in mind things like mimectl,

+1. A nice complement to the current mime and message handling routines.

> PIL,

+0. The main reason I don't compile PIL myself is because it's such a hassle
to do it each time, so I think adding it would be nice. However, I'm not
sure if it's doable to add, whether it would present a lot of problems for
'strange' platforms and the like.

> and Vladimir's shared-memory module.

+1. Fits very nicely with the mmapmodule, even if it's supported on fewer
platforms.
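
(A sketch of the overlap, using only the standard mmap module: an anonymous
mapping like the one below is what mmapmodule already covers, while shm would
add named System V segments and semaphores on top of it.)

```python
import mmap

# Anonymous, sharable memory via the standard mmap module.
buf = mmap.mmap(-1, 4096)  # -1: not backed by a file
buf[:5] = b"hello"
assert buf[:5] == b"hello"
buf.close()
```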

But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
new PEP, 'enriching the Standard Library' ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Tue Aug  8 00:39:30 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:39:30 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808002335.A266@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 12:23:35AM +0200
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl>
Message-ID: <20000807183930.A27556@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
> new PEP, 'enriching the Standard Library' ?

I think that leads in a sub-optimal direction.  Adding suitable modules
shouldn't be a one-shot or episodic event but a continuous process of 
incorporating the best work the community has done.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"Taking my gun away because I might shoot someone is like cutting my tongue
out because I might yell `Fire!' in a crowded theater."
        -- Peter Venetoklis



From esr at thyrsus.com  Tue Aug  8 00:42:24 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:42:24 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808002335.A266@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 12:23:35AM +0200
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl>
Message-ID: <20000807184224.B27556@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> On Mon, Aug 07, 2000 at 06:13:02PM -0400, Eric S. Raymond wrote:
> 
> [ You didn't ask for votes on all these, but the best thing I can do is
> vote :-]
> 
> > 1. Has anybody got a vote on the menubrowser framework facility I described?
> 
> +0. I don't see any harm in adding it, but I can't envision a use for it,
> myself.

I'll cheerfully admit that I think it's kind of borderline myself.  It works,
but it teeters on the edge of being too specialized for the core library.  I
might only +0 it myself :-).
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

As with the Christian religion, the worst advertisement for Socialism
is its adherents.
	-- George Orwell 



From thomas at xs4all.net  Tue Aug  8 00:38:39 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 00:38:39 +0200
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000807183930.A27556@thyrsus.com>; from esr@thyrsus.com on Mon, Aug 07, 2000 at 06:39:30PM -0400
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl> <20000807183930.A27556@thyrsus.com>
Message-ID: <20000808003839.Q13365@xs4all.nl>

On Mon, Aug 07, 2000 at 06:39:30PM -0400, Eric S. Raymond wrote:
> Thomas Wouters <thomas at xs4all.net>:
> > But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
> > new PEP, 'enriching the Standard Library' ?

> I think that leads in a sub-optimal direction.  Adding suitable modules
> shouldn't be a one-shot or episodic event but a continuous process of 
> incorporating the best work the community has done.

That depends on what the PEP does. PEPs can do two things (according to the
PEP that covers PEPs :): argue for a new feature/addition to the Python
language, or describe a standard or procedure of some sort. This PEP could
perhaps do both: describe a standard procedure for proposing and accepting a
new module in the library (and probably also removal, though that's a lot
trickier) AND do some catching-up on that process to get a few good modules
into the stdlib before 2.0 goes into a feature freeze (which is next week,
by the way.)

As for the procedure to add a new module, I think someone volunteering to
'adopt' the module and perhaps a few people reviewing it would about do it,
for the average module. Giving people a chance to say 'no!' of course.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Tue Aug  8 00:59:54 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 18:59:54 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808003839.Q13365@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 12:38:39AM +0200
References: <20000807181302.A27463@thyrsus.com> <20000808002335.A266@xs4all.nl> <20000807183930.A27556@thyrsus.com> <20000808003839.Q13365@xs4all.nl>
Message-ID: <20000807185954.B27636@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> That depends on what the PEP does. PEPs can do two things (according to the
> PEP that covers PEPs :): argue for a new feature/addition to the Python
> language, or describe a standard or procedure of some sort. This PEP could
> perhaps do both: describe a standard procedure for proposing and accepting a
> new module in the library (and probably also removal, though that's a lot
> trickier) AND do some catching-up on that process to get a few good modules
> into the stdlib before 2.0 goes into a feature freeze (which is next week,
> by the way.)
> 
> As for the procedure to add a new module, I think someone volunteering to
> 'adopt' the module and perhaps a few people reviewing it would about do it,
> for the average module. Giving people a chance to say 'no!' of course.

Sounds like my cue to write a PEP.  What's the URL for the PEP on PEPs again?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

See, when the GOVERNMENT spends money, it creates jobs; whereas when the money
is left in the hands of TAXPAYERS, God only knows what they do with it.  Bake
it into pies, probably.  Anything to avoid creating jobs.
	-- Dave Barry



From bwarsaw at beopen.com  Tue Aug  8 00:58:42 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 18:58:42 -0400 (EDT)
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com>
Message-ID: <14735.16162.275037.583897@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr at thyrsus.com> writes:

    ESR> 2. Do we have a procedure for vetting modules for inclusion
    ESR> in the stock distribution?  If not, should we institute one?

Is there any way to use the SourceForge machinery to help here?  The
first step would be to upload a patch so at least the new stuff
doesn't get forgotten, and it's always easy to find the latest version
of the changes.

Also SF has a polling or voting tool, doesn't it?  I know nothing
about it, but perhaps there's some way to leverage it to test the
pulse of the community for any new module (with BDFL veto of course).

-Barry



From esr at thyrsus.com  Tue Aug  8 01:09:39 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 19:09:39 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <14735.16162.275037.583897@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 07, 2000 at 06:58:42PM -0400
References: <20000807181302.A27463@thyrsus.com> <14735.16162.275037.583897@anthem.concentric.net>
Message-ID: <20000807190939.A27730@thyrsus.com>

Barry A. Warsaw <bwarsaw at beopen.com>:
> Is there any way to use the SourceForge machinery to help here?  The
> first step would be to upload a patch so at least the new stuff
> doesn't get forgotten, and it's always easy to find the latest version
> of the changes.

Patch?  Eh?  In most cases, adding a library module will consist of adding
one .py and one .tex, with no changes to existing code.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The price of liberty is, always has been, and always will be blood.  The person
who is not willing to die for his liberty has already lost it to the first
scoundrel who is willing to risk dying to violate that person's liberty.  Are
you free? 
	-- Andrew Ford



From bwarsaw at beopen.com  Tue Aug  8 01:04:39 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 19:04:39 -0400 (EDT)
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com>
	<20000808002335.A266@xs4all.nl>
	<20000807183930.A27556@thyrsus.com>
	<20000808003839.Q13365@xs4all.nl>
	<20000807185954.B27636@thyrsus.com>
Message-ID: <14735.16519.185236.794662@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr at thyrsus.com> writes:

    ESR> Sounds like my cue to write a PEP.  What's the URL for the
    ESR> PEP on PEPs again?

http://python.sourceforge.net/peps/pep-0001.html

-Barry



From bwarsaw at beopen.com  Tue Aug  8 01:06:21 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 19:06:21 -0400 (EDT)
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com>
	<14735.16162.275037.583897@anthem.concentric.net>
	<20000807190939.A27730@thyrsus.com>
Message-ID: <14735.16621.369206.564320@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr at thyrsus.com> writes:

    ESR> Patch?  Eh?  In most cases, adding a library module will
    ESR> consist of adding one .py and one .tex, with no changes to
    ESR> existing code.

And there's no good way to put those into SF?  If the Patch Manager
isn't appropriate, what about the Task Manager (I dunno, I've never
looked at it).  The cool thing about using SF is that there's less of
a chance that this stuff will get buried in an inbox.

-Barry



From guido at beopen.com  Tue Aug  8 02:21:43 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 19:21:43 -0500
Subject: [Python-Dev] bug-fixes in cnri-16-start branch
In-Reply-To: Your message of "Sun, 06 Aug 2000 22:49:06 GMT."
             <398DEB62.789B4C9C@nowonder.de> 
References: <398DEB62.789B4C9C@nowonder.de> 
Message-ID: <200008080021.TAA31766@cj20424-a.reston1.va.home.com>

> I have a question on the right procedure for fixing a simple
> bug in the 1.6 release branch.
> 
> Bug #111162 appeared because the tests for math.rint() are
> already contained in the cnri-16-start revision of test_math.py
> while the "try: ... except AttributeError: ..." construct which
> was checked in shortly after was not.
> 
> Now the correct bugfix is already known (and has been
> applied to the main branch). I have updated the test_math.py
> file in my working version with "-r cnri-16-start" and
> made the changes.
> 
> Now I probably should just commit, close the patch
> (with an appropriate follow-up) and be happy.

That would work, except that I prefer to remove math.rint altogether,
as explained by Tim Peters.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From esr at snark.thyrsus.com  Tue Aug  8 01:31:21 2000
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 19:31:21 -0400
Subject: [Python-Dev] Request for PEP number
Message-ID: <200008072331.TAA27825@snark.thyrsus.com>

In accordance with the procedures in PEP 1, I am applying to initiate PEP 2.  

Proposed title: Procedure for incorporating new modules into the core.

Abstract: This PEP describes review and voting procedures for
incorporating candidate modules and extensions into the Python core.

Barry, could I get you to create a pep2 at python.org mailing list for
this one?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

That the said Constitution shall never be construed to authorize
Congress to infringe the just liberty of the press or the rights of
conscience; or to prevent the people of the United states who are
peaceable citizens from keeping their own arms...
        -- Samuel Adams, in "Phila. Independent Gazetteer", August 20, 1789



From guido at beopen.com  Tue Aug  8 02:42:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 07 Aug 2000 19:42:40 -0500
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: Your message of "Mon, 07 Aug 2000 18:13:02 -0400."
             <20000807181302.A27463@thyrsus.com> 
References: <20000807181302.A27463@thyrsus.com> 
Message-ID: <200008080042.TAA31856@cj20424-a.reston1.va.home.com>

[ESR]
> 1. Has anybody got a vote on the menubrowser framework facility I described?

Eric, as far as I can tell you haven't shown the code or given a
pointer to it.  I explained to you that your description left me in
the dark as to what it does.  Or did I miss a pointer?  It seems your
module doesn't even have a name!  This is a bad way to start a
discussion about the admission procedure.  Nothing has ever been
accepted into Python before the code was written and shown.

> 2. Do we have a procedure for vetting modules for inclusion in the stock
> distribution?  If not, should we institute one?

Typically, modules get accepted after extensive lobbying and agreement
from multiple developers.  The definition of "developer" is vague, and
I can't give a good rule -- not everybody who has been admitted to the
python-dev list has enough standing to make his opinion count!

Basically, I rely a lot on what various people say, but I have my own
bias about who I trust in what area.  I don't think I'll have to
publish a list of this bias, but one thing is clear: I'm not counting
votes!  Proposals and ideas get approved based on merit, not on how
many people argue for (or against) it.  I want Python to keep its
typical Guido-flavored style, and (apart from the occasional successful
channeling by TP) there's only one way to do that: let me be the final
arbiter.  I'm willing to be the bottleneck, it gives Python the
typical slow-flowing evolution that has served it well over the past
ten years.  (At the same time, I can't read all messages in every
thread on python-dev any more -- that's why substantial ideas need a
PEP to summarize the discussion.)

> 3. I am willing to do a pass through the Vaults of Parnassus and other
> sources for modules that seem both sufficiently useful and sufficiently
> mature to be added.  I have in mind things like mimectl, PIL, and Vladimir's
> shared-memory module.  

I don't know mimectl or Vladimir's module (how does it compare to
mmap?).  Regarding PIL, I believe the problem there is that it is a
large body of code maintained by a third party.  It should become part
of a SUMO release and of binary releases, but I see no advantage in
carrying it along in the core source distribution.

> Now, assuming I do 3, would I need to go through the vote process
> on each of these, or can I get a ukase from the BDFL authorizing me to
> fold in stuff?

Sorry, I don't write blank checks.

> I realize I'm raising questions for which there are no easy answers.
> But Python is growing.  The Python social machine needs to adapt to
> make such decisions in a more timely and less ad-hoc fashion.  I'm
> not attached to being the point person in this process, but
> somebody's gotta be.

Watch out though: if we open the floodgates now we may seriously
deteriorate the quality of the standard library, without doing much
good.

I'd much rather see an improved Vaults of Parnassus (where every
module uses distutils and installation becomes trivial) than a
fast-track process for including new code in the core.
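
(A sketch, purely illustrative, of what the distutils story buys you: a
complete setup script for a hypothetical single-file module.  The metadata
below is made up, not taken from any real release.)

```python
# setup.py -- hypothetical packaging for a single-file module.
# All metadata here is illustrative, not from any real release.
METADATA = {
    "name": "shm",
    "version": "1.0",
    "description": "System V shared memory for Python (illustrative)",
    "py_modules": ["shm"],
}

if __name__ == "__main__":
    from distutils.core import setup
    setup(**METADATA)
```

With that in place, "python setup.py install" is the whole installation step.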

That said, I think writing a bunch of thoughts up as a PEP makes a lot
of sense!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From esr at thyrsus.com  Tue Aug  8 03:23:34 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 21:23:34 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008080042.TAA31856@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Mon, Aug 07, 2000 at 07:42:40PM -0500
References: <20000807181302.A27463@thyrsus.com> <200008080042.TAA31856@cj20424-a.reston1.va.home.com>
Message-ID: <20000807212333.A27996@thyrsus.com>

Guido van Rossum <guido at beopen.com>:
> [ESR]
> > 1. Has anybody got a vote on the menubrowser framework facility I described?
> 
> Eric, as far as I can tell you haven't shown the code or given a
> pointer to it.  I explained to you that your description left me in
> the dark as to what it does.  Or did I miss a pointer?  It seems your
> module doesn't even have a name!  This is a bad way to start a
> discussion about the admission procedure.  Nothing has ever been
> accepted into Python before the code was written and shown.

Easily fixed.  Code's in an enclosure.
 
> > 2. Do we have a procedure for vetting modules for inclusion in the stock
> > distribution?  If not, should we institute one?
> 
> Typically, modules get accepted after extensive lobbying and agreement
> from multiple developers.  The definition of "developer" is vague, and
> I can't give a good rule -- not everybody who has been admitted to the
> python-dev list has enough standing to make his opinion count!

Understood, and I assume one of those insufficient-standing people is
*me*, given my short tenure on the list, and I cheerfully accept that.
The real problem I'm going after here is that this vague rule won't
scale well.

> Basically, I rely a lot on what various people say, but I have my own
> bias about who I trust in what area.  I don't think I'll have to
> publish a list of this bias, but one thing is clear: I;m not counting
> votes! 

I wasn't necessarily expecting you to.  I can't imagine writing a
procedure in which the BDFL doesn't retain a veto.

> I don't know mimectl or Vladimir's module (how does it compare to
> mmap?).

Different, as Thomas Wouters has already observed.  Vladimir's module is more
oriented towards supporting semaphores and exclusion.  At one point many months
ago, before Vladimir was on the list, I looked into it as a way to do exclusion
locking for shared shelves.  Vladimir and I even negotiated a license change
with INRIA so Python could use it.  That was my first pass at sharable 
shelves; it foundered on problems with the BSDDB 1.85 API.  But shm would
still be worth having in the core library, IMO.

The mimectl module supports classes for representing MIME objects that
include MIME-structure-sensitive mutation operations.  Very strong candidate
for inclusion, IMO.

> > Now, assuming I do 3, would I need to go through the vote process
> > on each of these, or can I get a ukase from the BDFL authorizing me to
> > fold in stuff?
> 
> Sorry, I don't write blank checks.

And I wasn't expecting one.  I'll write up some thoughts about this in the PEP.
 
> > I realize I'm raising questions for which there are no easy answers.
> > But Python is growing.  The Python social machine needs to adapt to
> > make such decisions in a more timely and less ad-hoc fashion.  I'm
> > not attached to being the point person in this process, but
> > somebody's gotta be.
> 
> Watch out though: if we open the floodgates now we may seriously
> deteriorate the quality of the standard library, without doing much
> good.

The alternative is to end up with a Perl-like Case of the Missing Modules,
where lots of things Python writers should be able to count on as standard
builtins can't realistically be used, because the users they deliver to
aren't going to want to go through a download step.
 
> I'd much rather see an improved Vaults of Parnassus (where every
> module uses distutils and installation becomes trivial) than a
> fast-track process for including new code in the core.

The trouble is that I flat don't believe in this solution.  It works OK
for developers, who will be willing to do extra download steps -- but it
won't fly with end-user types.

> That said, I think writing a bunch of thoughts up as a PEP makes a lot
> of sense!

I've applied to initiate PEP 2.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Hoplophobia (n.): The irrational fear of weapons, correctly described by 
Freud as "a sign of emotional and sexual immaturity".  Hoplophobia, like
homophobia, is a displacement symptom; hoplophobes fear their own
"forbidden" feelings and urges to commit violence.  This would be
harmless, except that they project these feelings onto others.  The
sequelae of this neurosis include irrational and dangerous behaviors
such as passing "gun-control" laws and trashing the Constitution.
-------------- next part --------------
# menubrowser.py -- framework class for abstract browser objects

from sys import stderr

class MenuBrowser:
    "Support abstract browser operations on a stack of indexable objects."
    def __init__(self, debug=0, errout=stderr):
        self.page_stack = []
        self.selection_stack = []
        self.viewbase_stack = []
        self.viewport_height = 0
        self.debug = debug
        self.errout = errout

    def match(self, a, b):
        "Browseable-object comparison."
        return a == b

    def push(self, browseable, selected=None):
        "Push a browseable object onto the location stack."
        if self.debug:
            self.errout.write("menubrowser.push(): pushing %s=@%d, selection=%s\n" % (browseable, id(browseable), `selected`))
        selnum = 0
        if selected is None:
            if self.debug:
                self.errout.write("menubrowser.push(): selection defaulted\n")
        else:
            for i in range(len(browseable)):
                selnum = len(browseable) - i - 1
                if self.match(browseable[selnum], selected):
                     break
            if self.debug:
                self.errout.write("menubrowser.push(): selection set to %d\n" % (selnum))
        self.page_stack.append(browseable)
        self.selection_stack.append(selnum)
        self.viewbase_stack.append(selnum - selnum % self.viewport_height)
        if self.debug:
            object = self.page_stack[-1]
            selection = self.selection_stack[-1]
            viewbase = self.viewbase_stack[-1]
            self.errout.write("menubrowser.push(): pushed %s=@%d->%d, selection=%d, viewbase=%d\n" % (object, id(object), len(self.page_stack), selection, viewbase))

    def pop(self):
        "Pop a browseable object off the location stack."
        if not self.page_stack:
            if self.debug:
                self.errout.write("menubrowser.pop(): stack empty\n")
            return None
        else:
            item = self.page_stack[-1]
            self.page_stack = self.page_stack[:-1]
            self.selection_stack = self.selection_stack[:-1]
            self.viewbase_stack = self.viewbase_stack[:-1]
            if self.debug:
                if len(self.page_stack) == 0:
                    self.errout.write("menubrowser.pop(): stack is empty.")
                else:
                    self.errout.write("menubrowser.pop(): new level %d, object=@%d, selection=%d, viewbase=%d\n" % (len(self.page_stack), id(self.page_stack[-1]), self.selection_stack[-1], self.viewbase_stack[-1]))
            return item

    def stackdepth(self):
        "Return the current stack depth."
        return len(self.page_stack)

    def list(self):
        "Return all elements of the current object that ought to be visible."
        if not self.page_stack:
            return None
        object = self.page_stack[-1]
        selection = self.selection_stack[-1]
        viewbase = self.viewbase_stack[-1]

        if self.debug:
            self.errout.write("menubrowser.list(): stack level %d. object @%d, listing %s\n" % (len(self.page_stack)-1, id(object), object[viewbase:viewbase+self.viewport_height]))

        # This requires a slice method
        return object[viewbase:viewbase+self.viewport_height]

    def top(self):
        "Return the top-of-stack menu"
        if self.debug >= 2:
            self.errout.write("menubrowser.top(): level=%d, @%d\n" % (len(self.page_stack)-1,id(self.page_stack[-1])))
        return self.page_stack[-1]

    def selected(self):
        "Return the currently selected element in the top menu."
        object = self.page_stack[-1]
        selection = self.selection_stack[-1]
        if self.debug:
            self.errout.write("menubrowser.selected(): at %d, object=@%d, %s\n" % (len(self.page_stack)-1, id(object), self.selection_stack[-1]))
        return object[selection]

    def viewbase(self):
        "Return the viewport base of the current menu."
        object = self.page_stack[-1]
        selection = self.selection_stack[-1]
        base = self.viewbase_stack[-1]
        if self.debug:
            self.errout.write("menubrowser.viewbase(): at level=%d, object=@%d, %d\n" % (len(self.page_stack)-1, id(object), base,))
        return base

    def thumb(self):
        "Return top and bottom boundaries of a thumb scaled to the viewport."
        object = self.page_stack[-1]
        windowscale = float(self.viewport_height) / float(len(object))
        thumb_top = self.viewbase() * windowscale
        thumb_bottom = thumb_top + windowscale * self.viewport_height - 1
        return (thumb_top, thumb_bottom)

    def move(self, delta=1, wrap=0):
        "Move the selection on the current item downward."
        if delta == 0:
            return
        object = self.page_stack[-1]
        oldloc = self.selection_stack[-1]

        # Change the selection.  Requires a length method
        if oldloc + delta in range(len(object)):
            newloc = oldloc + delta
        elif wrap:
            newloc = (oldloc + delta) % len(object)
        elif delta > 0:
            newloc = len(object) - 1
        else:
            newloc = 0
        self.selection_stack[-1] = newloc

        # When the selection is moved out of the viewport, move the viewbase
        # just far enough to track it.
        oldbase = self.viewbase_stack[-1]
        if newloc in range(oldbase, oldbase + self.viewport_height):
            pass
        elif newloc < oldbase:
            self.viewbase_stack[-1] = newloc
        else:
            self.scroll(newloc - (oldbase + self.viewport_height) + 1)

        if self.debug:
            self.errout.write("menubrowser.down(): at level=%d, object=@%d, old selection=%d, new selection = %d, new base = %d\n" % (len(self.page_stack)-1, id(object), oldloc, newloc, self.viewbase_stack[-1]))

        return (oldloc != newloc)

    def scroll(self, delta=1, wrap=0):
        "Scroll the viewport up or down in the current option."
        print "delta:", delta
        object = self.page_stack[-1]
        if not wrap:
            oldbase = self.viewbase_stack[-1]
            if delta > 0 and oldbase+delta > len(object)-self.viewport_height:
                return
            elif delta < 0 and oldbase + delta < 0:
                return
        self.viewbase_stack[-1] = (self.viewbase_stack[-1] + delta) % len(object)

    def dump(self):
        "Dump the whole stack of objects."
        self.errout.write("Viewport height: %d\n" % (self.viewport_height,))
        for i in range(len(self.page_stack)):
            self.errout.write("Page: %d\n" % (i,))
            self.errout.write("Selection: %d\n" % (self.selection_stack[i],))
            self.errout.write(`self.page_stack[i]` + "\n")

    def next(self, wrap=0):
        return self.move(1, wrap)

    def previous(self, wrap=0):
        return self.move(-1, wrap)

    def page_down(self):
        return self.move(2*self.viewport_height-1)

    def page_up(self):
        return self.move(-(2*self.viewport_height-1))

if __name__ == '__main__': 
    import cmd, string, readline

    def itemgen(prefix, count):
        return map(lambda x, pre=prefix: pre + `x`, range(count))

    testobject = MenuBrowser()
    testobject.viewport_height = 6
    testobject.push(itemgen("A", 11))

    class browser(cmd.Cmd):
        def __init__(self):
            self.wrap = 0
            self.prompt = "browser> "

        def preloop(self):
            print "%d: %s (%d) in %s" %  (testobject.stackdepth(), testobject.selected(), testobject.viewbase(), testobject.list())

        def postloop(self):
            print "Goodbye."

        def postcmd(self, stop, line):
            self.preloop()
            return stop

        def do_quit(self, line):
            return 1

        def do_exit(self, line):
            return 1

        def do_EOF(self, line):
            return 1

        def do_list(self, line):
            testobject.dump()

        def do_n(self, line):
            testobject.next()

        def do_p(self, line):
            testobject.previous()

        def do_pu(self, line):
            testobject.page_up()

        def do_pd(self, line):
            testobject.page_down()

        def do_up(self, line):
            if string.strip(line):
                n = string.atoi(line)
            else:
                n = 1
            testobject.move(-n, self.wrap)

        def do_down(self, line):
            if string.strip(line):
                n = string.atoi(line)
            else:
                n = 1
            testobject.move(n, self.wrap)

        def do_s(self, line):
            if string.strip(line):
                n = string.atoi(line)
            else:
                n = 1
            testobject.scroll(n, self.wrap)

        def do_pop(self, line):
            testobject.pop()

        def do_gen(self, line):
            tokens = string.split(line)
            testobject.push(itemgen(tokens[0], string.atoi(tokens[1])))

        def do_dump(self, line):
            testobject.dump()

        def do_wrap(self, line):
            self.wrap = 1 - self.wrap
            if self.wrap:
                print "Wrap is now on."
            else:
                print "Wrap is now off."

        def emptyline(self):
            pass

    browser().cmdloop()

From MarkH at ActiveState.com  Tue Aug  8 03:36:24 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 8 Aug 2000 11:36:24 +1000
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000807181302.A27463@thyrsus.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBMELLDDAA.MarkH@ActiveState.com>

> Guido asked if this was similar to the TreeWidget class in IDLE.  I
> investigated and discovered that it is not, and told him so.  I am left
> with a couple of related questions:
>
> 1. Has anybody got a vote on the menubrowser framework facility I
> described?

I would have to give it a -1.  It probably should only be a -0, but I
dropped it a level in the interests of keeping the library small and
relevant.

In a nutshell, it is proposed as a "framework class for abstract browser
objects", but I don't see how.  It looks like a reasonable framework for a
particular kind of browser built for a text-based system.  I cannot see
how a GUI browser could take advantage of it.

For example:
* How does a "page" concept make sense in a high-res GUI?  Why do we have a
stack of pages?
* What is a "viewport height" - is that a measure of pixels?  If not, what
font are you assuming?  (sorry - obviously rhetorical, given my "text only"
comments above.)
* How does a "thumb position" relate to scroll bars that existing GUI
widgets almost certainly have?

etc.

While I am sure you find it useful, I don't see how it helps anyone else,
so I don't see how it qualifies as a standard module.

If it is designed as part of a "curses" package, then I would be +0 - I
would happily defer to your (or someone else's) judgement regarding its
relevance in that domain.

Obviously, there is a reasonable chance I am missing the point....

Mark.




From bwarsaw at beopen.com  Tue Aug  8 04:34:18 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 7 Aug 2000 22:34:18 -0400 (EDT)
Subject: [Python-Dev] Request for PEP number
References: <200008072331.TAA27825@snark.thyrsus.com>
Message-ID: <14735.29098.168698.86981@anthem.concentric.net>

>>>>> "ESR" == Eric S Raymond <esr at snark.thyrsus.com> writes:

    ESR> In accordance with the procedures in PEP 1, I am applying to
    ESR> initiate PEP 2.

    ESR> Proposed title: Procedure for incorporating new modules into
    ESR> the core.

    ESR> Abstract: This PEP will describe review and voting
    ESR> procedures for incorporating candidate modules and extensions
    ESR> into the Python core.

Done.

    ESR> Barry, could I get you to create a pep2 at python.org mailing
    ESR> list for this one?

We decided not to create separate mailing lists for each PEP.

-Barry



From greg at cosc.canterbury.ac.nz  Tue Aug  8 05:08:48 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 08 Aug 2000 15:08:48 +1200 (NZST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A
 small proposed change to dictionaries' "get" method...)
In-Reply-To: <m13Lj9u-000DieC@artcom0.artcom-gmbh.de>
Message-ID: <200008080308.PAA12740@s454.cosc.canterbury.ac.nz>

artcom0!pf at artcom-gmbh.de:
>	dict.setdefault('key', [])
>	dict['key'].append('bar')

I would agree with this more if it said

   dict.setdefault([])
   dict['key'].append('bar')

But I have a problem with all of these proposals: they require
implicitly making a copy of the default value, which violates
the principle that Python never copies anything unless you
tell it to. The default "value" should really be a thunk, not
a value, e.g.

   dict.setdefault(lambda: [])
   dict['key'].append('bar')

or

   dict.get_or_add('key', lambda: []).append('bar')

But I don't really like that, either, because lambdas look
ugly to me, and I don't want to see any more builtin
constructs that more-or-less require their use.

I keep thinking that the solution to this lies somewhere
in the direction of short-circuit evaluation techniques and/or
augmented assignment, but I can't quite see how yet.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From esr at thyrsus.com  Tue Aug  8 05:30:03 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 23:30:03 -0400
Subject: [Python-Dev] Request for PEP number
In-Reply-To: <14735.29098.168698.86981@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 07, 2000 at 10:34:18PM -0400
References: <200008072331.TAA27825@snark.thyrsus.com> <14735.29098.168698.86981@anthem.concentric.net>
Message-ID: <20000807233003.A28267@thyrsus.com>

Barry A. Warsaw <bwarsaw at beopen.com>:
>     ESR> Barry, could I get you to create a pep2 at python.org mailing
>     ESR> list for this one?
> 
> We decided not to create separate mailing lists for each PEP.

OK, where should discussion take place?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

A ``decay in the social contract'' is detectable; there is a growing
feeling, particularly among middle-income taxpayers, that they are not
getting back, from society and government, their money's worth for
taxes paid. The tendency is for taxpayers to try to take more control
of their finances ..
	-- IRS Strategic Plan, (May 1984)



From tim_one at email.msn.com  Tue Aug  8 05:44:05 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 7 Aug 2000 23:44:05 -0400
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <200008080308.PAA12740@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com>

> artcom0!pf at artcom-gmbh.de:
> >	dict.setdefault('key', [])
> >	dict['key'].append('bar')
>

[Greg Ewing]
> I would agree with this more if it said
>
>    dict.setdefault([])
>    dict['key'].append('bar')

Ha!  I *told* Guido people would think that's the proper use of something
named setdefault <0.9 wink>.

> But I have a problem with all of these proposals: they require
> implicitly making a copy of the default value, which violates
> the principle that Python never copies anything unless you
> tell it to.

But they don't.  The occurrence of an, e.g., [] literal in Python source
*always* leads to a fresh list being created whenever the line of code
containing it is executed.  That behavior is guaranteed by the Reference
Manual.  In that respect

    dict.get('hi', [])
or
    dict.getorset('hi', []).append(42)  # getorset is my favorite

is exactly the same as

    x = []

No copy of anything is made; the real irritation is that because arguments
are always evaluated, we end up mucking around allocating an empty list
regardless of whether it's needed; which you partly get away from via your:

> The default "value" should really be a thunk, not
> a value, e.g.
>
>    dict.setdefault(lambda: [])
>    dict['key'].append('bar')
>
> or
>
>    dict.get_or_add('key', lambda: []).append('bar')

except that lambda is also an executable expression and so now we end up
creating an anonymous function dynamically regardless of whether it's
needed.
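
[Editor's note: Tim's point about eager argument evaluation can be shown
directly in present-day Python; `make_default` is an illustrative stand-in
for the `[]` literal.]

```python
# An argument expression is evaluated on every call, so the default is
# built even when the key is already present and the value is discarded.
made = 0

def make_default():
    global made
    made += 1          # counts how often the default gets built
    return []

d = {'hi': [1]}
d.get('hi', make_default())   # default built, then thrown away
d.get('hi', make_default())   # ...and again
print(made)  # 2
```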

> But I don't really like that, either, because lambdas look
> ugly to me, and I don't want to see any more builtin
> constructs that more-or-less require their use.

Ditto.

> I keep thinking that the solution to this lies somewhere
> in the direction of short-circuit evaluation techniques and/or
> augmented assignment, but I can't quite see how yet.

If new *syntax* were added, the compiler could generate short-circuiting
code.  Guido will never go for this <wink>, but to make it concrete, e.g.,

    dict['key']||[].append('bar')
    count[word]||0 += 1

I found that dict.get(...) already confused my brain at times because my
*eyes* want to stop at "[]" when scanning code for dict references.
".get()" just doesn't stick out as much; setdefault/default/getorset won't
either.

can't-win-ly y'rs  - tim





From esr at thyrsus.com  Tue Aug  8 05:55:14 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 7 Aug 2000 23:55:14 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMELLDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Tue, Aug 08, 2000 at 11:36:24AM +1000
References: <20000807181302.A27463@thyrsus.com> <ECEPKNMJLHAPFFJHDOJBMELLDDAA.MarkH@ActiveState.com>
Message-ID: <20000807235514.C28267@thyrsus.com>

Mark Hammond <MarkH at ActiveState.com>:
> For example:
> * How does a "page" concept make sense in a high-res GUI?  Why do we have a
> stack of pages?
> * What is a "viewport height" - is that a measure of pixels?  If not, what
> font are you assuming?  (sorry - obviously rhetorical, given my "text only"
> comments above.)
> * How does a "thumb position" relate to scroll bars that existing GUI
> widgets almost certainly have.

It's not designed for use with graphical browsers.  Here are three contexts
that could use it:

* A menu tree being presented through a window or viewport (this is how it's
  being used now).

* A symbolic debugger that can browse text around a current line.

* A database browser for a sequential record-based file format.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Under democracy one party always devotes its chief energies
to trying to prove that the other party is unfit to rule--and
both commonly succeed, and are right... The United States
has never developed an aristocracy really disinterested or an
intelligentsia really intelligent. Its history is simply a record
of vacillations between two gangs of frauds. 
	--- H. L. Mencken



From tim_one at email.msn.com  Tue Aug  8 06:52:20 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 8 Aug 2000 00:52:20 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008080042.TAA31856@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEJJGOAA.tim_one@email.msn.com>

[Guido]
> ...
> Nothing has ever been accepted into Python before the code
> was written and shown.

C'mon, admit it:  you were sooooo appalled by the thread that led to the
creation of tabnanny.py that you decided at once it would end up in the
distribution, just so you could justify skipping all the dozens of tedious
long messages in which The Community developed The General Theory of
Tab-Space Equivalence ab initio.  It was just too much of a
stupid-yet-difficult hack to resist <wink>.

> ...
> I want Python to keep its typical Guido-flavored style,

So do most of us, most of the time.  Paradoxically, it may be easier to
stick to that as Python's popularity zooms beyond the point where it's even
*conceivable* that "votes" make any sense.

> and (apart from the occasional successful channeling by TP) there's
> only one way to do that: let me be the final arbiter.

Well, there's only one *obvious* way to do it.  That's what keeps it
Pythonic.

> I'm willing to be the bottleneck, it gives Python the typical slow-
> flowing evolution that has served it well over the past ten years.

Except presumably for 2.0, where we decided at the last second to change
large patches from "postponed" to "gotta have it".  Consistency is the
hobgoblin ...

but-that's-pythonic-too-ly y'rs  - tim





From moshez at math.huji.ac.il  Tue Aug  8 07:42:30 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Tue, 8 Aug 2000 08:42:30 +0300 (IDT)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808002335.A266@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008080836470.1417-100000@sundial>

On Tue, 8 Aug 2000, Thomas Wouters wrote:

> > PIL,
> 
> +0. The main reason I don't compile PIL myself is because it's such a hassle
> to do it each time, so I think adding it would be nice. However, I'm not
> sure if it's doable to add, whether it would present a lot of problems for
> 'strange' platforms and the like.
> 
> > and Vladimir's shared-memory module.
> 
> +1. Fits very nicely with the mmapmodule, even if it's supported on less
> platforms.
> 
> But perhaps all this falls in the 'batteries included' PEP ? Or perhaps a
> new PEP, 'enriching the Standard Library' ?

PIL is definitely in the 206 PEP. The others are not yet there. Please
note that a central database of "useful modules" shipped as distutils'
.tar.gz files (or maybe .zip, now that Python has zipfile.py), plus a
simple tool to download and install them, would cover the others. The
main reason for the "batteries included" PEP is reliance on external
libraries, which do not mesh as well with the distutils.

expect-a-change-of-direction-in-the-pep-ly y'rs, Z.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From pf at artcom-gmbh.de  Tue Aug  8 10:00:29 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Tue, 8 Aug 2000 10:00:29 +0200 (MEST)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com> from Tim Peters at "Aug 7, 2000 11:44: 5 pm"
Message-ID: <m13M4JJ-000DieC@artcom0.artcom-gmbh.de>

Hi Tim!

Tim Peters:
[...]
> Ha!  I *told* Guido people would think that's the proper use of something
> named setdefault <0.9 wink>.
[...]
>     dict.getorset('hi', []).append(42)  # getorset is my favorite

'getorset' is a *MUCH* better name compared to 'default' or 'setdefault'. 

Regards, Peter



From R.Liebscher at gmx.de  Tue Aug  8 11:26:47 2000
From: R.Liebscher at gmx.de (Rene Liebscher)
Date: Tue, 08 Aug 2000 11:26:47 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub>
Message-ID: <398FD257.CFDC3B74@gmx.de>

Greg Ward wrote:
> 
> On 04 August 2000, Mark Hammond said:
> > I would prefer python20_bcpp.lib, but that is not an issue.
> 
> Good suggestion: the contents of the library are more important than the
> format.  Rene, can you make this change and include it in your next
> patch?  Or did you have some hidden, subtle reson for "bcpp_python20" as
> opposed to "python20_bcpp"?
OK, it is no problem to change it.
> 
> > I am a little confused by the intention, tho.  Wouldnt it make sense to
> > have Borland builds of the core create a Python20.lib, then we could keep
> > the pragma in too?
> >
> > If people want to use Borland for extensions, can't we ask them to use that
> > same compiler to build the core too?  That would seem to make lots of the
> > problems go away?
> 
> But that requires people to build all of Python from source, which I'm
> guessing is a bit more bothersome than building an extension or two from
> source.  Especially since Python is already distributed as a very
> easy-to-use binary installer for Windows, but most extensions are not.
> 
> Rest assured that we probably won't be making things *completely*
> painless for those who do not toe Chairman Bill's party line and insist
> on using "non-standard" Windows compilers.  They'll probably have to get
> python20_bcpp.lib (or python20_gcc.lib, or python20_lcc.lib) on their
> own -- whether downloaded or generated, I don't know.  But the
> alternative is to include 3 or 4 python20_xxx.lib files in the standard
> Windows distribution, which I think is silly.
(GCC uses libpython20.a)
It is not necessary to include the libraries for all compilers. The only
thing that is necessary is a def-file for the library. Every compiler I
know has a program to create an import library from a def-file.
BCC55 can even convert python20.lib into its own format. (The program is
called "coff2omf". BCC55 uses the OMF format for its libraries, which is
different from MSVC's COFF format. (This answers your question, Tim?))
Maybe there should be a file in the distribution which explains what to
do if someone wants to use another compiler, especially how to build an
import library for that compiler, or at least some general information
about what you need to do.
(Or should it be included in the 'Ext' documentation?)

kind regards

Rene Liebscher



From Vladimir.Marangozov at inrialpes.fr  Tue Aug  8 12:00:35 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 8 Aug 2000 12:00:35 +0200 (CEST)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 07, 2000 07:42:40 PM
Message-ID: <200008081000.MAA29344@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> I don't know mimectl or Vladimir's module (how does it compare to
> mmap?).

To complement ESR:

- written 3 years ago
- exports a file-like interface, defines 2 object types: shm & sem
- resembles buffer but lacks the slice interface.
- has all sysV shared memory bells & whistles + native semaphore support

http://sirac.inrialpes.fr/~marangoz/python/shm

Technically, mmap is often built on top of shared memory OS facilities.
Adding slices + Windows code for shared mem & semaphores + a simplified
unified interface might be a plan.
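
[Editor's note: the overlap with mmap mentioned above can be illustrated in
present-day Python; the stdlib mmap module can hand out an anonymous shared
region without any SysV shm calls, which a forked child would inherit.]

```python
import mmap

# An anonymous map (fd -1) is backed by shared pages, not a file.
buf = mmap.mmap(-1, 1024)
buf.write(b'hello from shared memory')
buf.seek(0)
data = buf.read(24)
print(data)  # b'hello from shared memory'
buf.close()
```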

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From mal at lemburg.com  Tue Aug  8 12:46:25 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 08 Aug 2000 12:46:25 +0200
Subject: [Python-Dev] Adding library modules to the core
References: <200008081000.MAA29344@python.inrialpes.fr>
Message-ID: <398FE501.FFF09FAE@lemburg.com>

Vladimir Marangozov wrote:
> 
> Guido van Rossum wrote:
> >
> > I don't know mimectl or Vladimir's module (how does it compare to
> > mmap?).
> 
> To complement ESR:
> 
> - written 3 years ago
> - exports a file-like interface, defines 2 object types: shm & sem
> - resembles buffer but lacks the slice interface.
> - has all sysV shared memory bells & whistles + native semaphore support
> 
> http://sirac.inrialpes.fr/~marangoz/python/shm
> 
> Technically, mmap is often built on top of shared memory OS facilities.
> Adding slices + Windows code for shared mem & semaphores + a simplified
> unified interface might be a plan.

I would be +1 if you could get it to work on Windows, +0
otherwise.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From R.Liebscher at gmx.de  Tue Aug  8 13:41:12 2000
From: R.Liebscher at gmx.de (Rene Liebscher)
Date: Tue, 08 Aug 2000 13:41:12 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de>
Message-ID: <398FF1D8.A91A8C02@gmx.de>

Rene Liebscher wrote:
> 
> Greg Ward wrote:
> >
> > On 04 August 2000, Mark Hammond said:
> > > I would prefer python20_bcpp.lib, but that is not an issue.
> >
> > Good suggestion: the contents of the library are more important than the
> > format.  Rene, can you make this change and include it in your next
> > patch?  Or did you have some hidden, subtle reson for "bcpp_python20" as
> > opposed to "python20_bcpp"?
> OK, it is no problem to change it.
I forgot to ask which name you would like for debug libraries

	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"

Maybe we should use "bcpp_python20_d.lib", and use the naming scheme
which I suggested first.


kind regards
 
Rene Liebscher



From skip at mojam.com  Tue Aug  8 15:24:06 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 8 Aug 2000 08:24:06 -0500 (CDT)
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <m13M4JJ-000DieC@artcom0.artcom-gmbh.de>
References: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com>
	<m13M4JJ-000DieC@artcom0.artcom-gmbh.de>
Message-ID: <14736.2550.586217.758500@beluga.mojam.com>

    >> dict.getorset('hi', []).append(42)  # getorset is my favorite

    Peter> 'getorset' is a *MUCH* better name compared to 'default' or
    Peter> 'setdefault'.

Shouldn't that be getorsetandget?  After all, it doesn't just set or get:
it gets, but if it's undefined, it sets, then gets.

I know I'll be shouted down, but I still vote against a method that both
sets and gets dict values.  I don't think the abbreviation in the source is
worth the obfuscation of the code.

Skip




From akuchlin at mems-exchange.org  Tue Aug  8 15:31:29 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 8 Aug 2000 09:31:29 -0400
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <14736.2550.586217.758500@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 08, 2000 at 08:24:06AM -0500
References: <LNBBLJKPBEHFEDALKOLCMEJHGOAA.tim_one@email.msn.com> <m13M4JJ-000DieC@artcom0.artcom-gmbh.de> <14736.2550.586217.758500@beluga.mojam.com>
Message-ID: <20000808093129.A18519@kronos.cnri.reston.va.us>

On Tue, Aug 08, 2000 at 08:24:06AM -0500, Skip Montanaro wrote:
>I know I'll be shouted down, but I still vote against a method that both
>sets and gets dict values.  I don't think the abbreviation in the source is
>worth the obfuscation of the code.

-1 from me, too.  A shortcut that only saves a line or two of code
isn't worth the obscurity of the name.

("Ohhh, I get it.  Back on that old minimalism kick, Andrew?"

"Not back on it.  Still on it.")

--amk




From effbot at telia.com  Tue Aug  8 17:10:28 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 8 Aug 2000 17:10:28 +0200
Subject: [Python-Dev] Preventing 1.5 extensions crashing under 1.6/2.0 Python
References: <ECEPKNMJLHAPFFJHDOJBMEHPDDAA.MarkH@ActiveState.com>
Message-ID: <00c901c0014a$cc7c9be0$f2a6b5d4@hagrid>

mark wrote:
> So I think that the adoption of our half-solution (ie, we are really only
> forcing a better error message - not even getting a traceback to indicate
> _which_ module fails)

note that the module name is available to the Py_InitModule4
module (for obvious reasons ;-), so it's not that difficult to
improve the error message.

how about:

...

static char not_initialized_error[] =
"ERROR: Module %.200s loaded an uninitialized interpreter!\n\
  This Python has API version %d, module %.200s has version %d.\n";

...

    if (!Py_IsInitialized()) {
        char message[500];
        sprintf(message, not_initialized_error, name, PYTHON_API_VERSION,
            name, module_api_version);
        Py_FatalError(message);
    }

</F>




From pf at artcom-gmbh.de  Tue Aug  8 16:48:32 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Tue, 8 Aug 2000 16:48:32 +0200 (MEST)
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <14736.2550.586217.758500@beluga.mojam.com> from Skip Montanaro at "Aug 8, 2000  8:24: 6 am"
Message-ID: <m13MAgC-000DieC@artcom0.artcom-gmbh.de>

Hi,

Tim Peters:
>     >> dict.getorset('hi', []).append(42)  # getorset is my favorite
> 
>     Peter> 'getorset' is a *MUCH* better name compared to 'default' or
>     Peter> 'setdefault'.
 
Skip Montanaro:
> Shouldn't that be getorsetandget?  After all, it doesn't just set or get it
> gets, but if it's undefined, it sets, then gets.

That would defeat the main purpose of this method: abbreviation.
This name is simply too long.

> I know I'll be shouted down, but I still vote against a method that both
> sets and gets dict values.  I don't think the abbreviation in the source is
> worth the obfuscation of the code.

Yes.  
But I got the impression that Patch#101102 can't be avoided any more.  
So in this situation Tims '.getorset()' is the lesser of two evils 
compared to '.default()' or '.setdefault()'.

BTW: 
I think the "informal" mapping interface should get a more
explicit documentation.  The language reference only mentions the
'len()' builtin method and indexing.  But the section about mappings
contains the sentence: "The extension modules dbm, gdbm, bsddb provide
additional examples of mapping types."

On the other hand section "2.1.6 Mapping Types" of the library reference
says: "The following operations are defined on mappings ..." and than
lists all methods including 'get()', 'update()', 'copy()' ...

Unfortunately only a small subset of these methods actually works on
a dbm mapping:

>>> import dbm
>>> d = dbm.open("piff", "c")
>>> d.get('foo', [])
Traceback (innermost last):
  File "<stdin>", line 1, in ?
  AttributeError: get
>>> d.copy()
Traceback (innermost last):
  File "<stdin>", line 1, in ?
  AttributeError: copy
   
That should be documented.
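
[Editor's note: one workaround for the gap Peter describes can be sketched in
present-day Python by wrapping the raw dbm object so the usual mapping
conveniences exist on top of the bare indexing interface; `MappingWrapper`
is a hypothetical name, and a plain dict stands in for the dbm object here.]

```python
class MappingWrapper:
    """Supply get() on top of an object that only supports indexing."""
    def __init__(self, db):
        self.db = db
    def __getitem__(self, key):
        return self.db[key]
    def get(self, key, default=None):
        try:
            return self.db[key]
        except KeyError:
            return default

d = MappingWrapper({'foo': 'bar'})
print(d.get('foo'))            # bar
print(d.get('spam', 'none'))   # none
```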

Regards, Peter



From trentm at ActiveState.com  Tue Aug  8 17:18:12 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Tue, 8 Aug 2000 08:18:12 -0700
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <398FF1D8.A91A8C02@gmx.de>; from R.Liebscher@gmx.de on Tue, Aug 08, 2000 at 01:41:12PM +0200
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de>
Message-ID: <20000808081811.A10965@ActiveState.com>

On Tue, Aug 08, 2000 at 01:41:12PM +0200, Rene Liebscher wrote:
> I forgot to ask which name you would like for debug libraries
> 
> 	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"
> 
> may be we should use "bcpp_python20_d.lib", and use the name schema
> which 
> I suggested first.

Python20 is most important so it should go first. Then I suppose it is
debatable whether 'd' or 'bcpp' should come first. My preference is
"python20_bcpp_d.lib" because this would maintain the pattern that the
basename of debug-built libs, etc. end in "_d".

Generally speaking this would give a name spec of

python<version>(_<metadata>)*(_d)?.lib
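
[Editor's note: Trent's informal name spec can be rendered as a regular
expression; the exact metadata alphabet below is an assumption, one plausible
reading of the spec.]

```python
import re

# python<version>(_<metadata>)*(_d)?.lib, metadata assumed alphanumeric.
spec = re.compile(r'^python\d+(_[A-Za-z0-9]+)*(_d)?\.lib$')

names = ['python20.lib', 'python20_bcpp.lib',
         'python20_bcpp_d.lib', 'python20_d.lib']
results = [bool(spec.match(n)) for n in names]
print(results)                                # [True, True, True, True]
print(bool(spec.match('bcpp_python20.lib')))  # False
```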


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From thomas at xs4all.net  Tue Aug  8 17:22:17 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 17:22:17 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000808081811.A10965@ActiveState.com>; from trentm@ActiveState.com on Tue, Aug 08, 2000 at 08:18:12AM -0700
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de> <20000808081811.A10965@ActiveState.com>
Message-ID: <20000808172217.G266@xs4all.nl>

On Tue, Aug 08, 2000 at 08:18:12AM -0700, Trent Mick wrote:
> On Tue, Aug 08, 2000 at 01:41:12PM +0200, Rene Liebscher wrote:
> > I forgot to ask which name you would like for debug libraries

> > 	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"

> > may be we should use "bcpp_python20_d.lib", and use the name schema
> > which I suggested first.

> Python20 is most important so it should go first.

To clarify something Rene said earlier (I appear to have deleted that mail
even though I had intended to reply to it :P) 'gcc' names its libraries
'libpython<version>.{so,a}' because that's the UNIX convention: libraries
are named 'lib<name>.<libtype>', where libtype is '.a' for static libraries
and '.so' for dynamic (ELF, in any case) ones, and you link with -l<name>,
without the 'lib' in front of it. The 'lib' is UNIX-imposed, not something
gcc or Guido made up.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From trentm at ActiveState.com  Tue Aug  8 17:26:03 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Tue, 8 Aug 2000 08:26:03 -0700
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000808172217.G266@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 08, 2000 at 05:22:17PM +0200
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de> <20000808081811.A10965@ActiveState.com> <20000808172217.G266@xs4all.nl>
Message-ID: <20000808082603.B10965@ActiveState.com>

On Tue, Aug 08, 2000 at 05:22:17PM +0200, Thomas Wouters wrote:
> On Tue, Aug 08, 2000 at 08:18:12AM -0700, Trent Mick wrote:
> > On Tue, Aug 08, 2000 at 01:41:12PM +0200, Rene Liebscher wrote:
> > > I forgot to ask which name you would like for debug libraries
> 
> > > 	"python20_bcpp_d.lib" or "python20_d_bcpp.lib"
> 
> > > may be we should use "bcpp_python20_d.lib", and use the name schema
> > > which I suggested first.
> 
> > Python20 is most important so it should go first.
> 
> To clarify something Rene said earlier (I appear to have deleted that mail
> eventhough I had intended to reply to it :P) 'gcc' names its libraries
> 'libpython<version>.{so,a}' because that's the UNIX convention: libraries
> are named 'lib<name>.<libtype>', where libtype is '.a' for static libraries
> and '.so' for dynamic (ELF, in any case) ones, and you link with -l<name>,
> without the 'lib' in front of it. The 'lib' is UNIX-imposed, not something
> gcc or Guido made up.
> 

Yes, you are right. I was being a Windows bigot there for an email. :)


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From thomas at xs4all.net  Tue Aug  8 17:35:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 8 Aug 2000 17:35:24 +0200
Subject: [Python-Dev] Library pragma in PC/config.h
In-Reply-To: <20000808082603.B10965@ActiveState.com>; from trentm@ActiveState.com on Tue, Aug 08, 2000 at 08:26:03AM -0700
References: <20000803212444.A1237@beelzebub> <ECEPKNMJLHAPFFJHDOJBGEEKDDAA.MarkH@ActiveState.com> <20000804205309.A1013@beelzebub> <398FD257.CFDC3B74@gmx.de> <398FF1D8.A91A8C02@gmx.de> <20000808081811.A10965@ActiveState.com> <20000808172217.G266@xs4all.nl> <20000808082603.B10965@ActiveState.com>
Message-ID: <20000808173524.H266@xs4all.nl>

On Tue, Aug 08, 2000 at 08:26:03AM -0700, Trent Mick wrote:

[ Discussion about what to call the Borland version of python20.dll:
  bcpp_python20.dll or python20_bcpp.dll. Rene brought up that gcc calls
  "its" library libpython.so, and Thomas points out that that isn't Python's
  decision. ]

> Yes, you are right. I was being a Windows bigot there for an email. :)

And rightly so ! :) I think the 'python20_bcpp' name is more Windows-like,
and if there is some area in which Python should try to stay as platform
specific as possible, it's platform specifics such as libraries :)

Would Windows users(*) when seeing 'bcpp_python20.dll' be thinking "this is a
bcpp specific library of python20", or would they be thinking "this is a
bcpp library for use with python20" ? I'm more inclined to think the second,
myself :-)

*) And the 'user' in this context is the extension-writer and
python-embedder, of course.
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Tue Aug  8 17:46:55 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Tue, 8 Aug 2000 11:46:55 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008081000.MAA29344@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Aug 08, 2000 at 12:00:35PM +0200
References: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> <200008081000.MAA29344@python.inrialpes.fr>
Message-ID: <20000808114655.C29686@thyrsus.com>

Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr>:
> Guido van Rossum wrote:
> > 
> > I don't know mimectl or Vladimir's module (how does it compare to
> > mmap?).
> 
> To complement ESR:
> 
> - written 3 years ago
> - exports a file-like interface, defines 2 object types: shm & sem
> - resembles buffer but lacks the slice interface.
> - has all sysV shared memory bells & whistles + native semaphore support
> 
> http://sirac.inrialpes.fr/~marangoz/python/shm
> 
> Technically, mmap is often built on top of shared memory OS facilities.
> Adding slices + Windows code for shared mem & semaphores + a simplified
> unified interface might be a plan.

Vladimir, I suggest that the most useful thing you could do to advance
the process at this point would be to document shm in core-library style.

At the moment, core Python has nothing (with the weak and nonportable 
exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
shm would address a real gap in the language.
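The open(..., O_EXCL) trick Eric mentions can be sketched as a crude lock file: O_CREAT|O_EXCL makes file creation atomic, so only one process wins (weak and nonportable indeed, e.g. over NFS). Names here are illustrative:

```python
import errno
import os
import tempfile

def try_lock(path):
    """Atomically create a lock file; return an fd, or None if already locked."""
    try:
        return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError as e:
        if e.errno == errno.EEXIST:
            return None        # someone else holds the lock
        raise

lockpath = os.path.join(tempfile.mkdtemp(), "app.lock")
fd1 = try_lock(lockpath)       # first caller wins
fd2 = try_lock(lockpath)       # second caller is refused
os.close(fd1)
os.unlink(lockpath)            # "releasing" means deleting the file
```

Unlike a real semaphore, this gives no counting, no blocking wait, and no cleanup if the holder dies.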
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Are we at last brought to such a humiliating and debasing degradation,
that we cannot be trusted with arms for our own defence?  Where is the
difference between having our arms in our own possession and under our
own direction, and having them under the management of Congress?  If
our defence be the *real* object of having those arms, in whose hands
can they be trusted with more propriety, or equal safety to us, as in
our own hands?
        -- Patrick Henry, speech of June 9 1788



From tim_one at email.msn.com  Tue Aug  8 17:46:00 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 8 Aug 2000 11:46:00 -0400
Subject: dict.setdefault() (Patch#101102) (was: Re: [Python-Dev] Re: A small proposed change to dictionaries' "get" method...)
In-Reply-To: <14736.2550.586217.758500@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEKNGOAA.tim_one@email.msn.com>

[Skip Montanaro, on .getorset]
> Shouldn't that be getorsetandget?  After all, it doesn't just set
> or get it gets, but if it's undefined, it sets, then gets.

It's mnemonic enough for me.  You can take comfort in that Guido seems to
like "default" better, and is merely incensed by arguments about names
<wink>.

> I know I'll be shouted down, but I still vote against a method that both
> sets and gets dict values.  I don't think the abbreviation in the
> source is worth the obfuscation of the code.

So this is at least your second vote, while I haven't voted at all?  I
protest.

+1 from me.  I'd use it a lot.  Yes, I'm one of those who probably has more
dicts mapping to lists than to strings, and

    if dict.has_key(thing):
        dict[thing].append(newvalue)
    else:
        dict[thing] = [newvalue]

litters my code -- talk about obfuscated!  Of course I know shorter ways to
spell that, but I find them even more obscure than the above.  Easing a
common operation is valuable, firmly in the tradition of the list.extend(),
list.pop(), dict.get(), 3-arg getattr() and no-arg "raise" extensions.  The
*semantics* are clear and non-controversial and frequently desired, they're
simply clumsy to spell now.
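The has_key/append dance above collapses to a single expression with the proposed method, since setdefault() stores the default only when the key is missing and returns the (possibly fresh) value either way:

```python
# Build a word -> list-of-positions index without the has_key branch.
occurrences = {}
for pos, word in enumerate(["spam", "eggs", "spam"]):
    # If word is absent, insert an empty list first; then append to it.
    occurrences.setdefault(word, []).append(pos)
```

The default list is still constructed on every iteration, but the branching disappears from the caller's code.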

The usual ploy in cases like this is to add the new gimmick and call it
"experimental".  Phooey.  Add it or don't.

for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim





From guido at beopen.com  Tue Aug  8 18:51:27 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 08 Aug 2000 11:51:27 -0500
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: Your message of "Tue, 08 Aug 2000 11:46:55 -0400."
             <20000808114655.C29686@thyrsus.com> 
References: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> <200008081000.MAA29344@python.inrialpes.fr>  
            <20000808114655.C29686@thyrsus.com> 
Message-ID: <200008081651.LAA01319@cj20424-a.reston1.va.home.com>

> At the moment, core Python has nothing (with the weak and nonportable 
> exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> shm would address a real gap in the language.

If it also works on Windows.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Tue Aug  8 17:58:27 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Tue, 8 Aug 2000 17:58:27 +0200 (CEST)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808114655.C29686@thyrsus.com> from "Eric S. Raymond" at Aug 08, 2000 11:46:55 AM
Message-ID: <200008081558.RAA30190@python.inrialpes.fr>

Eric S. Raymond wrote:
>
> At the moment, core Python has nothing (with the weak and nonportable
> exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> shm would address a real gap in the language.

There's a Semaphore class in Lib/threading.py. Are there any problems
with it? I haven't used it, but threading.py has thread mutexes and
semaphores on top of them, so as long as you don't need IPC, they should
be fine. Or am I missing something?
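For the within-process case Vladimir describes, threading.Semaphore is straightforward to use; a small sketch:

```python
import threading

sem = threading.Semaphore(2)   # at most two threads inside at once
results = []

def worker(i):
    with sem:                  # acquire; released automatically on exit
        results.append(i)      # list.append is atomic under CPython

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

As noted below in the thread, this only coordinates threads of one process, not separate processes.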

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From esr at thyrsus.com  Tue Aug  8 18:07:15 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Tue, 8 Aug 2000 12:07:15 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008081558.RAA30190@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Aug 08, 2000 at 05:58:27PM +0200
References: <20000808114655.C29686@thyrsus.com> <200008081558.RAA30190@python.inrialpes.fr>
Message-ID: <20000808120715.A29873@thyrsus.com>

Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr>:
> Eric S. Raymond wrote:
> >
> > At the moment, core Python has nothing (with the weak and nonportable
> > exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> > shm would address a real gap in the language.
> 
> There's a Semaphore class in Lib/threading.py. Are there any problems
> with it? I haven't used it, but threading.py has thread mutexes and
> semaphores on top of them, so as long as you don't need IPC, they should
> be fine. Or am I missing something?

If I'm not mistaken, that's semaphores across a thread bundle within
a single process. It's semaphores visible across processes that I 
don't think we currently have a facility for.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The people cannot delegate to government the power to do anything
which would be unlawful for them to do themselves.
	-- John Locke, "A Treatise Concerning Civil Government"



From esr at thyrsus.com  Tue Aug  8 18:07:58 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Tue, 8 Aug 2000 12:07:58 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <200008081651.LAA01319@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Tue, Aug 08, 2000 at 11:51:27AM -0500
References: <200008080042.TAA31856@cj20424-a.reston1.va.home.com> <200008081000.MAA29344@python.inrialpes.fr> <20000808114655.C29686@thyrsus.com> <200008081651.LAA01319@cj20424-a.reston1.va.home.com>
Message-ID: <20000808120758.B29873@thyrsus.com>

Guido van Rossum <guido at beopen.com>:
> > At the moment, core Python has nothing (with the weak and nonportable 
> > exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> > shm would address a real gap in the language.
> 
> If it also works on Windows.

As usual, I expect Unix to lead and Windows to follow.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Government is actually the worst failure of civilized man. There has
never been a really good one, and even those that are most tolerable
are arbitrary, cruel, grasping and unintelligent.
	-- H. L. Mencken 



From guido at beopen.com  Tue Aug  8 19:18:49 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 08 Aug 2000 12:18:49 -0500
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: Your message of "Tue, 08 Aug 2000 11:46:00 -0400."
             <LNBBLJKPBEHFEDALKOLCIEKNGOAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCIEKNGOAA.tim_one@email.msn.com> 
Message-ID: <200008081718.MAA01681@cj20424-a.reston1.va.home.com>

Enough said.  I've checked it in and closed Barry's patch.  Since
'default' is a Java reserved word, I decided that that would not be a
good name for it after all, so I stuck with setdefault().

> for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim

Hm.  Predictably, I'm worried about adding 'as' as a reserved word.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From effbot at telia.com  Tue Aug  8 18:17:01 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 8 Aug 2000 18:17:01 +0200
Subject: [Python-Dev] Adding library modules to the core
References: <20000807181302.A27463@thyrsus.com><14735.16162.275037.583897@anthem.concentric.net><20000807190939.A27730@thyrsus.com> <14735.16621.369206.564320@anthem.concentric.net>
Message-ID: <001101c00155$cc86ad00$f2a6b5d4@hagrid>

barry wrote:
> And there's no good way to put those into SF?  If the Patch Manager
> isn't appropriate, what about the Task Manager (I dunno, I've never
> looked at it).  The cool thing about using SF is that there's less of
> a chance that this stuff will get buried in an inbox.

why not just switch it on, and see what happens.  I'd prefer
to get a concise TODO list on the login page, rather than having
to look in various strange places (like PEP-160 and PEP-200 ;-)

</F>




From gmcm at hypernet.com  Tue Aug  8 19:51:51 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 8 Aug 2000 13:51:51 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000808120715.A29873@thyrsus.com>
References: <200008081558.RAA30190@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Tue, Aug 08, 2000 at 05:58:27PM +0200
Message-ID: <1246365382-108994225@hypernet.com>

Eric Raymond wrote:
> Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr>:

> > There's a Semaphore class in Lib/threading.py. Are there any
> > problems with it? I haven't used it, but threading.py has
> > thread mutexes and semaphores on top of them, so as long as you
> > don't need IPC, they should be fine. Or am I missing something?
> 
> If I'm not mistaken, that's semaphores across a thread bundle
> within a single process. It's semaphores visible across processes
> that I don't think we currently have a facility for. 

There's the interprocess semaphore / mutex stuff in 
win32event... oh, never mind...

- Gordon



From ping at lfw.org  Tue Aug  8 22:29:52 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 8 Aug 2000 13:29:52 -0700 (PDT)
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <m13MAgC-000DieC@artcom0.artcom-gmbh.de>
Message-ID: <Pine.LNX.4.10.10008081256050.497-100000@skuld.lfw.org>

On Tue, 8 Aug 2000, Peter Funk wrote:
> 
> Unfortunately only a small subset of these methods actually works on
> a dbm mapping:
> 
> >>> import dbm
> >>> d = dbm.open("piff", "c")
> >>> d.get('foo', [])
> Traceback (innermost last):
>   File "<stdin>", line 1, in ?
>   AttributeError: get

I just got burned (again!) because neither the cgi.FieldStorage()
nor the cgi.FormContentDict() support .get().

I've submitted a patch that adds FieldStorage.get() and makes
FormContentDict a subclass of UserDict (the latter nicely eliminates
almost all of the code in FormContentDict).

(I know it says we're supposed to use FieldStorage, but i rarely if
ever need to use file-upload forms, so SvFormContentDict() is still
by far the most useful to me of the 17 different form implementations
<wink> in the cgi module, i don't care what anyone says...)

By the way, when/why did all of the documentation at the top of
cgi.py get blown away?


-- ?!ng

"All models are wrong; some models are useful."
    -- George Box




From effbot at telia.com  Tue Aug  8 22:46:15 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 8 Aug 2000 22:46:15 +0200
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
References: <Pine.LNX.4.10.10008081256050.497-100000@skuld.lfw.org>
Message-ID: <015901c00179$b718cba0$f2a6b5d4@hagrid>

ping wrote:
> By the way, when/why did all of the documentation at the top of
> cgi.py get blown away?

    Date: Thu, 3 Aug 2000 13:57:47 -0700
    From: Jeremy Hylton <jhylton at users.sourceforge.net>
    To: python-checkins at python.org
    Subject: [Python-checkins] CVS: python/dist/src/Lib cgi.py,1.48,1.49

    Update of /cvsroot/python/python/dist/src/Lib
    In directory slayer.i.sourceforge.net:/tmp/cvs-serv2916

    Modified Files:
     cgi.py 
    Log Message:
    Remove very long doc string (it's all in the docs)
    Modify parse_qsl to interpret 'a=b=c' as key 'a' and value 'b=c'
    (which matches Perl's CGI.pm) 

</F>




From tim_one at email.msn.com  Wed Aug  9 06:57:02 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 9 Aug 2000 00:57:02 -0400
Subject: [Python-Dev] Task Manager on SourceForge
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com>

Under the "what the heck" theory, I enabled the Task Manager on the Python
project -- beware the 6-hour delay!  Created two "subprojects" in it, P1.6
and P2, for tasks generally related to finishing the Python 1.6 and 2.0
releases, respectively.

Don't know anything more about it.  It appears you can set up a web of tasks
under a "subproject", with fields for who's assigned, percent complete,
status, hours of work, priority, start & end dates, and a list of tasks each
task depends on.

If anyone can think of a use for it, be my guest <wink>.

I *suspect* everyone already has admin privileges for the Task Manager, but
can't be sure.  Today I couldn't fool either Netscape or IE5 into displaying
the user-permissions Admin page correctly.  Everyone down to "lemburg" does
have admin privs for TaskMan, but from the middle of MAL's line on on down
it's all empty space for me.





From greg at cosc.canterbury.ac.nz  Wed Aug  9 07:27:24 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 09 Aug 2000 17:27:24 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
Message-ID: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz>

I think I've actually found a syntax for lockstep
iteration that looks reasonable (or at least not
completely unreasonable) and is backward compatible:

   for (x in a, y in b):
      ...

Not sure what the implications are for the parser
yet.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From MarkH at ActiveState.com  Wed Aug  9 08:39:30 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Wed, 9 Aug 2000 16:39:30 +1000
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEOGDDAA.MarkH@ActiveState.com>

>    for (x in a, y in b):
>       ...

Hmmm.  Until someone smarter than me shoots it down for some obvious reason
<wink>, it certainly appeals to me.

My immediate reaction _is_ lockstep iteration, and that is the first time I
can say that.  Part of the reason is that it looks like a tuple unpack,
which I think of as a "lockstep/parallel/atomic" operation...

Mark.




From jack at oratrix.nl  Wed Aug  9 10:31:27 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 09 Aug 2000 10:31:27 +0200
Subject: [Python-Dev] A question for the Python Secret Police
Message-ID: <20000809083127.7FFF6303181@snelboot.oratrix.nl>

A question for the Python Secret Police (or P. Inquisition, or whoever 
else:-).

Is the following morally allowed:

package1/mod.py:
class Foo:
    def method1(self):
        ...

package2/mod.py:
from package1.mod import *

class Foo(Foo):
    def method2(self):
        ...

(The background is that the modules are machine-generated and contain
AppleEvent classes. There's a large set of standard classes, such as
Standard_Suite, and applications can signal that they implement
Standard_Suite with a couple of extensions to it. So, in the
Application-X Standard_Suite I'd like to import everything from the
standard Standard_Suite and override/add those methods that are
specific to Application X)
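The pattern is well-defined because the base-class expression is evaluated before the name is rebound; a single-file sketch of the same shadowing (return values invented for illustration):

```python
class Foo:
    def method1(self):
        return "standard"

# Rebinding the name: the new Foo subclasses the Foo defined above,
# exactly as package2's Foo subclasses the one pulled in by import *.
class Foo(Foo):
    def method2(self):
        return "extended"

f = Foo()   # instances get both the inherited and the new method
```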
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++






From tim_one at email.msn.com  Wed Aug  9 09:15:02 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 9 Aug 2000 03:15:02 -0400
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <200008081718.MAA01681@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>

[Tim]
>> for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim

[Guido]
> Hm.  Predictably, I'm worried about adding 'as' as a reserved word.

But it doesn't need to be, right?  That is, change the stuff following
'import' in

    'from' dotted_name 'import' ('*' | NAME (',' NAME)*)

to

    ('*' | NAME [NAME NAME] (',' NAME [NAME NAME])*)

and verify that whenever the 3-NAME form triggers that the middle of the
NAMEs is exactly "as".  The grammar in the Reference Manual can still
advertise it as a syntactic constraint; if a particular implementation
happens to need to treat it as a semantic constraint due to parser
limitations (and CPython specifically would), the user will never know it.

It doesn't interfere with using "as" a regular NAME elsewhere.  Anyone
pointing out that the line

    from as import as as as

would then be legal will be shot.  Fortran had no reserved words of any
kind, and nobody abused that in practice.  Users may be idiots, but they're
not infants <wink>.





From thomas at xs4all.net  Wed Aug  9 10:42:32 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 10:42:32 +0200
Subject: [Python-Dev] Task Manager on SourceForge
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 09, 2000 at 12:57:02AM -0400
References: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com>
Message-ID: <20000809104232.I266@xs4all.nl>

On Wed, Aug 09, 2000 at 12:57:02AM -0400, Tim Peters wrote:

> Don't know anything more about it.  It appears you can set up a web of tasks
> under a "subproject", with fields for who's assigned, percent complete,
> status, hours of work, priority, start & end dates, and a list of tasks each
> task depends on.

Well, it seems mildly useful... It's missing some things that would make it
fairly useful (per-subtask and per-project todo-list, where you can say 'I
need help with this' and such things, the ability to attach patches to
subtasks (which would be useful for 'my' task of adding augmented
assignment ;) and probably more) but I can imagine why SF didn't include all
that (yet) -- it's a lot of work to do right, and I'm not sure if SF has
many projects of the size that need a project manager like this ;)

But unless Guido and the rest of the PyLab team want to keep an overview of
what us overseas or at least other-state lazy bums are doing by trusting us
to keep a webpage up to date rather than informing the mailing list, I don't
think we'll see much use for it. If you *do* want such an overview, it might
be useful. In which case I'll send out some RFE's on my wishes for the
project manager ;)
 
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From ping at lfw.org  Wed Aug  9 11:37:07 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Wed, 9 Aug 2000 02:37:07 -0700 (PDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>

On Wed, 9 Aug 2000, Greg Ewing wrote:
> 
>    for (x in a, y in b):
>       ...

It looks nice, but i'm pretty sure it won't fly.  (x in a, y in b)
is a perfectly valid expression.  For compatibility the parser must
also accept

    for (x, y) in list_of_pairs:

and since the thing after the open-paren can be arbitrarily long,
how is the parser to know whether the lockstep form has been invoked?
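Ping's point is easy to check: the proposed syntax is already a legal expression today, a tuple of two membership tests, so the parser could not distinguish the two readings:

```python
a = [1, 2, 3]
b = [4, 5]
x, y = 2, 9

# Today this is an ordinary tuple of booleans, not lockstep iteration.
expr = (x in a, y in b)
```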

Besides, i think Guido has Pronounced quite firmly on zip().

I would much rather petition now to get indices() and irange() into
the built-ins... please pretty please?


-- ?!ng

"All models are wrong; some models are useful."
    -- George Box




From thomas at xs4all.net  Wed Aug  9 13:06:45 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 13:06:45 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEOGDDAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Wed, Aug 09, 2000 at 04:39:30PM +1000
References: <200008090527.RAA13669@s454.cosc.canterbury.ac.nz> <ECEPKNMJLHAPFFJHDOJBKEOGDDAA.MarkH@ActiveState.com>
Message-ID: <20000809130645.J266@xs4all.nl>

On Wed, Aug 09, 2000 at 04:39:30PM +1000, Mark Hammond wrote:

> >    for (x in a, y in b):
> >       ...

> Hmmm.  Until someone smarter than me shoots it down for some obvious reason
> <wink>, it certainly appeals to me.

The only objection I can bring up is that parentheses are almost always
optional, in Python, and this kind of violates it. Suddenly the presence of
parentheses changes the entire expression, not just the grouping of it. Oh,
and there is the question of whether 'for (x in a):' is allowed, too (it
isn't, currently.)

I'm not entirely sure that the parser will swallow this, however, because
'for (x in a, y in b) in z:' *is* valid syntax... so it might be ambiguous.
Then again, it can probably be worked around. It might not be too pretty,
but it can be worked around ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Wed Aug  9 13:29:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 13:29:13 +0200
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 09, 2000 at 03:15:02AM -0400
References: <200008081718.MAA01681@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>
Message-ID: <20000809132913.K266@xs4all.nl>

[Tim]
> for-that-matter-i'm-a-fan-of-"from-m-import-x-as-y"-too-ly y'rs  - tim

[Guido]
> Hm.  Predictably, I'm worried about adding 'as' as a reserved word.

[Tim]
> But it doesn't need to be, right?  That is, change the stuff following
> 'import' in
>     'from' dotted_name 'import' ('*' | NAME (',' NAME)*)
> to
>     ('*' | NAME [NAME NAME] (',' NAME [NAME NAME])*)

I'm very, very much +1 on this. the fact that (for example) 'from' is a
reserved word bothers me no end. If noone is going to comment anymore on
range literals or augmented assignment, I might just tackle this ;)

> Anyone pointing out that the line
>     from as import as as as
> would then be legal will be shot. 

"Cool, that would make 'from from import as as as' a legal sta"<BANG>

Damned American gun laws ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Wed Aug  9 14:30:43 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 07:30:43 -0500
Subject: [Python-Dev] Task Manager on SourceForge
In-Reply-To: Your message of "Wed, 09 Aug 2000 00:57:02 -0400."
             <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCMEMCGOAA.tim_one@email.msn.com> 
Message-ID: <200008091230.HAA23379@cj20424-a.reston1.va.home.com>

> Under the "what the heck" theory, I enabled the Task Manager on the Python
> project -- beware the 6-hour delay!  Created two "subprojects" in it, P1.6
> and P2, for tasks generally related to finishing the Python 1.6 and 2.0
> releases, respectively.

Beauuuutiful!

> Don't know anything more about it.  It appears you can set up a web of tasks
> under a "subproject", with fields for who's assigned, percent complete,
> status, hours of work, priority, start & end dates, and a list of tasks each
> task depends on.
> 
> If anyone can think of a use for it, be my guest <wink>.

I played with it a bit.  I added three tasks under 1.6 that need to be
done.

> I *suspect* everyone already has admin privileges for the Task Manager, but
> can't be sure.  Today I couldn't fool either Netscape or IE5 into displaying
> the user-permissions Admin page correctly.  Everyone down to "lemburg" does
> have admin privs for TaskMan, but from the middle of MAL's line on on down
> it's all empty space for me.

That must be a Windows limitation on how many popup menus you can
have.  Stupid Windows :-) !  This looks fine on Linux in Netscape (is
there any other browser :-) ?  and indeed the permissions are set
correctly.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From guido at beopen.com  Wed Aug  9 14:42:49 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 07:42:49 -0500
Subject: [Python-Dev] A question for the Python Secret Police
In-Reply-To: Your message of "Wed, 09 Aug 2000 10:31:27 +0200."
             <20000809083127.7FFF6303181@snelboot.oratrix.nl> 
References: <20000809083127.7FFF6303181@snelboot.oratrix.nl> 
Message-ID: <200008091242.HAA23451@cj20424-a.reston1.va.home.com>

> A question for the Python Secret Police (or P. Inquisition, or whoever 
> else:-).

That would be the Namespace Police in this case.

> Is the following morally allowed:
> 
> package1/mod.py:
> class Foo:
>     def method1(self):
>         ...
> 
> package2/mod.py:
> from package1.mod import *
> 
> class Foo(Foo):
>     def method2(self):
>         ...

I see no problem with this.  It's totally well-defined and I don't
expect I'll ever have a reason to disallow it.  Future picky compilers
or IDEs might warn about a redefined name, but I suppose you can live
with that given that it's machine-generated.

> (The background is that the modules are machine-generated and contain
> AppleEvent classes. There's a large set of standard classes, such as
> Standard_Suite, and applications can signal that they implement
> Standard_Suite with a couple of extensions to it. So, in the
> Application-X Standard_Suite I'd like to import everything from the
> standard Standard_Suite and override/add those methods that are
> specific to Application X)

That actually looks like a *good* reason to do exactly what you
propose.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From guido at beopen.com  Wed Aug  9 14:49:43 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 07:49:43 -0500
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: Your message of "Wed, 09 Aug 2000 02:37:07 MST."
             <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> 
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> 
Message-ID: <200008091249.HAA23481@cj20424-a.reston1.va.home.com>

> On Wed, 9 Aug 2000, Greg Ewing wrote:
> > 
> >    for (x in a, y in b):
> >       ...

No, for exactly the reasons Ping explained.  Let's give this a rest okay?

> I would much rather petition now to get indices() and irange() into
> the built-ins... please pretty please?

I forget what indices() was -- is it the moral equivalent of keys()?
That's range(len(s)), I don't see a need for a new function.  In fact
I think indices() would reduce readability because you have to guess
what it means.  Everybody knows range() and len(); not everybody will
know indices() because it's not needed that often.

If irange(s) is zip(range(len(s)), s), I see how that's a bit
unwieldy.  In the past there were syntax proposals, e.g. ``for i
indexing s''.  Maybe you and Just can draft a PEP?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Wed Aug  9 14:58:00 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 09 Aug 2000 14:58:00 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com>
Message-ID: <39915558.A68D7792@lemburg.com>

Guido van Rossum wrote:
> 
> > On Wed, 9 Aug 2000, Greg Ewing wrote:
> > >
> > >    for (x in a, y in b):
> > >       ...
> 
> No, for exactly the reasons Ping explained.  Let's give this a rest okay?
> 
> > I would much rather petition now to get indices() and irange() into
> > the built-ins... please pretty please?
> 
> I forget what indices() was -- is it the moreal equivalent of keys()?

indices() and irange() are both builtins which originated from
mx.Tools. See:

	http://starship.python.net/crew/lemburg/mxTools.html

* indices(object) is the same as tuple(range(len(object))) - only faster
and using a more intuitive and less convoluted name.

* irange(object[,indices]) (in its mx.Tools version) creates
a tuple of tuples (index, object[index]). indices defaults
to indices(object) if not given, otherwise, only the indexes
found in indices are used to create the mentioned tuple -- and
this even works with arbitrary keys, since the PyObject_GetItem()
API is used.

Typical use is:

for i,value in irange(sequence):
    sequence[i] = value + 1


In practice I found that I could always use irange() where indices()
would have been used, since I typically need the indexed
sequence object anyway.
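A minimal pure-Python sketch of the two functions (simplified; the mx.Tools versions are C implementations and accept arbitrary keys via PyObject_GetItem):

```python
def indices(obj):
    """Same as tuple(range(len(obj))), under a friendlier name."""
    return tuple(range(len(obj)))

def irange(obj, idx=None):
    """Tuple of (index, obj[index]) pairs; idx defaults to indices(obj)."""
    if idx is None:
        idx = indices(obj)
    return tuple((i, obj[i]) for i in idx)

seq = [10, 20, 30]
pairs = irange(seq)            # ((0, 10), (1, 20), (2, 30))
for i, value in irange(seq):   # MAL's typical use: in-place update
    seq[i] = value + 1
```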

> That's range(len(s)), I don't see a need for a new function.  In fact
> I think indices() would reduce readability because you have to guess
> what it means.  Everybody knows range() and len(); not everybody will
> know indices() because it's not needed that often.
> 
> If irange(s) is zip(range(len(s)), s), I see how that's a bit
> unwieldy.  In the past there were syntax proposals, e.g. ``for i
> indexing s''.  Maybe you and Just can draft a PEP?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From nowonder at nowonder.de  Wed Aug  9 17:19:02 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Wed, 09 Aug 2000 15:19:02 +0000
Subject: [Python-Dev] Re: dict.setdefault() (Patch#101102)
References: <LNBBLJKPBEHFEDALKOLCMEMGGOAA.tim_one@email.msn.com>
Message-ID: <39917666.87C823E9@nowonder.de>

Tim Peters wrote:
> 
> But it doesn't need to be, right?  That is, change the stuff following
> 'import' in
> 
>     'from' dotted_name 'import' ('*' | NAME (',' NAME)*)
> 
> to
> 
>     ('*' | NAME [NAME NAME] (',' NAME [NAME NAME])*)

What about doing the same for the regular import?

import_stmt: 'import' dotted_name [NAME NAME] (',' dotted_name [NAME
NAME])* | 'from' dotted_name 'import' ('*' | NAME (',' NAME)*)

"import as as as"-isn't-that-impressive-though-ly y'rs
Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From just at letterror.com  Wed Aug  9 17:01:18 2000
From: just at letterror.com (Just van Rossum)
Date: Wed, 9 Aug 2000 16:01:18 +0100
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008091249.HAA23481@cj20424-a.reston1.va.home.com>
References: Your message of "Wed, 09 Aug 2000 02:37:07 MST."            
 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
Message-ID: <l03102802b5b71c40f9fc@[193.78.237.121]>

At 7:49 AM -0500 09-08-2000, Guido van Rossum wrote:
>In the past there were syntax proposals, e.g. ``for i
>indexing s''.  Maybe you and Just can draft a PEP?

PEP:            1716099-3
Title:          Index-enhanced sequence iteration
Version:        $Revision: 1.1 $
Owner:          Someone-with-commit-rights
Python-Version: 2.0
Status:         Incomplete

Introduction

    This PEP proposes a way to more conveniently iterate over a
    sequence and its indices.

Features

    It adds an optional clause to the 'for' statement:

        for <index> indexing <element> in <seq>:
            ...

    This is equivalent to (see the zip() PEP):

        for <index>, <element> in zip(range(len(seq)), seq):
            ...

    Except no new list is created.
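    The lazy behaviour claimed above can be sketched with a small
    generator (modern Python; the helper name is made up for
    illustration):

```python
def indexing_pairs(seq):
    # Yield (index, element) pairs one at a time, without building
    # the full list that zip(range(len(seq)), seq) constructs.
    index = 0
    for element in seq:
        yield index, element
        index += 1

for i, ch in indexing_pairs("abc"):
    print(i, ch)
```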

Mechanism

    The index of the current element in a for-loop already
    exists in the implementation; however, it is not reachable
    from Python. The new 'indexing' keyword merely exposes the
    internal counter.

Implementation

    Implementation should be trivial for anyone named Guido,
    Tim or Thomas. Just's better not try.

Advantages:

    Less code needed for this common operation, which is
    currently most often written as:

        for index in range(len(seq)):
            element = seq[index]
            ...

Disadvantages:

    It will break that one person's code that uses "indexing"
    as a variable name.

Copyright

    This document has been placed in the public domain.

Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:





From thomas at xs4all.net  Wed Aug  9 18:15:39 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 18:15:39 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>; from just@letterror.com on Wed, Aug 09, 2000 at 04:01:18PM +0100
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <20000809181539.M266@xs4all.nl>

On Wed, Aug 09, 2000 at 04:01:18PM +0100, Just van Rossum wrote:

> PEP:            1716099-3
> Title:          Index-enhanced sequence iteration
> Version:        $Revision: 1.1 $
> Owner:          Someone-with-commit-rights

I'd be willing to adopt this PEP, if the other two PEPs under my name don't
need extensive rewrites anymore.

> Features
> 
>     It adds an optional clause to the 'for' statement:
> 
>         for <index> indexing <element> in <seq>:

Ever since I saw the implementation of FOR_LOOP I've wanted this, but I
never could think up a backwards compatible and readable syntax for it ;P

> Disadvantages:

>     It will break that one person's code that uses "indexing"
>     as a variable name.

This needn't be true, if it's done in the same way as Tim proposed the
'from import as as as' syntax change ;)

for_stmt: 'for' exprlist [NAME exprlist] 'in' testlist ':' suite ['else' ':' suite]

If the 5th subnode of the expression is 'in', the 3rd should be 'indexing'
and the 4th would be the variable to assign the index number to. If it's
':', the loop is index-less.

(this is just a quick and dirty example; 'exprlist' is probably not the
right subnode for the indexing variable, because it can't be a tuple or
anything like that.)
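The disambiguation rule described above can be mimicked in a toy parser
(purely illustrative; the real change would live in compile.c, and the
subnode layout here is invented):

```python
def parse_for_header(subnodes):
    # subnodes: the children of a for_stmt node, e.g.
    #   ['for', 'i', 'indexing', 'x', 'in', 'l', ':']   (with index)
    #   ['for', 'x', 'in', 'l', ':']                    (index-less)
    # If the 5th subnode is 'in', the 3rd must be the plain NAME
    # 'indexing'; otherwise the loop has no index variable.
    if len(subnodes) > 4 and subnodes[4] == 'in':
        if subnodes[2] != 'indexing':
            raise SyntaxError("expected 'indexing', got %r" % subnodes[2])
        return {'index': subnodes[1], 'target': subnodes[3],
                'seq': subnodes[5]}
    return {'index': None, 'target': subnodes[1], 'seq': subnodes[3]}
```

Because 'indexing' is matched as an ordinary NAME in one specific slot,
it never needs to become a reserved word.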

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From skip at mojam.com  Wed Aug  9 18:40:27 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 9 Aug 2000 11:40:27 -0500 (CDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <20000809181539.M266@xs4all.nl>
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
	<200008091249.HAA23481@cj20424-a.reston1.va.home.com>
	<l03102802b5b71c40f9fc@[193.78.237.121]>
	<20000809181539.M266@xs4all.nl>
Message-ID: <14737.35195.31385.867664@beluga.mojam.com>

    >> Disadvantages:

    >> It will break that one person's code that uses "indexing" as a
    >> variable name.

    Thomas> This needn't be true, if it's done in the same way as Tim
    Thomas> proposed the 'from import as as as' syntax change ;)

Could this be extended to many/most/all current instances of keywords in
Python?  As Tim pointed out, Fortran has no keywords.  It annoys me that I
(for example) can't define a method named "print".

Skip




From nowonder at nowonder.de  Wed Aug  9 20:49:53 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Wed, 09 Aug 2000 18:49:53 +0000
Subject: [Python-Dev] cannot commit 1.6 changes
Message-ID: <3991A7D0.4D2479C7@nowonder.de>

I have taken care of removing all occurrences of math.rint()
from the 1.6 sources. The commit worked fine for the Doc,
Include and Module directory, but cvs won't let me commit
the changes to config.h.in, configure.in, configure:

cvs server: sticky tag `cnri-16-start' for file `config.h.in' is not a
branch
cvs server: sticky tag `cnri-16-start' for file `configure' is not a
branch
cvs server: sticky tag `cnri-16-start' for file `configure.in' is not a
branch
cvs [server aborted]: correct above errors first!

What am I missing?

confused-ly y'rs Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From esr at thyrsus.com  Wed Aug  9 20:03:21 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Wed, 9 Aug 2000 14:03:21 -0400
Subject: [Python-Dev] Un-stalling Berkeley DB support
Message-ID: <20000809140321.A836@thyrsus.com>

I'm still interested in getting support for the version 3 Berkeley DB
into the core.  This is one of my top three Python priorities currently, along
with drafting PEP 2 and overhauling the curses HOWTO.  (I'd sure like to see
shm get in, too, but that's blocked on Vladimir writing suitable
documentation.)

I'd like to get the necessary C extension in before 2.0 freeze, if
possible.  I've copied its author.  Again, the motivation here is to make
shelving transactional, with useful read-many/write-once guarantees.
Thousands of CGI programmers would thank us for this.

When we last discussed this subject, there was general support for the
functionality, but a couple of people went "bletch!" about SWIG-generated
code (there was unhappiness about pointers being treated as strings).

Somebody said something about having SWIG patches to address this.  Is this
the only real issue with SWIG-generated code?  If so, we can pursue two paths:
(1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
extension that meets our cleanliness criteria, and (2) press the SWIG guy 
to incorporate these patches in his next release.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"The best we can hope for concerning the people at large is that they be
properly armed."
        -- Alexander Hamilton, The Federalist Papers at 184-188



From akuchlin at mems-exchange.org  Wed Aug  9 20:09:55 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 9 Aug 2000 14:09:55 -0400
Subject: [Python-Dev] py-howto project now operational
Message-ID: <20000809140955.C4838@kronos.cnri.reston.va.us>

I've just gotten around to setting up the checkin list for the Python
HOWTO project on SourceForge (py-howto.sourceforge.net), so the
project is now fully operational.  People who want to update the
HOWTOs, such as ESR and the curses HOWTO, can now go ahead and make
changes.

And this is the last you'll hear about the HOWTOs on python-dev;
please use the Doc-SIG mailing list (doc-sig at python.org) for further
discussion of the HOWTOs.

--amk




From thomas at xs4all.net  Wed Aug  9 20:28:54 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 20:28:54 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <14737.35195.31385.867664@beluga.mojam.com>; from skip@mojam.com on Wed, Aug 09, 2000 at 11:40:27AM -0500
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]> <20000809181539.M266@xs4all.nl> <14737.35195.31385.867664@beluga.mojam.com>
Message-ID: <20000809202854.N266@xs4all.nl>

On Wed, Aug 09, 2000 at 11:40:27AM -0500, Skip Montanaro wrote:

>     >> Disadvantages:

>     >> It will break that one person's code that uses "indexing" as a
>     >> variable name.

>     Thomas> This needn't be true, if it's done in the same way as Tim
>     Thomas> proposed the 'from import as as as' syntax change ;)

> Could this be extended to many/most/all current instances of keywords in
> Python?  As Tim pointed out, Fortran has no keywords.  It annoys me that I
> (for example) can't define a method named "print".

No. I just (in the train ride from work to home ;) wrote a patch that adds
'from x import y as z' and 'import foo as fee', and came to the conclusion
that we can't make 'from' a non-reserved word, for instance. Because if we
change

'from' dotted_name 'import' NAME*

into

NAME dotted_name 'import' NAME*

the parser won't know how to parse other expressions that start with NAME,
like 'NAME = expr' or 'NAME is expr'. I know this because I tried it and it
didn't work :-) So we can probably make most names that are *part* of a
statement non-reserved words, but not those that uniquely identify a
statement. That doesn't leave much words, except perhaps for the 'in' in
'for' -- but 'in' is already a reserved word for other purposes ;)

As for the patch that adds 'as' (as a non-reserved word) to both imports,
I'll upload it to SF after I rewrite it a bit ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From bckfnn at worldonline.dk  Wed Aug  9 21:43:58 2000
From: bckfnn at worldonline.dk (Finn Bock)
Date: Wed, 09 Aug 2000 19:43:58 GMT
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <20000809202854.N266@xs4all.nl>
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]> <20000809181539.M266@xs4all.nl> <14737.35195.31385.867664@beluga.mojam.com> <20000809202854.N266@xs4all.nl>
Message-ID: <3991acc4.10990753@smtp.worldonline.dk>

[Skip Montanaro]
> Could this be extended to many/most/all current instances of keywords in
> Python?  As Tim pointed out, Fortran has no keywords.  It annoys me that I
> (for example) can't define a method named "print".

[Thomas Wouters]
>No. I just (in the trainride from work to home ;) wrote a patch that adds
>'from x import y as z' and 'import foo as fee', and came to the conclusion
>that we can't make 'from' a non-reserved word, for instance. Because if we
>change
>
>'from' dotted_name 'import' NAME*
>
>into
>
>NAME dotted_name 'import' NAME*
>
>the parser won't know how to parse other expressions that start with NAME,
>like 'NAME = expr' or 'NAME is expr'. I know this because I tried it and it
>didn't work :-) So we can probably make most names that are *part* of a
>statement non-reserved words, but not those that uniquely identify a
>statement. That doesn't leave much words, except perhaps for the 'in' in
>'for' -- but 'in' is already a reserved word for other purposes ;)

Just a datapoint.

JPython goes a bit further in its attempt to unreserve reserved words in
certain cases:

- after "def"
- after a dot "."
- after "import"
- after "from" (in an import stmt)
- and as argument names

This allow JPython to do:

   from from import x
   def def(): pass
   x.exec(from=1, to=2)


This feature was added to ease JPython's integration with existing Java
libraries. IIRC it was remarked that CPython could also make use of such
a feature when integrating with, e.g., Tk or COM.


regards,
finn



From nascheme at enme.ucalgary.ca  Wed Aug  9 22:11:04 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Wed, 9 Aug 2000 14:11:04 -0600
Subject: [Python-Dev] test_fork1 on SMP? (was Re: [Python Dev] test_fork1 failing --with-threads (for some people)...)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEBDGNAA.tim_one@email.msn.com>; from Tim Peters on Mon, Jul 31, 2000 at 04:42:50AM -0400
References: <14724.22554.818853.722906@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCOEBDGNAA.tim_one@email.msn.com>
Message-ID: <20000809141104.A10805@keymaster.enme.ucalgary.ca>

On Mon, Jul 31, 2000 at 04:42:50AM -0400, Tim Peters wrote:
> It's a baffler!  AFAIK, nobody yet has thought of a way that a fork can
> screw up the state of the locks in the *parent* process (it must be easy to
> see how they can get screwed up in a child, because two of us already did
> <wink>).

If I add Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS around fork()
in posixmodule then the child is the process which always seems to hang.
The child is hanging at:

#0  0x4006d58b in __sigsuspend (set=0xbf7ffac4)
    at ../sysdeps/unix/sysv/linux/sigsuspend.c:48
#1  0x4001f1a0 in pthread_cond_wait (cond=0x8264e1c, mutex=0x8264e28)
    at restart.h:49
#2  0x806f3c3 in PyThread_acquire_lock (lock=0x8264e18, waitflag=1)
    at thread_pthread.h:311
#3  0x80564a8 in PyEval_RestoreThread (tstate=0x8265a78) at ceval.c:178
#4  0x80bf274 in posix_fork (self=0x0, args=0x8226ccc) at ./posixmodule.c:1659
#5  0x8059460 in call_builtin (func=0x82380e0, arg=0x8226ccc, kw=0x0)
    at ceval.c:2376
#6  0x8059378 in PyEval_CallObjectWithKeywords (func=0x82380e0, arg=0x8226ccc, 
    kw=0x0) at ceval.c:2344
#7  0x80584f2 in eval_code2 (co=0x8265e98, globals=0x822755c, locals=0x0, 
    args=0x8226cd8, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, 
    owner=0x0) at ceval.c:1682
#8  0x805974b in call_function (func=0x8264ddc, arg=0x8226ccc, kw=0x0)
    at ceval.c:2498
#9  0x805936b in PyEval_CallObjectWithKeywords (func=0x8264ddc, arg=0x8226ccc, 
    kw=0x0) at ceval.c:2342
#10 0x80af26a in t_bootstrap (boot_raw=0x8264e00) at ./threadmodule.c:199
#11 0x4001feca in pthread_start_thread (arg=0xbf7ffe60) at manager.c:213

Since there is only one thread in the child this should not be
happening.  Can someone explain this?  I have tested this on both an SMP
Linux machine and a UP Linux machine.

   Neil



From thomas at xs4all.net  Wed Aug  9 22:27:50 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 9 Aug 2000 22:27:50 +0200
Subject: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd)
Message-ID: <20000809222749.O266@xs4all.nl>

For those of you not on the patches list, here's the summary of the patch I
just uploaded to SF. In short, it adds "import x as y" and "from module
import x as y", in the way Tim proposed this morning. (Probably late last
night for most of you.)

----- Forwarded message from noreply at sourceforge.net -----

This patch adds the oft-proposed 'import as' syntax, to both 'import module'
and 'from module import ...', but without making 'as' a reserved word (by
using the technique Tim Peters proposed on python-dev.)

'import spam as egg' is a very simple patch to compile.c, which doesn't need
changes to the VM, but 'from spam import dog as meat' needs a new bytecode,
which this patch calls 'FROM_IMPORT_AS'. The bytecode loads an object from a
module onto the stack, so a STORE_NAME can store it later. This can't be
done by the normal FROM_IMPORT opcode, because it needs to take the special
case of '*' into account. Also, because it uses 'STORE_NAME', it's now
possible to mix 'import' and 'global', like so:

global X
from foo import X as X

The patch still generates the old code for

from foo import X

(without 'as') mostly to save on bytecode size, and for the 'compatibility'
with mixing 'global' and 'from .. import'... I'm not sure what the best
thing to do is.
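For readers who haven't followed the proposal, the surface semantics of
the two new forms can be sketched with today's __import__ built-in (a
rough equivalence only; the patch of course emits bytecode, not anything
like this):

```python
# 'import math as m' binds only the name 'm':
m = __import__('math')

# 'from math import pi as circle_ratio' binds only 'circle_ratio';
# neither 'math' nor 'pi' appears in the importing namespace:
_mod = __import__('math')
circle_ratio = _mod.pi
del _mod

print(m.sqrt(circle_ratio))
```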

The patch doesn't include a test suite or documentation, yet.

-------------------------------------------------------
For more info, visit:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101135&group_id=5470

----- End forwarded message -----

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From greg at mad-scientist.com  Wed Aug  9 22:27:33 2000
From: greg at mad-scientist.com (Gregory P . Smith)
Date: Wed, 9 Aug 2000 13:27:33 -0700
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809140321.A836@thyrsus.com>; from esr@thyrsus.com on Wed, Aug 09, 2000 at 02:03:21PM -0400
References: <20000809140321.A836@thyrsus.com>
Message-ID: <20000809132733.C2019@mad-scientist.com>

On Wed, Aug 09, 2000 at 02:03:21PM -0400, Eric S. Raymond wrote:
> 
> When we last discussed this subject, there was general support for the
> functionality, but a couple of people went "bletch!" about SWIG-generated
> code (there was unhappiness about pointers being treated as strings).
> 
> Somebody said something about having SWIG patches to address this.  Is this
> the only real issue with SWIG-generated code?  If so, we can pursue two paths:
> (1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
> extension that meets our cleanliness criteria, and (2) press the SWIG guy 
> to incorporate these patches in his next release.

I'm not surprised to see the "bletch!" for SWIG's string/pointer things;
they are technically gross.  Anyone know what SWIG v1.3a3 does (v1.3
is a total rewrite of v1.1)?  py-bsddb3 as distributed was built
using SWIG v1.1-883.  In the meantime, if someone knows of a version of
SWIG that does this better, try using it to build bsddb3 (just pass a
SWIG=/usr/spam/eggs/bin/swig to the Makefile).  If you run into problems,
send them and a copy of that swig my way.

I'll take a quick look at SWIG v1.3alpha3 here and see what that does.

Greg



From skip at mojam.com  Wed Aug  9 22:41:57 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 9 Aug 2000 15:41:57 -0500 (CDT)
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809132733.C2019@mad-scientist.com>
References: <20000809140321.A836@thyrsus.com>
	<20000809132733.C2019@mad-scientist.com>
Message-ID: <14737.49685.902542.576229@beluga.mojam.com>

>>>>> "Greg" == Gregory P Smith <greg at mad-scientist.com> writes:

    Greg> On Wed, Aug 09, 2000 at 02:03:21PM -0400, Eric S. Raymond wrote:
    >> 
    >> When we last discussed this subject, there was general support for
    >> the functionality, but a couple of people went "bletch!" about
    >> SWIG-generated code (there was unhappiness about pointers being
    >> treated as strings).
    ...
    Greg> I'm not surprised to see the "bletch!" for SWIG's string/pointer
    Greg> things, they are technically gross.

We're talking about a wrapper around a single smallish library (probably <
20 exposed functions), right?  Seems to me that SWIG is the wrong tool to
use here.  It's for wrapping massive libraries automatically.  Why not just
recode the current SWIG-generated module manually?

What am I missing?

-- 
Skip Montanaro, skip at mojam.com, http://www.mojam.com/, http://www.musi-cal.com/
"To get what you want you must commit yourself for sometime" - fortune cookie



From nascheme at enme.ucalgary.ca  Wed Aug  9 22:49:25 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Wed, 9 Aug 2000 14:49:25 -0600
Subject: [Python-Dev] Re: [Patches] [Patch #101135] 'import x as y' and 'from x import y as z'
In-Reply-To: <200008092014.NAA08040@delerium.i.sourceforge.net>; from noreply@sourceforge.net on Wed, Aug 09, 2000 at 01:14:52PM -0700
References: <200008092014.NAA08040@delerium.i.sourceforge.net>
Message-ID: <20000809144925.A11242@keymaster.enme.ucalgary.ca>

On Wed, Aug 09, 2000 at 01:14:52PM -0700, noreply at sourceforge.net wrote:
> Patch #101135 has been updated. 
> 
> Project: 
> Category: core (C code)
> Status: Open
> Summary: 'import x as y' and 'from x import y as z'

+1.  This is much more useful and clear than setdefault (which I was -1
on, not that it matters).

  Neil



From esr at thyrsus.com  Wed Aug  9 23:03:51 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Wed, 9 Aug 2000 17:03:51 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101135] 'import x as y' and 'from x import y as z'
In-Reply-To: <20000809144925.A11242@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Wed, Aug 09, 2000 at 02:49:25PM -0600
References: <200008092014.NAA08040@delerium.i.sourceforge.net> <20000809144925.A11242@keymaster.enme.ucalgary.ca>
Message-ID: <20000809170351.A1550@thyrsus.com>

Neil Schemenauer <nascheme at enme.ucalgary.ca>:
> On Wed, Aug 09, 2000 at 01:14:52PM -0700, noreply at sourceforge.net wrote:
> > Patch #101135 has been updated. 
> > 
> > Project: 
> > Category: core (C code)
> > Status: Open
> > Summary: 'import x as y' and 'from x import y as z'
> 
> +1.  This is much more useful and clear than setdefault (which I was -1
> on, not that it matters).

I'm +0 on this.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The most foolish mistake we could possibly make would be to permit 
the conquered Eastern peoples to have arms.  History teaches that all 
conquerors who have allowed their subject races to carry arms have 
prepared their own downfall by doing so.
        -- Hitler, April 11 1942, revealing the real agenda of "gun control"



From greg at mad-scientist.com  Wed Aug  9 23:16:39 2000
From: greg at mad-scientist.com (Gregory P . Smith)
Date: Wed, 9 Aug 2000 14:16:39 -0700
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809140321.A836@thyrsus.com>; from esr@thyrsus.com on Wed, Aug 09, 2000 at 02:03:21PM -0400
References: <20000809140321.A836@thyrsus.com>
Message-ID: <20000809141639.D2019@mad-scientist.com>

On Wed, Aug 09, 2000 at 02:03:21PM -0400, Eric S. Raymond wrote:
> 
> When we last discussed this subject, there was general support for the
> functionality, but a couple of people went "bletch!" about SWIG-generated
> code (there was unhappiness about pointers being treated as strings).
> 
> Somebody said something about having SWIG patches to address this.  Is this
> the only real issue with SWIG-generated code?  If so, we can pursue two paths:
> (1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
> extension that meets our cleanliness criteria, and (2) press the SWIG guy 
> to incorporate these patches in his next release.

Out of curiosity, I just made a version of py-bsddb3 that uses SWIG
v1.3alpha3 instead of SWIG v1.1-883.  It looks like 1.3a3 is still
using strings for pointerish things.  One thing to note that may calm
some people's sense of "eww gross, pointer strings" is that programmers
should never see them.  They are "hidden" behind the Python shadow class.
The pointer strings are only contained within the shadow object's "this"
member.

example:

  >>> from bsddb3.db import *
  >>> e = DbEnv()
  >>> e
  <C DbEnv instance at _807eea8_MyDB_ENV_p>
  >>> e.this
  '_807eea8_MyDB_ENV_p'

Anyways, the update if anyone is curious about a version using the more
recent swig is on the py-bsddb3 web site:

http://electricrain.com/greg/python/bsddb3/


Greg




From guido at beopen.com  Thu Aug 10 00:29:58 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 17:29:58 -0500
Subject: [Python-Dev] cannot commit 1.6 changes
In-Reply-To: Your message of "Wed, 09 Aug 2000 18:49:53 GMT."
             <3991A7D0.4D2479C7@nowonder.de> 
References: <3991A7D0.4D2479C7@nowonder.de> 
Message-ID: <200008092229.RAA24802@cj20424-a.reston1.va.home.com>

> I have taken care of removing all occurrences of math.rint()
> from the 1.6 sources. The commit worked fine for the Doc,
> Include and Module directory, but cvs won't let me commit
> the changes to config.h.in, configure.in, configure:
> 
> cvs server: sticky tag `cnri-16-start' for file `config.h.in' is not a
> branch
> cvs server: sticky tag `cnri-16-start' for file `configure' is not a
> branch
> cvs server: sticky tag `cnri-16-start' for file `configure.in' is not a
> branch
> cvs [server aborted]: correct above errors first!
> 
> What am I missing?

The error message is right.  Somehow whoever set those tags on those
files did not make them branch tags.  I think it was Fred, but I don't
know why he did that.  The quickest way to fix this
is to issue the command

  cvs tag -F -b -r <revision> cnri-16-start <file>

for each file, where <revision> is the revision where the tag should
be and <file> is the file.  Note that -F means "force" (otherwise you
get a complaint because the tag is already defined) and -b means
"branch" which makes the tag a branch tag.  I *believe* that branch
tags are recognized because they have the form
<major>.<minor>.0.<branch> but I'm not sure this is documented.

I already did this for you for these three files!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug 10 00:43:35 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 17:43:35 -0500
Subject: [Python-Dev] test_fork1 on SMP? (was Re: [Python Dev] test_fork1 failing --with-threads (for some people)...)
In-Reply-To: Your message of "Wed, 09 Aug 2000 14:11:04 CST."
             <20000809141104.A10805@keymaster.enme.ucalgary.ca> 
References: <14724.22554.818853.722906@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCOEBDGNAA.tim_one@email.msn.com>  
            <20000809141104.A10805@keymaster.enme.ucalgary.ca> 
Message-ID: <200008092243.RAA24914@cj20424-a.reston1.va.home.com>

> If I add Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS around fork()
> in posixmodule then the child is the process which always seems to hang.

I first thought that the lock should be released around the fork too,
but later I realized that that was exactly wrong: if you release the
lock before you fork, another thread will likely grab the lock before
you fork; then in the child the lock is held by that other thread but
that thread doesn't exist, so when the main thread tries to get the
lock back it hangs in the Py_END_ALLOW_THREADS.
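The failure mode described above is easy to reproduce with an ordinary
threading.Lock standing in for the interpreter lock (a sketch for POSIX
systems; a timeout is used so the demo terminates instead of hanging):

```python
import os
import threading
import time

lock = threading.Lock()

def worker():
    # A second thread grabs the lock and holds it across the fork,
    # playing the part of the thread that snatches the interpreter
    # lock once it is released before fork().
    with lock:
        time.sleep(2)

threading.Thread(target=worker, daemon=True).start()
time.sleep(0.2)          # give the worker time to acquire the lock

pid = os.fork()
if pid == 0:
    # Child: the worker thread does not exist here, but the lock it
    # held is still marked as locked, so an unbounded acquire()
    # would block forever -- exactly the reported symptom.
    acquired = lock.acquire(timeout=0.5)
    os._exit(1 if acquired else 0)

_, status = os.waitpid(pid, 0)
child_saw_hang = (os.WEXITSTATUS(status) == 0)
```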

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From ping at lfw.org  Thu Aug 10 00:06:15 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Wed, 9 Aug 2000 15:06:15 -0700 (PDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008091249.HAA23481@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org>

On Wed, 9 Aug 2000, Guido van Rossum wrote:
> I forget what indices() was -- is it the moral equivalent of keys()?

Yes, it's range(len(s)).

> If irange(s) is zip(range(len(s)), s), I see how that's a bit
> unwieldy.  In the past there were syntax proposals, e.g. ``for i
> indexing s''.  Maybe you and Just can draft a PEP?

In the same vein as zip(), i think it's much easier to just toss in
a couple of built-ins than try to settle on a new syntax.  (I already
uploaded a patch to add indices() and irange() to the built-ins,
immediately after i posted my first message on this thread.)
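Judging from the descriptions in this thread, the two proposed built-ins
amount to roughly the following (a guess at what the patch does, not its
actual code):

```python
def indices(seq):
    # The sequence analogue of dict.keys(): 0 .. len(seq)-1.
    return list(range(len(seq)))

def irange(seq):
    # (index, element) pairs, i.e. zip(range(len(seq)), seq).
    return list(zip(range(len(seq)), seq))

print(indices("abc"))   # [0, 1, 2]
print(irange("ab"))     # [(0, 'a'), (1, 'b')]
```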

Surely a PEP isn't required for a couple of built-in functions that
are simple and well understood?  You can just call thumbs-up or
thumbs-down and be done with it.


-- ?!ng

"All models are wrong; some models are useful."
    -- George Box




From klm at digicool.com  Thu Aug 10 00:05:57 2000
From: klm at digicool.com (Ken Manheimer)
Date: Wed, 9 Aug 2000 18:05:57 -0400 (EDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <Pine.LNX.4.21.0008091739020.1282-100000@korak.digicool.com>

On Wed, 9 Aug 2000, Just van Rossum wrote:

> PEP:            1716099-3
> Title:          Index-enhanced sequence iteration
> [...]
>     It adds an optional clause to the 'for' statement:
> 
>         for <index> indexing <element> in <seq>:
>             ...
> [...]
> Disadvantages:
> 
>     It will break that one person's code that uses "indexing"
>     as a variable name.

      It creates a new 'for' variant, increasing the challenge for beginners
      (and the befuddled, like me) of tracking the correct syntax.

I could see that disadvantage being justified by a more significant change
- lockstep iteration would qualify, for me (though it's circumventing this
drawback with zip()).  List comprehensions have that weight, and analogize
elegantly against the existing slice syntax.  I don't think the 'indexing'
benefits are of that order, not enough so to double the number of 'for'
forms, even if there are some performance gains over the (syntactically
equivalent) zip(), so, sorry, but i'm -1.

Ken
klm at digicool.com




From klm at digicool.com  Thu Aug 10 00:13:37 2000
From: klm at digicool.com (Ken Manheimer)
Date: Wed, 9 Aug 2000 18:13:37 -0400 (EDT)
Subject: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import
 y as z' (fwd)
In-Reply-To: <20000809222749.O266@xs4all.nl>
Message-ID: <Pine.LNX.4.21.0008091808390.1282-100000@korak.digicool.com>

On Wed, 9 Aug 2000, Thomas Wouters wrote:

> For those of you not on the patches list, here's the summary of the patch I
> just uploaded to SF. In short, it adds "import x as y" and "from module
> import x as y", in the way Tim proposed this morning. (Probably late last
> night for most of you.)

I guess the criteria i used in my thumbs down on 'indexing' is very
subjective, because i would say the added functionality of 'import x as y'
*does* satisfy my added-functionality test, and i'd be +1.  (I think the
determining thing is the ability to avoid name collisions without any
gross shuffle.)

I also really like the non-keyword basis for the, um, keyword.

Ken
klm at digicool.com




From guido at beopen.com  Thu Aug 10 01:14:19 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 18:14:19 -0500
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: Your message of "Wed, 09 Aug 2000 15:06:15 MST."
             <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org> 
References: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org> 
Message-ID: <200008092314.SAA25157@cj20424-a.reston1.va.home.com>

> On Wed, 9 Aug 2000, Guido van Rossum wrote:
> > I forget what indices() was -- is it the moral equivalent of keys()?

[Ping]
> Yes, it's range(len(s)).
> 
> > If irange(s) is zip(range(len(s)), s), I see how that's a bit
> > unwieldy.  In the past there were syntax proposals, e.g. ``for i
> > indexing s''.  Maybe you and Just can draft a PEP?
> 
> In the same vein as zip(), i think it's much easier to just toss in
> a couple of built-ins than try to settle on a new syntax.  (I already
> uploaded a patch to add indices() and irange() to the built-ins,
> immediately after i posted my first message on this thread.)
> 
> Surely a PEP isn't required for a couple of built-in functions that
> are simple and well understood?  You can just call thumbs-up or
> thumbs-down and be done with it.

-1 for indices

-0 for irange

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Thu Aug 10 00:15:10 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 00:15:10 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>; from just@letterror.com on Wed, Aug 09, 2000 at 04:01:18PM +0100
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <200008091249.HAA23481@cj20424-a.reston1.va.home.com> <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <20000810001510.P266@xs4all.nl>

On Wed, Aug 09, 2000 at 04:01:18PM +0100, Just van Rossum wrote:

> Features

>     It adds an optional clause to the 'for' statement:
> 
>         for <index> indexing <element> in <seq>:
>             ...

> Implementation
> 
>     Implementation should be trivial for anyone named Guido,
>     Tim or Thomas.

Well, to justify that vote of confidence <0.4 wink> I wrote a quick hack
that adds this to Python for loops. It can be found on SF, patch #101138.
It's small, but it works. I'll iron out any bugs if there's enough positive
feelings towards this kind of syntax change.
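[Editor's sketch: the proposed clause is roughly sugar for the following; note that 'indexing' itself is only proposed syntax, not valid Python.]

```python
# What "for <index> indexing <element> in <seq>:" would do,
# spelled out with plain index arithmetic.
def lockstep_demo(seq):
    pairs = []
    for i in range(len(seq)):
        element = seq[i]
        pairs.append((i, element))
    return pairs
```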

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Thu Aug 10 00:22:55 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 00:22:55 +0200
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008092314.SAA25157@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 09, 2000 at 06:14:19PM -0500
References: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org> <200008092314.SAA25157@cj20424-a.reston1.va.home.com>
Message-ID: <20000810002255.Q266@xs4all.nl>

On Wed, Aug 09, 2000 at 06:14:19PM -0500, Guido van Rossum wrote:

> -1 for indices
> 
> -0 for irange

The same for me, though I prefer 'for i indexing x in l' over 'irange()'. 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From beazley at schlitz.cs.uchicago.edu  Thu Aug 10 00:34:16 2000
From: beazley at schlitz.cs.uchicago.edu (David M. Beazley)
Date: Wed,  9 Aug 2000 17:34:16 -0500 (CDT)
Subject: [Python-Dev] Python-Dev digest, Vol 1 #737 - 17 msgs
In-Reply-To: <20000809221115.AC4E61D182@dinsdale.python.org>
References: <20000809221115.AC4E61D182@dinsdale.python.org>
Message-ID: <14737.55249.87871.538988@schlitz.cs.uchicago.edu>

python-dev-request at python.org writes:
 > 
 > I'd like to get the necessary C extension in before 2.0 freeze, if
 > possible.  I've copied its author.  Again, the motivation here is to make
 > shelving transactional, with useful read-many/write-once guarantees.
 > Thousands of CGI programmers would thank us for this.
 > 
 > When we last discussed this subject, there was general support for the
 > functionality, but a couple of people went "bletch!" about SWIG-generated
 > code (there was unhappiness about pointers being treated as strings).
 > 
 > Somebody said something about having SWIG patches to address this.  Is this
 > the only real issue with SWIG-generated code?  If so, we can pursue
 > two paths:

Well, as the guilty party on the SWIG front, I can say that the
current development version of SWIG is using CObjects instead of
strings (well, actually I lie---you have to compile the wrappers with
-DSWIG_COBJECT_TYPES to turn that feature on).  Just as a general
aside on this topic, I did a number of experiments comparing the
performance of using CObjects vs. the gross string-pointer hack about 6
months ago.  Strangely enough, there was virtually no difference in
runtime performance, and if I recall correctly, the string hack might
have even been just a little bit faster. Go figure :-).

Overall, the main difference between SWIG1.3 and SWIG1.1 is in runtime 
performance of the wrappers as well as various changes to reduce the
amount of wrapper code.   However, 1.3 is also very much an alpha release
right now---if you're going to use that, make sure you thoroughly test 
everything.

On the subject of the Berkeley DB module, I would definitely like to 
see a module for this.  If there is anything I can do to either modify
the behavior of SWIG or to build an extension module by hand, let me know.

Cheers,

Dave





From MarkH at ActiveState.com  Thu Aug 10 01:03:19 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 10 Aug 2000 09:03:19 +1000
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <14737.35195.31385.867664@beluga.mojam.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com>

[Skip laments...]
> Could this be extended to many/most/all current instances of
> keywords in Python?  As Tim pointed out, Fortran has no
> keywords.  It annoys me that I (for example) can't define
> a method named "print".

Sometimes it is worse than annoying!

In the COM and CORBA worlds, it can be a showstopper - if an external
object happens to expose a method or property named after a Python keyword,
then you simply can not use it!

This has led to COM support having to check _every_ attribute name it sees
externally, and mangle it if it is a keyword.

A bigger issue exists for .NET.  The .NET framework explicitly dictates
that a compliant language _must_ have a way of overriding its own keywords
when calling external methods (it was either that, or try to dictate a
union of reserved words they can ban).

E.g., C# allows you to surround a keyword with brackets.  I.e., I believe
something like:

object.[if]

would work in C# to provide access to an attribute named "if".

Unfortunately, Python COM is a layer on top of CPython, and Python .NET
still uses the CPython parser - so in neither of these cases is there a
simple hack I can use to work around it at the parser level.

Needless to say, as this affects the 2 major technologies I work with
currently, I would like an official way to work around Python keywords!

Mark.




From guido at beopen.com  Thu Aug 10 02:12:59 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 19:12:59 -0500
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: Your message of "Thu, 10 Aug 2000 09:03:19 +1000."
             <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com> 
References: <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com> 
Message-ID: <200008100012.TAA25968@cj20424-a.reston1.va.home.com>

> [Skip laments...]
> > Could this be extended to many/most/all current instances of
> > keywords in Python?  As Tim pointed out, Fortran has no
> > keywords.  It annoys me that I (for example) can't define
> > a method named "print".
> 
> Sometimes it is worse than annoying!
> 
> In the COM and CORBA worlds, it can be a showstopper - if an external
> object happens to expose a method or property named after a Python keyword,
> then you simply can not use it!
> 
> This has lead to COM support having to check _every_ attribute name it sees
> externally, and mangle it if a keyword.
> 
> A bigger support exists for .NET.  The .NET framework explicitly dictates
> that a compliant language _must_ have a way of overriding its own keywords
> when calling external methods (it was either that, or try and dictate a
> union of reserved words they can ban)
> 
> Eg, C# allows you to surround a keyword with brackets.  ie, I believe
> something like:
> 
> object.[if]
> 
> Would work in C# to provide access to an attribute named "if"
> 
> Unfortunately, Python COM is a layer ontop of CPython, and Python .NET
> still uses the CPython parser - so in neither of these cases is there a
> simple hack I can use to work around it at the parser level.
> 
> Needless to say, as this affects the 2 major technologies I work with
> currently, I would like an official way to work around Python keywords!

The JPython approach should be added to CPython.  This effectively
turns off keywords directly after ".", "def" and in a few other
places.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From MarkH at ActiveState.com  Thu Aug 10 01:17:35 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 10 Aug 2000 09:17:35 +1000
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <200008092314.SAA25157@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEBFDEAA.MarkH@ActiveState.com>

Guido commented yesterday that he doesn't tally votes (yay), but obviously
he still issues them!  It made me think of a Dutch Crocodile Dundee on a
visit to New York, muttering to his harassers as he whips something out
from under his clothing...

> -1 for indices

"You call that a -1,  _this_ is a -1"

:-)

[Apologies to anyone who hasn't seen the knife scene in the aforementioned
movie ;-]

Mark.




From MarkH at ActiveState.com  Thu Aug 10 01:21:33 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 10 Aug 2000 09:21:33 +1000
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <200008100012.TAA25968@cj20424-a.reston1.va.home.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com>

[Guido]
> The JPython approach should be added to CPython.  This effectively
> turns off keywords directly after ".", "def" and in a few other
> places.

Excellent.  I saw a reference to this after I sent my mail.

I'd be happy to help, in a long, drawn out, when-I-find-time kind of way.
What is the general strategy - is it simply to maintain a state in the
parser?  Where would I start to look into?

Mark.




From guido at beopen.com  Thu Aug 10 02:36:30 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 19:36:30 -0500
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: Your message of "Thu, 10 Aug 2000 09:21:33 +1000."
             <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com> 
References: <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com> 
Message-ID: <200008100036.TAA26235@cj20424-a.reston1.va.home.com>

> [Guido]
> > The JPython approach should be added to CPython.  This effectively
> > turns off keywords directly after ".", "def" and in a few other
> > places.
> 
> Excellent.  I saw a reference to this after I sent my mail.
> 
> I'd be happy to help, in a long, drawn out, when-I-find-time kind of way.
> What is the general strategy - is it simply to maintain a state in the
> parser?  Where would I start to look into?
> 
> Mark.

Alas, I'm not sure how easy it will be.  The parser generator will
probably have to be changed to allow you to indicate not to do a
reserved-word lookup at certain points in the grammar.  I don't know where
to start. :-(

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From moshez at math.huji.ac.il  Thu Aug 10 03:12:59 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 10 Aug 2000 04:12:59 +0300 (IDT)
Subject: [Python-Dev] Re: Un-stalling Berkeley DB support
In-Reply-To: <20000809141639.D2019@mad-scientist.com>
Message-ID: <Pine.GSO.4.10.10008100411500.26961-100000@sundial>

On Wed, 9 Aug 2000, Gregory P . Smith wrote:

> Out of curiosity, I just made a version of py-bsddb3 that uses SWIG
> v1.3alpha3 instead of SWIG v1.1-883.  It looks like 1.3a3 is still
> using strings for pointerish things.  One thing to note that may calm
> some peoples sense of "eww gross, pointer strings" is that programmers
> should never see them.  They are "hidden" behind the python shadow class.
> The pointer strings are only contained within the shadow objects "this"
> member.

It's not "ewww gross", it's "dangerous!". This makes Python "not safe",
since users can access random memory locations.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From moshez at math.huji.ac.il  Thu Aug 10 03:28:00 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 10 Aug 2000 04:28:00 +0300 (IDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBAEBFDEAA.MarkH@ActiveState.com>
Message-ID: <Pine.GSO.4.10.10008100425430.26961-100000@sundial>

On Thu, 10 Aug 2000, Mark Hammond wrote:

> Sometimes it is worse than annoying!
> 
> In the COM and CORBA worlds, it can be a showstopper - if an external
> object happens to expose a method or property named after a Python keyword,
> then you simply can not use it!

How about this (simple, but relatively unannoying) convention:

To COM name:
	- remove last "_", if any
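[Editor's sketch of the convention: Moshe's message only spells out the COM-ward half ("remove last '_', if any"); the Python-ward half (append "_" to a clashing keyword) is assumed here for illustration.]

```python
import keyword

def to_python_name(com_name):
    # Assumed forward direction: mangle a COM name that
    # clashes with a Python keyword by appending "_".
    if keyword.iskeyword(com_name):
        return com_name + "_"
    return com_name

def to_com_name(py_name):
    # Moshe's stated direction: remove the last "_", if any,
    # when it is hiding a Python keyword.
    if py_name.endswith("_") and keyword.iskeyword(py_name[:-1]):
        return py_name[:-1]
    return py_name
```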


From greg at cosc.canterbury.ac.nz  Thu Aug 10 03:29:38 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 13:29:38 +1200 (NZST)
Subject: [Python-Dev] A question for the Python Secret Police
In-Reply-To: <20000809083127.7FFF6303181@snelboot.oratrix.nl>
Message-ID: <200008100129.NAA13775@s454.cosc.canterbury.ac.nz>

Jack Jansen <jack at oratrix.nl>:

> Is the following morally allowed:
>   class Foo(Foo):

Well, the standard admonitions against 'import *' apply.
Whether using 'import *' or not, though, in the interests 
of clarity I think I would write it as

   class Foo(package1.mod.Foo):

On the other hand, the funkiness factor of it does
have a certain appeal!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 10 03:56:55 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 13:56:55 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
Message-ID: <200008100156.NAA13782@s454.cosc.canterbury.ac.nz>

> It looks nice, but i'm pretty sure it won't fly.

It will! Try it:

>>> for (x in a, y in b):
  File "<stdin>", line 1
    for (x in a, y in b):
                        ^
SyntaxError: invalid syntax

> how is the parser to know whether the lockstep form has been
> invoked?

The parser doesn't have to know as long as the compiler can
tell, and clearly one of them can.

> Besides, i think Guido has Pronounced quite firmly on zip().

That won't stop me from gently trying to change his mind
one day. The existence of zip() doesn't preclude something
more elegant being adopted in a future version.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 10 04:12:08 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 14:12:08 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <20000809130645.J266@xs4all.nl>
Message-ID: <200008100212.OAA13789@s454.cosc.canterbury.ac.nz>

Thomas Wouters <thomas at xs4all.net>:

> The only objection I can bring up is that parentheses are almost always
> optional, in Python, and this kind of violates it.

They're optional around tuple constructors, but this is not
a tuple constructor.

The parentheses around function arguments aren't optional
either, and nobody complains about that.

> 'for (x in a, y in b) in z:' *is* valid syntax...

But it's not valid Python:

>>> for (x in a, y in b) in z:
...   print x,y
... 
SyntaxError: can't assign to operator

> It might not be too pretty, but it can be worked around ;)

It wouldn't be any uglier than what's currently done with
the LHS of an assignment, which is parsed as a general
expression and treated specially later on.

There's-more-to-the-Python-syntax-than-what-it-says-in-
the-Grammar-file-ly,

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 10 04:19:32 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 14:19:32 +1200 (NZST)
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <200008100219.OAA13793@s454.cosc.canterbury.ac.nz>

Just van Rossum <just at letterror.com>:

>        for <index> indexing <element> in <seq>:

The idea is good, but I don't like this particular syntax much. It
seems to be trying to do too much at once, giving you both an index
and an element.  Also, the wording reminds me unpleasantly of COBOL
for some reason.

Some time ago I suggested

   for <index> over <sequence>:

as a way of getting hold of the index, and as a direct
replacement for 'for i in range(len(blarg))' constructs.
It could also be used for lockstep iteration applications,
e.g.

   for i over a:
      frobulate(a[i], b[i])

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 10 04:23:50 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 10 Aug 2000 14:23:50 +1200 (NZST)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <200008100036.TAA26235@cj20424-a.reston1.va.home.com>
Message-ID: <200008100223.OAA13796@s454.cosc.canterbury.ac.nz>

BDFL:

> The parser generator will probably have to be changed to allow you to
> indicate not to do a reserved-word lookup at certain points in the grammar.

Isn't it the scanner which recognises reserved words?

In that case, just make it not do that for the next
token after certain tokens.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From billtut at microsoft.com  Thu Aug 10 05:24:11 2000
From: billtut at microsoft.com (Bill Tutt)
Date: Wed, 9 Aug 2000 20:24:11 -0700 
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka
	!)
Message-ID: <58C671173DB6174A93E9ED88DCB0883D0A611B@red-msg-07.redmond.corp.microsoft.com>

The parser actually recognizes keywords atm.

We could change that so that each keyword is a token.
Then you would have something like:

keyword_allowed_name: KEY1 | KEY2 | KEY3 | ... | KEYN | NAME
and then tweak func_def like so:
func_def:  DEF keyword_allowed_name parameters ':' suite

I haven't pondered whether this would cause the DFA to fall into a
degenerate case.

Wondering where the metagrammer source file went to,
Bill


 -----Original Message-----
From: 	Greg Ewing [mailto:greg at cosc.canterbury.ac.nz] 
Sent:	Wednesday, August 09, 2000 7:24 PM
To:	python-dev at python.org
Subject:	Re: [Python-Dev] Python keywords (was Lockstep iteration -
eureka!)

BDFL:

> The parser generator will probably have to be changed to allow you to
> indicate not to do a reserved-word lookup at certain points in the grammar.

Isn't it the scanner which recognises reserved words?

In that case, just make it not do that for the next
token after certain tokens.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
http://www.python.org/mailman/listinfo/python-dev



From guido at beopen.com  Thu Aug 10 06:44:45 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 09 Aug 2000 23:44:45 -0500
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka !)
In-Reply-To: Your message of "Wed, 09 Aug 2000 20:24:11 MST."
             <58C671173DB6174A93E9ED88DCB0883D0A611B@red-msg-07.redmond.corp.microsoft.com> 
References: <58C671173DB6174A93E9ED88DCB0883D0A611B@red-msg-07.redmond.corp.microsoft.com> 
Message-ID: <200008100444.XAA27348@cj20424-a.reston1.va.home.com>

> The parser actually recognizes keywords atm.
> 
> We could change that so that each keyword is a token.
> Then you would have something like:
> 
> keyword_allowed_name: KEY1 | KEY2 | KEY3 | ... | KEYN | NAME
> and then tweak func_def like so:
> func_def:  DEF keyword_allowed_name parameters ':' suite
> 
> I haven't pondered whether or not this would cause the DFA to fall into a
> degenerate case or not.

This would be a good and simple approach.

> Wondering where the metagrammer source file went to,

It may not have existed; I may have handcrafted the metagrammar.c
file.

I believe the metagrammar was something like this:

MSTART: RULE*
RULE: NAME ':' RHS
RHS: ITEM ('|' ITEM)*
ITEM: (ATOM ['*' | '?'])+
ATOM: NAME | STRING | '(' RHS ')' | '[' RHS ']'

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From nowonder at nowonder.de  Thu Aug 10 09:02:12 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 07:02:12 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135] 
 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl>
Message-ID: <39925374.59D974FA@nowonder.de>

Thomas Wouters wrote:
> 
> For those of you not on the patches list, here's the summary of the patch I
> just uploaded to SF. In short, it adds "import x as y" and "from module
> import x as y", in the way Tim proposed this morning. (Probably late last
> night for most of you.)

-1 on the implementation. Although it looked okay on a first visual
   inspection, it builds a segfaulting python executable on linux:
      make distclean && ./configure && make test
   segfaults when first time starting python to run regrtest.py.
   Reversing the patch and doing a simple 'make test' has everything
   running again.

+1 on the idea, though. It just seems sooo natural. My first
   reaction before applying the patch was testing if Python
   did not already do this <0.25 wink - really did it>

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From nowonder at nowonder.de  Thu Aug 10 09:21:13 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 07:21:13 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch 
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de>
Message-ID: <399257E9.E399D52D@nowonder.de>

Peter Schneider-Kamp wrote:
> 
> -1 on the implementation. Although it looked okay on a first visual
>    inspection, it builds a segfaulting python executable on linux:
>       make distclean && ./configure && make test
>    segfaults when first time starting python to run regrtest.py.
>    Reversing the patch and doing a simple 'make test' has everything
>    running again.

Also note the following problems:

nowonder at mobility:~/python/python/dist/src > ./python
Python 2.0b1 (#12, Aug 10 2000, 07:17:46)  [GCC 2.95.2 19991024
(release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> from string import join
Speicherzugriffsfehler
nowonder at mobility:~/python/python/dist/src > ./python
Python 2.0b1 (#12, Aug 10 2000, 07:17:46)  [GCC 2.95.2 19991024
(release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> from string import join as j
  File "<stdin>", line 1
    from string import join as j
                             ^
SyntaxError: invalid syntax
>>>  

I think the problem is in compile.c, but that's just my bet.

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Thu Aug 10 07:24:19 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 07:24:19 +0200
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd))
In-Reply-To: <39925374.59D974FA@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 07:02:12AM +0000
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de>
Message-ID: <20000810072419.A17171@xs4all.nl>

On Thu, Aug 10, 2000 at 07:02:12AM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:
> > 
> > For those of you not on the patches list, here's the summary of the patch I
> > just uploaded to SF. In short, it adds "import x as y" and "from module
> > import x as y", in the way Tim proposed this morning. (Probably late last
> > night for most of you.)

> -1 on the implementation. Although it looked okay on a first visual
>    inspection, it builds a segfaulting python executable on linux:
>       make distclean && ./configure && make test
>    segfaults when first time starting python to run regrtest.py.
>    Reversing the patch and doing a simple 'make test' has everything
>    running again.

Try running 'make' in 'Grammar/' first. None of my patches that touch
Grammar include the changes to graminit.h and graminit.c, because they can
be quite lengthy (on the order of several thousand lines, in this case, if
I'm not mistaken.) So the same goes for the 'indexing for', 'range literal'
and 'augmented assignment' patches ;)

If it still goes crashy crashy after you re-make the grammar, I'll, well,
I'll, I'll make Baldrick eat one of his own dirty socks ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Thu Aug 10 09:37:44 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 07:37:44 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch 
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl>
Message-ID: <39925BC8.17CD051@nowonder.de>

Thomas Wouters wrote:
> 
> If it still goes crashy crashy after you re-make the grammar, I'll, well,
> I'll, I'll make Baldrick eat one of his own dirty socks ;)

I just found that out for myself. The SyntaxError in the
second example led my way ...

Sorry for the hassle, but next time please remind me that
I have to remake the grammar.

+1 on the implementation now.

perversely-minded-note:
What about 'from string import *, join as j'?
I think that would make sense, but as we are not fond of
the star in any case maybe we don't need that.

Peter

P.S.: I'd like to see Baldrick do that. What the heck is
      a Baldrick? I am longing for breakfast, so I hope
      I can eat it. Mjam.
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Thu Aug 10 07:55:10 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 07:55:10 +0200
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd))
In-Reply-To: <39925BC8.17CD051@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 07:37:44AM +0000
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de>
Message-ID: <20000810075510.B17171@xs4all.nl>

On Thu, Aug 10, 2000 at 07:37:44AM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:

> > If it still goes crashy crashy after you re-make the grammar, I'll, well,
> > I'll, I'll make Baldrick eat one of his own dirty socks ;)

> I just found that out for myself. The syntaxerror in the
> second examples lead my way ...

> Sorry for the hassle, but next time please remind me that
> I have to remake the grammar.

It was late last night, and I have to force myself not to write essays when
submitting a patch in the first place ;-P How about we fix the dependencies
so that the grammar gets re-made when necessary ? Or is there a good reason
not to do that ?

> perversely-minded-note:
> What about 'from string import *, join as j'?
> I think that would make sense, but as we are not fond of
> the star in any case maybe we don't need that.

'join as j' ? What would it do ? Import all symbols from 'string' into a
new namespace 'j' ? How about you do 'import string as j' instead ? It means
you will still be able to do 'j._somevar', which you probably wouldn't in
your example, but I don't think that's enough reason :P

> P.S.: I'd like to see Baldrick do that. What the heck is
>       a Baldrick? I am longing for breakfast, so I hope
>       I can eat it. Mjam.

Sorry :) They've been doing re-runs of Blackadder (1st through 4th, they're
nearly done) on one of the belgian channels, and it happens to be one of my
favorite comedy shows ;) It's a damned sight funnier than Crocodile Dundee,
hey, Mark ? <nudge> <nudge> <wink> <wink> :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Thu Aug 10 10:10:13 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 08:10:13 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch 
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de> <20000810075510.B17171@xs4all.nl>
Message-ID: <39926365.909B2835@nowonder.de>

Thomas Wouters wrote:
> 
> 'join as j' ? What would it do ? Import all symbols from 'string' into a
> new namespace 'j' ? How about you do 'import string as j' instead ? It means
> you will still be able to do 'j._somevar', which you probably wouldn't in
> your example, but I don't think that's enough reason :P

Okay, your misunderstanding of the semantics I had in mind is
reason enough <0.5 wink>.

from string import *, join as j
(or equivalently)
from string import join as j, *

would (in my book) import all "public" symbols from string
and assign j = join.

Assuming we have a Tkinter app (where all the tutorials
do a 'from Tkinter import *') and we don't like
'createtimerhandle'. Then the following would give
us tk_timer instead while still importing all the stuff
from Tkinter with their regular names:

from Tkinter import *, createtimerhandle as tk_timer

An even better way of doing this would be if it not only
gave you another name but also did not import the original
one. In this example our expression would import all the
symbols from Tkinter but would rename createtimerhandle as
tk_timer. That way you could still use * if you have a
namespace clash. E.g.:

from Tkinter import *, mainloop as tk_mainloop

def mainloop():
  <do some really useful stuff calling tk_mainloop()>

if __name__ == '__main__':
  mainloop()

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Thu Aug 10 08:23:16 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 08:23:16 +0200
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch #101135] 'import x as y' and 'from x import y as z' (fwd))
In-Reply-To: <39926365.909B2835@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 08:10:13AM +0000
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de> <20000810075510.B17171@xs4all.nl> <39926365.909B2835@nowonder.de>
Message-ID: <20000810082316.C17171@xs4all.nl>

On Thu, Aug 10, 2000 at 08:10:13AM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:

> > 'join as j' ? What would it do ? Import all symbols from 'string' into a
> > new namespace 'j' ? How about you do 'import string as j' instead ? It means
> > you will still be able to do 'j._somevar', which you probably wouldn't in
> > your example, but I don't think that's enough reason :P

> Okay, your misunderstanding of the semantics I had in mind are
> reason enough <0.5 wink>.

> from string import *, join as j
> (or equivalently)
> from string import join as j, *

Ahh, like that :) Well, I'd say 'no'. "from module import *" has only one
legitimate use, as far as I'm concerned, and that's taking over all symbols
without prejudice, to encapsulate another module. It shouldn't be used in
code that attempts to stay readable, so 'import join as j' is insanity ;-)
If you really must do the above, do it in two import statements.
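
For what it's worth, the two-statement spelling is already legal today. A minimal sketch, using the math module purely for illustration (the thread's Tkinter/string examples don't translate verbatim):

```python
# Proposed (hypothetical) syntax:  from math import *, sqrt as msqrt
# Already-legal equivalent, spelled as two import statements:
from math import *              # take over all public names
from math import sqrt as msqrt  # bind one of them under an extra name

print(msqrt(16.0))    # 4.0
print(sqrt is msqrt)  # True -- same function object, two names
```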

> An even better way of doing this would be if it not
> only gave you another name but also did not import
> the original one. In this example our expression
> would import all the symbols from Tkinter but would
> rename createtimerhandle as tk_timer. That way you
> could still use * if you have a namespace clash. E.g.:

No, that isn't possible. You can't pass a list of names to 'FROM_IMPORT *'
to omit loading them. (That's also the reason the patch needs a new opcode,
because you can't pass both the name to be imported from a module and the
name it should be stored at, to the FROM_IMPORT bytecode :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Thu Aug 10 10:52:31 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 08:52:31 +0000
Subject: 'import x as y' stops the show (was: Re: [Python-Dev] [Patch 
 #101135] 'import x as y' and 'from x import y as z' (fwd))
References: <20000809222749.O266@xs4all.nl> <39925374.59D974FA@nowonder.de> <20000810072419.A17171@xs4all.nl> <39925BC8.17CD051@nowonder.de> <20000810075510.B17171@xs4all.nl> <39926365.909B2835@nowonder.de> <20000810082316.C17171@xs4all.nl>
Message-ID: <39926D4F.83CAE9C2@nowonder.de>

Thomas Wouters wrote:
> 
> On Thu, Aug 10, 2000 at 08:10:13AM +0000, Peter Schneider-Kamp wrote:
> > An even better way of doing this would be if it not
> > only gave you another name but also did not import
> > the original one. In this example our expression
> > would import all the symbols from Tkinter but would
> > rename createtimerhandle as tk_timer. That way you
> > could still use * if you have a namespace clash. E.g.:
> 
> No, that isn't possible. You can't pass a list of names to 'FROM_IMPORT *'
> to omit loading them. (That's also the reason the patch needs a new opcode,
> because you can't pass both the name to be imported from a module and the
> name it should be stored at, to the FROM_IMPORT bytecode :)

Yes, it is possible. But as you correctly point out, not
without some serious changes to compile.c and ceval.c.

As we both agree (trying to channel you) it is not worth it
to make 'from import *' more usable, I think we should stop
this discussion before somebody thinks we seriously want
to do this.

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From mal at lemburg.com  Thu Aug 10 10:36:07 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 10 Aug 2000 10:36:07 +0200
Subject: [Python-Dev] Un-stalling Berkeley DB support
References: <20000809140321.A836@thyrsus.com>
Message-ID: <39926977.F8495AAD@lemburg.com>

"Eric S. Raymond" wrote:
> [Berkeley DB 3]
> When we last discussed this subject, there was general support for the
> functionality, but a couple of people went "bletch!" about SWIG-generated
> code (there was unhappiness about pointers being treated as strings).

AFAIK, recent versions of SWIG now make proper use of PyCObjects
to store pointers. Don't know how well this works though: I've
had a report that the new support can cause core dumps.
 
> Somebody said something about having SWIG patches to address this.  Is this
> the only real issue with SWIG-generated code?  If so, we can pursue two paths:
> (1) Hand Greg a patched SWIG so he can release a 2.1.2 version of the DB
> extension that meets our cleanliness criteria, and (2) press the SWIG guy
> to incorporate these patches in his next release.

Perhaps these patches are what I was talking about above ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From sjoerd at oratrix.nl  Thu Aug 10 12:59:06 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Thu, 10 Aug 2000 12:59:06 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib mailbox.py,1.20,1.21
In-Reply-To: Your message of Wed, 09 Aug 2000 20:05:30 -0700.
             <200008100305.UAA05018@slayer.i.sourceforge.net> 
References: <200008100305.UAA05018@slayer.i.sourceforge.net> 
Message-ID: <20000810105907.713B331047C@bireme.oratrix.nl>

On Wed, Aug 9 2000 Guido van Rossum wrote:

>           files = os.listdir(self.dirname)
> !         list = []
>           for f in files:
>               if pat.match(f):
> !                 list.append(f)
> !         list = map(long, list)
> !         list.sort()

Isn't this just:
	list = os.listdir(self.dirname)
	list = filter(pat.match, list)
	list = map(long, list)
	list.sort()

Or even shorter:
	list = map(long, filter(pat.match, os.listdir(self.dirname)))
	list.sort()
(Although I can and do see the advantage of the slightly longer
version.)
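
With the list comprehensions going into the language around this time, the whole filter/map/sort pipeline can be squeezed down further still; a sketch with made-up names, not the actual mailbox.py code (int stands in for long here):

```python
import os
import re

def message_numbers(dirname, pat=re.compile(r'^\d+$')):
    # filter(pat.match, ...), map(long, ...) and the sort, all in one go
    return sorted([int(f) for f in os.listdir(dirname) if pat.match(f)])
```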

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From gward at mems-exchange.org  Thu Aug 10 14:38:02 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Thu, 10 Aug 2000 08:38:02 -0400
Subject: [Python-Dev] Adding library modules to the core
Message-ID: <20000810083802.A7912@ludwig.cnri.reston.va.us>

[hmmm, this bounced 'cause the root partition on python.org was
full... let's try again, shall we?]

On 07 August 2000, Eric S. Raymond said:
> A few days ago I asked about the procedure for adding a module to the
> Python core library.  I have a framework class for things like menu systems
> and symbolic debuggers I'd like to add.
> 
> Guido asked if this was similar to the TreeWidget class in IDLE.  I 
> investigated and discovered that it is not, and told him so.  I am left
> with a couple of related questions:

Well, I just ploughed through this entire thread, and no one came up
with an idea I've been toying with for a while: the Python Advanced
Library.

This would be the place for well-known, useful, popular, tested, robust,
stable, documented module collections that are just too big or too
specialized to go in the core.  Examples: PIL, mxDateTime, mxTextTools,
mxODBC, ExtensionClass, ZODB, and anything else that I use in my daily
work and wish we didn't have to maintain separate builds of.  ;-)

Obviously this would be most useful as an RPM/Debian package/Windows
installer/etc., so that non-developers could be told, "You need to
install Python 1.6 and the Python Advanced Library 1.0 from ..." and
that's *it*.

Thoughts?  Candidates for admission?  Proposed requirements for admission?

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From gward at mems-exchange.org  Thu Aug 10 15:47:48 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Thu, 10 Aug 2000 09:47:48 -0400
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <Pine.GSO.4.10.10008101557580.1582-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 10, 2000 at 04:00:51PM +0300
References: <20000810083802.A7912@ludwig.cnri.reston.va.us> <Pine.GSO.4.10.10008101557580.1582-100000@sundial>
Message-ID: <20000810094747.C7912@ludwig.cnri.reston.va.us>

[cc'd to python-dev, since I think this belongs out in the open: Moshe,
if you really meant to keep this private, go ahead and slap my wrist]

On 10 August 2000, Moshe Zadka said:
> Greg, this sounds very close to PEP-206. Please let me know if you see
> any useful collaboration with it.

They're definitely related, and I think we're trying to address the same
problem -- but in a different way.

If I read the PEP (http://python.sourceforge.net/peps/pep-0206.html)
correctly, you want to fatten the standard Python distribution
considerably, first by adding lots of third-party C libraries to it, and
second by adding lots of third-party Python libraries ("module
distributions") to it.  This has the advantage of making all these
goodies immediately available in a typical Python installation.  But it
has a couple of serious disadvantages:
  * makes Python even harder to build and install; why should I have
    to build half a dozen major C libraries just to get a basic
    Python installation working?
  * all these libraries are redundant on modern free Unices -- at
    least the Linux distributions that I have experience with all
    include zlib, Tcl/Tk, libjpeg, and ncurses out of the box.
    Including copies of them with Python throws out one of the advantages
    of having all these installed as shared libraries, namely that
    there only has to be one copy of each in memory.
  * tell me again: what was the point of the Distutils if we just
    throw "everything useful" into the standard distribution?

Anyways, my idea -- the Python Advanced Library -- is to make all of
these goodies available as a single download, *separate* from Python
itself.  It could well be that the Advanced Library would be larger
than the Python distribution.  (Especially if Tcl/Tk migrates from the
standard Windows installer to the Advanced Library.)

Advantages:
  * keeps the standard distribution relatively small and focussed;
    IMHO the "big framework" libraries (PIL, NumPy, etc.) don't
    belong in the standard library.  (I think there could someday
    be a strong case for moving Tkinter out of the standard library
    if the Advanced Library really takes off.)
  * relieves licensing problems in the Python distribution; if something
    can't be included with Python for licence reasons, then put
    it in the Advanced Library
  * can have variations on the PAL for different platforms.  Eg. could
    have an RPM or Debian package that just requires libjpeg,
    libncurses, libtcl, libtk etc. for the various Linuces, and an
    enormous installer with separate copies of absolutely everything for
    Windows
  * excellent test case for the Distutils ;-)
  * great acronym: the Python Advanced Library is your PAL.

Sounds worth a PEP to me; I think it should be distinct from (and in
competition with!) PEP 206.

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From moshez at math.huji.ac.il  Thu Aug 10 16:09:23 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 10 Aug 2000 17:09:23 +0300 (IDT)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <20000810094747.C7912@ludwig.cnri.reston.va.us>
Message-ID: <Pine.GSO.4.10.10008101707240.1582-100000@sundial>

On Thu, 10 Aug 2000, Greg Ward wrote:

> Sounds worth a PEP to me; I think it should be distinct from (and in
> competition with!) PEP 206.

That's sort of why I wanted to keep this off Python-Dev: I don't think
so (I don't really want competing PEPs); I'd rather we hashed out our
differences in private and came up with a unified PEP to save everyone
on Python-Dev a lot of time. 

So let's keep the conversation off python-dev until we either reach
a consensus or agree to disagree.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From mal at lemburg.com  Thu Aug 10 16:28:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 10 Aug 2000 16:28:34 +0200
Subject: [Python-Dev] Adding library modules to the core
References: <Pine.GSO.4.10.10008101707240.1582-100000@sundial>
Message-ID: <3992BC12.BFA16AAC@lemburg.com>

Moshe Zadka wrote:
> 
> On Thu, 10 Aug 2000, Greg Ward wrote:
> 
> > Sounds worth a PEP to me; I think it should be distinct from (and in
> > competition with!) PEP 206.
> 
> That's sort of why I wanted to keep this off Python-Dev: I don't think
> so (I don't really want competing PEPs), I'd rather we hashed out our
> differences in private and come up with a unified PEP to save everyone
> on Python-Dev a lot of time.
> 
> So let's keep the conversation off python-dev until we either reach
> a consensus or agree to disagree.

Just a side note: As I recall Guido is not willing to include
all these third party tools to the core distribution, but rather
to a SUMO Python distribution, which then includes Python +
all those nice goodies available to the Python Community.

Maintaining this SUMO distribution should, IMHO, be left to
a commercial entity like e.g. ActiveState or BeOpen to ensure
quality and robustness -- this is not an easy job, believe me.
I've tried something like this before: it was called Python
PowerTools and should still be available at:

  http://starship.python.net/crew/lemburg/PowerTools-0.2.zip

I never got far, though, due to the complexity of getting
all that Good Stuff under one umbrella.

Perhaps you ought to retarget your PEP 206, Moshe ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From moshez at math.huji.ac.il  Thu Aug 10 16:30:40 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 10 Aug 2000 17:30:40 +0300 (IDT)
Subject: [Python-Dev] Adding library modules to the core
In-Reply-To: <3992BC12.BFA16AAC@lemburg.com>
Message-ID: <Pine.GSO.4.10.10008101729280.17061-100000@sundial>

On Thu, 10 Aug 2000, M.-A. Lemburg wrote:

> Just a side note: As I recall Guido is not willing to include
> all these third party tools to the core distribution, but rather
> to a SUMO Python distribution, which then includes Python +
> all those nice goodies available to the Python Community.

Yes, that's correct. 

> Maintaining this SUMO distribution should, IMHO, be left to
> a commercial entity like e.g. ActiveState or BeOpen to insure
> quality and robustness -- this is not an easy job, believe me.

Well, I'm hoping that distutils will make this easier.

> Perhaps you ought to retarget you PEP206, Moshe ?!

I'm sorry -- I'm too foolhardy. 

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From nowonder at nowonder.de  Thu Aug 10 19:00:14 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 17:00:14 +0000
Subject: [Python-Dev] 2nd thought: fully qualified host names
Message-ID: <3992DF9E.BF5A080C@nowonder.de>

Hi Guido!

After submitting the patch to smtplib, I got a bad feeling
about only trying to get the FQDN for the localhost case.

Shouldn't _get_fqdn_hostname() try to get the FQDN
for every argument passed? Currently it does so only
for len(name) == 0.

I think (but couldn't immediately find a reference) it
is required by some RFC. There is at least an Internet
Draft by the IETF that says it is required,
and a lot of references (mostly from Postfix) to some
RFC, too.

Of course, automatically trying to get the fully
qualified domain name would mean that the programmer
loses some flexibility (by losing responsibility).

If that is a problem I would make _get_fqdn_hostname
a public function (and choose a better name). helo()
and ehlo() could still call it for the local host case.

or-should-I-just-leave-things-as-they-are-ly y'rs
Peter

P.S.: I am cc'ing the list so everyone and Thomas can
      rush in and provide their RFC knowledge.
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From guido at beopen.com  Thu Aug 10 18:14:20 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 10 Aug 2000 11:14:20 -0500
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: Your message of "Thu, 10 Aug 2000 17:00:14 GMT."
             <3992DF9E.BF5A080C@nowonder.de> 
References: <3992DF9E.BF5A080C@nowonder.de> 
Message-ID: <200008101614.LAA28785@cj20424-a.reston1.va.home.com>

> Hi Guido!
> 
> After submitting the patch to smtplib, I got a bad feeling
> about only trying to get the FQDN for the localhost case.
> 
> Shouldn't _get_fqdn_hostname() try to get the FQDN
> for every argument passed? Currently it does so only
> for len(name) == 0.
> 
> I think (but couldn't immediately find a reference) it
> is required by some RFC. There is at least an Internet
> Draft by the IETF that says it is required,
> and a lot of references (mostly from Postfix) to some
> RFC, too.
> 
> Of course, automatically trying to get the fully
> qualified domain name would mean that the programmer
> loses some flexibility (by losing responsibility).
> 
> If that is a problem I would make _get_fqdn_hostname
> a public function (and choose a better name). helo()
> and ehlo() could still call it for the local host case.
> 
> or-should-I-just-leave-things-as-they-are-ly y'rs
> Peter
> 
> P.S.: I am cc'ing the list so everyone and Thomas can
>       rush in and provide their RFC knowledge.

Good idea -- I don't know anything about SMTP!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Thu Aug 10 17:40:26 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 17:40:26 +0200
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: <200008101614.LAA28785@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 10, 2000 at 11:14:20AM -0500
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com>
Message-ID: <20000810174026.D17171@xs4all.nl>

On Thu, Aug 10, 2000 at 11:14:20AM -0500, Guido van Rossum wrote:

> > After submitting the patch to smtplib, I got a bad feeling
> > about only trying to get the FQDN for the localhost case.
> > for len(name) == 0

> > I think (but couldn't immediately find a reference) it
> > is required by some RFC. There is at least an Internet
> > Draft by the IETF that says it is required,
> > and a lot of references (mostly from Postfix) to some
> > RFC, too.

If this is for helo() and ehlo(), screw it. No sane mailer, technician or
abuse desk employee pays any attention whatsoever to the HELO message,
except possibly for debugging.

The only use I've ever had for the HELO message is with clients that setup a
WinGate or similar braindead port-forwarding service on their dial-in
machine, and then buy one of our products, batched-SMTP. They then get their
mail passed to them via SMTP when they dial in... except that these
*cough*users*cough* redirect their SMTP port to *our* smtp server, creating
a confusing mail loop. We first noticed that because their server connected
to our server using *our* HELO message ;)

> > If that is a problem I would make _get_fqdn_hostname
> > a public function (and choose a better name). helo()
> > and ehlo() could still call it for the local host case.

I don't think this is worth the trouble. Assembling a FQDN is tricky at
best, and it's not needed in that many cases. (Sometimes you can break
something by trying to FQDN a name and getting it wrong ;) Where would this
function be used ? In SMTP chats ? Not necessary. A 'best guess' is enough
-- the remote SMTP server won't listen to you anyway, and will record the
IP address and its reverse DNS entry in the mail logs. Mailers that rely on
the HELO message are (rightly!) considered insecure, spam-magnets, and are a
thankfully dying race.

Of course, if anyone else needs a FQDN, it might be worth exposing this
algorithm.... but smtplib doesn't seem like the proper place ;P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Thu Aug 10 20:13:04 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 18:13:04 +0000
Subject: [Python-Dev] open 'Accepted' patches
Message-ID: <3992F0B0.C8CBF85B@nowonder.de>

Changing the patch view at sf to 'Accepted' in order to find
my patch, I was surprised by the number of patches that have
been accepted and are still lying around. In an insane attack
of self-destructiveness I decided to bring up the issue<wink>.

I know there can be a lot of issues with patches relative to
another patch etc., but letting them rot won't improve the
situation. "Checked in they should be." <PYoda> If there
are still problems with them or they have already been
checked in, the status should at least be 'Postponed',
'Out of Date', 'Rejected', 'Open' or 'Closed'.

Here is a list of the open 'Accepted' patches that have had
no comment for more than a week and which are not obviously
checked in yet (those that are, I have closed):

patch# | summary                             | last comment
-------+-------------------------------------+--------------
100510 | largefile support for Win64 (and...)| 2000-Jul-31
100511 | test largefile support (test_lar...)| 2000-Jul-31
100851 | traceback.py, with unicode except...| 2000-Aug-01
100874 | Better error message with Unbound...| 2000-Jul-26
100955 | ptags, eptags: regex->re, 4 char ...| 2000-Jul-26
100978 | Minor updates for BeOS R5           | 2000-Jul-25
100994 | Allow JPython to use more tests     | 2000-Jul-27

If I should review, adapt and/or check in some of these,
please tell me which ones.

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Thu Aug 10 18:30:10 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 10 Aug 2000 18:30:10 +0200
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: <200008101614.LAA28785@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 10, 2000 at 11:14:20AM -0500
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com>
Message-ID: <20000810183010.E17171@xs4all.nl>

On Thu, Aug 10, 2000 at 11:14:20AM -0500, Guido van Rossum wrote:

> > P.S.: I am cc'ing the list so everyone and Thomas can
> >       rush in and provide their RFC knowledge.

Oh, I forgot to point out: I have some RFC knowledge, but decided not to use
it in the case of the HELO message ;) I do have a lot of hands-on experience
with SMTP, and I know for a fact that very few MUAs that talk SMTP send a FQDN
in the HELO message. I think that sending the FQDN when we can (like we do
now) is a good idea, but I don't see a reason to force the HELO message to
be a FQDN. 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Thu Aug 10 18:43:41 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 10 Aug 2000 19:43:41 +0300 (IDT)
Subject: [Python-Dev] open 'Accepted' patches
In-Reply-To: <3992F0B0.C8CBF85B@nowonder.de>
Message-ID: <Pine.GSO.4.10.10008101941220.19610-100000@sundial>

(Meta: it seems every now and again a developer has a fit of neurosis. I
think this is a good thing.)

On Thu, 10 Aug 2000, Peter Schneider-Kamp wrote:

> patch# | summary                             | last comment
> -------+-------------------------------------+--------------
...
> 100955 | ptags, eptags: regex->re, 4 char ...| 2000-Jul-26

This is the only one I actually know about: Jeremy, Guido has approved it,
I assigned it to you for final eyeballing -- shouldn't be *too* hard to
check it in...
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From DavidA at ActiveState.com  Thu Aug 10 18:47:54 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Thu, 10 Aug 2000 09:47:54 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] FYI: Software Carpentry winners announced
Message-ID: <Pine.WNT.4.21.0008100945480.1052-100000@loom>

I wanted to make sure that everyone here knew that the Software Carpentry
winners were announced, and that our very own Ping won in the Track
category.  Winners in the Config and Build category were Lindsay Todd
(SapCat) and Steven Knight (sccons) respectively.  Congrats to all.

--david

http://software-carpentry.codesourcery.com/entries/second-round/results.html




From trentm at ActiveState.com  Thu Aug 10 18:50:15 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 10 Aug 2000 09:50:15 -0700
Subject: [Python-Dev] open 'Accepted' patches
In-Reply-To: <3992F0B0.C8CBF85B@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 06:13:04PM +0000
References: <3992F0B0.C8CBF85B@nowonder.de>
Message-ID: <20000810095015.A28562@ActiveState.com>

On Thu, Aug 10, 2000 at 06:13:04PM +0000, Peter Schneider-Kamp wrote:
> 
> Here is a list of the open 'Accepted' patches that have had
> no comment for more than a week and which are not obviously
> checked in yet (those that are, I have closed):
> 
> patch# | summary                             | last comment
> -------+-------------------------------------+--------------
> 100510 | largefile support for Win64 (and...)| 2000-Jul-31
> 100511 | test largefile support (test_lar...)| 2000-Jul-31

These two are mine. For a while I just thought that they had been checked in.
Guido poked me to check them in a week or so ago and I will this week.


Trent


-- 
Trent Mick
TrentM at ActiveState.com



From nowonder at nowonder.de  Fri Aug 11 01:29:28 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 23:29:28 +0000
Subject: [Python-Dev] 2nd thought: fully qualified host names
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl>
Message-ID: <39933AD8.B8EF5D59@nowonder.de>

Thomas Wouters wrote:
> 
> If this is for helo() and ehlo(), screw it. No sane mailer, technician or
> abuse desk employee pays any attention what so ever to the HELO message,
> except possibly for debugging.

Well, there are some MTAs (like Postfix) that seem to care. Postfix has
an option called "reject_non_fqdn_hostname" with the following description:

"""
Reject the request when the hostname in the client HELO (EHLO) command is not in 
fully-qualified domain form, as required by the RFC. The non_fqdn_reject_code
specifies the response code to rejected requests (default: 504)."""

The submitter of the bug which was addressed by the patch I checked in had
a problem with mailman and a postfix program that seemed to have this option
turned on.

What I am proposing for smtplib is to send every name given to
helo (or ehlo) through the guessing framework of gethostbyaddr()
if possible. Could this hurt anything?

> Of course, if anyone else needs a FQDN, it might be worth exposing this
> algorithm.... but smtplib doesn't seem like the proper place ;P

Agreed. Where could it go?

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From nowonder at nowonder.de  Fri Aug 11 01:34:38 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 10 Aug 2000 23:34:38 +0000
Subject: [Python-Dev] 2nd thought: fully qualified host names
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810183010.E17171@xs4all.nl>
Message-ID: <39933C0E.7A84D6E2@nowonder.de>

Thomas Wouters wrote:
> 
> Oh, I forgot to point out: I have some RFC knowledge, but decided not to use
> it in the case of the HELO message ;) I do have a lot of hands-on experience
> with SMTP, and I know for a fact that very few MUAs that talk SMTP send a FQDN
> in the HELO message. I think that sending the FQDN when we can (like we do
> now) is a good idea, but I don't see a reason to force the HELO message to
> be a FQDN.

I don't want to force anything. I think it's time for some
code to speak for itself, rather than me trying to
speak for it <0.8 wink>:

def _get_fqdn_hostname(name):
    name = string.strip(name)
    if len(name) == 0:
        name = socket.gethostname()
    try:
        hostname, aliases, ipaddrs = socket.gethostbyaddr(name)
    except socket.error:
        pass
    else:
        aliases.insert(0, hostname)
        for name in aliases:
            if '.' in name:
                break
        else:
            name = hostname
    return name

This is the same function as the one I checked into
smtplib.py with the exception of executing the try-block
also for names with len(name) != 0.
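
As an aside, a dot-searching algorithm essentially like this one is what the standard library now exposes as socket.getfqdn(), so current code can lean on that directly. A sketch; the empty-string fallback mirrors the len(name) == 0 case above:

```python
import socket

# getfqdn("") falls back to gethostname(), then walks the gethostbyaddr()
# aliases looking for a name that contains a dot -- much like the code above.
print(socket.getfqdn(""))
```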

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From bckfnn at worldonline.dk  Fri Aug 11 00:17:47 2000
From: bckfnn at worldonline.dk (Finn Bock)
Date: Thu, 10 Aug 2000 22:17:47 GMT
Subject: [Python-Dev] Freezing unicode codecs.
Message-ID: <3993287a.1852013@smtp.worldonline.dk>

While porting the unicode API and the encoding modules to JPython I came
across a problem which may also (or maybe not) exists in CPython.

jpythonc is a compiler for JPython which tries to track dependencies
between modules in an attempt to detect which modules an application or
applet uses. I have the impression that some of the freeze tools for
CPython do something similar.

A call to unicode("abc", "cp1250") and "abc".encode("cp1250") will cause
the encodings.cp1250 module to be loaded as a side effect. The freeze
tools will have a hard time figuring this out by scanning the python
source.


For JPython I'm leaning towards making it a requirement that the
encodings must be loaded explicitly from somewhere in the application. Adding


   import encodings.cp1250

somewhere in the application will allow jpythonc to include this python
module in the frozen application.
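
As a sketch of that workaround (assuming the CPython-style package name encodings.cp1250; the spelling under JPython may differ):

```python
# The dynamic codec lookup in "abc".encode("cp1250") is invisible to a
# static dependency scanner; an explicit import of the codec module is not.
import encodings.cp1250  # deliberately "unused" -- keeps the codec bundled

print("abc".encode("cp1250"))
```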

How does CPython solve this?


PS. The latest release of the JPython errata has full unicode support
and includes the "sre" module and unicode codecs.

    http://sourceforge.net/project/filelist.php?group_id=1842


regards,
finn



From thomas at xs4all.net  Fri Aug 11 00:50:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 00:50:13 +0200
Subject: [Python-Dev] 2nd thought: fully qualified host names
In-Reply-To: <39933AD8.B8EF5D59@nowonder.de>; from nowonder@nowonder.de on Thu, Aug 10, 2000 at 11:29:28PM +0000
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl> <39933AD8.B8EF5D59@nowonder.de>
Message-ID: <20000811005013.F17171@xs4all.nl>

On Thu, Aug 10, 2000 at 11:29:28PM +0000, Peter Schneider-Kamp wrote:
> Thomas Wouters wrote:

> > If this is for helo() and ehlo(), screw it. No sane mailer, technician or
> > abuse desk employee pays any attention whatsoever to the HELO message,
> > except possibly for debugging.

> Well, there are some MTAs (like Postfix) that seem to care. Postfix has
> an option called "reject_non_fqdn_hostname" with the following description:

> """
> Reject the request when the hostname in the client HELO (EHLO) command is not in 
> fully-qualified domain form, as required by the RFC. The non_fqdn_reject_code
> specifies the response code to rejected requests (default: 504)."""

> The submitter of the bug which was addressed by the patch I checked in had
> a problem with mailman and a postfix program that seemed to have this option
> turned on.

Fine, the patch addresses that. When the hostname passed to smtplib is ""
(which is the default), it should be turned into a FQDN. I agree. However,
if someone passed in a name, we do not know if they even *want* the name
turned into a FQDN. In the face of ambiguity, refuse the temptation to
guess.

Turning on this Postfix feature (which is completely along the lines of
Postfix, and I applaud Wietse(*) for supplying it ;) is a tricky decision at
best. Like I said in the other email, there are a *lot* of MUAs and MTAs and
other throw-away-programs-gone-commercial that don't speak proper SMTP, and
don't even pretend to send a FQDN. Most Windows clients send the machine's
netbios name, for crying out loud. Turning this on would break all those
clients, and more. I'm not too worried about it breaking Python scripts that
are explicitly setting the HELO response -- those scripts are probably doing
it for a reason.

To note, I haven't seen software that uses smtplib that does supply their
own HELO message, except for a little script I saw that was *explicitly*
setting the HELO message in order to test the SMTP server on the other end.
That instance would certainly have been broken by rewriting the name into a
FQDN.

> > Of course, if anyone else needs a FQDN, it might be worth exposing this
> > algorithm.... but smtplib doesn't seem like the proper place ;P

> Agreed. Where could it go?

On second though, I can't imagine anyone needing such a function outside of
smtplib. FQDN's are nice for reporting URIs to the outside world, but for
connecting to a certain service you simply pass the hostname you got (which
can be an ipaddress) through to the OS-provided network layer. Kind of like
not doing type checking on the objects passed to your function, but instead
assuming it conforms to an interface and will work correctly or fail
obviously when attempted to be used as an object of a certain type.

So, make it an exposed function on smtplib, for those people who don't want
to set the HELO message to "", but do want it to be rewritten into a FQDN.

(*) Amazing how all good software came to be through Dutch people. Even
Linux: if it wasn't for Tanenbaum, it wouldn't be what it is today :-)

PS: I'm talking as a sysadmin for a large ISP here, not as a user-friendly
library-implementor. We won't be able to turn on this postfix feature for
many, many years, and I wouldn't advise anyone who expects mail to be sent
from the internet to a postfix machine to enable it, either. But if your
mailserver is internal-only, or with fixed entrypoints that are running
reliable software, I can imagine people turning it on. It would please me no
end if we could turn this on ! I spend on average an hour a day closing
customer-accounts and helping them find out why their mailserver sucks. And
I still discover new mailserver software and new ways for them to suck, it's
really amazing ;)

that-PS-was-a-bit-long-for-a-signoff-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Fri Aug 11 02:44:06 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 10 Aug 2000 20:44:06 -0400
Subject: Keyword abuse  (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <14737.35195.31385.867664@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOECAGPAA.tim_one@email.msn.com>

[Skip Montanaro]
> Could this be extended to many/most/all current instances of
> keywords in Python?  As Tim pointed out, Fortran has no keywords.
> It annoys me that I (for example) can't define a method named "print".

This wasn't accidental in Fortran, though:  X3J3 spent many tedious hours
fiddling the grammar to guarantee it was always possible.  Python wasn't
designed with this in mind, and e.g. there's no meaningful way to figure out
whether

    raise

is an expression or a "raise stmt" in the absence of keywords.  Fortran is
very careful to make sure such ambiguities can't arise.

A *reasonable* thing is to restrict global keywords to special tokens that
can begin a line.  There's real human and machine parsing value in being
able to figure out what *kind* of stmt a line represents from its first
token.  So letting "print" be a variable name too would, IMO, really suck.

But after that, I don't think users have any problem understanding that
different stmt types can have different syntax.  For example, if "@" has a
special meaning in "print" statements, big deal.  Nobody splits a spleen over
seeing

    a   b, c, d

when "a" happens to be "exec" or "print" today, despite that most stmts
don't allow that syntax, and even between "exec" and "print" it has very
different meanings.  Toss in "global", "del" and "import" too for other
twists on what the "b, c, d" part can look like and mean.

As far as I'm concerned, each stmt type can have any syntax it damn well
likes!   Identifiers with special meaning *after* a keyword-introduced stmt
can usually be anything at all without making them global keywords (be it
"as" after "import", or "indexing" after "for", or ...).  The only thing
Python is missing then is a lisp stmt <wink>:

    lisp (setq a (+ a 1))

Other than that, the JPython hack looks cool too.

Note that SSKs (stmt-specific keywords) present a new problem to colorizers
(or moral equivalents like font-lock), and to other tools that do more than
a trivial parse.

the-road-to-p3k-has-toll-booths-ly y'rs  - tim





From tim_one at email.msn.com  Fri Aug 11 02:44:08 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 10 Aug 2000 20:44:08 -0400
Subject: PEP praise (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <Pine.LNX.4.10.10008091503171.497-100000@skuld.lfw.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCAECBGPAA.tim_one@email.msn.com>

[Ka-Ping Yee]
> ...
> Surely a PEP isn't required for a couple of built-in functions that
> are simple and well understood?  You can just call thumbs-up or
> thumbs-down and be done with it.

Only half of that is true, and even then only partially:  if the verdict is
thumbs-up, *almost* cool, except that newcomers delight in pestering "but
how come it wasn't done *my* way instead?".  You did a bit of that yourself
in your day, you know <wink>.  We're hoping the stream of newcomers never
ends, but the group of old-timers willing and able to take an hour or two to
explain the past in detail is actually dwindling (heck, you can count the
Python-Dev members chipping in on Python-Help with a couple of fingers, and
if anything fewer still active on c.l.py).

If it's thumbs-down, in the absence of a PEP it's much worse:  it will just
come back again, and again, and again, and again.  The sheer repetition in
these endlessly recycled arguments all but guarantees that most old-timers
ignore these threads completely.

A prime purpose of the PEPs is to be the community's collective memory, pro
or con, so I don't have to be <wink>.  You surely can't believe this is the
first time these particular functions have been pushed for core adoption!?
If not, why do we need to have the same arguments all over again?  It's not
because we're assholes, nor because there's anything truly new here;
it's simply because a mailing list has no coherent memory.

Not so much as a comma gets changed in an ANSI or ISO std without an
elaborate pile of proposal paperwork and formal reviews.  PEPs are a very
lightweight mechanism compared to that.  And it would take you less time to
write a PEP for this than I alone spent reading the 21 msgs waiting for me
in this thread today.  Multiply the savings by billions <wink>.

world-domination-has-some-scary-aspects-ly y'rs  - tim





From Vladimir.Marangozov at inrialpes.fr  Fri Aug 11 03:59:30 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 03:59:30 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
Message-ID: <200008110159.DAA09540@python.inrialpes.fr>

I'm looking at preventing core dumps due to recursive calls. With
simple nested call counters for every function in object.c, limited to
500 levels deep recursions, I think this works okay for repr, str and
print. It solves most of the complaints, like:

class Crasher:
	def __str__(self): print self

print Crasher()

With such protection, instead of a core dump, we'll get an exception:

RuntimeError: Recursion too deep


So far, so good. 500 nested calls to repr, str or print are likely
to be programming bugs. Now I wonder whether it's a good idea to do
the same thing for getattr and setattr, to avoid crashes like:

class Crasher:
	def __getattr__(self, x): return self.x 

Crasher().bonk

Solving this the same way is likely to slow things down a bit, but
would prevent the crash. OTOH, in a complex object hierarchy with
tons of delegation and/or lookup dispatching, 500 nested calls is
probably not enough. Or am I wondering too much? Opinions?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From bwarsaw at beopen.com  Fri Aug 11 05:00:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 10 Aug 2000 23:00:32 -0400 (EDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
References: <ECEPKNMJLHAPFFJHDOJBGEBGDEAA.MarkH@ActiveState.com>
	<200008100036.TAA26235@cj20424-a.reston1.va.home.com>
Message-ID: <14739.27728.960099.342321@anthem.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    GvR> Alas, I'm not sure how easy it will be.  The parser generator
    GvR> will probably have to be changed to allow you to indicate not
    GvR> to do a resword lookup at certain points in the grammar.  I
    GvR> don't know where to start. :-(

Yet another reason why it would be nice to (eventually) merge the
parsing technology in CPython and JPython.

i-don't-wanna-work-i-jes-wanna-bang-on-my-drum-all-day-ly y'rs,
-Barry



From MarkH at ActiveState.com  Fri Aug 11 08:15:00 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 11 Aug 2000 16:15:00 +1000
Subject: [Python-Dev] Patches and checkins for 1.6
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>

I would like a little guidance on how to handle patches during this 1.6
episode.

My understanding of CVS tells me that 1.6 has forked from the main
development tree.  Any work done in the 1.6 branch will need to also be
done in the main branch.  Is this correct?

If so, it means that all patches assigned to me need to be applied and
tested twice, which involves completely refetching the entire tree, and
rebuilding the world?

Given that 1.6 appears to be mainly an exercise in posturing by CNRI, is it
reasonable that I hold some patches off while I'm working with 1.6, and
check them in when I move back to the main branch?  Surely no one will
stick with 1.6 in the long (or even medium) term, once all active
development of that code ceases?

Of course, this wouldn't include critical bugs, but no one is mad enough to
assign them to me anyway <wink>

Confused-and-in-need-of-a-machine-upgrade ly,

Mark.




From tim_one at email.msn.com  Fri Aug 11 08:48:56 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 11 Aug 2000 02:48:56 -0400
Subject: [Python-Dev] Patches and checkins for 1.6
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIECPGPAA.tim_one@email.msn.com>

[Mark Hammond]
> I would like a little guidance on how to handle patches during this
> 1.6 episode.
>
> My understanding of CVS tells me that 1.6 has forked from the
> main development tree.  Any work done in the 1.6 branch will need
> to also be done in the main branch.  Is this correct?

Don't look at me -- I first ran screaming in terror from CVS tricks more
than a decade ago, and haven't looked back.  OTOH, I don't know of *any*
work done in the 1.6 branch yet that needs also to be done in the 2.0
branch.  Most of what Fred Drake has been doing is in the other direction,
and the rest has been fixing buglets unique to 1.6.

> If so, it means that all patches assigned to me need to be applied
> and tested twice, which involves completely refetching the entire
> tree, and rebuilding the world?

Patches with new features should *not* go into the 1.6 branch at all!  1.6
is meant to reflect only work that CNRI has clear claims to, plus whatever
bugfixes are needed to make that a good release.  Actual cash dollars for
Unicode development were funneled through CNRI, and that's why the Unicode
features are getting backstitched into it.  They're unique, though.

> Given that 1.6 appears to be mainly an exercise in posturing by
> CNRI,

Speaking on behalf of BeOpen PythonLabs, 1.6 is a milestone in Python
development, worthy of honor, praise and repeated downloading by all.  We at
BeOpen PythonLabs regret the unfortunate misconceptions that have arisen
about its true nature, and fully support CNRI's wise decision to force a
release of Python 1.6 in the public interest.

> is it reasonable that I hold some patches off while I'm working
> with 1.6, and check them in when I move back to the main branch?

I really don't know what you're doing.  If you find a bug in 1.6 that's also
a bug in 2.0, it should go without saying that we'd like that fixed ASAP in
2.0 as well.  But since that went without saying, and you seem to be saying
something else, I'm not sure what you're asking.  If you're asking whether
you're allowed to maximize your own efficiency, well, only Guido can force
you to do something self-damaging <wink>.

> Surely no one will stick with 1.6 in the long (or even
> medium) term, once all active development of that code ceases?

Active development of the 1.6 code has already ceased, far as I can tell.
Maybe some more Unicode patches?  Other than that, just bugfixes as needed.
It's down to a trickle.  We're aiming for a quick beta cycle on 1.6b1, and--
last I heard, and barring scads of fresh bug reports --intending to release
1.6 final next.  Then bugs opened against 1.6 will be answered by "fixed in
2.0".

> Of course, this wouldn't include critical bugs, but no one is mad
> enough to assign them to me anyway <wink>
>
> Confused-and-in-need-of-a-machine-upgrade ly,

And we'll buy you one, too, if you promise to use it to fix the test_fork1
family of bugs on SMP Linux boxes!

don't-forget-that-patches-to-1.6-still-need-cnri-release-forms!-
    and-that-should-clarify-everything-ly y'rs  - tim





From gstein at lyra.org  Fri Aug 11 09:07:29 2000
From: gstein at lyra.org (Greg Stein)
Date: Fri, 11 Aug 2000 00:07:29 -0700
Subject: [Python-Dev] Patches and checkins for 1.6
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 11, 2000 at 04:15:00PM +1000
References: <ECEPKNMJLHAPFFJHDOJBGEFCDEAA.MarkH@ActiveState.com>
Message-ID: <20000811000729.M19525@lyra.org>

On Fri, Aug 11, 2000 at 04:15:00PM +1000, Mark Hammond wrote:
>...
> If so, it means that all patches assigned to me need to be applied and
> tested twice, which involves completely refetching the entire tree, and
> rebuilding the world?

Just fetch two trees.

c:\src16
c:\src20

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From mal at lemburg.com  Fri Aug 11 10:04:48 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 11 Aug 2000 10:04:48 +0200
Subject: [Python-Dev] Freezing unicode codecs.
References: <3993287a.1852013@smtp.worldonline.dk>
Message-ID: <3993B3A0.28500B22@lemburg.com>

Finn Bock wrote:
> 
> While porting the unicode API and the encoding modules to JPython I came
> across a problem which may also (or maybe not) exists in CPython.
> 
> jpythonc is a compiler for jpython which tries to track dependencies
> between modules in an attempt to detect which modules an application or
> applet uses. I have the impression that some of the freeze tools for
> CPython do something similar.
> 
> A call to unicode("abc", "cp1250") and "abc".encode("cp1250") will cause
> the encodings.cp1250 module to be loaded as a side effect. The freeze
> tools will have a hard time figuring this out by scanning the python
> source.
> 
> For JPython I'm leaning towards making it a requirement that the
> encodings must be loaded explicitly from somewhere in the application. Adding
> 
>    import encodings.cp1250
> 
> somewhere in the application will allow jpythonc to include this python
> module in the frozen application.
> 
> How does CPython solve this?

It doesn't. The design of the codec registry is such that it
uses search functions which then locate and load the codecs.
These search functions can implement whatever scheme they desire
for the lookup, and also with regard to loading the codec, e.g. they
could get the data from a ZIP archive.

This design was chosen to allow drop-in configuration of the
Python codecs. Applications can easily add new codecs to the
registry by registering a new search function (and without
having to copy files into the encodings Lib subdir).
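As a sketch of that drop-in design (the search function and the alias name
here are purely illustrative; only codecs.register() and codecs.lookup() are
standard library API), an application can map its own encoding name onto an
existing codec without touching the encodings package at all:

```python
import codecs

def my_search(name):
    # Hypothetical search function: resolve an application-specific
    # alias to a codec the registry already knows about.  Returning
    # None tells the registry to try the next registered function.
    if name == 'myalias':
        return codecs.lookup('iso8859-15')
    return None

codecs.register(my_search)

# The alias now works anywhere an encoding name is accepted.
data = 'abc'.encode('myalias')
```

A search function fetched this way could just as well pull the codec data
out of a ZIP archive, which is the point of the registry design.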
 
When it comes to making an application freezable, I'd suggest
adding explicit imports to some freeze support module in the
application. There are other occasions where this is needed
too, e.g. for packages using lazy import of modules such
as mx.DateTime.

This module would then make sure freeze.py finds the right
modules to include in its output.

> PS. The latest release of the JPython errata has full unicode support
> and includes the "sre" module and unicode codecs.
> 
>     http://sourceforge.net/project/filelist.php?group_id=1842

Cool :-)
 
> regards,
> finn
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From nowonder at nowonder.de  Fri Aug 11 12:29:04 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Fri, 11 Aug 2000 10:29:04 +0000
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl>
Message-ID: <3993D570.7578FE71@nowonder.de>

After sleeping over it, I noticed that at least
BaseHTTPServer and ftplib also use a similar
algorithm to get a fully qualified domain name.

Together with smtplib there are four occurrences
of the algorithm (2 in BaseHTTPServer). I think
it would be better to have one implementation
instead of four.

First I thought it could be socket.get_fqdn(),
but it seems a bit troublesome to write it in C.

Should this go somewhere? If yes, where should
it go?

I'll happily prepare a patch as soon as I know
where to put it.
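For reference, the duplicated algorithm is roughly the following (a sketch of
the logic shared by smtplib and BaseHTTPServer; the function name and exact
fallbacks are only a suggestion):

```python
import socket

def get_fqdn(name=''):
    # Sketch: fall back to the local host name, then look for an
    # alias containing a dot, which is taken to be fully qualified.
    name = name.strip()
    if not name or name == '0.0.0.0':
        name = socket.gethostname()
    try:
        hostname, aliases, ipaddrs = socket.gethostbyaddr(name)
    except socket.error:
        pass  # resolution failed; return the name unchanged
    else:
        aliases.insert(0, hostname)
        for node in aliases:
            if '.' in node:
                return node
    return name
```

Having this in one place would let smtplib, ftplib and BaseHTTPServer all
call it instead of carrying their own copies.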

Peter
--
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From moshez at math.huji.ac.il  Fri Aug 11 10:40:08 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 11 Aug 2000 11:40:08 +0300 (IDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <3993D570.7578FE71@nowonder.de>
Message-ID: <Pine.GSO.4.10.10008111136390.27824-100000@sundial>

On Fri, 11 Aug 2000, Peter Schneider-Kamp wrote:

> First I thought it could be socket.get_fqdn(),
> but it seems a bit troublesome to write it in C.
> 
> Should this go somewhere?

Yes. We need some OnceAndOnlyOnce mentality here...

> If yes, where should
> it go?

Good question. You'll notice that SimpleHTTPServer imports shutil for
copyfileobj, because I had no good answer to a similar question. GS seems
to think "put it somewhere" is a good enough answer. I think I might
agree.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From barry at scottb.demon.co.uk  Fri Aug 11 13:42:11 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Fri, 11 Aug 2000 12:42:11 +0100
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008110159.DAA09540@python.inrialpes.fr>
Message-ID: <000401c00389$2fa577b0$060210ac@private>

Why not set a limit in the interpreter? Fixing this for every call in object.c
seems like a lot of hard work and will always leave holes.

For embedding Python, being able to control the recursion depth of the interpreter
is very useful. I would want to be able to set, from C, the max call depth limit
and the current call depth limit. I'd expect Python to set a min call depth limit.

		Barry


> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Vladimir Marangozov
> Sent: 11 August 2000 03:00
> To: Python core developers
> Subject: [Python-Dev] Preventing recursion core dumps
> 
> 
> 
> I'm looking at preventing core dumps due to recursive calls. With
> simple nested call counters for every function in object.c, limited to
> 500 levels deep recursions, I think this works okay for repr, str and
> print. It solves most of the complaints, like:
> 
> class Crasher:
> 	def __str__(self): print self
> 
> print Crasher()
> 
> With such protection, instead of a core dump, we'll get an exception:
> 
> RuntimeError: Recursion too deep
> 
> 
> So far, so good. 500 nested calls to repr, str or print are likely
> to be programming bugs. Now I wonder whether it's a good idea to do
> the same thing for getattr and setattr, to avoid crashes like:
> 
> class Crasher:
> 	def __getattr__(self, x): return self.x 
> 
> Crasher().bonk
> 
> Solving this the same way is likely to slow things down a bit, but
> would prevent the crash. OTOH, in a complex object hierarchy with
> tons of delegation and/or lookup dispatching, 500 nested calls is
> probably not enough. Or am I wondering too much? Opinions?
> 
> -- 
>        Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
> http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 



From guido at beopen.com  Fri Aug 11 14:47:09 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 07:47:09 -0500
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: Your message of "Fri, 11 Aug 2000 03:59:30 +0200."
             <200008110159.DAA09540@python.inrialpes.fr> 
References: <200008110159.DAA09540@python.inrialpes.fr> 
Message-ID: <200008111247.HAA03687@cj20424-a.reston1.va.home.com>

> I'm looking at preventing core dumps due to recursive calls. With
> simple nested call counters for every function in object.c, limited to
> 500 levels deep recursions, I think this works okay for repr, str and
> print. It solves most of the complaints, like:
> 
> class Crasher:
> 	def __str__(self): print self
> 
> print Crasher()
> 
> With such protection, instead of a core dump, we'll get an exception:
> 
> RuntimeError: Recursion too deep
> 
> 
> So far, so good. 500 nested calls to repr, str or print are likely
> to be programming bugs. Now I wonder whether it's a good idea to do
> the same thing for getattr and setattr, to avoid crashes like:
> 
> class Crasher:
> 	def __getattr__(self, x): return self.x 
> 
> Crasher().bonk
> 
> Solving this the same way is likely to slow things down a bit, but
> would prevent the crash. OTOH, in a complex object hierarchy with
> tons of delegation and/or lookup dispatching, 500 nested calls is
> probably not enough. Or am I wondering too much? Opinions?

In your examples there's recursive Python code involved.  There's
*already* a generic recursion check for that, but the limit is too
high (the latter example segfaults for me too, while a simple def f():
f() gives a RuntimeError).

It seems better to tune the generic check than to special-case str,
repr, and getattr.
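For illustration, here is what tuning the generic check buys you, using the
recursion-limit knob as exposed to Python code (assuming such an interface is
available; the limit value below is arbitrary and chosen low for the demo):

```python
import sys

def f():
    return f()               # unbounded Python-level recursion

sys.setrecursionlimit(100)   # arbitrarily low limit, demo only
try:
    f()
    guarded = False
except RuntimeError:         # the generic check fires, no core dump
    guarded = True

sys.setrecursionlimit(1000)  # restore a sane limit afterwards
assert guarded
```

With the limit tuned so the check fires before the C stack overflows, both
Crasher examples above would raise RuntimeError instead of dumping core.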

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Aug 11 14:55:29 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 07:55:29 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug
 #111620] lots of use of send() without verifying amount of data sent.
Message-ID: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>

I just noticed this.  Is this true?  Shouldn't we change send() to
raise an error instead of returning a small number?  (The number of
bytes written can be an attribute of the exception.)

Don't look at me for implementing this, sorry, no time...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

------- Forwarded Message

Date:    Thu, 10 Aug 2000 16:39:48 -0700
From:    noreply at sourceforge.net
To:      scott at chronis.pobox.com, 0 at delerium.i.sourceforge.net,
	 python-bugs-list at python.org
Subject: [Python-bugs-list] [Bug #111620] lots of use of send() without
	 verifying amount of data sent.

Bug #111620, was updated on 2000-Aug-10 16:39
Here is a current snapshot of the bug.

Project: Python
Category: Library
Status: Open
Resolution: None
Bug Group: None
Priority: 5
Summary: lots of use of send() without verifying amount of data sent.

Details: a quick grep of the standard python library (below) shows that there
is lots of unchecked use of the send() function.  Every unix system I've ever
used states that send() returns the number of bytes sent, which can be <
length(<string>).  Using socket.send(s) without verifying that the return
value is equal to the length of s is careless and can result in loss of data.

I just submitted a patch for smtplib's use of send(), have patched a piece of
Zope the same way, and get the feeling that it's becoming standard to call
send() without checking that the amount of data sent is the intended amount.
While this is OK for a quick script, I don't feel it's OK for library code or
anything that might be used in production.

scott

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=111620&group_id=5470

_______________________________________________
Python-bugs-list maillist  -  Python-bugs-list at python.org
http://www.python.org/mailman/listinfo/python-bugs-list

------- End of Forwarded Message




From gmcm at hypernet.com  Fri Aug 11 14:32:44 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 08:32:44 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>
Message-ID: <1246125329-123433164@hypernet.com>

[bug report] 
> Details: a quick grep of the standard python library (below)
> shows that there is lots of unchecked use of the send() 
> function.
[Guido]
> I just noticed this.  Is this true?  Shouldn't we change send()
> to raise an error instead of returning a small number?  (The
> number of bytes written can be an attribute of the exception.)

No way! You'd break 90% of my sockets code! People who 
don't want to code proper sends / recvs can use that sissy 
makefile junk.

- Gordon



From thomas at xs4all.net  Fri Aug 11 14:31:43 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 14:31:43 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 11, 2000 at 07:55:29AM -0500
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>
Message-ID: <20000811143143.G17171@xs4all.nl>

On Fri, Aug 11, 2000 at 07:55:29AM -0500, Guido van Rossum wrote:

> I just noticed this.  Is this true?  Shouldn't we change send() to
> raise an error instead of returning a small number?  (The number of
> bytes written can be an attribute of the exception.)

This would break a lot of code (probably all code that uses send(), with or
without return-code checking). I would propose a 'send_all' or some such instead,
which would keep sending until either a real error occurs, or all data is
sent (possibly with a timeout ?). And most uses of send could be replaced by
send_all, both in the std. library and in user code.
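A minimal sketch of what such a function could look like (the name 'send_all'
is only the proposal above, not an existing API; timeout handling is omitted):

```python
def send_all(sock, data):
    # Keep calling send() until every byte is out.  A real error
    # from send() simply propagates as an exception; a short write
    # just means we loop and send the remainder.
    while data:
        sent = sock.send(data)
        data = data[sent:]
```

Code that today writes sock.send(data) and silently drops the return value
could then call send_all(sock, data) instead.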

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From Vladimir.Marangozov at inrialpes.fr  Fri Aug 11 14:39:36 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 14:39:36 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111247.HAA03687@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 11, 2000 07:47:09 AM
Message-ID: <200008111239.OAA15818@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> It seems better to tune the generic check than to special-case str,
> repr, and getattr.

Right. This would be a step forward, at least for recursive Python code
(which is the most common complaint).  Reducing the current value
by half, i.e. setting MAX_RECURSION_DEPTH = 5000 works for me (Linux & AIX)

Agreement on 5000?

Doesn't solve the problem for C code (extensions) though...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Fri Aug 11 15:19:38 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 15:19:38 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <000401c00389$2fa577b0$060210ac@private> from "Barry Scott" at Aug 11, 2000 12:42:11 PM
Message-ID: <200008111319.PAA16192@python.inrialpes.fr>

Barry Scott wrote:
> 
> Why not set a limit in the interpreter? Fixing this for every call in object.c
> seems like a lot of hard work and will always leave holes.

Indeed.

> 
> For embedding Python being able to control the recursion depth of the
> interpreter is very useful. I would want to be able to set, from C, the
> max call depth limit and the current call depth limit.

Except exporting MAX_RECURSION_DEPTH as a variable (Py_MaxRecursionDepth)
I don't see what you mean by current call depth limit.

> I'd expect Python to set a min call depth limit.

I don't understand this. Could you elaborate?
Are you implying the introduction of a public function
(ex. Py_SetRecursionDepth) that does some value checks?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From paul at prescod.net  Fri Aug 11 15:19:05 2000
From: paul at prescod.net (Paul Prescod)
Date: Fri, 11 Aug 2000 08:19:05 -0500
Subject: [Python-Dev] Lockstep iteration - eureka!
References: Your message of "Wed, 09 Aug 2000 02:37:07 MST."            
		 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <l03102802b5b71c40f9fc@[193.78.237.121]>
Message-ID: <3993FD49.C7E71108@prescod.net>

Just van Rossum wrote:
> 
> ...
>
>        for <index> indexing <element> in <seq>:
>            ...

 
Let me throw out another idea. What if sequences just had .items()
methods?

j=range(0,10)

for index, element in j.items():
    ...

While we wait for the sequence "base class" we could provide helper
functions that makes the implementation of both eager and lazy versions
easier.
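An eager version of such a helper is tiny (the function below is purely
illustrative; neither the name nor the behavior comes from any existing
module):

```python
def items(seq):
    # Eager helper: pair each index with its element up front.
    return [(i, seq[i]) for i in range(len(seq))]

for index, element in items(['a', 'b', 'c']):
    pass  # index runs 0, 1, 2 alongside each element
```

A lazy variant would yield the pairs on demand instead of building the whole
list, which matters for long sequences.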

-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From guido at beopen.com  Fri Aug 11 16:19:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 09:19:33 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:31:43 +0200."
             <20000811143143.G17171@xs4all.nl> 
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>  
            <20000811143143.G17171@xs4all.nl> 
Message-ID: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>

> > I just noticed this.  Is this true?  Shouldn't we change send() to
> > raise an error instead of returning a small number?  (The number of
> > bytes written can be an attribute of the exception.)
> 
> This would break a lot of code. (probably all that use send, with or without
> return-code checking.) I would propose a 'send_all' or some such instead,
> which would keep sending until either a real error occurs, or all data is
> sent (possibly with a timeout ?). And most uses of send could be replaced by
> send_all, both in the std. library and in user code.

Really?!?!

I just read the man page for send() (Red Hat linux 6.1) and it doesn't
mention sending fewer than all bytes at all.  In fact, while it says
that the return value is the number of bytes sent, it at least
*suggests* that it will return an error whenever not everything can be
sent -- even in non-blocking mode.

Under what circumstances can send() return a smaller number?
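For reference, the 'send_all' loop Thomas describes would have roughly this shape (a sketch over any object with a send() method, not an actual socket-module API):

```python
def send_all(sock, data):
    """Keep calling send() until every byte has gone out."""
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            # A zero return usually means the peer closed the connection.
            raise IOError("connection closed before all data was sent")
        total += sent
    return total
```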

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From paul at prescod.net  Fri Aug 11 15:25:27 2000
From: paul at prescod.net (Paul Prescod)
Date: Fri, 11 Aug 2000 08:25:27 -0500
Subject: [Python-Dev] Winreg update
Message-ID: <3993FEC7.4E38B4F1@prescod.net>

I am in transit so I don't have time for a lot of back and forth email
relating to winreg. It also seems that there are a lot of people (let's
call them "back seat coders") who have vague ideas of what they want but
don't want to spend a bunch of time in a long discussion about registry
arcana. Therefore I am endeavouring to make it as easy and fast to
contribute to the discussion as possible. 

I'm doing this through a Python Module Proposal format. This can also
serve as the basis of documentation.

This is really easy so I want
some real feedback this time. Distutils people, this means you! Mark! I
would love to hear from Bill Tutt, Greg Stein and anyone else who claims
some knowledge of Windows!

If you're one of the people who has asked for winreg in the core then
you should respond. It isn't (IMO) sufficient to put in a hacky API to
make your life easier. You need to give something to get something. You
want windows registry support in the core -- fine, let's do it properly.

Even people with a minimal understanding of the registry should be able
to contribute: the registry isn't rocket surgery. I'll include a short
primer in this email.

All you need to do is read this email and comment on whether you agree
with the overall principle and then give your opinion on fifteen
possibly controversial issues. The "overall principle" is to steal
shamelessly from Microsoft's new C#/VB/OLE/Active-X/CRL API instead of
innovating for Python. That allows us to avoid starting the debate from
scratch. It also eliminates the feature that Mark complained about
(which was a Python-specific innovation).

The fifteen issues are mostly extensions to the API to make it easier
(convenience extensions) or more powerful (completeness extensions).
Many of them are binary: "do this, don't do that." Others are choices:
e.g. "Use tuples", "Use lists", "Use an instance".

I will try to make sense of the various responses. Some issues will have
strong consensus and I'll close those quickly. Others will require more
(hopefully not much!) discussion.

Windows Registry Primer:
========================

There are things called "keys". They aren't like Python dictionary keys,
so don't think of them that way. Keys have a list of subkeys indexed by name.
Keys also have a list of "values". Values have names. Every value has a
type. In some type-definition syntax:

key is (name: string, 
     subkeys: (string : key), 
     values: (string : value ))

value is ( name: string,
       type: enumeration,
       data: (depends on enumeration) )

That's the basic model. There are various helper facilities provided by
the APIs, but really, the model is as above.
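In Python terms, the primer's model amounts to roughly this (purely illustrative; the class names are mine, not part of the proposal):

```python
class Value:
    """A named, typed piece of data hanging off a key."""
    def __init__(self, name, vtype, data):
        self.name = name
        self.type = vtype   # one of the REG_* type names
        self.data = data

class Key:
    """A named node with subkeys and values, both indexed by name."""
    def __init__(self, name):
        self.name = name
        self.subkeys = {}   # name -> Key
        self.values = {}    # name -> Value

root = Key("HKEY_LOCAL_MACHINE")
root.subkeys["SOFTWARE"] = Key("SOFTWARE")
root.values["example"] = Value("example", "REG_SZ", u"some data")
```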

=========================================================================
Python Module Proposal
Title: Windows registry
Version: $Revision: 1.0$
Owner: paul at prescod.net (Paul Prescod)
Python-Version: 2.0
Status: Incomplete

Overview

    It is convenient for Windows users to know that a Python module to
    access the registry is always available whenever Python is installed
    on Windows.  This is especially useful for installation programs.
    There is a Windows registry module from the win32 extensions to
    Python. It is based directly on the original Microsoft APIs. This
    means that there are many backwards compatibility hacks, "reserved"
    parameters and other legacy features that are not interesting to
    most Python programmers. Microsoft is moving to a higher level API
    for languages other than C, as part of Microsoft's Common Runtime
    Library (CRL) initiative. This newer, higher level API serves as
    the basis for the module described herein.

    This higher level API would be implemented in Python and based upon 
    the low-level API. They would not be in competition: a user would 
    choose based on their preferences and needs.

Module Exports

    These are taken directly from the Common Runtime Library:

    ClassesRoot     The Windows Registry base key HKEY_CLASSES_ROOT.
    CurrentConfig   The Windows Registry base key HKEY_CURRENT_CONFIG.
    CurrentUser     The Windows Registry base key HKEY_CURRENT_USER.
    LocalMachine    The Windows Registry base key HKEY_LOCAL_MACHINE.
    DynData         The Windows Registry base key HKEY_DYN_DATA.
    PerformanceData The Windows Registry base key HKEY_PERFORMANCE_DATA.
    Users           The Windows Registry base key HKEY_USERS.

    RegistryKey     Registry key class (important class in module)

RegistryKey class Data Members

    These are taken directly from the Common Runtime Library:

    Name            Retrieves the name of the key. 
                    [Issue: full path or just name within parent?]
    SubKeyCount     Retrieves the count of subkeys.
    ValueCount      Retrieves the count of values in the key.

RegistryKey Methods

    These are taken directly from the Common Runtime Library:

    Close()
        Closes this key and flushes it to disk if the contents have 
        been modified.

    CreateSubKey( subkeyname )
        Creates a new subkey or opens an existing subkey.

     [Issue: SubKey_full_path]: Should it be possible to create a subkey 
        deeply:
        >>> LocalMachine.CreateSubKey( r"foo\bar\baz" )

        Presumably the result of this issue would also apply to every
        other method that takes a subkey parameter.

        It is not clear what the CRL API says yet (Mark?). If it says
        "yes" then we would follow it of course. If it says "no" then
        we could still consider the feature as an extension.

       [Yes] allow subkey parameters to be full paths
       [No]  require them to be a single alphanumeric name, no slashes

    DeleteSubKey( subkeyname )
        Deletes the specified subkey. To delete subkeys and all their 
        children (recursively), use DeleteSubKeyTree.

    DeleteSubKeyTree( subkeyname )
        Recursively deletes a subkey and any child subkeys. 

    DeleteValue( valuename )
        Deletes the specified value from this key.

    __cmp__( other )
	Determines whether the specified key is the same key as the
	current key.

    GetSubKeyNames()
        Retrieves an array of strings containing all the subkey names.

    GetValue( valuename )
        Retrieves the specified value.

     Registry types are converted according to the following table:

         REG_NONE: None
         REG_SZ: UnicodeType
         REG_MULTI_SZ: [UnicodeType, UnicodeType, ...]
         REG_DWORD: IntegerType
         REG_DWORD_LITTLE_ENDIAN: IntegerType
         REG_DWORD_BIG_ENDIAN: IntegerType
         REG_EXPAND_SZ: Same as REG_SZ
         REG_RESOURCE_LIST: Same as REG_BINARY
         REG_FULL_RESOURCE_DESCRIPTOR: Same as REG_BINARY
         REG_RESOURCE_REQUIREMENTS_LIST: Same as REG_BINARY
         REG_LINK: Same as REG_BINARY??? [Issue: more info needed!]

         REG_BINARY: StringType or array.array( 'c' )

     [Issue: REG_BINARY Representation]:
         How should binary data be represented as Python data?

         [String] The win32 module uses "string".
         [Array] I propose that an array of bytes would be better.

         One benefit of the array representation is that it allows
         SetValue to detect string data as REG_SZ and array.array('c')
         as REG_BINARY.

    [Issue: Type getting method]
         Should there be a companion method called GetType that fetches 
         the type of a registry value? Otherwise client code would not
          be able to distinguish between (e.g.) REG_SZ and 
          REG_EXPAND_SZ, which map to the same Python type.

         [Yes] Add GetType( string )
         [No]  Do not add GetType

    GetValueNames()
        Retrieves a list of strings containing all the value names.

    OpenRemoteBaseKey( machinename, name )
        Opens a new RegistryKey that represents the requested key on a 
        foreign machine.

    OpenSubKey( subkeyname )
        Retrieves a subkey.

    SetValue( keyname, value )
        Sets the specified value.

	Types are automatically mapped according to the following
	algorithm:

          None: REG_NONE
          String: REG_SZ
          UnicodeType: REG_SZ
          [UnicodeType, UnicodeType, ...]: REG_MULTI_SZ
          [StringType, StringType, ...]: REG_MULTI_SZ
          IntegerType: REG_DWORD
          array.array('c'): REG_BINARY

       [Issue: OptionalTypeParameter]

          Should there be an optional parameter that allows you to
          specify the type explicitly? Presume that the types are 
          constants in the winreg modules (perhaps strings or 
          integers).

          [Yes] Allow other types to be specified
          [No]  People who want more control should use the underlying 
                win32 module.
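The automatic mapping above might be implemented along these lines (a sketch; the returned constants are just the Microsoft type names as strings, and nothing here is a real winreg API):

```python
import array

def guess_reg_type(data):
    """Map a Python value to a registry type name, per the table above."""
    if data is None:
        return "REG_NONE"
    if isinstance(data, str):
        return "REG_SZ"
    if isinstance(data, int):
        return "REG_DWORD"
    if isinstance(data, array.array):
        return "REG_BINARY"
    if isinstance(data, list) and all(isinstance(s, str) for s in data):
        return "REG_MULTI_SZ"
    raise TypeError("no registry type mapping for %r" % type(data))
```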

Proposed Extensions

    The API above is a direct transliteration of the .NET API. It is
    somewhat underpowered in some senses and also is not entirely
    Pythonic. It is a good start as a basis for consensus, however,
    and these proposed extensions can be voted up or down individually.

    Two extensions are just the convenience functions (OpenRemoteKey
    and the top-level functions). Other extensions attempt to extend
    the API to support ALL features of the underlying API so that users
    never have to switch from one API to another to get a particular
    feature.

    Convenience Extension: OpenRemoteKey

        It is not clear to me why Microsoft restricts remote key opening
        to base keys. Why does it not allow a full path like this:

        >>> winreg.OpenRemoteKey( "machinename", 
                             r"HKEY_LOCAL_MACHINE\SOFTWARE\Python" )

        [Issue: Add_OpenRemoteKey]: 
              [Yes] add OpenRemoteKey 
              [No]  do not add

        [Issue: Remove_OpenRemoteBaseKey]
              [Remove] It's redundant!
              [Retain] For backwards compatibility

    Convenience Extension: Top-level Functions

        A huge number of registry-manipulating programs treat the
        registry namespace as "flat" and go directly to the interesting
        registry key.  These top-level functions allow the Python user
        to skip all of the OO key objects and get directly to what
        they want:

        key=OpenKey( keypath, machinename=None )
        key=CreateKey( keypath, machinename=None )
        DeleteKey( keypath, machinename=None )
        val=GetValue( keypath, valname, machinename=None )
        SetValue( keypath, valname, valdata, machinename=None )

        [Yes] Add these functions
        [No] Do not add
        [Variant] I like the idea but would change the function
                  signatures


    Completeness Extension: Type names

        If the type extensions are added to SetValue and GetValue then
        we need to decide how to represent types. It is fairly clear
        that they should be represented as constants in the module. The
        names of those constants could be the cryptic (but standard)
        Microsoft names or more descriptive, conventional names.

	Microsoft Names:

            REG_NONE
            REG_SZ
            REG_EXPAND_SZ
            REG_BINARY
            REG_DWORD
            REG_DWORD_LITTLE_ENDIAN
            REG_DWORD_BIG_ENDIAN
            REG_LINK
            REG_MULTI_SZ
            REG_RESOURCE_LIST
            REG_FULL_RESOURCE_DESCRIPTOR
            REG_RESOURCE_REQUIREMENTS_LIST

	Proposed Descriptive Names:

            NONE
            STRING
            EXPANDABLE_TEMPLATE_STRING
            BINARY_DATA
            INTEGER
            LITTLE_ENDIAN_INTEGER
            BIG_ENDIAN_INTEGER
            LINK
            STRING_LIST
            RESOURCE_LIST
            FULL_RESOURCE_DESCRIPTOR
            RESOURCE_REQUIREMENTS_LIST
             
        We could also allow both. One set would be aliases for the
        other.

        [Issue: TypeNames]:
            [MS Names]: Use the Microsoft names
            [Descriptive Names]: Use the more descriptive names
            [Both]: Use both

    Completeness Extension: Type representation

        No matter what the types are called, they must have values.

	The simplest thing would be to use the integers provided by the
	Microsoft header files.  Unfortunately integers are not at all
	self-describing so getting from the integer value to something
	human readable requires some sort of switch statement or mapping.
 
        An alternative is to use strings and map them internally to the 
        Microsoft integer constants.

        A third option is to use object instances. These instances would
        be useful for introspection and would have the following 
        attributes:

            msname (e.g. REG_SZ)
            friendlyname (e.g. String)
            msinteger (e.g. 6 )

        They would have only the following method:

            def __repr__( self ):
                "Return a useful representation of the type object"
                return "<RegType %d: %s %s>" % \
                  (self.msinteger, self.msname, self.friendlyname )

        A final option is a tuple with the three attributes described
        above.

        [Issue: Type_Representation]:
            [Integers]: Use Microsoft integers
            [Strings]: Use string names
            [Instances]: Use object instances with three introspective 
                         attributes
            [Tuples]: Use 3-tuples
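A fuller sketch of the instance option, combining the three attributes with the __repr__ shown above (the msinteger value 1 for REG_SZ comes from the Win32 headers; everything else is illustrative):

```python
class RegType:
    """A self-describing registry type constant."""
    def __init__(self, msinteger, msname, friendlyname):
        self.msinteger = msinteger
        self.msname = msname
        self.friendlyname = friendlyname

    def __repr__(self):
        "Return a useful representation of the type object"
        return "<RegType %d: %s %s>" % (
            self.msinteger, self.msname, self.friendlyname)

# Example constant (REG_SZ is 1 in the Win32 headers):
REG_SZ = RegType(1, "REG_SZ", "STRING")
```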

    Completeness Extension: Type Namespace

        Should the types be declared in the top level of the module 
        (and thus show up in a "dir" or "from winreg import *") or 
        should they live in their own dictionary, perhaps called 
        "types" or "regtypes". They could also be attributes of some 
        instance.

        [Issue: Type_Namespace]:
            [Module]: winreg.REG_SZ
            [Dictionary]: winreg.types["REG_SZ"]
            [Instance]: winreg.types.REG_SZ

    Completeness Extension: Saving/Loading Keys

        The underlying win32 registry API allows the loading and saving
        of keys to filenames. Therefore these could be implemented
        easily as methods:

            def save( self, filename ):
                "Save a key to a filename"
                _winreg.SaveKey( self.keyobj, filename )

            def load( self, subkey, filename ):
                "Load a key from a filename"
                return _winreg.RegLoadKey( self.keyobj, subkey, 
                                           filename )

            >>> key.OpenSubKey("Python").save( "Python.reg" )
            >>> key.load( "Python", "Python.reg" )

        [Issue: Save_Load_Keys]
            [Yes] Support the saving and loading of keys
            [No]  Do not add these methods

    Completeness Extension: Security Access Flags

        The underlying win32 registry API allows security flags to be
        applied to the OpenKey method. The flags are:

             "KEY_ALL_ACCESS"
             "KEY_CREATE_LINK"
             "KEY_CREATE_SUB_KEY"
             "KEY_ENUMERATE_SUB_KEYS"
             "KEY_EXECUTE"
             "KEY_NOTIFY"
             "KEY_QUERY_VALUE"
             "KEY_READ"
             "KEY_SET_VALUE"

        These are not documented in the underlying API but should be for
        this API. This documentation would be derived from the Microsoft
        documentation. They would be represented as integer or string
        constants in the Python API and used something like this:

        key=key.OpenKey( subkeyname, winreg.KEY_READ )

        [Issue: Security_Access_Flags]
             [Yes] Allow the specification of security access flags.
             [No]  Do not allow this specification.

        [Issue: Security_Access_Flags_Representation]
             [Integer] Use the Microsoft integers
             [String]  Use string values
             [Tuples] Use (string, integer) tuples
             [Instances] Use instances with "name", "msinteger"
                         attributes

        [Issue: Security_Access_Flags_Location]
             [Top-Level] winreg.KEY_READ
             [Dictionary] winreg.flags["KEY_READ"]
             [Instance] winreg.flags.KEY_READ

    Completeness Extension: Flush

        The underlying win32 registry API has a flush method for keys.
        The documentation is as follows:

            """Writes all the attributes of a key to the registry.

            It is not necessary to call RegFlushKey to change a key.
            Registry changes are flushed to disk by the registry using
            its lazy flusher.  Registry changes are also flushed to
            disk at system shutdown.  Unlike \function{CloseKey()}, the
            \function{FlushKey()} method returns only when all the data
            has been written to the registry.  An application should
            only call \function{FlushKey()} if it requires absolute
            certainty that registry changes are on disk."""

    If all completeness extensions are implemented, the author believes
    that this API will be as complete as the underlying API so
    programmers can choose which to use based on familiarity rather 
    than feature-completeness.


-- 
 Paul Prescod - Not encumbered by corporate consensus
"I don't want you to describe to me -- not ever -- what you were doing
to that poor boy to make him sound like that; but if you ever do it
again, please cover his mouth with your hand," Grandmother said.
	-- John Irving, "A Prayer for Owen Meany"



From guido at beopen.com  Fri Aug 11 16:28:09 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 09:28:09 -0500
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:39:36 +0200."
             <200008111239.OAA15818@python.inrialpes.fr> 
References: <200008111239.OAA15818@python.inrialpes.fr> 
Message-ID: <200008111428.JAA04464@cj20424-a.reston1.va.home.com>

> > It seems better to tune the generic check than to special-case str,
> > repr, and getattr.
> 
> Right. This would be a step forward, at least for recursive Python code
> (which is the most common complaint).  Reducing the current value
> by half, i.e. setting MAX_RECURSION_DEPTH = 5000 works for me (Linux & AIX)
> 
> Agreement on 5000?

No, the __getattr__ example still dumps core for me.  With 4000 it is
fine, but this indicates that this is totally the wrong approach: I
can change the available stack size with ulimit -s and cause a core
dump anyway.  Or there could be a longer path through the C code where
more C stack is used per recursion.

We could set the maximum to 1000 and assume a "reasonable" stack size,
but that doesn't make me feel comfortable either.

It would be good if there was a way to sense the remaining available
stack, even if it wasn't portable.  Any Linux experts out there?
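The only knob reachable from Python itself is the interpreter's recursion limit; sensing the remaining C stack has no portable API. A sketch of adjusting it (4000 is just the value mentioned above):

```python
import sys

old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(4000)   # the value that avoided the core dump above
try:
    pass                      # ... run recursion-heavy code here ...
finally:
    sys.setrecursionlimit(old_limit)   # always restore the previous limit
```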

> Doesn't solve the problem for C code (extensions) though...

That wasn't what started this thread.  Bugs in extensions are just that.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From gvwilson at nevex.com  Fri Aug 11 15:39:38 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Fri, 11 Aug 2000 09:39:38 -0400 (EDT)
Subject: [Python-Dev] PEP 0211: Linear Algebra Operators
Message-ID: <Pine.LNX.4.10.10008110936390.13482-200000@akbar.nevex.com>

Hi, everyone.  Please find attached the latest version of PEP-0211,
"Adding New Linear Algebra Operators to Python".  As I don't have write
access to the CVS repository, I'd be grateful if someone could check this
in for me.  Please send comments directly to me (gvwilson at nevex.com); I'll
summarize, update the PEP, and re-post.

Thanks,
Greg
-------------- next part --------------
PEP: 211
Title: Adding New Linear Algebra Operators to Python
Version: $Revision$
Owner: gvwilson at nevex.com (Greg Wilson)
Python-Version: 2.1
Created: 15-Jul-2000
Status: Draft
Post-History:


Introduction

    This PEP describes a proposal to add linear algebra operators to
    Python 2.0.  It discusses why such operators are desirable, and
    alternatives that have been considered and discarded.  This PEP
    summarizes discussions held in mailing list forums, and provides
    URLs for further information, where appropriate.  The CVS revision
    history of this file contains the definitive historical record.


Proposal

    Add a single new infix binary operator '@' ("across"), and
    corresponding special methods "__across__()" and "__racross__()".
    This operator will perform mathematical matrix multiplication on
    NumPy arrays, and generate cross-products when applied to built-in
    sequence types.  No existing operator definitions will be changed.


Background

    Computers were invented to do arithmetic, as was the first
    high-level programming language, Fortran.  While Fortran was a
    great advance on its machine-level predecessors, there was still a
    very large gap between its syntax and the notation used by
    mathematicians.  The most influential effort to close this gap was
    APL [1]:

        The language [APL] was invented by Kenneth E. Iverson while at
        Harvard University. The language, originally titled "Iverson
        Notation", was designed to overcome the inherent ambiguities
        and points of confusion found when dealing with standard
        mathematical notation. It was later described in 1962 in a
        book simply titled "A Programming Language" (hence APL).
        Towards the end of the sixties, largely through the efforts of
        IBM, the computer community gained its first exposure to
        APL. Iverson received the Turing Award in 1980 for this work.

    APL's operators supported both familiar algebraic operations, such
    as vector dot product and matrix multiplication, and a wide range
    of structural operations, such as stitching vectors together to
    create arrays.  Its notation was exceptionally cryptic: many of
    its symbols did not exist on standard keyboards, and expressions
    had to be read right to left.

    Most subsequent work on numerical languages, such as Fortran-90,
    MATLAB, and Mathematica, has tried to provide the power of APL
    without the obscurity.  Python's NumPy [2] has most of the
    features that users of such languages expect, but these are
    provided through named functions and methods, rather than
    overloaded operators.  This makes NumPy clumsier than its
    competitors.

    One way to make NumPy more competitive is to provide greater
    syntactic support in Python itself for linear algebra.  This
    proposal therefore examines the requirements that new linear
    algebra operators in Python must satisfy, and proposes a syntax
    and semantics for those operators.


Requirements

    The most important requirement is that there be minimal impact on
    the existing definition of Python.  The proposal must not break
    existing programs, except possibly those that use NumPy.

    The second most important requirement is to be able to do both
    elementwise and mathematical matrix multiplication using infix
    notation.  The nine cases that must be handled are:

        |5 6| *   9   = |45 54|      MS: matrix-scalar multiplication
        |7 8|           |63 72|

          9   * |5 6| = |45 54|      SM: scalar-matrix multiplication
                |7 8|   |63 72|

        |2 3| * |4 5| = |8 15|       VE: vector elementwise multiplication


        |2 3| *  |4|  =   23         VD: vector dot product
                 |5|

         |2|  * |4 5| = | 8 10|      VO: vector outer product
         |3|            |12 15|

        |1 2| * |5 6| = | 5 12|      ME: matrix elementwise multiplication
        |3 4|   |7 8|   |21 32|

        |1 2| * |5 6| = |19 22|      MM: mathematical matrix multiplication
        |3 4|   |7 8|   |43 50|

        |1 2| * |5 6| = |19 22|      VM: vector-matrix multiplication
                |7 8|

        |5 6| *  |1|  =   |17|       MV: matrix-vector multiplication
        |7 8|    |2|      |23|
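As a plain-Python illustration of the ME/MM distinction (nested lists standing in for arrays; a toy sketch, not how NumPy would do it):

```python
def elemwise(a, b):
    """ME: multiply matching entries of two equal-shaped matrices."""
    return [[x * y for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

def matmul(a, b):
    """MM: mathematical matrix product of a and b."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]
```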

    Note that 1-dimensional vectors are treated as rows in VM, as
    columns in MV, and as both in VD and VO.  Both are special cases
    of 2-dimensional matrices (Nx1 and 1xN respectively).  It may
    therefore be reasonable to define the new operator only for
    2-dimensional arrays, and provide an easy (and efficient) way for
    users to convert 1-dimensional structures to 2-dimensional.
    The behavior of a new multiplication operator for built-in types
    might then:

    (a) be a parsing error (possible only if a constant is one of the
        arguments, since names are untyped in Python);

    (b) generate a runtime error; or

    (c) be derived by plausible extension from its behavior in the
        two-dimensional case.

    Third, syntactic support should be considered for three other
    operations:

                         T
    (a) transposition:  A   => A[j, i] for A[i, j]

                         -1
    (b) inverse:        A   => A' such that A' * A = I (the identity matrix)

    (c) solution:       A/b => x  such that A * x = b
                        A\b => x  such that x * A = b

    With regard to (c), it is worth noting that the two syntaxes used
    were invented by programmers, not mathematicians.  Mathematicians
    do not have a standard, widely-used notation for matrix solution.

    It is also worth noting that dozens of matrix inversion and
    solution algorithms are widely used.  MATLAB and its kin bind
    their inversion and/or solution operators to one which is
    reasonably robust in most cases, and require users to call
    functions or methods to access others.

    Fourth, confusion between Python's notation and those of MATLAB
    and Fortran-90 should be avoided.  In particular, mathematical
    matrix multiplication (case MM) should not be represented as '.*',
    since:

    (a) MATLAB uses prefix-'.' forms to mean 'elementwise', and raw
        forms to mean "mathematical" [4]; and

    (b) even if the Python parser can be taught how to handle dotted
        forms, '1.*A' will still be visually ambiguous [4].

    One anti-requirement is that new operators are not needed for
    addition, subtraction, bitwise operations, and so on, since
    mathematicians already treat them elementwise.


Proposal:

    The meanings of all existing operators will be unchanged.  In
    particular, 'A*B' will continue to be interpreted elementwise.
    This takes care of the cases MS, SM, VE, and ME, and ensures
    minimal impact on existing programs.

    A new operator '@' (pronounced "across") will be added to Python,
    along with two special methods, "__across__()" and
    "__racross__()", with the usual semantics.

    NumPy will overload "@" to perform mathematical multiplication of
    arrays where shapes permit, and to throw an exception otherwise.
    The matrix class's implementation of "@" will treat built-in
    sequence types as if they were column vectors.  This takes care of
    the cases MM and MV.

    An attribute "T" will be added to the NumPy array type, such that
    "m.T" is:

    (a) the transpose of "m" for a 2-dimensional array

    (b) the 1xN matrix transpose of "m" if "m" is a 1-dimensional
        array; or

    (c) a runtime error for an array with rank >= 3.

    This attribute will alias the memory of the base object.  NumPy's
    "transpose()" function will be extended to turn built-in sequence
    types into row vectors.  This takes care of the VM, VD, and VO
    cases.  We propose an attribute because:

    (a) the resulting notation is similar to the 'superscript T' (at
        least, as similar as ASCII allows), and

    (b) it signals that the transposition aliases the original object.

    No new operators will be defined to mean "solve a set of linear
    equations", or "invert a matrix".  Instead, NumPy will define a
    value "inv", which will be recognized by the exponentiation
    operator, such that "A ** inv" is the inverse of "A".  This is
    similar in spirit to NumPy's existing "newaxis" value.

    (Optional) When applied to sequences, the operator will return a
    list of tuples containing the cross-product of their elements in
    left-to-right order:

    >>> [1, 2] @ (3, 4)
    [(1, 3), (1, 4), (2, 3), (2, 4)]

    >>> [1, 2] @ (3, 4) @ (5, 6)
    [(1, 3, 5), (1, 3, 6), 
     (1, 4, 5), (1, 4, 6),
     (2, 3, 5), (2, 3, 6),
     (2, 4, 5), (2, 4, 6)]

    This will require the same kind of special support from the parser
    as chained comparisons (such as "a<b<c<=d").  However, it would
    permit the following:

    >>> for (i, j) in [1, 2] @ [3, 4]:
    >>>     print i, j
    1 3
    1 4
    2 3
    2 4

    as a short-hand for the common nested loop idiom:

    >>> for i in [1, 2]:
    >>>    for j in [3, 4]:
    >>>        print i, j

    Response to the 'lockstep loop' questionnaire [5] indicated that
    newcomers would be comfortable with this (so comfortable, in fact,
    that most of them interpreted most multi-loop 'zip' syntaxes [6]
    as implementing single-stage nesting).
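The proposed sequence behaviour is exactly an n-way Cartesian product in left-to-right order, which can be sketched as (itertools is used only to illustrate the semantics; no '@' parsing is involved):

```python
from itertools import product

def across(*seqs):
    """What '[1, 2] @ (3, 4)' would return under this proposal."""
    return list(product(*seqs))
```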


Alternatives:

    01. Don't add new operators --- stick to functions and methods.

    Python is not primarily a numerical language.  It is not worth
    complexifying the language for this special case --- NumPy's
    success is proof that users can and will use functions and methods
    for linear algebra.

    On the positive side, this maintains Python's simplicity.  Its
    weakness is that support for real matrix multiplication (and, to a
    lesser extent, other linear algebra operations) is frequently
    requested, as functional forms are cumbersome for lengthy
    formulas, and do not respect the operator precedence rules of
    conventional mathematics.  In addition, the method form is
    asymmetric in its operands.

    02. Introduce prefixed forms of existing operators, such as "@*"
        or "~*", or use boxed forms, such as "[*]" or "%*%".

    There are (at least) three objections to this.  First, either form
    seems to imply that all operators exist in both forms.  This is
    more new entities than the problem merits, and would require the
    addition of many new overloadable methods, such as __at_mul__.

    Second, while it is certainly possible to invent semantics for
    these new operators for built-in types, this would be a case of
    the tail wagging the dog, i.e. of letting the existence of a
    feature "create" a need for it.

    Finally, the boxed forms make human parsing more complex, e.g.:

        A[*] = B    vs.    A[:] = B

    03. (From Moshe Zadka [7], and also considered by Huaiyu Zhu [8]
        in his proposal [9]) Retain the existing meaning of all
        operators, but create a behavioral accessor for arrays, such
        that:

            A * B

        is elementwise multiplication (ME), but:

            A.m() * B.m()

        is mathematical multiplication (MM).  The method "A.m()" would
        return an object that aliased A's memory (for efficiency), but
        which had a different implementation of __mul__().

    The advantage of this method is that it has no effect on the
    existing implementation of Python: changes are localized in the
    Numeric module.  The disadvantages are:

    (a) The semantics of "A.m() * B", "A + B.m()", and so on would
        have to be defined, and there is no "obvious" choice for them.

    (b) Aliasing objects to trigger different operator behavior feels
        less Pythonic than either calling methods (as in the existing
        Numeric module) or using a different operator.  This PEP is
        primarily about look and feel, and about making Python more
        attractive to people who are not already using it.
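
    For concreteness, a minimal sketch of this accessor idea, using
    plain Python lists as stand-in one-dimensional "arrays" (all class
    and method names here are hypothetical):

```python
class Elementwise:
    """Stand-in 'array' whose * is elementwise, as in Numeric."""
    def __init__(self, data):
        self.data = list(data)

    def m(self):
        # behavioral accessor: aliases the same data, different __mul__
        return Mathematical(self.data)

    def __mul__(self, other):
        return [x * y for x, y in zip(self.data, other.data)]


class Mathematical:
    """View of the same data whose * is mathematical (dot product in 1-D)."""
    def __init__(self, data):
        self.data = data  # aliases, does not copy

    def __mul__(self, other):
        return sum(x * y for x, y in zip(self.data, other.data))


a, b = Elementwise([1, 2]), Elementwise([3, 4])
print(a * b)          # elementwise multiplication
print(a.m() * b.m())  # mathematical multiplication
```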

    04. (From a proposal [9] by Huaiyu Zhu [8]) Introduce a "delayed
        inverse" attribute, similar to the "transpose" attribute
        advocated in the third part of this proposal.  The expression
        "a.I" would be a delayed handle on the inverse of the matrix
        "a", which would be evaluated in context as required.  For
        example, "a.I * b" and "b * a.I" would solve sets of linear
        equations, without actually calculating the inverse.

    The main drawback of this proposal is its reliance on lazy
    evaluation, and even more on "smart" lazy evaluation (i.e. the
    operation performed depends on the context in which the evaluation
    is done).  The BDFL has so far resisted introducing LE into
    Python.


Related Proposals

    0203 :  Augmented Assignments

            If new operators for linear algebra are introduced, it may
            make sense to introduce augmented assignment forms for
            them.

    0207 :  Rich Comparisons

            It may become possible to overload comparison operators
            such as '<' so that an expression such as 'A < B' returns
            an array, rather than a scalar value.

    0209 :  Adding Multidimensional Arrays

            Multidimensional arrays are currently an extension to
            Python, rather than a built-in type.


Acknowledgments:

    I am grateful to Huaiyu Zhu [8] for initiating this discussion,
    and for some of the ideas and terminology included below.


References:

    [1] http://www.acm.org/sigapl/whyapl.htm
    [2] http://numpy.sourceforge.net
    [3] PEP-0203.txt "Augmented Assignments".
    [4] http://bevo.che.wisc.edu/octave/doc/octave_9.html#SEC69
    [5] http://www.python.org/pipermail/python-dev/2000-July/013139.html
    [6] PEP-0201.txt "Lockstep Iteration"
    [7] Moshe Zadka is 'moshez at math.huji.ac.il'.
    [8] Huaiyu Zhu is 'hzhu at users.sourceforge.net'
    [9] http://www.python.org/pipermail/python-list/2000-August/112529.html


Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

From fredrik at pythonware.com  Fri Aug 11 15:55:01 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 11 Aug 2000 15:55:01 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com>             <20000811143143.G17171@xs4all.nl>  <200008111419.JAA03948@cj20424-a.reston1.va.home.com>
Message-ID: <016d01c0039b$bfb99a40$0900a8c0@SPIFF>

guido wrote:
> I just read the man page for send() (Red Hat linux 6.1) and it doesn't
> mention sending fewer than all bytes at all.  In fact, while it says
> that the return value is the number of bytes sent, it at least
> *suggests* that it will return an error whenever not everything can be
> sent -- even in non-blocking mode.
> 
> Under what circumstances can send() return a smaller number?

never, it seems:

    The length of the message to be sent is specified by the
    length argument. If the message is too long to pass through
    the underlying protocol, send() fails and no data is transmitted.

    Successful completion of a call to send() does not guarantee
    delivery of the message. A return value of -1 indicates only
    locally-detected errors.

    If space is not available at the sending socket to hold the message
    to be transmitted and the socket file descriptor does not have
    O_NONBLOCK set, send() blocks until space is available. If space
    is not available at the sending socket to hold the message to be
    transmitted and the socket file descriptor does have O_NONBLOCK
    set, send() will fail.

    (from SUSv2)

iow, it either blocks or fails.

</F>




From fredrik at pythonware.com  Fri Aug 11 16:01:17 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 11 Aug 2000 16:01:17 +0200
Subject: [Python-Dev] Preventing recursion core dumps
References: <000401c00389$2fa577b0$060210ac@private>
Message-ID: <018a01c0039c$9f1949b0$0900a8c0@SPIFF>

barry wrote:
> For embedding Python, being able to control the recursion depth of the
> interpreter is very useful. I would want to be able to set, from C, the
> max call depth limit and the current call depth limit. I'd expect Python
> to set a min call depth limit.

+1 (on concept, at least).

</F>




From thomas at xs4all.net  Fri Aug 11 16:08:51 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 16:08:51 +0200
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111428.JAA04464@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 11, 2000 at 09:28:09AM -0500
References: <200008111239.OAA15818@python.inrialpes.fr> <200008111428.JAA04464@cj20424-a.reston1.va.home.com>
Message-ID: <20000811160851.H17171@xs4all.nl>

On Fri, Aug 11, 2000 at 09:28:09AM -0500, Guido van Rossum wrote:

> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

getrlimit and getrusage do what you want to, I think. getrusage() fills a
struct rusage:


            struct rusage
            {
                 struct timeval ru_utime; /* user time used */
                 struct timeval ru_stime; /* system time used */
                 long ru_maxrss;          /* maximum resident set size */
                 long ru_ixrss;      /* integral shared memory size */
                 long ru_idrss;      /* integral unshared data size */
                 long ru_isrss;      /* integral unshared stack size */
                 long ru_minflt;          /* page reclaims */
                 long ru_majflt;          /* page faults */
                 long ru_nswap;      /* swaps */
                 long ru_inblock;         /* block input operations */
                 long ru_oublock;         /* block output operations */
                 long ru_msgsnd;          /* messages sent */
                 long ru_msgrcv;          /* messages received */
                 long ru_nsignals;        /* signals received */
                 long ru_nvcsw;      /* voluntary context switches */
                 long ru_nivcsw;          /* involuntary context switches */
            };

and you can get the actual stack limit with getrlimit(). The availability of
getrusage/getrlimit is already checked by configure, and there's the
resource module which wraps those functions and structs for Python code.
Note that Linux isn't likely to be a problem, most Linux distributions have
liberal limits to start with (still the 'single-user OS' ;)
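
On the Python side, those calls are already wrapped by the resource module
(Unix only); for example:

```python
import resource

# soft/hard limits on the stack segment, in bytes
# (resource.RLIM_INFINITY means "unlimited")
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

# rough resource accounting for the current process from getrusage()
usage = resource.getrusage(resource.RUSAGE_SELF)
print(soft, hard, usage.ru_maxrss)
```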

BSDI, for instance, has very strict default limits -- the standard limits
aren't even enough to start 'pine' on a few MB of mailbox. (But BSDI has
rusage/rlimit, so we can 'autodetect' this.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Fri Aug 11 16:13:13 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 11 Aug 2000 17:13:13 +0300 (IDT)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111428.JAA04464@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008111708210.3449-100000@sundial>

On Fri, 11 Aug 2000, Guido van Rossum wrote:

> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

I'm far from an expert, but I might have an idea. The question is: must
this work for an embedded version of Python, or can I fool around with
main()?

Here's the approach:

 - In main(), get the address of some local variable. Call this min.
 - Call getrlimit() to find the stack size, and set max = min + <stack size>.
 - When checking for "too much recursion", take the address of a local
   variable and compare it against max. If it's higher, stop.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From just at letterror.com  Fri Aug 11 17:14:40 2000
From: just at letterror.com (Just van Rossum)
Date: Fri, 11 Aug 2000 16:14:40 +0100
Subject: [Python-Dev] Preventing recursion core dumps
Message-ID: <l03102808b5b9c74f316e@[193.78.237.168]>

> > Agreement on 5000?
>
> No, the __getattr__ example still dumps core for me.  With 4000 it is
> fine, but this indicates that this is totally the wrong approach: I
> can change the available stack size with ulimit -s and cause a core
> dump anyway.  Or there could be a longer path through the C code where
> more C stack is used per recursion.
>
> We could set the maximum to 1000 and assume a "reasonable" stack size,
> but that doesn't make me feel comfortable either.
>
> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

Gordon, how's that Stackless PEP coming along?

Sorry, I couldn't resist ;-)

Just





From thomas at xs4all.net  Fri Aug 11 16:21:09 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 16:21:09 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <016d01c0039b$bfb99a40$0900a8c0@SPIFF>; from fredrik@pythonware.com on Fri, Aug 11, 2000 at 03:55:01PM +0200
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF>
Message-ID: <20000811162109.I17171@xs4all.nl>

On Fri, Aug 11, 2000 at 03:55:01PM +0200, Fredrik Lundh wrote:
> guido wrote:
> > I just read the man page for send() (Red Hat linux 6.1) and it doesn't
> > mention sending fewer than all bytes at all.  In fact, while it says
> > that the return value is the number of bytes sent, it at least
> > *suggests* that it will return an error whenever not everything can be
> > sent -- even in non-blocking mode.

> > Under what circumstances can send() return a smaller number?

> never, it seems:

[snip manpage]

Indeed. I didn't actually check the story, since Guido was apparently
convinced by its validity. I was just operating under the assumption that
send() did behave like write(). I won't blindly believe Guido anymore ! :)

Someone set the patch to 'rejected' and tell the submitter that 'send'
doesn't return the number of bytes written ;-P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From Vladimir.Marangozov at inrialpes.fr  Fri Aug 11 16:32:45 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 16:32:45 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <Pine.GSO.4.10.10008111708210.3449-100000@sundial> from "Moshe Zadka" at Aug 11, 2000 05:13:13 PM
Message-ID: <200008111432.QAA16648@python.inrialpes.fr>

Moshe Zadka wrote:
> 
> On Fri, 11 Aug 2000, Guido van Rossum wrote:
> 
> > It would be good if there was a way to sense the remaining available
> > stack, even if it wasn't portable.  Any Linux experts out there?
> 
> I'm far from an expert, but I might have an idea. The question is: must
> this works for embedded version of Python, or can I fool around with
> main()?

Probably not main(), but Py_Initialize() for sure.

> 
> Here's the approach:
> 
>  - In main(), get the address of some local variable. Call this min.
>  - Call getrlimit() to find the stack size, and set max = min + <stack size>.
>  - When checking for "too much recursion", take the address of a local
>    variable and compare it against max. If it's higher, stop.

Sounds good. If getrlimit is not available, we can always fallback to
some (yet to be computed) constant, i.e. the current state.

[Just]
> Gordon, how's that Stackless PEP coming along?
> Sorry, I couldn't resist ;-)

Ah, in this case, we'll get a memory error after filling the whole disk
with frames <wink>

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From akuchlin at mems-exchange.org  Fri Aug 11 16:33:35 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 11 Aug 2000 10:33:35 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <20000811162109.I17171@xs4all.nl>; from thomas@xs4all.net on Fri, Aug 11, 2000 at 04:21:09PM +0200
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF> <20000811162109.I17171@xs4all.nl>
Message-ID: <20000811103335.B20646@kronos.cnri.reston.va.us>

On Fri, Aug 11, 2000 at 04:21:09PM +0200, Thomas Wouters wrote:
>Someone set the patch to 'rejected' and tell the submitter that 'send'
>doesn't return the number of bytes written ;-P

What about reviving the idea of raising an exception, then?

--amk



From moshez at math.huji.ac.il  Fri Aug 11 16:40:10 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 11 Aug 2000 17:40:10 +0300 (IDT)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111432.QAA16648@python.inrialpes.fr>
Message-ID: <Pine.GSO.4.10.10008111736300.3449-100000@sundial>

On Fri, 11 Aug 2000, Vladimir Marangozov wrote:

> Moshe Zadka wrote:
> > 
> > On Fri, 11 Aug 2000, Guido van Rossum wrote:
> > 
> > > It would be good if there was a way to sense the remaining available
> > > stack, even if it wasn't portable.  Any Linux experts out there?
> > 
> > I'm far from an expert, but I might have an idea. The question is: must
> > this works for embedded version of Python, or can I fool around with
> > main()?
> 
> Probably not main(), but Py_Initialize() for sure.

Py_Initialize() isn't good enough -- I can put an upper bound on the
difference between "min" and the top of the stack: I can't do so
for the call to Py_Initialize(). Well, I probably can in some *really*
ugly way. I'll have to think about it some more.

> Sounds good. If getrlimit is not available, we can always fallback to
> some (yet to be computed) constant, i.e. the current state.

Well, since Guido asked for a non-portable Linuxish way, I think we
can assume getrusage() is there.

[Vladimir]
> Ah, in this case, we'll get a memory error after filling the whole disk
> with frames <wink>

Which is great! Python promises to always throw an exception....

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Fri Aug 11 16:43:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 16:43:49 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <20000811103335.B20646@kronos.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Fri, Aug 11, 2000 at 10:33:35AM -0400
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF> <20000811162109.I17171@xs4all.nl> <20000811103335.B20646@kronos.cnri.reston.va.us>
Message-ID: <20000811164349.J17171@xs4all.nl>

On Fri, Aug 11, 2000 at 10:33:35AM -0400, Andrew Kuchling wrote:
> On Fri, Aug 11, 2000 at 04:21:09PM +0200, Thomas Wouters wrote:
> >Someone set the patch to 'rejected' and tell the submitter that 'send'
> >doesn't return the number of bytes written ;-P

> What about reviving the idea of raising an exception, then?

static PyObject *
PySocketSock_send(PySocketSockObject *s, PyObject *args)
{
        char *buf;
        int len, n, flags = 0;
        if (!PyArg_ParseTuple(args, "s#|i:send", &buf, &len, &flags))
                return NULL;
        Py_BEGIN_ALLOW_THREADS
        n = send(s->sock_fd, buf, len, flags);
        Py_END_ALLOW_THREADS
        if (n < 0)
                return PySocket_Err();
        return PyInt_FromLong((long)n);
}

(PySocket_Err() creates an error.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Fri Aug 11 17:56:06 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 10:56:06 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: Your message of "Fri, 11 Aug 2000 16:21:09 +0200."
             <20000811162109.I17171@xs4all.nl> 
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF>  
            <20000811162109.I17171@xs4all.nl> 
Message-ID: <200008111556.KAA05068@cj20424-a.reston1.va.home.com>

> On Fri, Aug 11, 2000 at 03:55:01PM +0200, Fredrik Lundh wrote:
> > guido wrote:
> > > I just read the man page for send() (Red Hat linux 6.1) and it doesn't
> > > mention sending fewer than all bytes at all.  In fact, while it says
> > > that the return value is the number of bytes sent, it at least
> > > *suggests* that it will return an error whenever not everything can be
> > > sent -- even in non-blocking mode.
> 
> > > Under what circumstances can send() return a smaller number?
> 
> > never, it seems:
> 
> [snip manpage]
> 
> Indeed. I didn't actually check the story, since Guido was apparently
> convinced by its validity.

I wasn't convinced!  I wrote "is this true?" in my message!!!

> I was just operating under the assumption that
> send() did behave like write(). I won't blindly believe Guido anymore ! :)

I believe they do behave the same: in my mind, write() doesn't write
fewer bytes than you tell it either!  (Except maybe to a tty device
when interrupted by a signal???)

> Someone set the patch to 'rejected' and tell the submitter that 'send'
> doesn't return the number of bytes written ;-P

Done.

Note that send() *does* return the number of bytes written.  It's just
always (supposed to be) the same as the length of the argument string.

Since this is now established, should we change the send() method to
raise an exception when it returns a smaller number?  (The exception
probably should be a subclass of socket.error and should carry the
number of bytes written.)
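
Such an exception might look like this (the class name and attribute are
hypothetical, sketched in modern spelling):

```python
import socket

class PartialSendError(socket.error):
    """Hypothetical socket.error subclass reporting a short send()."""
    def __init__(self, bytes_written):
        super().__init__("send() wrote only %d bytes" % bytes_written)
        # carry the partial count so callers can resume from here
        self.bytes_written = bytes_written
```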

Could there be a signal interrupt issue here too?  E.g. I send() a
megabyte, which takes a while due to TCP buffer limits; before I'm
done a signal handler interrupts the system call.  Will send() now:

(1) return a EINTR error
(2) continue
(3) return the number of bytes already written

???

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Fri Aug 11 17:58:45 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 11 Aug 2000 17:58:45 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111428.JAA04464@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 11, 2000 09:28:09 AM
Message-ID: <200008111558.RAA16953@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> We could set the maximum to 1000 and assume a "reasonable" stack size,
> but that doesn't make me feel comfortable either.

Nor me, but it's more comfortable than a core dump, and is the only
easy solution, solving most problems & probably breaking some code...
After all, a max of 1024 seems to be a good suggestion.

> 
> It would be good if there was a way to sense the remaining available
> stack, even if it wasn't portable.  Any Linux experts out there?

On second thought, I think this would be a bad idea, even if
we manage to tweak the stack limits on most platforms. We would
lose determinism = lose control -- no good. A depth-first algorithm
may succeed on one machine, and fail on another.

I strongly prefer to know that I'm limited to 1024 recursions ("reasonable"
stack size assumptions included) and to change my algorithm if it doesn't fly
with my structure, rather than stumble later on the fact that my algorithm
works only half the time.

Changing this now *may* break such scripts, and there doesn't seem
to be an immediate easy solution. But if I were to choose between
breaking some scripts and preventing core dumps, well...

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From moshez at math.huji.ac.il  Fri Aug 11 18:12:21 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 11 Aug 2000 19:12:21 +0300 (IDT)
Subject: [Python-Dev] Cookie.py
Message-ID: <Pine.GSO.4.10.10008111909510.5259-100000@sundial>

This is a continuation of the earlier discussion of server-side cookie
support. There is a liberally licensed (old-Python-license) framework called
Webware, which includes a Cookie.py (apparently the same one by Timothy
O'Malley). How about taking that Cookie.py?

Webware can be found at http://webware.sourceforge.net/
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From gmcm at hypernet.com  Fri Aug 11 18:25:18 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 12:25:18 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>
References: Your message of "Fri, 11 Aug 2000 14:31:43 +0200."             <20000811143143.G17171@xs4all.nl> 
Message-ID: <1246111375-124272508@hypernet.com>

[Guido]
> I just read the man page for send() (Red Hat linux 6.1) and it
> doesn't mention sending fewer than all bytes at all.  In fact,
> while it says that the return value is the number of bytes sent,
> it at least *suggests* that it will return an error whenever not
> everything can be sent -- even in non-blocking mode.

It says (at least on RH 5.2): "If the message is too long to 
pass atomically through the underlying protocol...". Hey guys, 
TCP/IP is a stream protocol! For TCP/IP this is all completely 
misleading.

Yes, it returns the number of bytes sent. For TCP/IP it is *not* 
an error to send less than the argument. It's only an error if the 
other end dies at the time of actual send.

Python has been behaving properly all along. The bug report is 
correct. It's the usage of send in the std lib that is improper 
(though with a nearly infinitesimal chance of breaking, since 
it's almost all single threaded blocking usage of sockets).
 
> Under what circumstances can send() return a smaller number?

Just open a TCP/IP connection and send huge (64K or so) 
buffers. Current Python behavior is no different than C on 
Linux, HPUX and Windows.

Look it up in Stevens if you don't believe me. Or try it.
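
The usual remedy on the caller's side is a retry loop; a minimal sketch:

```python
import socket

def send_all(sock, data):
    # send() reports how many bytes were actually queued;
    # loop until the whole buffer has been handed to the kernel
    while data:
        sent = sock.send(data)
        data = data[sent:]

# quick demonstration over a local socket pair
a, b = socket.socketpair()
send_all(a, b"hello")
print(b.recv(5))
```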

- Gordon



From akuchlin at mems-exchange.org  Fri Aug 11 18:26:08 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 11 Aug 2000 12:26:08 -0400
Subject: [Python-Dev] Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008111909510.5259-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 11, 2000 at 07:12:21PM +0300
References: <Pine.GSO.4.10.10008111909510.5259-100000@sundial>
Message-ID: <20000811122608.F20646@kronos.cnri.reston.va.us>

On Fri, Aug 11, 2000 at 07:12:21PM +0300, Moshe Zadka wrote:
>This is a continuation of a previous server-side cookie support.
>There is a liberally licensed (old-Python license) framework called
>Webware, which includes Cookie.py, (apparently the same one by Timothy
>O'Malley). How about taking that Cookie.py?

O'Malley got in touch with me and let me know that the license has
been changed to the 1.5.2 license with his departure from BBN.  He
hasn't sent me a URL where the current version can be downloaded,
though.  I don't know if WebWare has the most current version; it
seems not, since O'Malley's was dated 06/21 and WebWare's was checked
in on May 23.

By the way, I'd suggest adding Cookie.py to a new 'web' package, and
taking advantage of the move to break backward compatibility and
remove the automatic usage of pickle (assuming it's still there).

--amk



From nascheme at enme.ucalgary.ca  Fri Aug 11 18:37:01 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 11 Aug 2000 10:37:01 -0600
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <200008111558.RAA16953@python.inrialpes.fr>; from Vladimir Marangozov on Fri, Aug 11, 2000 at 05:58:45PM +0200
References: <200008111428.JAA04464@cj20424-a.reston1.va.home.com> <200008111558.RAA16953@python.inrialpes.fr>
Message-ID: <20000811103701.A25386@keymaster.enme.ucalgary.ca>

On Fri, Aug 11, 2000 at 05:58:45PM +0200, Vladimir Marangozov wrote:
> On a second thought, I think this would be a bad idea, even if
> we manage to tweak the stack limits on most platforms. We would
> loose determinism = loose control -- no good. A depth-first algorithm
> may succeed on one machine, and fail on another.

So what?  We don't limit the amount of memory you can allocate on all
machines just because your program may run out of memory on some
machine.  It seems like the same thing to me.

  Neil



From moshez at math.huji.ac.il  Fri Aug 11 18:40:31 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 11 Aug 2000 19:40:31 +0300 (IDT)
Subject: [Python-Dev] Cookie.py
In-Reply-To: <20000811122608.F20646@kronos.cnri.reston.va.us>
Message-ID: <Pine.GSO.4.10.10008111936060.5259-100000@sundial>

On Fri, 11 Aug 2000, Andrew Kuchling wrote:

> O'Malley got in touch with me and let me know that the license has
> been changed to the 1.5.2 license with his departure from BBN.  He
> hasn't sent me a URL where the current version can be downloaded,
> though.  I don't know if WebWare has the most current version; it
> seems not, since O'Malley's was dated 06/21 and WebWare's was checked
> in on May 23.

Well, as soon as you get a version, let me know: I've started working
on documentation.

> By the way, I'd suggest adding Cookie.py to a new 'web' package, and
> taking advantage of the move to break backward compatibility and
> remove the automatic usage of pickle (assuming it's still there).

Well, depends on what you mean there:

There are now three classes:

a) SimpleCookie -- never uses pickle
b) SerializeCookie -- always uses pickle
c) SmartCookie -- uses pickle based on the old heuristic.

About web package: I'm +0. Fred has to think about how to document
things in packages (we never had to until now). Well, who cares <wink>

What is more important is working on documentation (which I'm doing),
and on a regression test (for which the May 23 version is probably good 
enough).

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Fri Aug 11 18:44:07 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 18:44:07 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <200008111556.KAA05068@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 11, 2000 at 10:56:06AM -0500
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF> <20000811162109.I17171@xs4all.nl> <200008111556.KAA05068@cj20424-a.reston1.va.home.com>
Message-ID: <20000811184407.A14470@xs4all.nl>

On Fri, Aug 11, 2000 at 10:56:06AM -0500, Guido van Rossum wrote:

> > Indeed. I didn't actually check the story, since Guido was apparently
> > convinced by its validity.

> I wasn't convinced!  I wrote "is this true?" in my message!!!

I apologize... It's been a busy day for me, I guess I wasn't paying enough
attention. I'll try to keep quiet when that happens, next time :P

> > I was just operating under the assumption that
> > send() did behave like write(). I won't blindly believe Guido anymore ! :)

> I bgelieve they do behave the same: in my mind, write() doesn't write
> fewer bytes than you tell it either!  (Except maybe to a tty device
> when interrupted by a signal???)

Hm, I seem to recall write() could return after less than a full write, but
I might be mistaken. I thought I was confusing send with write, but maybe
I'm confusing both with some other function :-) I'm *sure* there is a
function that behaves that way :P

> Note that send() *does* return the number of bytes written.  It's just
> always (supposed to be) the same as the length of the argument string.

> Since this is now established, should we change the send() method to
> raise an exception when it returns a smaller number?  (The exception
> probably should be a subclass of socket.error and should carry the
> number of bytes written

Ahh, now it's starting to get clear to me. I'm not sure if it's worth it...
It would require a different (non-POSIX) socket layer to return on
'incomplete' writes, and that is likely to break a number of other things,
too. (Let's hope it does... a socket layer which has the same API but a
different meaning would be very confusing!)

> Could there be a signal interrupt issue here too?

No, I don't think so.

> E.g. I send() a megabyte, which takes a while due to TCP buffer limits;
> before I'm done a signal handler interrupts the system call.  Will send()
> now:

> (1) return a EINTR error

Yes. From the manpage:

       If  the  message  is  too  long  to pass atomically
       through the underlying protocol,  the  error  EMSGSIZE  is
       returned, and the message is not transmitted.

[..]

ERRORS

       EINTR   A signal occurred.

[..]

Because send() either completely succeeds or completely fails, I didn't see
why you wanted an exception generated :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug 11 18:45:13 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 12:45:13 -0400 (EDT)
Subject: [Python-Dev] Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008111936060.5259-100000@sundial>
References: <20000811122608.F20646@kronos.cnri.reston.va.us>
	<Pine.GSO.4.10.10008111936060.5259-100000@sundial>
Message-ID: <14740.11673.869664.837436@cj42289-a.reston1.va.home.com>

Moshe Zadka writes:
 > About web package: I'm +0. Fred has to think about how to document
 > things in packages (we never had to until now). Well, who cares <wink>

  I'm not aware of any issues with documenting packages; the curses
documentation seems to be coming along nicely, and that's a package.
If you think I've missed something, we can (and should) deal with it
in the Doc-SIG.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From akuchlin at mems-exchange.org  Fri Aug 11 18:48:11 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 11 Aug 2000 12:48:11 -0400
Subject: [Python-Dev] Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008111936060.5259-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 11, 2000 at 07:40:31PM +0300
References: <20000811122608.F20646@kronos.cnri.reston.va.us> <Pine.GSO.4.10.10008111936060.5259-100000@sundial>
Message-ID: <20000811124811.G20646@kronos.cnri.reston.va.us>

On Fri, Aug 11, 2000 at 07:40:31PM +0300, Moshe Zadka wrote:
>There are now three classes
>a) SimpleCookie -- never uses pickle
>b) SerilizeCookie -- always uses pickle
>c) SmartCookie -- uses pickle based on old heuristic.

Ah, good; never mind, then.
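[A minimal demonstration of the SimpleCookie behavior described above, using
the module's modern spelling http.cookies (it was Cookie in Python 2):]

```python
from http.cookies import SimpleCookie

# SimpleCookie is the "never uses pickle" variant: values go in and
# come out as plain strings.
c = SimpleCookie()
c["session"] = "abc123"
header = c.output()                  # a Set-Cookie header line

# Parsing goes the other way: load() fills the jar from a header value.
c2 = SimpleCookie()
c2.load("session=abc123")
value = c2["session"].value
```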

>About web package: I'm +0. Fred has to think about how to document
>things in packages (we never had to until now). Well, who cares <wink>

Hmm... the curses.ascii module is already documented, so documenting
packages shouldn't be a problem.

--amk



From esr at thyrsus.com  Fri Aug 11 19:03:01 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Fri, 11 Aug 2000 13:03:01 -0400
Subject: [Python-Dev] Cookie.py
In-Reply-To: <14740.11673.869664.837436@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Fri, Aug 11, 2000 at 12:45:13PM -0400
References: <20000811122608.F20646@kronos.cnri.reston.va.us> <Pine.GSO.4.10.10008111936060.5259-100000@sundial> <14740.11673.869664.837436@cj42289-a.reston1.va.home.com>
Message-ID: <20000811130301.A7354@thyrsus.com>

Fred L. Drake, Jr. <fdrake at beopen.com>:
>   I'm not aware of any issues with documenting packages; the curses
> documentation seems to be coming along nicely, and that's a package.
> If you think I've missed something, we can (and should) deal with it
> in the Doc-SIG.

The curses documentation is basically done.  I've fleshed out the
library reference and overhauled the HOWTO.  I shipped the latter to
amk yesterday because I couldn't beat CVS into checking out py-howtos
for me.

The items left on my to-do list are drafting PEP002 and doing something
constructive about the Berkeley DB mess.  I doubt I'll get to these 
things before LinuxWorld.  Anybody else going?
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

It would be thought a hard government that should tax its people one tenth 
part.
	-- Benjamin Franklin



From rushing at nightmare.com  Fri Aug 11 18:59:07 2000
From: rushing at nightmare.com (Sam Rushing)
Date: Fri, 11 Aug 2000 09:59:07 -0700 (PDT)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <284209072@toto.iv>
Message-ID: <14740.12507.587044.121462@seattle.nightmare.com>

Guido van Rossum writes:
 > Really?!?!
 > 
 > I just read the man page for send() (Red Hat linux 6.1) and it doesn't
 > mention sending fewer than all bytes at all.  In fact, while it says
 > that the return value is the number of bytes sent, it at least
 > *suggests* that it will return an error whenever not everything can be
 > sent -- even in non-blocking mode.
 > 
 > Under what circumstances can send() return a smaller number?

It's a feature of Linux... it will send() everything.  Other unixen
act in the classic fashion (it bit me on FreeBSD), and send only what
fits right into the buffer that awaits.

I think this could safely be added to the send method in
socketmodule.c.  Linux users wouldn't even notice.  IMHO this is the
kind of feature that people come to expect from programming in a HLL.
Maybe disable the feature if it's a non-blocking socket?

-Sam




From billtut at microsoft.com  Fri Aug 11 19:01:44 2000
From: billtut at microsoft.com (Bill Tutt)
Date: Fri, 11 Aug 2000 10:01:44 -0700
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka
	!)
Message-ID: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>

This is an alternative approach that we should certainly consider. We could
use ANTLR (www.antlr.org) as our parser generator, and have it generate Java
for JPython, and C++ for CPython.  This would be a good chunk of work, and
it's something I really don't have time to pursue. I don't even have time to
pursue the idea about moving keyword recognition into the lexer.

I'm just not sure if you want to bother introducing C++ into the Python
codebase solely to have one parser for CPython and JPython.

Bill

 -----Original Message-----
From: 	bwarsaw at beopen.com [mailto:bwarsaw at beopen.com] 
Sent:	Thursday, August 10, 2000 8:01 PM
To:	Guido van Rossum
Cc:	Mark Hammond; python-dev at python.org
Subject:	Re: [Python-Dev] Python keywords (was Lockstep iteration -
eureka!)


>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

    GvR> Alas, I'm not sure how easy it will be.  The parser generator
    GvR> will probably have to be changed to allow you to indicate not
    GvR> to do a resword lookup at certain points in the grammar.  I
    GvR> don't know where to start. :-(

Yet another reason why it would be nice to (eventually) merge the
parsing technology in CPython and JPython.

i-don't-wanna-work-i-jes-wanna-bang-on-my-drum-all-day-ly y'rs,
-Barry

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
http://www.python.org/mailman/listinfo/python-dev



From gmcm at hypernet.com  Fri Aug 11 19:04:26 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 13:04:26 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: <1246111375-124272508@hypernet.com>
References: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>
Message-ID: <1246109027-124413737@hypernet.com>

[I wrote, about send()]
> Yes, it returns the number of bytes sent. For TCP/IP it is *not*
> an error to send less than the argument. It's only an error if
> the other end dies at the time of actual send.

[and...]
> Just open a TCP/IP connection and send huge (64K or so) 
> buffers. Current Python behavior is no different than C on 
> Linux, HPUX and Windows.

And I just demonstrated it. Strangely enough, sending from Windows 
(where the docs say "send returns the total number of bytes sent, 
which can be less than the number indicated by len") it always 
sent the whole buffer, even when that was 1M on a non-
blocking socket. (I select()'ed the socket first, to make sure it 
could send something).

But from Linux, the largest buffer sent was 54,020 and typical 
was 27,740. No errors.
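[A miniature version of this experiment; how much a non-blocking send()
accepts is platform-dependent, as the numbers above show:]

```python
import socket

# On a non-blocking socket, send() may accept only part of a large
# buffer -- on Linux it is bounded by the socket's send buffer, so a
# 1 MB write usually comes up short.
a, b = socket.socketpair()
a.setblocking(False)
buf = b"x" * (1 << 20)          # 1 MB, larger than typical send buffers
try:
    n = a.send(buf)             # number of bytes actually queued
except BlockingIOError:
    n = 0                       # the buffer was already completely full
a.close()
b.close()
```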


- Gordon



From thomas at xs4all.net  Fri Aug 11 19:04:37 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 11 Aug 2000 19:04:37 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <14740.12507.587044.121462@seattle.nightmare.com>; from rushing@nightmare.com on Fri, Aug 11, 2000 at 09:59:07AM -0700
References: <284209072@toto.iv> <14740.12507.587044.121462@seattle.nightmare.com>
Message-ID: <20000811190437.C17176@xs4all.nl>

On Fri, Aug 11, 2000 at 09:59:07AM -0700, Sam Rushing wrote:

> It's a feature of Linux... it will send() everything.  Other unixen
> act in the classic fashion (it bit me on FreeBSD), and send only what
> fits right into the buffer that awaits.

Ahhh, the downsides of working on the Most Perfect OS (writing this while
our Technical Manager, a FreeBSD fan, is looking over my shoulder ;)
Thanx for clearing that up. I was slowly going insane ;-P

> I think this could safely be added to the send method in
> socketmodule.c.  Linux users wouldn't even notice.  IMHO this is the
> kind of feature that people come to expect from programming in a HLL.
> Maybe disable the feature if it's a non-blocking socket?

Hm, I'm not sure if that's the 'right' thing to do, though disabling it for
non-blocking sockets is a nice idea. It shouldn't break anything, but it
doesn't feel too 'right'. The safe option would be to add a function that
resends as long as necessary, and point everyone to that function. But I'm
not sure what the name should be -- send is just so obvious ;-) 

Perhaps you're right, perhaps we should consider this a job for the type of
VHLL that Python is, and provide the opposite function separately instead: a
non-resending send(), for those that really want it. But in the eyes of the
Python programmer, socket.send() would just magically accept and send any
message size you care to give it, so it shouldn't break things. I think ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug 11 19:16:43 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 13:16:43 -0400 (EDT)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <20000811190437.C17176@xs4all.nl>
References: <284209072@toto.iv>
	<14740.12507.587044.121462@seattle.nightmare.com>
	<20000811190437.C17176@xs4all.nl>
Message-ID: <14740.13563.466035.477406@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Perhaps you're right, perhaps we should consider this a job for the type of
 > VHLL that Python is, and provide the opposite function separate instead: a
 > non-resending send(), for those that really want it. But in the eyes of the
 > Python programmer, socket.send() would just magically accept and send any
 > message size you care to give it, so it shouldn't break things. I think ;)

  This version receives my +1.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gmcm at hypernet.com  Fri Aug 11 19:38:01 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 13:38:01 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: <20000811190437.C17176@xs4all.nl>
References: <14740.12507.587044.121462@seattle.nightmare.com>; from rushing@nightmare.com on Fri, Aug 11, 2000 at 09:59:07AM -0700
Message-ID: <1246107013-124534915@hypernet.com>

Thomas Wouters wrote:
> On Fri, Aug 11, 2000 at 09:59:07AM -0700, Sam Rushing wrote:
> 
> > It's a feature of Linux... it will send() everything.  Other
> > unixen act in the classic fashion (it bit me on FreeBSD), and
> > send only what fits right into the buffer that awaits.
...
> > I think this could safely be added to the send method in
> > socketmodule.c.  Linux users wouldn't even notice.  IMHO this
> > is the kind of feature that people come to expect from
> > programming in a HLL. Maybe disable the feature if it's a
> > non-blocking socket?
> 
> Hm, I'm not sure if that's the 'right' thing to do, though
> disabling it for non-blocking sockets is a nice idea. 

It's absolutely vital that it be disabled for non-blocking 
sockets. Otherwise you've just made it into a blocking socket.

With that in place, I would be neutral on the change. I still feel 
that Python is already doing the right thing. The fact that 
everyone misunderstood the man page is not a good reason to 
change Python to match that misreading.

> It
> shouldn't break anything, but it doesn't feel too 'right'. The
> safe option would be to add a function that resends as long as
> necessary, and point everyone to that function. But I'm not sure
> what the name should be -- send is just so obvious ;-) 

I've always thought that was why there was a makefile method.
 


- Gordon



From guido at beopen.com  Sat Aug 12 00:05:32 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 17:05:32 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifying amount of d
In-Reply-To: Your message of "Fri, 11 Aug 2000 13:04:26 -0400."
             <1246109027-124413737@hypernet.com> 
References: <200008111419.JAA03948@cj20424-a.reston1.va.home.com>  
            <1246109027-124413737@hypernet.com> 
Message-ID: <200008112205.RAA01218@cj20424-a.reston1.va.home.com>

[Gordon]
> [I wrote, about send()]
> > Yes, it returns the number of bytes sent. For TCP/IP it is *not*
> > an error to send less than the argument. It's only an error if
> > the other end dies at the time of actual send.
> 
> [and...]
> > Just open a TCP/IP connection and send huge (64K or so) 
> > buffers. Current Python behavior is no different than C on 
> > Linux, HPUX and Windows.
> 
> And I just demonstrated it. Strangely enough, sending from Windows 
> (where the docs say "send returns the total number of bytes sent, 
> which can be less than the number indicated by len") it always 
> sent the whole buffer, even when that was 1M on a non-
> blocking socket. (I select()'ed the socket first, to make sure it 
> could send something).
> 
> But from Linux, the largest buffer sent was 54,020 and typical 
> was 27,740. No errors.

OK.  So send() can do a partial write, but only on a stream
connection.  And most standard library code doesn't check for that
condition, nor does (probably) much other code that used the standard
library as an example.  Worse, it seems that on some platforms send()
*never* does a partial write (I couldn't reproduce it on Red Hat 6.1
Linux), so even stress testing may not reveal the lurking problem.

Possible solutions:

1. Do nothing.  Pro: least work.  Con: subtle bugs remain.

2. Fix all code that's broken in the standard library, and try to
encourage others to fix their code.  Book authors need to be
encouraged to add a warning.  Pro: most thorough.  Con: hard to fix
every occurrence, especially in 3rd party code.

3. Fix the socket module to raise an exception when fewer bytes are sent
than requested.  Pro: subtle bug exposed when it happens.  Con: breaks
code that did the right thing!

4. Fix the socket module to loop back on a partial send to send the
remaining bytes.  Pro: no more short writes.  Con: what if the first
few send() calls succeed and then an error is returned?  Note: code
that checks for partial writes will be redundant!

I'm personally in favor of (4), despite the problem with errors after
the first call.
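[Option (4) sketched as a Python-level helper (the name send_all is
illustrative; this is essentially the behavior that the sendall() method of
socket objects provides):]

```python
import socket

def send_all(sock, data):
    """Loop until every byte is sent.  Guido's caveat applies: if an
    error occurs after a partial send, the caller cannot tell how many
    bytes actually went out."""
    view = memoryview(data)
    total = 0
    while total < len(view):
        total += sock.send(view[total:])
    return total

# Quick demonstration over a local socket pair.
a, b = socket.socketpair()
sent = send_all(a, b"hello, world")
echoed = b.recv(64)
a.close()
b.close()
```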

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sat Aug 12 00:14:23 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 17:14:23 -0500
Subject: [Python-Dev] missing mail
Message-ID: <200008112214.RAA01257@cj20424-a.reston1.va.home.com>

Just a note to you all.  It seems I'm missing a lot of mail to
python-dev.  I noticed because I got a few mails cc'ed to me and never
saw the copy sent via the list (which normally shows up within a
minute).  I looked in the archives and there were more messages that I
hadn't seen at all (e.g. the entire Cookie thread).

I don't know where the problem is (I *am* getting other mail to
guido at python.org as well as to guido at beopen.com) and I have no time to
research this right now.  I'm going to be mostly off line this weekend
and also all of next week.  (I'll be able to read mail occasionally
but I'll be too busy to keep track of everything.)

So if you need me to reply, please cc me directly -- and please be
patient!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From huaiyu_zhu at yahoo.com  Fri Aug 11 23:13:17 2000
From: huaiyu_zhu at yahoo.com (Huaiyu Zhu)
Date: Fri, 11 Aug 2000 14:13:17 -0700 (PDT)
Subject: [Python-Dev] Re: PEP 0211: Linear Algebra Operators
In-Reply-To: <Pine.LNX.4.10.10008110936390.13482-200000@akbar.nevex.com>
Message-ID: <Pine.LNX.4.10.10008111255560.1058-100000@rocket.knowledgetrack.com>

As the PEP posted by Greg is substantially different from the one floating
around in c.l.py, I'd like to post the latter here, which covers several
weeks of discussions by dozens of discussants.  I'd like to encourage Greg
to post his version to python-list to seek comments.

I'd be grateful to hear any comments.


        Python Extension Proposal: Adding new math operators 
                Huaiyu Zhu <hzhu at users.sourceforge.net>
                         2000-08-11, draft 3


Introduction
------------

This PEP describes a proposal to add new math operators to Python, and
summarises discussions in the news group comp.lang.python on this topic.
Issues discussed here include:

1. Background.
2. Description of proposed operators and implementation issues.
3. Analysis of alternatives to new operators.
4. Analysis of alternative forms.
5. Compatibility issues
6. Description of wider extensions and other related ideas.

A substantial portion of this PEP describes ideas that do not go into the
proposed extension.  They are presented because the extension is essentially
syntactic sugar, so its adoption must be weighed against various possible
alternatives.  While many alternatives may be better in some aspects, the
current proposal appears to be overall advantageous.



Background
----------

Python provides five basic math operators, + - * / **.  (Hereafter
generically represented by "op").  They can be overloaded with new semantics
for user-defined classes.  However, for objects composed of homogeneous
elements, such as arrays, vectors and matrices in numerical computation,
there are two essentially distinct flavors of semantics.  The objectwise
operations treat these objects as points in multidimensional spaces.  The
elementwise operations treat them as collections of individual elements.
These two flavors of operations are often intermixed in the same formulas,
thereby requiring syntactical distinction.

Many numerical computation languages provide two sets of math operators.
For example, in Matlab, the ordinary op is used for objectwise operation
while .op is used for elementwise operation.  In R, op stands for
elementwise operation while %op% stands for objectwise operation.

In Python, there are other methods of representation, some of which are
already used by available numerical packages, such as

1. function:   mul(a,b)
2. method:     a.mul(b)
3. casting:    a.E*b 

In several aspects these are not as adequate as infix operators.  More
details will be shown later, but the key points are

1. Readability: Even for moderately complicated formulas, infix operators
   are much cleaner than alternatives.
2. Familiarity: Users are familiar with ordinary math operators.  
3. Implementation: New infix operators will not unduly clutter python
   syntax.  They will greatly ease the implementation of numerical packages.

While it is possible to assign current math operators to one flavor of
semantics, there are simply not enough infix operators to overload for the
other flavor.  It is also impossible to maintain visual symmetry between
these two flavors if one of them does not contain symbols for ordinary math
operators.  



Proposed extension
------------------

1.  New operators ~+ ~- ~* ~/ ~** ~+= ~-= ~*= ~/= ~**= are added to core
    Python.  They parallel the existing operators + - * / ** and the (soon
    to be added) += -= *= /= **= operators.

2.  Operator ~op retains the syntactical properties of operator op,
    including precedence.

3.  Operator ~op retains the semantical properties of operator op on
    built-in number types.  They raise a syntax error on other types.

4.  These operators are overloadable in classes with names that prepend
    "alt" to names of ordinary math operators.  For example, __altadd__ and
    __raltadd__ work for ~+ just as __add__ and __radd__ work for +.

5.  As with standard math operators, the __r*__() methods are invoked when
    the left operand does not provide the appropriate method.
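[The dispatch rule in points 4 and 5 can be emulated with an ordinary
function, since the ~op syntax does not exist; the helper alt_add and the
ElementwiseList class below are illustrative names only:]

```python
# Try the left operand's __altadd__, then the right operand's
# __raltadd__, mirroring how __add__/__radd__ dispatch works for +.

def alt_add(a, b):
    meth = getattr(type(a), "__altadd__", None)
    if meth is not None:
        result = meth(a, b)
        if result is not NotImplemented:
            return result
    rmeth = getattr(type(b), "__raltadd__", None)
    if rmeth is not None:
        result = rmeth(b, a)
        if result is not NotImplemented:
            return result
    raise TypeError("unsupported operand types for ~+")

class ElementwiseList:
    def __init__(self, data):
        self.data = list(data)
    def __altadd__(self, other):
        return ElementwiseList(x + y for x, y in zip(self.data, other.data))

v = alt_add(ElementwiseList([1, 2]), ElementwiseList([10, 20]))
```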

The symbol ~ is already used in Python as the unary "bitwise not" operator.
Currently it is not allowed for binary operators.  To allow it as part of
binary operators, the tokenizer would treat ~+ as one token.  This means
that the currently valid expression ~+1 would be tokenized as ~+ 1 instead
of ~ + 1.  The compiler would then treat ~+ as a composite of ~ and +.
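[How ~+1 splits under the current rules can be shown with the tokenize
module (an illustrative sketch, not part of the proposal):]

```python
import io
import tokenize

# Today ~ and + are two separate OP tokens followed by the number,
# i.e. ~ + 1; the proposal would emit a single ~+ token here instead.
toks = [
    tok.string
    for tok in tokenize.generate_tokens(io.StringIO("~+1").readline)
    if tok.type in (tokenize.OP, tokenize.NUMBER)
]
```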

The proposed implementation is to patch several files relating to the parser
and compiler to duplicate the functionality of existing math operators as
necessary.  All new semantics are to be implemented in the application that
overloads them, but they are recommended to be conceptually similar to
existing math operators.

It is not specified which version of operators stands for elementwise or
objectwise operations, leaving the decision to applications.

A prototype implementation already exists.



Alternatives to adding new operators
------------------------------------

Some of the leading alternatives, using multiplication as an example:

1. Use function mul(a,b).

   Advantage:
   -  No need for new operators.
  
   Disadvantage: 
   - Prefix forms are cumbersome for composite formulas.
   - Unfamiliar to the intended users.
   - Too verbose for the intended users.
   - Unable to use natural precedence rules.
 
2. Use method call a.mul(b)

   Advantage:
   - No need for new operators.
   
   Disadvantage:
   - Asymmetric for both operands.
   - Unfamiliar to the intended users.
   - Too verbose for the intended users.
   - Unable to use natural precedence rules.


3. Use "shadow classes".  For the matrix class, define a shadow array class
   accessible through a method .E, so that for matrices a and b, a.E*b would
   be a matrix object that is elementwise_mul(a,b). 

   Likewise define a shadow matrix class for arrays accessible through a
   method .M so that for arrays a and b, a.M*b would be an array that is
   matrixwise_mul(a,b).

   Advantage:
   - No need for new operators.
   - Benefits of infix operators with correct precedence rules.
   - Clean formulas in applications.
   
   Disadvantage:
   - Hard to maintain in current Python because ordinary numbers cannot have
     user-defined class methods.  (a.E*b will fail if a is a pure number.)
   - Difficult to implement, as this will interfere with existing method
     calls, like .T for transpose, etc.
   - Runtime overhead of object creation and method lookup.
   - The shadowing class cannot replace a true class, because it does not
     return its own type.  So there needs to be an M class with a shadow E
     class, and an E class with a shadow M class.
   - Unnatural to mathematicians.

4. Implement matrixwise and elementwise classes with easy casting to the
   other class.  So matrixwise operations for arrays would be like a.M*b.M
   and elementwise operations for matrices would be like a.E*b.E.  For error
   detection a.E*b.M would raise exceptions.

   Advantage:
   - No need for new operators.
   - Similar to infix notation with correct precedence rules.

   Disadvantage:
   - Similar difficulty due to lack of user-methods for pure numbers.
   - Runtime overhead of object creation and method lookup.
   - More cluttered formulas
   - Switching of flavor of objects to facilitate operators becomes
     persistent.  This introduces long range context dependencies in
     application code that would be extremely hard to maintain.

5. Using a mini-parser to parse formulas written in an arbitrary extension,
   placed in quoted strings.

   Advantage:
   - Pure Python, without new operators

   Disadvantage:
   - The actual syntax is within the quoted string, which does not resolve
     the problem itself.
   - Introducing zones of special syntax.
   - Demanding on the mini-parser.

Among these alternatives, the first and second are used in current
applications to some extent, but have been found inadequate.  The third is
the most favored for applications, but it would incur huge implementation
complexity.  The fourth would make application code very context-sensitive
and hard to maintain.  These two alternatives also share significant
implementation difficulties due to the current type/class split.  The fifth
appears to create more problems than it would solve.



Alternative forms of infix operators
------------------------------------

Two major forms and several minor variants of new infix operators were
discussed:

1. Bracketed form

   (op)
   [op]
   {op}
   <op>
   :op:
   ~op~
   %op%

2. Meta character form

   .op
   @op
   ~op
   
   Alternatively the meta character is put after the operator.

3. Less consistent variations of these themes.  These are viewed
   unfavorably.  For completeness some are listed here:
   - Use @/ and /@ for left and right division
   - Use [*] and (*) for outer and inner products

4. Use __call__ to simulate multiplication.
   a(b)  or (a)(b)


Criteria for choosing among the representations include:

   - No syntactical ambiguities with existing operators.  

   - Higher readability in actual formulas.  This makes the bracketed forms
     unfavorable.  See examples below.

   - Visually similar to existing math operators.

   - Syntactically simple, without blocking possible future extensions.


By these criteria the overall winner among the bracketed forms appears to be
{op}.  A clear winner among the meta-character forms is ~op.  Comparing
these, ~op appears to be the favorite of them all.

Some analysis is as follows:

   - The .op form is ambiguous: 1.+a would be different from 1 .+a.

   - The bracket type operators are most favorable when standing alone, but
     not in formulas, as they interfere with visual parsing of parenthesis
     for precedence and function argument.  This is so for (op) and [op],
     and somewhat less so for {op} and <op>.

   - The <op> form has the potential to be confused with < > and =.

   - The @op is not favored because @ is visually heavy (dense, more like a
     letter): a@+b is more readily read as a@ + b than a @+ b.

   - For choosing meta-characters: most existing ASCII symbols have already
     been used.  The only three unused are @, $ and ?.



Semantics of new operators
--------------------------

There are convincing arguments for using either set of operators as
objectwise or elementwise.  Some of them are listed here:

1. op for element, ~op for object

   - Consistent with current multiarray interface of Numeric package
   - Consistent with some other languages
   - Perception that elementwise operations are more natural
   - Perception that elementwise operations are used more frequently

2. op for object, ~op for element

   - Consistent with current linear algebra interface of MatPy package
   - Consistent with some other languages
   - Perception that objectwise operations are more natural
   - Perception that objectwise operations are used more frequently
   - Consistent with the current behavior of operators on lists
   - Allow ~ to be a general elementwise meta-character in future extensions.

It is generally agreed that 

   - there is no absolute reason to favor one or the other
   - it is easy to cast from one representation to another in a sizable
     chunk of code, so the other flavor of operators is always in the minority
   - there are other semantic differences that favor existence of
     array-oriented and matrix-oriented packages, even if their operators
     are unified.
   - whichever decision is taken, code using existing interfaces should
     not be broken for a very long time.

Therefore not much is lost, and much flexibility retained, if the semantic
flavors of these two sets of operators are not dictated by the core
language.  The application packages are responsible for making the most
suitable choice.  This is already the case for NumPy and MatPy which use
opposite semantics.  Adding new operators will not break this.  See also
observation after subsection 2 in the Examples below.

The issue of numerical precision was raised, but if the semantics are left
to the applications, decisions about precision belong there as well.



Examples
--------

Following are examples of the actual formulas that will appear using various
operators or other representations described above.

1. The matrix inversion formula:

   - Using op for object and ~op for element:
     
     b = a.I - a.I * u / (c.I + v/a*u) * v / a

     b = a.I - a.I * u * (c.I + v*a.I*u).I * v * a.I

   - Using op for element and ~op for object:
   
     b = a.I @- a.I @* u @/ (c.I @+ v@/a@*u) @* v @/ a

     b = a.I ~- a.I ~* u ~/ (c.I ~+ v~/a~*u) ~* v ~/ a

     b = a.I (-) a.I (*) u (/) (c.I (+) v(/)a(*)u) (*) v (/) a

     b = a.I [-] a.I [*] u [/] (c.I [+] v[/]a[*]u) [*] v [/] a

     b = a.I <-> a.I <*> u </> (c.I <+> v</>a<*>u) <*> v </> a

     b = a.I {-} a.I {*} u {/} (c.I {+} v{/}a{*}u) {*} v {/} a

   Observation: For linear algebra using op for object is preferable.

   Observation: The ~op type operators look better than (op) type in
   complicated formulas.

   - using named operators

     b = a.I @sub a.I @mul u @div (c.I @add v @div a @mul u) @mul v @div a

     b = a.I ~sub a.I ~mul u ~div (c.I ~add v ~div a ~mul u) ~mul v ~div a

   Observation: Named operators are not suitable for math formulas.


2. Plotting a 3d graph

   - Using op for object and ~op for element:

     z = sin(x~**2 ~+ y~**2);    plot(x,y,z)

   - Using op for element and ~op for object:

     z = sin(x**2 + y**2);   plot(x,y,z)

    Observation: Elementwise operations with broadcasting allow a much more
    efficient implementation than Matlab's.

    Observation: Swapping the semantics of op and ~op (by casting the
    objects) is often advantageous, as the ~op operators would only appear
    in chunks of code where the other flavor dominates.


3. Using + and - with automatic broadcasting

     a = b - c;  d = a.T*a

   Observation: This would silently produce hard-to-trace bugs if one of b
   or c is a row vector while the other is a column vector.
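   [The hazard can be seen from the broadcasting shape rule alone; a
   pure-Python sketch of NumPy-style shape broadcasting (illustrative, not
   any package's actual code):]

```python
from itertools import zip_longest

def broadcast_shape(s1, s2):
    """Align the shapes from the right; paired dimensions must be equal
    or 1, and a 1 silently stretches to match the other size."""
    out = []
    for d1, d2 in zip_longest(reversed(s1), reversed(s2), fillvalue=1):
        if d1 != d2 and 1 not in (d1, d2):
            raise ValueError("incompatible shapes %r and %r" % (s1, s2))
        out.append(max(d1, d2))
    return tuple(reversed(out))

# Subtracting a row (1, 3) from a column (3, 1) does not raise an
# error -- it silently yields a full (3, 3) matrix.
shape = broadcast_shape((3, 1), (1, 3))
```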



Miscellaneous issues:
---------------------

1. Need for the ~+ ~- operators.  The objectwise + - are important because
   they provide important sanity checks as per linear algebra.  The
   elementwise + - are important because they allow broadcasting, which is
   very efficient in applications.

2. Left division (solve).  For matrix, a*x is not necessarily equal to x*a.
   The solution of a*x==b, denoted x=solve(a,b), is therefore different from
   the solution of x*a==b, denoted x=div(b,a).  There are discussions about
   finding a new symbol for solve.  [Background: Matlab uses b/a for div(b,a)
   and a\b for solve(a,b).]

   It is recognized that Python provides a better solution without requiring
   a new symbol: the inverse method .I can be made to be delayed so that
   a.I*b and b*a.I are equivalent to Matlab's a\b and b/a.  The
   implementation is quite simple and the resulting application code clean.
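   [A toy sketch of the delayed-.I idea, with all names hypothetical and the
   matrices restricted to 2x2 for brevity: a.I returns a lightweight marker
   rather than an actual inverse, and multiplying the marker dispatches to a
   solver, so a.I*b plays the role of Matlab's a\b.]

```python
from fractions import Fraction

class Mat2:
    """Toy 2x2 matrix illustrating the delayed-inverse idiom."""
    def __init__(self, rows):
        self.rows = [[Fraction(x) for x in row] for row in rows]

    @property
    def I(self):
        # Do not invert anything yet -- just return a marker object.
        return _DelayedInverse(self)

    def __mul__(self, vec):          # matrix * column vector (a 2-list)
        (p, q), (r, s) = self.rows
        return [p * vec[0] + q * vec[1], r * vec[0] + s * vec[1]]

class _DelayedInverse:
    def __init__(self, m):
        self.m = m

    def __mul__(self, b):            # a.I * b  ==  solve(a, b)
        # Cramer's rule instead of forming the inverse explicitly.
        (p, q), (r, s) = self.m.rows
        det = p * s - q * r
        return [(b[0] * s - q * b[1]) / det, (p * b[1] - r * b[0]) / det]

a = Mat2([[2, 0], [0, 4]])
x = a.I * [2, 8]                     # plays the role of Matlab's a\b
```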

3. Power operator.  Python's use of a**b as pow(a,b) has two perceived
   disadvantages:
   - Most mathematicians are more familiar with a^b for this purpose.
   - It results in long augmented assignment operator ~**=.
   However, this issue is distinct from the main issue here.

4. Additional multiplication operators.  Several forms of multiplications
   are used in (multi-)linear algebra.  Most can be seen as variations of
   multiplication in linear algebra sense (such as Kronecker product).  But
   two forms appear to be more fundamental: outer product and inner product.
   However, their specification includes indices, which can be either

   - associated with the operator, or
   - associated with the objects.

   The latter (the Einstein notation) is used extensively on paper, and is
   also the easier one to implement.  By implementing a tensor-with-indices
   class, a general form of multiplication would cover both outer and inner
   products, and specialize to linear algebra multiplication as well.  The
   index rule can be defined as class methods, like,

     a = b.i(1,2,-1,-2) * c.i(4,-2,3,-1)   # a_ijkl = b_ijmn c_lnkm

   Therefore one objectwise multiplication is sufficient.
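   A rough sketch of such an index rule (the representation and names here
   are invented for illustration: tensors are nested lists, and a negative
   label shared by the two operands marks an index to sum over):

   ```python
   def entries(data):
       """Flatten a nested-list tensor into (coords, value) pairs."""
       if not isinstance(data, list):
           yield ((), data)
       else:
           for i, sub in enumerate(data):
               for coords, v in entries(sub):
                   yield ((i,) + coords, v)

   def contract(a, aidx, b, bidx):
       """Einstein-style product: a negative label repeated across the two
       index tuples is summed over; positive labels order the result's
       axes.  Returns a dict mapping output coordinates to values."""
       out = {}
       for ca, va in entries(a):
           for cb, vb in entries(b):
               bound, ok = {}, True
               for labels, coords in ((aidx, ca), (bidx, cb)):
                   for lab, pos in zip(labels, coords):
                       if lab in bound and bound[lab] != pos:
                           ok = False
                       bound[lab] = pos
               if not ok:
                   continue
               key = tuple(bound[k] for k in sorted(l for l in bound if l > 0))
               out[key] = out.get(key, 0) + va * vb
       return out

   # b.i(1, -1) * c.i(-1, 2) then specializes to matrix multiplication:
   product = contract([[1, 2], [3, 4]], (1, -1), [[5, 6], [7, 8]], (-1, 2))
   ```

   With only negative labels the same function gives the inner product,
   and with only positive labels the outer product, which is the point:
   one objectwise multiplication covers all three.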

5. Bitwise operators.  Currently Python assigns six operators to bitwise
   operations: and (&), or (|), xor (^), complement (~), left shift (<<) and
   right shift (>>), with their own precedence levels.  This has some
   bearing on the new math operators in several ways:

   - The proposed new math operators use the symbol ~, which is the
     "bitwise not" operator.  This poses no compatibility problem but
     somewhat complicates the implementation.

   - The symbol ^ might be better used for pow than for bitwise xor.  But
     this depends on the future of the bitwise operators.  It does not
     immediately impact the proposed math operators.

   - The symbol | was suggested to be used for matrix solve.  But the new
     solution of using delayed .I is better in several ways.

   - The bitwise operators assign special syntactical and semantical
     structures to operations which could be more consistently regarded as
     elementwise lattice operators (see below).  Most of their usage could
     be replaced by a bitwise module with named functions.  Removing ~ as a
     standalone operator could also allow notations that link them to
     logical operators (see below).  However, this issue is separate from
     the currently proposed extension.

6. Lattice operators.  It was suggested that similar operators be combined
   with bitwise operators to represent lattice operations.  For example, ~|
   and ~& could represent "lattice or" and "lattice and".  But these can
   already be achieved by overloading the existing logical or bitwise
   operators.  On the other hand, these operations might be more deserving
   of infix operators than the built-in bitwise operations are (see below).

7. Alternative to the special operator names used in definitions,

   def "+"(a, b)      in place of       def __add__(a, b)

   This appears to require greater syntactical change, and would only be
   useful when arbitrary additional operators are allowed.

8. There was a suggestion to provide a copy operator :=, but this can
   already be done by a = b.copy().



Impact on possible future extensions:
-------------------------------------

More general extensions could follow from the current proposal.  Although
they would be distinct proposals, they might have syntactical or semantical
implications for each other.  It is prudent to ensure that the current
extension does not restrict any future possibilities.


1. Named operators. 

The news group discussion made it generally clear that infix operators are
a scarce resource in Python, not only in numerical computation but in other
fields as well.  Several proposals and ideas were put forward that would
allow infix operators to be introduced in ways similar to named functions.

The idea of named infix operators is essentially this: Choose a meta
character, say @, so that for any identifier "opname", the combination
"@opname" would be a binary infix operator, and

a @opname b == opname(a,b)

Other representations mentioned include .name, ~name~, :name:, (.name),
%name% and similar variations.  Purely bracket-based operators cannot be
used this way.

This requires a change in the parser to recognize @opname and parse it into
the same structure as a function call.  The precedence of all these
operators would have to be fixed at one level, so the implementation would
differ from that of the additional math operators, which keep the
precedence of the existing math operators.

The currently proposed extension does not limit possible future extensions
of this form in any way.
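For what it's worth, something close to named infix operators can already
be emulated in today's Python by overloading an existing symbol -- a
well-known trick, shown here with | as the delimiter.  The proposed
@opname syntax would of course be cleaner; this is only an illustration:

```python
class Infix:
    """a |op| b  stands in for the proposed  a @op b == op(a, b)."""
    def __init__(self, func):
        self.func = func
    def __ror__(self, left):
        # left | op: capture the left operand and return a partially
        # applied operator waiting for the right one
        return Infix(lambda right: self.func(left, right))
    def __or__(self, right):
        # (left | op) | right: apply the captured function
        return self.func(right)

dot = Infix(lambda a, b: sum(x * y for x, y in zip(a, b)))
```

With this, [1, 2, 3] |dot| [4, 5, 6] evaluates to 32.  Note that every
such operator shares the precedence of |, which is exactly the
fixed-precedence limitation described above.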


2. More general symbolic operators.

One additional form of future extension is to use a meta character together
with operator symbols (symbols that cannot be used in syntactical
structures other than operators).  Suppose @ is the meta character.  Then

      a + b,    a @+ b,    a @@+ b,  a @+- b

would all be operators with a hierarchy of precedence, defined by

   def "+"(a, b)
   def "@+"(a, b)
   def "@@+"(a, b)
   def "@+-"(a, b)

One advantage compared with named operators is greater flexibility in
precedence, based on either the meta character or the ordinary operator
symbols.  This also allows operator composition.  The disadvantage is that
they look more like "line noise".  In any case the current proposal does
not affect this future possibility.

These kinds of future extensions may not be necessary when Unicode becomes
generally available.


3. Object/element dichotomy for other types of objects.

The distinction between objectwise and elementwise operations is meaningful
in other contexts as well, where an object can be conceptually regarded as
a collection of homogeneous elements.  Several examples are listed here:
   
   - List arithmetic
   
      [1, 2] + [3, 4]        # [1, 2, 3, 4]
      [1, 2] ~+ [3, 4]       # [4, 6]
                             
      ['a', 'b'] * 2         # ['a', 'b', 'a', 'b']
      'ab' * 2               # 'abab'
      ['a', 'b'] ~* 2        # ['aa', 'bb']
      [1, 2] ~* 2            # [2, 4]

     It is also consistent with the Cartesian product

      [1,2]*[3,4]            # [(1,3),(1,4),(2,3),(2,4)]
   
   - Tuple generation
   
      [1, 2, 3], [4, 5, 6]   # ([1,2, 3], [4, 5, 6])
      [1, 2, 3]~,[4, 5, 6]   # [(1,4), (2, 5), (3,6)]
   
      This has the same effect as the proposed zip function.
   
   - Bitwise operations (regarding an integer as a collection of bits, and
      removing the dissimilarity between bitwise and logical operators)
   
      5 and 6       # 6
      5 or 6        # 5
                    
      5 ~and 6      # 4
      5 ~or 6       # 7
   
   - Elementwise format operator (with broadcasting)
   
      a = [1,2,3,4,5]
      print ["%5d "] ~% a     # print ("%5d "*len(a)) % tuple(a)
      a = [[1,2],[3,4]]
      print ["%5d "] ~~% a
   
   - Using ~ as generic elementwise meta-character to replace map
   
      ~f(a, b)      # map(f, a, b)
      ~~f(a, b)     # map(lambda *x:map(f, *x), a, b)
   
      More generally,
   
      def ~f(*x): return map(f, *x)
      def ~~f(*x): return map(~f, *x)
      ...

    - Rich comparison

      [1, 2, 3, 4]  ~< [4, 3, 2, 1]  # [1, 1, 0, 0]
   
   There are probably many other similar situations.  This general approach
   seems well suited to most of them, in place of several separate
   proposals for each (parallel and cross iteration, list comprehension,
   rich comparison, and some others).
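   For comparison, most of the elementwise forms above can already be
   spelled with built-ins; a sketch (modern zip/comprehension spellings,
   shown purely to pin down the intended semantics):

   ```python
   # [1,2] ~+ [3,4]
   assert [x + y for x, y in zip([1, 2], [3, 4])] == [4, 6]
   # ['a','b'] ~* 2  and  [1,2] ~* 2
   assert [x * 2 for x in ['a', 'b']] == ['aa', 'bb']
   assert [x * 2 for x in [1, 2]] == [2, 4]
   # [1,2,3] ~, [4,5,6]  -- the proposed zip
   assert list(zip([1, 2, 3], [4, 5, 6])) == [(1, 4), (2, 5), (3, 6)]
   # 5 ~and 6, 5 ~or 6  -- today's bitwise operators
   assert (5 & 6) == 4 and (5 | 6) == 7
   # [1,2,3,4] ~< [4,3,2,1]  -- elementwise comparison
   assert [int(x < y) for x, y in zip([1, 2, 3, 4], [4, 3, 2, 1])] == [1, 1, 0, 0]
   ```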

   Of course, the semantics of "elementwise" depends on the application.
   For example, an element of a matrix is two levels down from the
   list-of-lists point of view.  In any case, the current proposal will not
   negatively impact future possibilities of this nature.

Note that this section discusses compatibility of the proposed extension
with possible future extensions.  The desirability or compatibility of
these other extensions themselves is specifically not considered here.




-- 
Huaiyu Zhu                       hzhu at users.sourceforge.net
Matrix for Python Project        http://MatPy.sourceforge.net 





From trentm at ActiveState.com  Fri Aug 11 23:30:31 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 11 Aug 2000 14:30:31 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
Message-ID: <20000811143031.A13790@ActiveState.com>

These files (PCbuild/*.dsw PCbuild/*.dsp) are just normal text files. Why,
then, do we treat them as binary files?

Would it not be preferable to have those files handled like normal text
files, i.e. check them out on Unix and they use Unix line terminators;
check them out on Windows and they use DOS line terminators?

This way you are using the native line terminator format, and the text
processing tools you use on them are less likely to screw them up. (Anyone
see my motivation?)

Does anybody see any problems treating them as text files? And, if not, who
knows how to get rid of the '-kb' sticky tag on those files?

Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From gmcm at hypernet.com  Fri Aug 11 23:34:54 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 11 Aug 2000 17:34:54 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of d
In-Reply-To: <200008112205.RAA01218@cj20424-a.reston1.va.home.com>
References: Your message of "Fri, 11 Aug 2000 13:04:26 -0400."             <1246109027-124413737@hypernet.com> 
Message-ID: <1246092799-125389828@hypernet.com>

[Guido]
> OK.  So send() can do a partial write, but only on a stream
> connection.  And most standard library code doesn't check for
> that condition, nor does (probably) much other code that used the
> standard library as an example.  Worse, it seems that on some
> platforms send() *never* does a partial write (I couldn't
> reproduce it on Red Hat 6.1 Linux), so even stress testing may
> not reveal the lurking problem.

I'm quite sure you can force it with a non-blocking socket (on 
RH 5.2  64K blocks did it - but that's across a 10baseT 
ethernet connection).
 
> Possible solutions:
> 
> 1. Do nothing.  Pro: least work.  Con: subtle bugs remain.

Yes, but they're application-level bugs, even if they're in the 
std lib. They're not socket-support level bugs.
 
> 2. Fix all code that's broken in the standard library, and try to
> encourage others to fix their code.  Book authors need to be
> encouraged to add a warning.  Pro: most thorough.  Con: hard to
> fix every occurrence, especially in 3rd party code.

As far as I can tell, Linux and Windows can't fail with the std 
lib code (it's all blocking sockets). Sam says BSDI could fail, 
and as I recall HPUX could too.
 
> 3. Fix the socket module to raise an exception when less than the
> number of bytes sent occurs.  Pro: subtle bug exposed when it
> happens.  Con: breaks code that did the right thing!
> 
> 4. Fix the socket module to loop back on a partial send to send
> the remaining bytes.  Pro: no more short writes.  Con: what if
> the first few send() calls succeed and then an error is returned?
>  Note: code that checks for partial writes will be redundant!

If you can exempt non-blocking sockets, either 3 or 4 
(preferably 4) is OK. But if you can't exempt non-blocking 
sockets, I'll scream bloody murder. It would mean you could 
not write high performance socket code in Python (which it 
currently is very good for). For one thing, you'd break Medusa.
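For concreteness, option 4's looping send amounts to the familiar pattern
below -- a sketch for blocking stream sockets only, which is precisely why
non-blocking sockets would have to be exempt:

```python
def send_all(sock, data):
    """Call send() until every byte is written (blocking sockets only).
    On a stream socket a partial write is legal; send() returns the
    number of bytes actually sent."""
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise RuntimeError("connection closed by peer")
        total += sent
    return total
```

On a non-blocking socket send() instead fails or returns a short count
that the caller must handle itself, so looping here would defeat the
point of non-blocking I/O.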
 
> I'm personally in favor of (4), despite the problem with errors
> after the first call.

The sockets HOWTO already documents the problem. Too 
bad I didn't write it before that std lib code got written <wink>.

I still prefer leaving it alone and telling people to use makefile if 
they can't deal with it. I'll vote +0 on 4 if and only if it exempts 
nonblocking sockets.

- Gordon



From nowonder at nowonder.de  Sat Aug 12 01:48:20 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Fri, 11 Aug 2000 23:48:20 +0000
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <399490C4.F234D68A@nowonder.de>

Bill Tutt wrote:
> 
> This is an alternative approach that we should certainly consider. We could
> use ANTLR (www.antlr.org) as our parser generator, and have it generate Java
> for JPython, and C++ for CPython.  This would be a good chunk of work, and
> it's something I really don't have time to pursue. I don't even have time to
> pursue the idea about moving keyword recognition into the lexer.

<disclaimer val="I have only used ANTLR to generate Java code, and not for
 a parser but for a Java source code checker that tries to catch possible
 runtime errors.">

ANTLR is a great tool. Unfortunately - although trying hard to change
it this morning in order to suppress keyword lookup in certain places -
I don't know anything about the interface between Python and its
parser. Is there some documentation on that (or can some divine deity
guide me with a few hints where to look in Parser/*)?

> I'm just not sure if you want to bother introducing C++ into the Python
> codebase solely to only have one parser for CPython and JPython.

Which compilers/platforms would this affect? VC++/Windows
won't be a problem, I guess; gcc mostly comes with g++,
but not always as a default. Probably more problematic.

don't-know-about-VMS-and-stuff-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From guido at beopen.com  Sat Aug 12 00:56:23 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 17:56:23 -0500
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:30:31 MST."
             <20000811143031.A13790@ActiveState.com> 
References: <20000811143031.A13790@ActiveState.com> 
Message-ID: <200008112256.RAA01675@cj20424-a.reston1.va.home.com>

> These files (PCbuild/*.dsw PCbuild/*.dsp) are just normal text files. Why,
> then, do we treat them as binary files.

DevStudio doesn't (or at least 5.x didn't) like it if not all lines
used CRLF terminators.

> Would it not be preferable to have those files be handled like a normal text
> files, i.e. check it out on Unix and it uses Unix line terminators, check it
> out on Windows and it uses DOS line terminators.

I think I made them binary during the period when I was mounting the
Unix source directory on a Windows machine.  I don't do that any more
and I don't know anyone who does, so I think it's okay to change.

> This way you are using the native line terminator format and text processing
> tools you use on them are less likely to screw them up. (Anyone see my
> motivation?).
> 
> Does anybody see any problems treating them as text files? And, if not, who
> knows how to get rid of the '-kb' sticky tag on those files.

cvs admin -kkv file ...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Sat Aug 12 01:00:29 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 11 Aug 2000 18:00:29 -0500
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of d
In-Reply-To: Your message of "Fri, 11 Aug 2000 17:34:54 -0400."
             <1246092799-125389828@hypernet.com> 
References: Your message of "Fri, 11 Aug 2000 13:04:26 -0400." <1246109027-124413737@hypernet.com>  
            <1246092799-125389828@hypernet.com> 
Message-ID: <200008112300.SAA01726@cj20424-a.reston1.va.home.com>

> > 4. Fix the socket module to loop back on a partial send to send
> > the remaining bytes.  Pro: no more short writes.  Con: what if
> > the first few send() calls succeed and then an error is returned?
> >  Note: code that checks for partial writes will be redundant!
> 
> If you can exempt non-blocking sockets, either 3 or 4 
> (preferably 4) is OK. But if you can't exempt non-blocking 
> sockets, I'll scream bloody murder. It would mean you could 
> not write high performance socket code in Python (which it 
> currently is very good for). For one thing, you'd break Medusa.

Of course.  Don't worry.

> > I'm personally in favor of (4), despite the problem with errors
> > after the first call.
> 
> The sockets HOWTO already documents the problem. Too 
> bad I didn't write it before that std lib code got written <wink>.
> 
> I still prefer leaving it alone and telling people to use makefile if 
> they can't deal with it. I'll vote +0 on 4 if and only if it exempts 
> nonblocking sockets.

Understood.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From fdrake at beopen.com  Sat Aug 12 00:21:18 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 18:21:18 -0400 (EDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
In-Reply-To: <399490C4.F234D68A@nowonder.de>
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
	<399490C4.F234D68A@nowonder.de>
Message-ID: <14740.31838.336790.710005@cj42289-a.reston1.va.home.com>

Peter Schneider-Kamp writes:
 > parser. Is there some documentation on that (or can some divine deity
 > guide me with a few hints where to look in Parser/*)?

  Not that I'm aware of!  Feel free to write up any overviews you
think appropriate, and it can become part of the standard
documentation or be a README in the Parser/ directory.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Sat Aug 12 02:59:22 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 11 Aug 2000 20:59:22 -0400
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <20000811143031.A13790@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFKGPAA.tim_one@email.msn.com>

[Trent Mick]
> These files (PCbuild/*.dsw PCbuild/*.dsp) are just normal text files.
> Why, then, do we treat them as binary files.
>
> Would it not be preferable to have those files be handled like
> a normal text files, i.e. check it out on Unix and it uses Unix
> line terminators, check it out on Windows and it uses DOS line
> terminators.
>
> This way you are using the native line terminator format and text
> processing tools you use on them are less likely to screw them up.
> (Anyone see my motivation?).

Not really.  They're not human-editable!  Leave 'em alone.  Keeping them in
binary mode is a clue to people that they aren't *supposed* to go mucking
with them via text processing tools.

> Does anybody see any problems treating them as text files? And,
> if not, who knows how to get rid of the '-kb' sticky tag on those
> files.

Well, whatever you did didn't work.  I'm dead in the water on Windows now --
VC6 refuses to open the new & improved .dsw and .dsp files.  I *imagine*
it's because they've got Unix line-ends now, but haven't yet checked.  Can
you fix it or back it out?





From skip at mojam.com  Sat Aug 12 03:07:31 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 11 Aug 2000 20:07:31 -0500 (CDT)
Subject: [Python-Dev] list comprehensions
Message-ID: <14740.41811.590487.13187@beluga.mojam.com>

I believe the latest update to the list comprehensions patch by Ping
resolved the last concern the BDFL(*) had.  As the owner of the patch, is
it my responsibility to check it in, or do I need to assign it to Guido for
final dispensation?

Skip

(*) Took me a week or so to learn what BDFL meant.  :-) I tried a lot of
"somewhat inaccurate" expansions before seeing it expanded in a message from
Barry...



From esr at thyrsus.com  Sat Aug 12 04:50:17 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Fri, 11 Aug 2000 22:50:17 -0400
Subject: [Python-Dev] Stop the presses!
Message-ID: <20000811225016.A18449@thyrsus.com>

The bad news: I've found another reproducible core-dump bug in
Python-2.0 under Linux.  Actually I found it in 1.5.2 while making
some changes to CML2, and just verified that the CVS snapshot of
Python 2.0 bombs identically.

The bad news II: it really seems to be in the Python core, not one of
the extensions like Tkinter.  My curses and Tk front ends both
segfault in the same place, the guard of an innocuous-looking if
statement.

The good news: the patch to go from code-that-runs to code-that-bombs
is pretty small and clean.  I suspect anybody who really knows the ceval
internals will be able to use it to nail this bug fairly quickly.

Damn, seems like I found the core dump in Pickle just yesterday.  This
is getting to be a habit I don't enjoy much :-(.

I'm putting together a demonstration package now.  Stay tuned; I'll 
ship it tonight.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"One of the ordinary modes, by which tyrants accomplish their purposes
without resistance, is, by disarming the people, and making it an
offense to keep arms."
        -- Constitutional scholar and Supreme Court Justice Joseph Story, 1840



From ping at lfw.org  Sat Aug 12 04:56:50 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 11 Aug 2000 19:56:50 -0700 (PDT)
Subject: [Python-Dev] Stop the presses!
In-Reply-To: <20000811225016.A18449@thyrsus.com>
Message-ID: <Pine.LNX.4.10.10008111956070.2615-100000@localhost>

On Fri, 11 Aug 2000, Eric S. Raymond wrote:
> The good news: the patch to go from code-that-runs to code-that-bombs
> is pretty small and clean.  I suspect anybody who really knows the ceval
> internals will be able to use it to nail this bug fairly quickly.
[...]
> I'm putting together a demonstration package now.  Stay tuned; I'll 
> ship it tonight.

Oooh, i can't wait.  How exciting!  Post it, post it!  :)


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu




From fdrake at beopen.com  Sat Aug 12 05:30:23 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 11 Aug 2000 23:30:23 -0400 (EDT)
Subject: [Python-Dev] list comprehensions
In-Reply-To: <14740.41811.590487.13187@beluga.mojam.com>
References: <14740.41811.590487.13187@beluga.mojam.com>
Message-ID: <14740.50383.386575.806754@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:

 > I believe the latest update to the list comprehensions patch by
 > Ping resolved the last concern the BDFL(*) had.  As the owner of
 > the patch is it my responsibility to check it in or do I need to
 > assign it to Guido for final dispensation.

  Given the last comment added to the patch, check it in and close the
patch.  Then finish the PEP so we don't have to explain it over and
over and ...  ;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From esr at thyrsus.com  Sat Aug 12 05:56:33 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Fri, 11 Aug 2000 23:56:33 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
Message-ID: <20000811235632.A19358@thyrsus.com>

Here are the directions to reproduce the core dump.

1. Download and unpack CML2 version 0.7.6 from 
   <http://www.tuxedo.org/~esr/kbuild/>.  Change directory into it.

2. Do `cmlcompile.py kernel-rules.cml' to generate a pickled rulebase.

3. Run `make xconfig'.  Ignore the error message about the arch defconfig.

4. Set NETDEVICES on the main menu to 'y'.
5. Select the "Network Device Configuration" menu that appears below.
6. Set PPP to 'y'.
7. Select the "PPP options" menu that appears below it.
8. Set PPP_BSDCOMP to 'y'.

9. Observe and dismiss the pop-up window.  Quit the configurator using the
   "File" menu on the menu bar.

10. Now apply the attached patch.

11. Repeat steps 2-10.  

12. Observe the core dump.  If you look near cmlsystem.py:770, you'll see
    that the patch inserted two print statements that bracket the apparent
    point of the core dump.

13. To verify that this core dump is neither a Tkinter nor an ncurses problem,
    run `make menuconfig'.

14. Repeat steps 2-8.  To set symbols in the curses interface, use the arrow
    keys to select each one and type "y".  To select a menu, use the arrow
    keys and type a space or Enter when the selection bar is over the entry.

15. Observe the core dump at the same spot.

This bug bites both a stock 1.5.2 and today's CVS snapshot of 2.0.

--- cml.py	2000/08/12 03:21:40	1.97
+++ cml.py	2000/08/12 03:25:45
@@ -111,6 +111,21 @@
         res = res + self.dump()
         return res[:-1] + "}"
 
+class Requirement:
+    "A requirement, together with a message to be shown if it's violated."
+    def __init__(self, wff, message=None):
+        self.predicate = wff
+        self.message = message
+
+    def str(self):
+        return display_expression(self.predicate)
+
+    def __repr__(self):
+        if self.message:
+            return self.message
+        else:
+            return str(self)
+
 # This describes an entire configuration.
 
 class CMLRulebase:
--- cmlcompile.py	2000/08/10 16:22:39	1.131
+++ cmlcompile.py	2000/08/12 03:24:31
@@ -12,7 +12,7 @@
 
 _keywords = ('symbols', 'menus', 'start', 'unless', 'suppress',
 	    'dependent', 'menu', 'choices', 'derive', 'default',
-	    'range', 'require', 'prohibit', 'private', 'debug',
+	    'range', 'require', 'prohibit', 'explanation', 'private', 'debug',
 	    'helpfile', 'prefix', 'banner', 'icon', 'condition',
 	    'trits', 'on', 'warndepend')
 
@@ -432,7 +432,14 @@
             expr = parse_expr(input)
             if leader.type == "prohibit":
                 expr = ('==', expr, cml.n.value)
-	    requirements.append(expr)	    
+            msg = None
+            #next = input.lex_token()
+            #if next.type != 'explanation':
+            #    input.push_token(next)
+            #    continue
+            #else:
+            #    msg = input.demand("word")
+	    requirements.append(cml.Requirement(expr, msg))	    
 	    bool_tests.append((expr, input.infile, input.lineno))
 	elif leader.type == "default":
 	    symbol = input.demand("word")
@@ -746,7 +753,7 @@
             entry.visibility = resolve(entry.visibility)
 	if entry.default:
 	    entry.default = resolve(entry.default)
-    requirements = map(resolve, requirements)
+    requirements = map(lambda x: cml.Requirement(resolve(x.predicate), x.message), requirements)
     if bad_symbols:
 	sys.stderr.write("cmlcompile: %d symbols could not be resolved:\n"%(len(bad_symbols),))
 	sys.stderr.write(`bad_symbols.keys()` + "\n")
@@ -868,7 +875,7 @@
     # rule file are not consistent, it's not likely the user will make
     # a consistent one.
     for wff in requirements:
-	if not cml.evaluate(wff, debug):
+	if not cml.evaluate(wff.predicate, debug):
 	    print "cmlcompile: constraint violation:", wff
 	    errors = 1
 
--- cmlsystem.py	2000/07/25 04:24:53	1.98
+++ cmlsystem.py	2000/08/12 03:29:21
@@ -28,6 +28,7 @@
     "INCAUTOGEN":"/*\n * Automatically generated, don't edit\n */\n",
     "INCDERIVED":"/*\n * Derived symbols\n */\n",
     "ISNOTSET":"# %s is not set\n",
+    "NOTRITS":"Trit values are not currently allowed.",
     "RADIOINVIS":"    Query of choices menu %s elided, button pressed",
     "READING":"Reading configuration from %s",
     "REDUNDANT":"    Redundant assignment forced by %s", 
@@ -100,10 +101,10 @@
         "Assign constraints to their associated symbols."
         for entry in self.dictionary.values():
             entry.constraints = []
-        for wff in self.constraints:
-            for symbol in cml.flatten_expr(wff):
-                if not wff in symbol.constraints:
-                    symbol.constraints.append(wff)
+        for requirement in self.constraints:
+            for symbol in cml.flatten_expr(requirement.predicate):
+                if not requirement.predicate in symbol.constraints:
+                    symbol.constraints.append(requirement)
         if self.debug:
             cc = dc = tc = 0
             for symbol in self.dictionary.values():
@@ -436,8 +437,8 @@
         if symbol.constraints:
             self.set_symbol(symbol, value)
             for constraint in symbol.constraints:
-                if not cml.evaluate(constraint, self.debug):
-                    self.debug_emit(1, self.lang["CONSTRAINT"] % (value, symbol.name, constraint))
+                if not cml.evaluate(constraint.predicate, self.debug):
+                    self.debug_emit(1, self.lang["CONSTRAINT"] % (value, symbol.name, str(constraint)))
                     self.rollback()
                     return 0
             self.rollback()
@@ -544,7 +545,7 @@
         # be unfrozen.  Simplify constraints to remove newly frozen variables.
         # Then rerun optimize_constraint_access.
         if freeze:
-            self.constraints = map(lambda wff, self=self: self.simplify(wff), self.constraints)
+            self.constraints = map(lambda requirement, self=self: cml.Requirement(self.simplify(requirement.predicate), requirement.message), self.constraints)
             self.optimize_constraint_access()
             for entry in self.dictionary.values():
                 if self.bindcheck(entry, self.newbindings) and entry.menu and entry.menu.type=="choices":
@@ -559,7 +560,7 @@
         violations = []
         # First, check the explicit constraints.
         for constraint in self.constraints:
-            if not cml.evaluate(constraint, self.debug):
+            if not cml.evaluate(constraint.predicate, self.debug):
                 violations.append(constraint);
                 self.debug_emit(1, self.lang["FAILREQ"] % (constraint,))
         # If trits are disabled, any variable having a trit value is wrong.
@@ -570,7 +571,7 @@
                     mvalued = ('and', ('!=', entry,cml.m), mvalued)
             if mvalued != cml.y:
                mvalued = self.simplify(mvalued)
-               violations.append(('implies', ('==', self.trit_tie, cml.n), mvalued))
+               violations.append(cml.Requirement(('implies', ('==', self.trit_tie, cml.n), mvalued), self.lang["NOTRITS"]))
         return violations
 
     def set_symbol(self, symbol, value, source=None):
@@ -631,10 +632,10 @@
         dups = {}
         relevant = []
         for csym in touched:
-            for wff in csym.constraints:
-                if not dups.has_key(wff):
-                    relevant.append(wff)
-                    dups[wff] = 1
+            for requirement in csym.constraints:
+                if not dups.has_key(requirement.predicate):
+                    relevant.append(requirement.predicate)
+                    dups[requirement.predicate] = 1
         # Now loop through the constraints, simplifying out assigned
         # variables and trying to freeze more variables each time.
         # The outer loop guarantees that as long as the constraints
@@ -765,7 +766,9 @@
                     else:
                         self.set_symbol(left, cml.n.value, source)
                         return 1
+                print "Just before the core-dump point"
                 if right_mutable and left == cml.n.value:
+                    print "Just after the core-dump point"
                     if rightnonnull == cml.n.value:
                         self.debug_emit(1, self.lang["REDUNDANT"] % (wff,))
                         return 0

End of diffs,

-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders, give
orders, cooperate, act alone, solve equations, analyze a new problem,
pitch manure, program a computer, cook a tasty meal, fight efficiently,
die gallantly. Specialization is for insects.
	-- Robert A. Heinlein, "Time Enough for Love"



From nhodgson at bigpond.net.au  Sat Aug 12 06:16:54 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Sat, 12 Aug 2000 14:16:54 +1000
Subject: [Python-Dev] Winreg update
References: <3993FEC7.4E38B4F1@prescod.net>
Message-ID: <045901c00414$27a67010$8119fea9@neil>

> .... It also seems that there are a lot of people (let's
> call them "back seat coders") who have vague ideas of what they want but
> don't want to spend a bunch of time in a long discussion about registry
> arcana. Therefore I am endevouring to make it as easy and fast to
> contribute to the discussion as possible.

   I think a lot of the registry-using people are unwilling to spend too
much energy on this because, while it looks useless, it's not really going
to be a problem so long as the low-level module is available.

> If you're one of the people who has asked for winreg in the core then
> you should respond. It isn't (IMO) sufficient to put in a hacky API to
> make your life easier. You need to give something to get something. You
> want windows registry support in the core -- fine, let's do it properly.

   Hacky API only please.

   The registry is just not important enough to have this much attention or
work.

> All you need to do is read this email and comment on whether you agree
> with the overall principle and then give your opinion on fifteen
> possibly controversial issues. The "overall principle" is to steal
> shamelessly from Microsoft's new C#/VB/OLE/Active-X/CRL API instead of
> innovating for Python. That allows us to avoid starting the debate from
> scratch. It also eliminates the feature that Mark complained about
> (which was a Python-specific innovation).

   The Microsoft.Win32.Registry* API appears to be a hacky legacy API to me.
It's there for compatibility during the transition to the
System.Configuration API. Read the blurb for ConfigManager to understand the
features of System.Configuration. It's all based on XML files. What a
surprise.

   Neil




From ping at lfw.org  Sat Aug 12 06:52:54 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 11 Aug 2000 21:52:54 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <20000811235632.A19358@thyrsus.com>
Message-ID: <Pine.LNX.4.10.10008112149280.2615-100000@localhost>

On Fri, 11 Aug 2000, Eric S. Raymond wrote:
> Here are the directions to reproduce the core dump.

I have successfully reproduced the core dump.


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu




From ping at lfw.org  Sat Aug 12 06:57:02 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 11 Aug 2000 21:57:02 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008112149280.2615-100000@localhost>
Message-ID: <Pine.LNX.4.10.10008112156180.2615-100000@localhost>

On Fri, 11 Aug 2000, Ka-Ping Yee wrote:
> I have successfully reproduced the core dump.

I'm investigating.  Top of the stack looks like:

#0  0x40061e39 in __pthread_lock (lock=0x0, self=0x40067f20) at spinlock.c:41
#1  0x4005f8aa in __pthread_mutex_lock (mutex=0xbfe0277c) at mutex.c:92
#2  0x400618cb in __flockfile (stream=0xbfe02794) at lockfile.c:32
#3  0x400d2955 in _IO_vfprintf (s=0xbfe02794, 
    format=0x80d1500 "'%.50s' instance has no attribute '%.400s'", 
    ap=0xbfe02a54) at vfprintf.c:1041
#4  0x400e00b3 in _IO_vsprintf (string=0xbfe02850 "?/??", 
    format=0x80d1500 "'%.50s' instance has no attribute '%.400s'", 
    args=0xbfe02a54) at iovsprintf.c:47
#5  0x80602c5 in PyErr_Format (exception=0x819783c, 
    format=0x80d1500 "'%.50s' instance has no attribute '%.400s'")
    at errors.c:377
#6  0x806eac4 in instance_getattr1 (inst=0x84ecdd4, name=0x81960a8)
    at classobject.c:594
#7  0x806eb97 in instance_getattr (inst=0x84ecdd4, name=0x81960a8)
    at classobject.c:639
#8  0x807b445 in PyObject_GetAttrString (v=0x84ecdd4, name=0x80d306b "__str__")
    at object.c:595
#9  0x807adf8 in PyObject_Str (v=0x84ecdd4) at object.c:291
#10 0x8097d1e in builtin_str (self=0x0, args=0x85adc3c) at bltinmodule.c:2034
#11 0x805a490 in call_builtin (func=0x81917e0, arg=0x85adc3c, kw=0x0)
    at ceval.c:2369


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu




From tim_one at email.msn.com  Sat Aug 12 08:29:42 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 02:29:42 -0400
Subject: [Python-Dev] list comprehensions
In-Reply-To: <14740.41811.590487.13187@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFPGPAA.tim_one@email.msn.com>

[Skip Montanaro]
> I believe the latest update to the list comprehensions patch by Ping
> resolved the last concern the BDFL(*) had.  As the owner of the
> patch is it my responsibility to check it in or do I need to assign
> it to Guido for final dispensation?

As the owner of the listcomp PEP, I both admonish you to wait until the PEP
is complete, and secretly encourage you to check it in anyway (unlike most
PEPs, this one is pre-approved no matter what I write <0.5 wink> -- better
to get the code out there now!  if anything changes due to the PEP, should
be easy to twiddle).

acting-responsibly-despite-appearances-ly y'rs  - tim





From tim_one at email.msn.com  Sat Aug 12 09:32:17 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 03:32:17 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <20000811235632.A19358@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>

[Eric S. Raymond, with a lot of code that dies in a lot of pain]

Eric, as I'm running on a Windows laptop right now, there's not much I can
do to try to run this code.  However, something struck me in your patch "by
eyeball", and here's a self-contained program that crashes under Windows:

# This is esr's new class.
class Requirement:
    "A requirement, together with a message to be shown if it's violated."
    def __init__(self, wff, message=None):
        self.predicate = wff
        self.message = message

    def str(self):
        return display_expression(self.predicate)

    def __repr__(self):
        if self.message:
            return self.message
        else:
            return str(self)

# This is my driver.
r = Requirement("trust me, I'm a wff!")
print r


Could that be related to your problem?  I think you really wanted to name
"str" as "__str__" in this class (or if not, comment in extreme detail why
you want to confuse the hell out of the reader <wink>).  As is, my

    print r

attempts to look up r.__str__, which isn't found, so Python falls back
to using r.__repr__.  That *is* found, but r.message is None, so
Requirement.__repr__ executes

    return str(self)

And then we go thru the whole "no __str__" -> "try __repr__" -> "message is
None" -> "return str(self)" business again, and end up with unbounded
recursion.  The top of the stacktrace Ping posted *does* show that the code
is failing to find a "__str__" attr, so that's consistent with the scenario
described here.

If this is the problem, note that ways to detect such kinds of unbounded
recursion have been discussed here within the last week.  You're a clever
enough fellow that I have to suspect you concocted this test case as a way
to support the more extreme of those proposals without just saying "+1"
<wink>.
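A minimal sketch of the suggested fix: rename "str" to "__str__" so that
str() actually finds a hook instead of falling back to __repr__. The
original display_expression helper isn't shown in the patch excerpt, so a
plain format stands in for it here:

```python
class Requirement:
    "A requirement, together with a message to be shown if it's violated."
    def __init__(self, wff, message=None):
        self.predicate = wff
        self.message = message

    def __str__(self):
        # renamed from "str" -- now str() actually finds this hook
        return "wff: %s" % (self.predicate,)

    def __repr__(self):
        if self.message:
            return self.message
        else:
            return str(self)  # terminates: str() dispatches to __str__

r = Requirement("trust me, I'm a wff!")
print(r)  # no more unbounded recursion
```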





From ping at lfw.org  Sat Aug 12 10:09:53 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Sat, 12 Aug 2000 01:09:53 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008112156180.2615-100000@localhost>
Message-ID: <Pine.LNX.4.10.10008120107140.2615-100000@localhost>

On Fri, 11 Aug 2000, Ka-Ping Yee wrote:
> On Fri, 11 Aug 2000, Ka-Ping Yee wrote:
> > I have successfully reproduced the core dump.
> 
> I'm investigating.  Top of the stack looks like:

This chunk of stack repeats lots and lots of times.
The problem is due to infinite recursion in your __repr__ routine:

    class Requirement:
        "A requirement, together with a message to be shown if it's violated."
        def __init__(self, wff, message=None):
            self.predicate = wff
            self.message = message

        def str(self):
            return display_expression(self.predicate)

        def __repr__(self):
            if self.message:
                return self.message
            else:
                return str(self)

Notice that Requirement.__repr__ calls str(self), which triggers
Requirement.__repr__ again because there is no __str__ method.

If i change "def str(self)" to "def __str__(self)", the problem goes
away and everything works properly.

With a reasonable stack depth limit in place, this would produce
a run-time error rather than a segmentation fault.
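To illustrate the point, a sketch (the class name is made up; older
interpreters raise RuntimeError where modern ones raise RecursionError):
with any finite depth limit in place, the same str/__repr__ loop surfaces
as a catchable exception instead of a crash:

```python
import sys

class Looper:
    # same shape as the buggy class: __repr__ calls str(self), and
    # with no __str__ defined, str() falls back to __repr__ again
    def __repr__(self):
        return str(self)

sys.setrecursionlimit(100)  # a modest, reasonable limit
try:
    repr(Looper())
except RecursionError:      # RuntimeError on older interpreters
    print("runaway recursion caught as an exception")
```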


-- ?!ng

"This code is better than any code that doesn't work has any right to be."
    -- Roger Gregory, on Xanadu




From ping at lfw.org  Sat Aug 12 10:22:40 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Sat, 12 Aug 2000 01:22:40 -0700 (PDT)
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>
Message-ID: <Pine.LNX.4.10.10008120117500.2615-100000@localhost>

On Sat, 12 Aug 2000, Tim Peters wrote:
> Could that be related to your problem?  I think you really wanted to name
> "str" as "__str__" in this class

Oops.  I guess i should have just read the code before going
through the whole download procedure.

Uh, yeah.  What he said.  :)  That wise man with the moustache over there.


One thing i ran into as a result of trying to run it under the
debugger, though: turning on cursesmodule was slightly nontrivial.
There's no cursesmodule.c; it's _cursesmodule.c instead; but
Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
wasn't sufficient; i had to edit and insert the underscores by hand
to get curses to work.


-- ?!ng




From effbot at telia.com  Sat Aug 12 11:12:19 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 11:12:19 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of data sent.
References: <200008111255.HAA03735@cj20424-a.reston1.va.home.com> <20000811143143.G17171@xs4all.nl> <200008111419.JAA03948@cj20424-a.reston1.va.home.com> <016d01c0039b$bfb99a40$0900a8c0@SPIFF>              <20000811162109.I17171@xs4all.nl>  <200008111556.KAA05068@cj20424-a.reston1.va.home.com>
Message-ID: <00d301c0043d$7eb0b540$f2a6b5d4@hagrid>

guido wrote:
> > Indeed. I didn't actually check the story, since Guido was apparently
> > convinced by its validity.
> 
> I wasn't convinced!  I wrote "is this true?" in my message!!!
> 
> > I was just operating under the assumption that
> > send() did behave like write(). I won't blindly believe Guido anymore ! :)
> 
> I believe they do behave the same: in my mind, write() doesn't write
> fewer bytes than you tell it either!  (Except maybe to a tty device
> when interrupted by a signal???)

SUSv2 again:

    If a write() requests that more bytes be written than there
    is room for (for example, the ulimit or the physical end of a
    medium), only as many bytes as there is room for will be
    written. For example, suppose there is space for 20 bytes
    more in a file before reaching a limit. A write of 512 bytes
    will return 20. The next write of a non-zero number of bytes
    will give a failure return (except as noted below)  and the
    implementation will generate a SIGXFSZ signal for the thread. 

    If write() is interrupted by a signal before it writes any data,
    it will return -1 with errno set to [EINTR]. 

    If write() is interrupted by a signal after it successfully writes
    some data, it will return the number of bytes written. 

sockets are an exception:

    If fildes refers to a socket, write() is equivalent to send() with
    no flags set.

fwiw, if "send" may send less than the full buffer in blocking
mode on some platforms (despite what the specification implies),
it's quite interesting that *nobody* has ever noticed before...

</F>
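Should a platform ever return a short count from a blocking send() after
all, a small retry loop is cheap insurance. send_all here is a hypothetical
helper sketch, not an existing API:

```python
def send_all(sock, data):
    """Keep calling send() until every byte of data has gone out,
    since send() is allowed to report a short count."""
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise RuntimeError("socket connection broken")
        total += sent
    return total
```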




From effbot at telia.com  Sat Aug 12 11:13:45 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 11:13:45 +0200
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
References: <20000811143031.A13790@ActiveState.com>  <200008112256.RAA01675@cj20424-a.reston1.va.home.com>
Message-ID: <00d601c0043d$a2e66c20$f2a6b5d4@hagrid>

guido wrote:
> I think I made them binary during the period when I was mounting the
> Unix source directory on a Windows machine.  I don't do that any more
> and I don't know anyone who does

we do.

trent wrote:
> > Does anybody see any problems treating them as text files?

developer studio 5.0 does:

    "This makefile was not generated by Developer Studio"

    "Continuing will create a new Developer Studio project to
    wrap this makefile. You will be prompted to save after the
    new project has been created".

    "Do you want to continue"

    (click yes)

    "The options file (.opt) for this workspace specified a project
    configuration "... - Win32 Alpha Release" that no longer exists.
    The configuration will be set to "... - Win32 Debug"

    (click OK)

    (click build)

    "MAKE : fatal error U1052: file '....mak' not found"

</F>




From thomas at xs4all.net  Sat Aug 12 11:21:19 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 11:21:19 +0200
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008120117500.2615-100000@localhost>; from ping@lfw.org on Sat, Aug 12, 2000 at 01:22:40AM -0700
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost>
Message-ID: <20000812112119.C14470@xs4all.nl>

On Sat, Aug 12, 2000 at 01:22:40AM -0700, Ka-Ping Yee wrote:

> One thing i ran into as a result of trying to run it under the
> debugger, though: turning on cursesmodule was slightly nontrivial.
> There's no cursesmodule.c; it's _cursesmodule.c instead; but
> Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
> wasn't sufficient; i had to edit and insert the underscores by hand
> to get curses to work.

You should update your Setup file, then ;) Compare it with Setup.in and see
what else changed since the last time you configured Python.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From martin at loewis.home.cs.tu-berlin.de  Sat Aug 12 11:29:25 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 12 Aug 2000 11:29:25 +0200
Subject: [Python-Dev] Processing CVS diffs
Message-ID: <200008120929.LAA01434@loewis.home.cs.tu-berlin.de>

While looking at the comments for Patch #100654, I noticed a complaint
about the patch being a CVS diff, which is not easily processed by
patch.

There is a simple solution to that: process the patch with the script
below. It will change the patch in-place, and it works well for me
even though it is written in the Evil Language :-)

Martin

#! /usr/bin/perl -wi
# Propagate the full pathname from the Index: line in CVS output into
# the diff itself so that patch will use it.
#  Thrown together by Jason Merrill <jason at cygnus.com>

while (<>)
{
  if (/^Index: (.*)/) 
    {
      $full = $1;
      print;
      for (1..7)
	{
	  $_ = <>;
	  s/ [^\t]+\t/ $full\t/;
	  print;
	}
    }
  else
    {
      print;
    }
}
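A rough Python equivalent of the script above, for the Perl-averse (the
function name is made up; it performs the same substitution on the seven
lines following each Index: line):

```python
import re

def propagate_index(lines):
    """Copy the full path from each 'Index:' line of CVS output into
    the diff header lines that follow, so patch(1) uses that path."""
    out = []
    lines = iter(lines)
    for line in lines:
        out.append(line)
        m = re.match(r'Index: (.*)', line)
        if m:
            full = m.group(1)
            for _ in range(7):
                nxt = next(lines, None)
                if nxt is None:
                    break
                # same substitution as the Perl: s/ [^\t]+\t/ $full\t/
                out.append(re.sub(r' [^\t]+\t',
                                  lambda mo: ' ' + full + '\t',
                                  nxt, count=1))
    return out
```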





From mal at lemburg.com  Sat Aug 12 11:48:25 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 12 Aug 2000 11:48:25 +0200
Subject: [Python-Dev] Python Announcements ???
Message-ID: <39951D69.45D01703@lemburg.com>

Could someone at BeOpen please check what happened to the
python-announce mailing list ?!

Messages to that list don't seem to show up anywhere and I've
been getting strange reports from the mail manager software in
the past when I've tried to post there.

Also, what happened to the idea of hooking that list onto
the c.l.p.a newsgroup? I don't remember the details of
how this is done (it had something to do with adding an
Approved: header), but this would be very helpful.

The Python community currently has no proper way of
announcing new projects, software or gigs. A post to
c.l.p which has grown to be a >4K posts/month list does
not have the same momentum as pushing it through c.l.p.a
had in the past.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From just at letterror.com  Sat Aug 12 13:51:31 2000
From: just at letterror.com (Just van Rossum)
Date: Sat, 12 Aug 2000 12:51:31 +0100
Subject: [Python-Dev] Preventing recursion core dumps
Message-ID: <l03102803b5bae5eb2fe1@[193.78.237.125]>

(Sorry for the late reply, that's what you get when you don't Cc me...)

Vladimir Marangozov wrote:
> [Just]
> > Gordon, how's that Stackless PEP coming along?
> > Sorry, I couldn't resist ;-)
>
> Ah, in this case, we'll get a memory error after filling the whole disk
> with frames <wink>

No matter how much we wink to each other, that was a cheap shot; especially
since it isn't true: Stackless has a MAX_RECURSION_DEPTH value. Someone who
has studied Stackless "in detail" (your words ;-) should know that.

Admittedly, that value is set way too high in the last stackless release
(123456 ;-), but that doesn't change the principle that Stackless could
solve the problem discussed in this thread in a reliable and portable
manner.

Of course there'd be work to do:
- MAX_RECURSION_DEPTH should be changeable at runtime
- __str__ (and a bunch of others) isn't yet stackless
- ...

But the hardest task seems to be to get rid of the hostility and prejudices
against Stackless :-(

Just





From esr at thyrsus.com  Sat Aug 12 13:22:55 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:22:55 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 03:32:17AM -0400
References: <20000811235632.A19358@thyrsus.com> <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com>
Message-ID: <20000812072255.C20109@thyrsus.com>

Tim Peters <tim_one at email.msn.com>:
> If this is the problem, note that ways to detect such kinds of unbounded
> recursion have been discussed here within the last week.  You're a clever
> enough fellow that I have to suspect you concocted this test case as a way
> to support the more extreme of those proposals without just saying "+1"
> <wink>.

I may be that clever, but I ain't that devious.  I'll try the suggested
fix.  Very likely you're right, though the location of the core dump
is peculiar if this is the case.  It's inside bound_from_constraint(),
whereas in your scenario I'd expect it to be in the Requirement method code.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The day will come when the mystical generation of Jesus by the Supreme
Being as his father, in the womb of a virgin, will be classed with the
fable of the generation of Minerva in the brain of Jupiter.
	-- Thomas Jefferson, 1823



From esr at thyrsus.com  Sat Aug 12 13:34:19 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:34:19 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <Pine.LNX.4.10.10008120117500.2615-100000@localhost>; from ping@lfw.org on Sat, Aug 12, 2000 at 01:22:40AM -0700
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost>
Message-ID: <20000812073419.D20109@thyrsus.com>

Ka-Ping Yee <ping at lfw.org>:
> One thing i ran into as a result of trying to run it under the
> debugger, though: turning on cursesmodule was slightly nontrivial.
> There's no cursesmodule.c; it's _cursesmodule.c instead; but
> Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
> wasn't sufficient; i had to edit and insert the underscores by hand
> to get curses to work.

Your Setup is out of date.

But this reminds me.  There's way too much hand-hacking in the Setup
mechanism.  It wouldn't be hard to enhance the Setup format to support
#if/#endif so that config.c generation could take advantage of
configure tests.  That way, Setup could have constructs in it like
this:

#if defined(CURSES)
#if defined(linux)
_curses _cursesmodule.c -lncurses
#else
_curses _cursesmodule.c -lcurses -ltermcap
#endif
#endif

I'm willing to do and test this.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The right of the citizens to keep and bear arms has justly been considered as
the palladium of the liberties of a republic; since it offers a strong moral
check against usurpation and arbitrary power of rulers; and will generally,
even if these are successful in the first instance, enable the people to resist
and triumph over them.
        -- Supreme Court Justice Joseph Story of the John Marshall Court



From esr at snark.thyrsus.com  Sat Aug 12 13:44:54 2000
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:44:54 -0400
Subject: [Python-Dev] Core dump is dead, long live the core dump
Message-ID: <200008121144.HAA20230@snark.thyrsus.com>

Tim's diagnosis of fatal recursion was apparently correct; apologies,
all.  This still leaves the question of why the core dump happened so
far from the actual scene of the crime.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

In every country and in every age, the priest has been hostile to
liberty. He is always in alliance with the despot, abetting his abuses
in return for protection to his own.
	-- Thomas Jefferson, 1814



From mal at lemburg.com  Sat Aug 12 13:36:14 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 12 Aug 2000 13:36:14 +0200
Subject: [Python-Dev] Directions for reproducing the coredump
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost> <20000812073419.D20109@thyrsus.com>
Message-ID: <399536AE.309D456C@lemburg.com>

"Eric S. Raymond" wrote:
> 
> Ka-Ping Yee <ping at lfw.org>:
> > One thing i ran into as a result of trying to run it under the
> > debugger, though: turning on cursesmodule was slightly nontrivial.
> > There's no cursesmodule.c; it's _cursesmodule.c instead; but
> > Modules/Setup says "#curses cursesmodule.c".  Taking out the "#"
> > wasn't sufficient; i had to edit and insert the underscores by hand
> > to get curses to work.
> 
> Your Setup is out of date.
> 
> But this reminds me.  There's way too much hand-hacking in the Setup
> mechanism.  It wouldn't be hard to enhance the Setup format to support
> #if/#endif so that config.c generation could take advantage of
> configure tests.  That way, Setup could have constructs in it like
> this:
> 
> #if defined(CURSES)
> #if defined(linux)
> _curses _cursesmodule.c -lncurses
> #else
> _curses _cursesmodule.c -lcurses -ltermcap
> #endif
> #endif
> 
> I'm willing to do and test this.

This would be a *cool* thing to have :-) 

Definitely +1 from me if it's done in a portable way.

(Not sure how you would get this to run without the C preprocessor
though -- and Python's Makefile doesn't provide any information
on how to call it in a platform independent way. It's probably
cpp on most platforms, but you never know...)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From esr at thyrsus.com  Sat Aug 12 13:50:57 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 07:50:57 -0400
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <399536AE.309D456C@lemburg.com>; from mal@lemburg.com on Sat, Aug 12, 2000 at 01:36:14PM +0200
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost> <20000812073419.D20109@thyrsus.com> <399536AE.309D456C@lemburg.com>
Message-ID: <20000812075056.A20245@thyrsus.com>

M.-A. Lemburg <mal at lemburg.com>:
> > But this reminds me.  There's way too much hand-hacking in the Setup
> > mechanism.  It wouldn't be hard to enhance the Setup format to support
> > #if/#endif so that config.c generation could take advantage of
> > configure tests.  That way, Setup could have constructs in it like
> > this:
> > 
> > #if defined(CURSES)
> > #if defined(linux)
> > _curses _cursesmodule.c -lncurses
> > #else
> > _curses _cursesmodule.c -lcurses -ltermcap
> > #endif
> > #endif
> > 
> > I'm willing to do and test this.
> 
> This would be a *cool* thing to have :-) 
> 
> Definitely +1 from me if it's done in a portable way.
> 
> (Not sure how you would get this to run without the C preprocessor
> though -- and Python's Makefile doesn't provide any information
> on how to call it in a platform independent way. It's probably
> cpp on most platforms, but you never know...)

Ah.  The Makefile may not provide this information -- but I believe 
configure can be made to!
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Ideology, politics and journalism, which luxuriate in failure, are
impotent in the face of hope and joy.
	-- P. J. O'Rourke



From thomas at xs4all.net  Sat Aug 12 13:53:46 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 13:53:46 +0200
Subject: [Python-Dev] Directions for reproducing the coredump
In-Reply-To: <20000812073419.D20109@thyrsus.com>; from esr@thyrsus.com on Sat, Aug 12, 2000 at 07:34:19AM -0400
References: <LNBBLJKPBEHFEDALKOLCMEGAGPAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008120117500.2615-100000@localhost> <20000812073419.D20109@thyrsus.com>
Message-ID: <20000812135346.D14470@xs4all.nl>

On Sat, Aug 12, 2000 at 07:34:19AM -0400, Eric S. Raymond wrote:

> But this reminds me.  There's way too much hand-hacking in the Setup
> mechanism.  It wouldn't be hard to enhance the Setup format to support
> #if/#endif so that config.c generation could take advantage of
> configure tests.  That way, Setup could have constructs in it like
> this:

> #if defined(CURSES)
> #if defined(linux)
> _curses _cursesmodule.c -lncurses
> #else
> _curses _cursesmodule.c -lcurses -ltermcap
> #endif
> #endif

Why go through that trouble ? There already is a 'Setup.config' file, which
is used to pass Setup info for the thread and gc modules. It can easily be
extended to include information on all other locatable modules, leaving
'Setup' or 'Setup.local' for people who have their modules in strange
places. What would be a cool idea as well would be a configuration tool. Not
as complex as the linux kernel config tool, but something to help people
select the modules they want. Though it might not be necessary if configure
finds out what modules can be safely built.

I'm willing to write some autoconf tests to locate modules as well, if this
is deemed a good idea.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Sat Aug 12 13:54:31 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 13:54:31 +0200
Subject: [Python-Dev] Core dump is dead, long live the core dump
In-Reply-To: <200008121144.HAA20230@snark.thyrsus.com>; from esr@snark.thyrsus.com on Sat, Aug 12, 2000 at 07:44:54AM -0400
References: <200008121144.HAA20230@snark.thyrsus.com>
Message-ID: <20000812135431.E14470@xs4all.nl>

On Sat, Aug 12, 2000 at 07:44:54AM -0400, Eric S. Raymond wrote:

> Tim's diagnosis of fatal recursion was apparently correct; apologies,
> all.  This still leaves the question of why the core dump happened so
> far from the actual scene of the crime.

Blame it on your stack :-) It could have been that the appropriate error was
generated, and that the stack overflowed *again* during the processing of
that error :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Sat Aug 12 15:16:47 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Sat, 12 Aug 2000 09:16:47 -0400
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of d
In-Reply-To: <00d301c0043d$7eb0b540$f2a6b5d4@hagrid>
Message-ID: <1246036275-128789882@hypernet.com>

Fredrik wrote:

> fwiw, if "send" may send less than the full buffer in blocking
> mode on some platforms (despite what the specification implies),
> it's quite interesting that *nobody* has ever noticed before...

I noticed, but I expected it, so had no reason to comment. The 
Linux man pages are the only specification of send that I've 
seen that don't make a big deal out it. And clearly I'm not the 
only one, otherwise there would never have been a bug report 
(he didn't experience it, he just noticed sends without checks).

- Gordon



From guido at beopen.com  Sat Aug 12 16:48:11 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 12 Aug 2000 09:48:11 -0500
Subject: [Python-Dev] Re: PEP 0211: Linear Algebra Operators
In-Reply-To: Your message of "Fri, 11 Aug 2000 14:13:17 MST."
             <Pine.LNX.4.10.10008111255560.1058-100000@rocket.knowledgetrack.com> 
References: <Pine.LNX.4.10.10008111255560.1058-100000@rocket.knowledgetrack.com> 
Message-ID: <200008121448.JAA03545@cj20424-a.reston1.va.home.com>

> As the PEP posted by Greg is substantially different from the one floating
> around in c.l.py, I'd like to post the latter here, which covers several
> weeks of discussions by dozens of discussants.  I'd like to encourage Greg
> to post his version to python-list to seek comments.

A procedural suggestion: let's have *two* PEPs, one for Huaiyu's
proposal, one for Greg's.  Each PEP should in its introduction briefly
mention the other as an alternative.  I don't generally recommend that
alternative proposals develop separate PEPs, but in this case the
potential impact on Python is so large that I think it's the only way
to proceed that doesn't give one group an unfair advantage over the
other.

I haven't had the time to read either proposal yet, so I can't comment
on their (in)compatibility, but I would surmise that at most one can
be accepted -- with the emphasis on *at most* (possibly neither is
ready for prime time), and with the understanding that each proposal
may be able to borrow ideas or code from the other anyway.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From Vladimir.Marangozov at inrialpes.fr  Sat Aug 12 16:21:50 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 16:21:50 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <20000811103701.A25386@keymaster.enme.ucalgary.ca> from "Neil Schemenauer" at Aug 11, 2000 10:37:01 AM
Message-ID: <200008121421.QAA20095@python.inrialpes.fr>

Neil Schemenauer wrote:
> 
> On Fri, Aug 11, 2000 at 05:58:45PM +0200, Vladimir Marangozov wrote:
> > On a second thought, I think this would be a bad idea, even if
> > we manage to tweak the stack limits on most platforms. We would
> > lose determinism = lose control -- no good. A depth-first algorithm
> > may succeed on one machine, and fail on another.
> 
> So what?

Well, the point is that people like deterministic behavior and tend to
really dislike unpredictable systems, especially when the lack of
determinism is due to platform heterogeneity.

> We don't limit the amount of memory you can allocate on all
> machines just because your program may run out of memory on some
> machine.

We don't because we can't do it portably. But if we could, this would
have been a very useful setting -- there has been demand for Python on
embedded systems where memory size is a constraint. And note that after
the malloc cleanup, we *can* do this with a specialized Python malloc
(control how much memory is allocated from Python).

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From effbot at telia.com  Sat Aug 12 16:29:57 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 16:29:57 +0200
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug #111620] lots of use of send() without verifyi ng amount of d
References: <1246036275-128789882@hypernet.com>
Message-ID: <001301c00469$cb380fe0$f2a6b5d4@hagrid>

gordon wrote:
> Fredrik wrote:
> 
> > fwiw, if "send" may send less than the full buffer in blocking
> > mode on some platforms (despite what the specification implies),
> > it's quite interesting that *nobody* has ever noticed before...
> 
> I noticed, but I expected it, so had no reason to comment. The 
> Linux man pages are the only specification of send that I've 
> seen that don't make a big deal out of it. And clearly I'm not the 
> only one, otherwise there would never have been a bug report 
> (he didn't experience it, he just noticed sends without checks).

I meant "I wonder why my script fails" rather than "that piece
of code looks funky".

:::

fwiw, I still haven't found a single reference (SUSv2 spec, man-
pages, Stevens, the original BSD papers) that says that a blocking
socket may do anything but send all the data, or fail.

if that's true, I'm not sure we really need to "fix" anything here...
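Either way, guarding against short sends is cheap. A minimal sketch of such a
retry loop in Python (equivalent in spirit to the socket object's sendall()
method; the name send_all is made up for illustration):

```python
import socket

def send_all(sock, data):
    """Send all of `data`, looping in case send() writes fewer
    bytes than requested (possible, at least according to some
    platforms' man pages, even on a blocking socket)."""
    view = memoryview(data)
    total = 0
    while total < len(data):
        sent = sock.send(view[total:])
        if sent == 0:
            raise ConnectionError("socket connection broken")
        total += sent
    return total
```

A sock.sendall() call does essentially the same thing internally, which is
why hand-checking send()'s return value is rarely seen in Python code.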

</F>




From Vladimir.Marangozov at inrialpes.fr  Sat Aug 12 16:46:40 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 16:46:40 +0200 (CEST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <l03102803b5bae5eb2fe1@[193.78.237.125]> from "Just van Rossum" at Aug 12, 2000 12:51:31 PM
Message-ID: <200008121446.QAA20112@python.inrialpes.fr>

Just van Rossum wrote:
> 
> (Sorry for the late reply, that's what you get when you don't Cc me...)
> 
> Vladimir Marangozov wrote:
> > [Just]
> > > Gordon, how's that Stackless PEP coming along?
> > > Sorry, I couldn't resist ;-)
> >
> > Ah, in this case, we'll get a memory error after filling the whole disk
> > with frames <wink>
> 
> No matter how much we wink to each other, that was a cheap shot;

I can't say that yours was more expensive <wink>.

> especially since it isn't true: Stackless has a MAX_RECURSION_DEPTH value.
> Someone who has studied Stackless "in detail" (your words ;-) should know
> that.

As I said - it has been years ago. Where's that PEP draft?
Please stop dreaming about hostility <wink>. I am all for Stackless, but
the implementation wasn't mature enough at the time when I looked at it.
Now I hear it has evolved and does not allow graph cycles. Okay, good --
tell me more in a PEP and submit a patch.

> 
> Admittedly, that value is set way too high in the last stackless release
> (123456 ;-), but that doesn't change the principle that Stackless could
> solve the problem discussed in this thread in a reliable and portable
> manner.

Indeed, if it didn't reduce the stack dependency in a portable way, it
couldn't have carried the label "Stackless" for years. BTW, I'm more
interested in the stackless aspect than the call/cc aspect of the code.

> 
> Of course there's be work to do:
> - MAX_RECURSION_DEPTH should be changeable at runtime
> - __str__ (and a bunch of others) isn't yet stackless
> - ...

Tell me more in the PEP.

> 
> But the hardest task seems to be to get rid of the hostility and prejudices
> against Stackless :-(

Dream on <wink>.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From skip at mojam.com  Sat Aug 12 19:56:23 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 12 Aug 2000 12:56:23 -0500 (CDT)
Subject: [Python-Dev] Include/graminit.h, Python/graminit.c obsolete?
Message-ID: <14741.36807.101870.221890@beluga.mojam.com>

With Thomas's patch to the top-level Makefile that makes Grammar a more
first-class directory, are the generated graminit.h and graminit.c files
needed any longer?

Skip



From guido at beopen.com  Sat Aug 12 21:12:23 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 12 Aug 2000 14:12:23 -0500
Subject: [Python-Dev] Include/graminit.h, Python/graminit.c obsolete?
In-Reply-To: Your message of "Sat, 12 Aug 2000 12:56:23 EST."
             <14741.36807.101870.221890@beluga.mojam.com> 
References: <14741.36807.101870.221890@beluga.mojam.com> 
Message-ID: <200008121912.OAA00807@cj20424-a.reston1.va.home.com>

> With Thomas's patch to the top-level Makefile that makes Grammar a more
> first-class directory, are the generated graminit.h and graminit.c files
> needed any longer?

I still like to keep them around.  Most people don't hack the grammar.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From trentm at ActiveState.com  Sat Aug 12 20:39:00 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 11:39:00 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <00d601c0043d$a2e66c20$f2a6b5d4@hagrid>; from effbot@telia.com on Sat, Aug 12, 2000 at 11:13:45AM +0200
References: <20000811143031.A13790@ActiveState.com> <200008112256.RAA01675@cj20424-a.reston1.va.home.com> <00d601c0043d$a2e66c20$f2a6b5d4@hagrid>
Message-ID: <20000812113900.D3528@ActiveState.com>

On Sat, Aug 12, 2000 at 11:13:45AM +0200, Fredrik Lundh wrote:
> guido wrote:
> > I think I made them binary during the period when I was mounting the
> > Unix source directory on a Windows machine.  I don't do that any more
> > and I don't know anyone who does
> 
> we do.
> 
> trent wrote:
> > > Does anybody see any problems treating them as text files?
> 
> developer studio 5.0 does:
> 
>     "This makefile was not generated by Developer Studio"
> 
>     "Continuing will create a new Developer Studio project to
>     wrap this makefile. You will be prompted to save after the
>     new project has been created".
> 
>     "Do you want to continue"
> 
>     (click yes)
> 
>     "The options file (.opt) for this workspace specified a project
>     configuration "... - Win32 Alpha Release" that no longer exists.
>     The configuration will be set to "... - Win32 Debug"
> 
>     (click OK)
> 
>     (click build)
> 
>     "MAKE : fatal error U1052: file '....mak' not found"
> 
> </F>

I admit that I have not tried a clean checkout and used DevStudio 5 (I will
try at home later today). However, I *do* think that the problem here is that
you grabbed the tree in the short interim before this patch:

http://www.python.org/pipermail/python-checkins/2000-August/007072.html


I hope, I hope.
If it is still broken for MSVC 5 when I try it in a little bit, I will back out.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Sat Aug 12 20:47:19 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 11:47:19 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEFKGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 11, 2000 at 08:59:22PM -0400
References: <20000811143031.A13790@ActiveState.com> <LNBBLJKPBEHFEDALKOLCGEFKGPAA.tim_one@email.msn.com>
Message-ID: <20000812114719.E3528@ActiveState.com>

On Fri, Aug 11, 2000 at 08:59:22PM -0400, Tim Peters wrote:
> Not really.  They're not human-editable!  Leave 'em alone.  Keeping them in
> binary mode is a clue to people that they aren't *supposed* to go mucking
> with them via text processing tools.

I think that putting them in binary mode is a misleading clue that people
should not muck with them. They *are* text files. Editable or not, they are not
binary. I shouldn't go mucking with 'configure' either, because it is a generated
file, but we shouldn't call it binary.

Yes, I agree, people should not muck with .dsp files. I am not suggesting
that we do. The "text-processing" I was referring to is my attempt to keep
a local repository of Python in our local SCM tool (Perforce) in sync with
Python-CVS. When I suck in Python-CVS on Linux and then shove it into Perforce:
 - the .dsp's land on my linux box with DOS terminators
 - I check everything into Perforce
 - I check Python out of Perforce on a Windows box and the .dsp's are all
   terminated with \r\n\n. This is because the .dsp were not marked as binary
   in Perforce because I logically didn't think that they *should* be marked
   as binary.
Having them marked as binary is just misleading I think.
 
Anyway, as Guido said, this is not worth arguing over too much and it should
have been fixed for you about an hour after I broke it (sorry).

If it is still broken for you then I will back out.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From nascheme at enme.ucalgary.ca  Sat Aug 12 20:58:20 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 12 Aug 2000 12:58:20 -0600
Subject: [Python-Dev] Lib/symbol.py needs update after listcomp
Message-ID: <20000812125820.A567@keymaster.enme.ucalgary.ca>

Someone needs to run:

    ./python Lib/symbol.py

and check in the changes.

  Neil



From akuchlin at mems-exchange.org  Sat Aug 12 21:09:44 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Sat, 12 Aug 2000 15:09:44 -0400
Subject: [Python-Dev] Lib/symbol.py needs update after listcomp
In-Reply-To: <20000812125820.A567@keymaster.enme.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Sat, Aug 12, 2000 at 12:58:20PM -0600
References: <20000812125820.A567@keymaster.enme.ucalgary.ca>
Message-ID: <20000812150944.A9653@kronos.cnri.reston.va.us>

On Sat, Aug 12, 2000 at 12:58:20PM -0600, Neil Schemenauer wrote:
>Someone needs to run:
>    ./python Lib/symbol.py
>and check in the changes.

Done.  

--amk



From tim_one at email.msn.com  Sat Aug 12 21:10:30 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 15:10:30 -0400
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <20000812113900.D3528@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEGNGPAA.tim_one@email.msn.com>

Note that an update isn't enough to get you going again on Windows, and
neither is (the moral equivalent of) "rm *" in PCbuild followed by an
update.  But "rm -rf PCbuild" followed by an update was enough for me (I'm
working via phone modem -- a fresh full checkout is too time-consuming for
me).





From Vladimir.Marangozov at inrialpes.fr  Sat Aug 12 21:16:17 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 21:16:17 +0200 (CEST)
Subject: [Python-Dev] minimal stackless
Message-ID: <200008121916.VAA20873@python.inrialpes.fr>

I'd like to clarify my position about the mythical Stackless issue.

I would be okay to evaluate a minimal stackless implementation of the
current VM, and eventually consider it for inclusion if it doesn't slow
down the interpreter (and if it does, I don't know yet how much would be
tolerable).

However, I would be willing to do this only if such implementation is
distilled from the call/cc stuff completely.

That is, a minimal stackless implementation which gives us an equivalent
VM as we have it today with the C stack. This is what I'd like to see
first in the stackless PEP too. No mixtures with continuations & co.

The call/cc issue is "application domain" for me -- it relies on top of
the minimal stackless and would come only as an exported interface to the
control flow management of the VM. Therefore, it must be completely
optional (leaving open the decision on whether it should be included
someday).

So, if such distilled, minimal stackless implementation hits the
SourceForge shelves by the next week, I, at least, will give it a try
and will report impressions. By that time, it would also be nice to see a
clear summary of the frame management ideas in the 1st draft of the PEP.

If the proponents of Stackless are ready for the challenge, give it a go
(this seems to be a required first step in the right direction anyway).

I can't offer any immediate help though, given the list of Python-related
tasks I'd like to finish (as always, done in my spare minutes) and I'll
be almost, if not completely, unavailable the last week of August.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From trentm at ActiveState.com  Sat Aug 12 21:22:58 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 12:22:58 -0700
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEGNGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 03:10:30PM -0400
References: <20000812113900.D3528@ActiveState.com> <LNBBLJKPBEHFEDALKOLCAEGNGPAA.tim_one@email.msn.com>
Message-ID: <20000812122258.A4684@ActiveState.com>

On Sat, Aug 12, 2000 at 03:10:30PM -0400, Tim Peters wrote:
> Note that an update isn't enough to get you going again on Windows, and
> neither is (the moral equivalent of) "rm *" in PCbuild followed by an
> update.  But "rm -rf PCbuild" followed by an update was enough for me (I'm
> working via phone modem -- a fresh full checkout is too time-consuming for
> me).

Oh right. The '-kb' is sticky to your checked-out version. I forgot
about that.

Thanks, Tim.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From esr at thyrsus.com  Sat Aug 12 21:37:42 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 15:37:42 -0400
Subject: [Python-Dev] minimal stackless
In-Reply-To: <200008121916.VAA20873@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sat, Aug 12, 2000 at 09:16:17PM +0200
References: <200008121916.VAA20873@python.inrialpes.fr>
Message-ID: <20000812153742.A25529@thyrsus.com>

Vladimir Marangozov <Vladimir.Marangozov at inrialpes.fr>:
> That is, a minimal stackless implementation which gives us an equivalent
> VM as we have it today with the C stack. This is what I'd like to see
> first in the stackless PEP too. No mixtures with continuations & co.
> 
> The call/cc issue is "application domain" for me -- it relies on top of
> the minimal stackless and would come only as an exported interface to the
> control flow management of the VM. Therefore, it must be completely
> optional (both in terms of lazy decision on whether it should be included
> someday).

I'm certainly among the call/cc fans, and I guess I'm weakly in the
"Stackless proponent" camp, and I agree.  These issues should be
separated.  If minimal stackless mods to ceval can solve (for example) the
stack overflow problem I just got bitten by, we ought to integrate
them for 2.0 and then give any new features a separate and thorough debate.

I too will be happy to test a minimal-stackless patch.  Come on, Christian,
the ball's in your court.  This is your best chance to get stackless
accepted.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

When only cops have guns, it's called a "police state".
        -- Claire Wolfe, "101 Things To Do Until The Revolution" 



From effbot at telia.com  Sat Aug 12 21:40:15 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 12 Aug 2000 21:40:15 +0200
Subject: [Python-Dev] Include/graminit.h, Python/graminit.c obsolete?
References: <14741.36807.101870.221890@beluga.mojam.com>
Message-ID: <002b01c00495$32df3120$f2a6b5d4@hagrid>

skip wrote:

> With Thomas's patch to the top-level Makefile that makes Grammar a more
> first-class directory, are the generated graminit.h and graminit.c files
> needed any longer?

yes please -- thomas' patch only generates those files on
unix boxes.  as long as we support other platforms too, the
files should be in the repository, and new versions should be
checked in whenever the grammar is changed.

</F>




From tim_one at email.msn.com  Sat Aug 12 21:39:03 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 15:39:03 -0400
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <20000812114719.E3528@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEGOGPAA.tim_one@email.msn.com>

[Trent Mick]
> I think that putting them in binary mode is a misleading clue that
> people should not muck with them. The *are* text files.

But you don't know that.  They're internal Microsoft files in an
undocumented, proprietary format.  You'll find nothing in MS docs
guaranteeing they're text files, but will find the direst warnings against
attempting to edit them.  MS routinely changes *scads* of things about
DevStudio-internal files across releases.

For all the rest, you created your own problems by insisting on telling
Perforce they're text files, despite that they're clearly marked binary
under CVS.

I'm unstuck now, but Fredrik will likely still have new problems
cross-mounting file systems between Windows and Linux (see his msg).  Since
nothing here *was* broken (except for your private and self-created problems
under Perforce), "fixing" it was simply a bad idea.  We're on a very tight
schedule, and the CVS tree isn't a playground.





From thomas at xs4all.net  Sat Aug 12 21:45:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 21:45:24 +0200
Subject: [Python-Dev] 're' crashes ?
Message-ID: <20000812214523.H14470@xs4all.nl>

I'm not trying to sound like Eric (though I don't mind if I do ;) but my
Python crashes. Or rather, test_re fails with a coredump, since this
afternoon or so. I'm fairly certain it was working fine yesterday, and it's
an almost-vanilla CVS tree (I was about to check-in the fixes to
Tools/compiler, and tried to use the compiler on the std lib and the test
suite, when I noticed the coredump.)

The coredump says this:

#0  eval_code2 (co=0x824ba50, globals=0x82239b4, locals=0x0, args=0x827e18c, 
    argcount=2, kws=0x827e194, kwcount=0, defs=0x82211c0, defcount=1, 
    owner=0x0) at ceval.c:1474
1474                                    Py_DECREF(w);

Which is part of the FOR_LOOP opcode:

1461                    case FOR_LOOP:
1462                            /* for v in s: ...
1463                               On entry: stack contains s, i.
1464                               On exit: stack contains s, i+1, s[i];
1465                               but if loop exhausted:
1466                                    s, i are popped, and we jump */
1467                            w = POP(); /* Loop index */
1468                            v = POP(); /* Sequence object */
1469                            u = loop_subscript(v, w);
1470                            if (u != NULL) {
1471                                    PUSH(v);
1472                                    x = PyInt_FromLong(PyInt_AsLong(w)+1);
1473                                    PUSH(x);
1474                                    Py_DECREF(w);
1475                                    PUSH(u);
1476                                    if (x != NULL) continue;
1477                            }
1478                            else {
1479                                    Py_DECREF(v);
1480                                    Py_DECREF(w);
1481                                    /* A NULL can mean "s exhausted"
1482                                       but also an error: */
1483                                    if (PyErr_Occurred())
1484                                            why = WHY_EXCEPTION;

I *think* this isn't caused by this code, but rather by a refcounting bug
somewhere. 'w' should be an int, and it's used on line 1472, and doesn't
cause an error there (unless passing a NULL pointer to PyInt_AsLong() isn't
an error ?) But it's NULL at line 1474. Is there an easy way to track an
error like this ? Otherwise I'll play around a bit using breakpoints and
such in gdb.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Sat Aug 12 22:03:20 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 16:03:20 -0400
Subject: [Python-Dev] Feature freeze!
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHAGPAA.tim_one@email.msn.com>

The 2.0 release manager (Jeremy) is on vacation.  In his absence, here's a
reminder from the 2.0 release schedule:

    Aug. 14: All 2.0 PEPs finished / feature freeze

See the rest at:

    http://python.sourceforge.net/peps/pep-0200.html

Note that that's Monday!  Any new "new feature" patches submitted after
Sunday will be mindlessly assigned Postponed status.  New "new feature"
patches submitted after this instant but before Monday will almost certainly
be assigned Postponed status too -- just not *entirely* mindlessly <wink>.
"Sunday" and "Monday" are defined by wherever Guido happens to be.  "This
instant" is defined by me, and probably refers to some time in the past from
your POV; it's negotiable.





From akuchlin at mems-exchange.org  Sat Aug 12 22:06:28 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Sat, 12 Aug 2000 16:06:28 -0400
Subject: [Python-Dev] Location of compiler code
Message-ID: <E13NhY4-00087X-00@kronos.cnri.reston.va.us>

I noticed that Jeremy checked in his compiler code; however, it lives
in Tools/compiler/compiler.  Any reason it isn't in Lib/compiler?

--amk



From tim_one at email.msn.com  Sat Aug 12 22:11:50 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 16:11:50 -0400
Subject: [Python-Dev] Location of compiler code
In-Reply-To: <E13NhY4-00087X-00@kronos.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHBGPAA.tim_one@email.msn.com>

[Andrew Kuchling]
> I noticed that Jeremy checked in his compiler code; however, it lives
> in Tools/compiler/compiler.  Any reason it isn't in Lib/compiler?

Suggest waiting for Jeremy to return from vacation (22 Aug).





From Vladimir.Marangozov at inrialpes.fr  Sat Aug 12 23:08:44 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 12 Aug 2000 23:08:44 +0200 (CEST)
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEHAGPAA.tim_one@email.msn.com> from "Tim Peters" at Aug 12, 2000 04:03:20 PM
Message-ID: <200008122108.XAA21412@python.inrialpes.fr>

Tim Peters wrote:
> 
> The 2.0 release manager (Jeremy) is on vacation.  In his absence, here's a
> reminder from the 2.0 release schedule:
> 
>     Aug. 14: All 2.0 PEPs finished / feature freeze
> 
> See the rest at:
> 
>     http://python.sourceforge.net/peps/pep-0200.html
> 
> Note that that's Monday!  Any new "new feature" patches submitted after
> Sunday will be mindlessly assigned Postponed status.  New "new feature"
> patches submitted after this instant but before Monday will almost certainly
> be assigned Postponed status too -- just not *entirely* mindlessly <wink>.
> "Sunday" and "Monday" are defined by wherever Guido happens to be.  "This
> instant" is defined by me, and probably refers to some time in the past from
> your POV; it's negotiable.

This reminder comes JIT!

Then please make the above dates/instants coincide with the status of
the open patches and take a stance on them: assign them to people, postpone,
whatever.

I deliberately postponed my object malloc patch.

PS: this is also JIT as per the stackless discussion -- I mentioned
"consider for inclusion" which was interpreted as "inclusion for 2.0"
<frown>. God knows that I tried to be very careful when writing my
position statement... OTOH, there's still a valid deadline for 2.0!

PPS: is the pep-0200.html referenced above up to date? For instance,
I see it mentions SET_LINENO pointing to old references, while a newer
postponed patch is at SourceForge.

A "last modified <date>" stamp would be nice.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From trentm at ActiveState.com  Sat Aug 12 23:51:55 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 14:51:55 -0700
Subject: [Python-Dev] can this overflow (list insertion)?
Message-ID: <20000812145155.A7629@ActiveState.com>

from Objects/listobject.c:

static int
ins1(PyListObject *self, int where, PyObject *v)
{
    int i;
    PyObject **items;
    if (v == NULL) {
        PyErr_BadInternalCall();
        return -1;
    }
    items = self->ob_item;
    NRESIZE(items, PyObject *, self->ob_size+1);
    if (items == NULL) {
        PyErr_NoMemory();
        return -1;
    }
    if (where < 0)
        where = 0;
    if (where > self->ob_size)
        where = self->ob_size;
    for (i = self->ob_size; --i >= where; )
        items[i+1] = items[i];
    Py_INCREF(v);
    items[where] = v;
    self->ob_item = items;
    self->ob_size++;         <-------------- can this overflow?
    return 0;
}


In the case of sizeof(int) < sizeof(void*), can this overflow? I have a small
patch to test self->ob_size against INT_MAX and I was going to submit it, but
I am not so sure that overflow is not checked by some other mechanism for
list insert. Is it, or was this relying on sizeof(ob_size) == sizeof(void*),
hence a list being able to hold as many items as there is addressable memory?

scared-to-patch-ly yours,
Trent


proposed patch:

*** python/dist/src/Objects/listobject.c Fri Aug 11 16:25:08 2000
--- Python/dist/src/Objects/listobject.c Fri Aug 11 16:25:36 2000
***************
*** 149,155 ****
        Py_INCREF(v);
        items[where] = v;
        self->ob_item = items;
!       self->ob_size++;
        return 0;
  }

--- 149,159 ----
        Py_INCREF(v);
        items[where] = v;
        self->ob_item = items;
!       if (self->ob_size++ == INT_MAX) {
!               PyErr_SetString(PyExc_OverflowError,
!                       "cannot add more objects to list");
!               return -1;
!       }
        return 0;
  }




-- 
Trent Mick
TrentM at ActiveState.com



From thomas at xs4all.net  Sat Aug 12 23:52:47 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 12 Aug 2000 23:52:47 +0200
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <200008122108.XAA21412@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sat, Aug 12, 2000 at 11:08:44PM +0200
References: <LNBBLJKPBEHFEDALKOLCCEHAGPAA.tim_one@email.msn.com> <200008122108.XAA21412@python.inrialpes.fr>
Message-ID: <20000812235247.I14470@xs4all.nl>

On Sat, Aug 12, 2000 at 11:08:44PM +0200, Vladimir Marangozov wrote:

> PPS: is the pep-0200.html referenced above up to date? For instance,
> I see it mentions SET_LINENO pointing to old references, while a newer
> postponed patch is at SourceForge.

I asked similar questions about PEP 200, in particular on which new features
were considered for 2.0 and what their status is (PEP 200 doesn't mention
augmented assignment, which as far as I know has been on Guido's "2.0" list
since 2.0 and 1.6 became different releases.) I apparently caught Jeremy
just before he left for his holiday; he directed me towards Guido regarding
those questions, and Guido has apparently been too busy (or he missed that
email as well as some python-dev email.)

All my PEPs are in, though, unless I should write a PEP on 'import as',
which I really think should go in 2.0. I'd be surprised if 'import as' needs
a PEP, since the worst vote on 'import as' was Eric's '+0', and there seems
little concern wrt. syntax or implementation. It's more of a fix for
overlooked syntax than it is a new feature<0.6 wink>.

I just assumed the PyLabs team (or at least 4/5th of it) were too busy with
getting 1.6 done and finished to be concerned with non-pressing 2.0 issues,
and didn't want to press them on these issues until 1.6 is truly finished.
Pity 1.6-beta-cycle and 2.0-feature-freeze overlap :P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nascheme at enme.ucalgary.ca  Sun Aug 13 00:03:57 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 12 Aug 2000 18:03:57 -0400
Subject: [Python-Dev] parsers and compilers for 2.0
Message-ID: <20000812180357.A18816@acs.ucalgary.ca>

With all the recent proposed and accepted language changes, we
should be careful to keep everything up to date.  The parser
module, Jeremy's compiler, and I suspect JPython have been left
behind by the recent changes.  In the past we have been blessed
by a very stable core language.  Times change. :)

I'm trying to keep Jeremy's compiler up to date.  Modifying the
parser module to understand list comprehensions seems to be none
trivial however.  Does anyone else have the time and expertise to
make these changes?  The compiler/transformer.py module will also
have to be modified to understand the new parse tree nodes.  That
part should be somewhat easier.

On a related note, I think the SyntaxError messages generated by
the compile() function and the parser module could be improved.
This is annoying:

    >>> compile("$x", "myfile.py", "eval")
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "<string>", line 1
        $x
        ^
    SyntaxError: invalid syntax

Is there any reason why the error message does not say
"myfile.py" instead of "<string>"?  If not I can probably put
together a patch to fix it.
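For reference, the fix described here is what CPython eventually did: the
SyntaxError raised by compile() carries the filename argument. A small
Python sketch of that behavior (compile_source is an illustrative name, not
an existing API):

```python
def compile_source(source, filename):
    """Compile `source`, reporting `filename` in any SyntaxError."""
    try:
        return compile(source, filename, "eval")
    except SyntaxError as err:
        # err.filename is the name passed to compile(), not "<string>"
        print("%s:%s: %s" % (err.filename, err.lineno, err.msg))
        raise
```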

As far as I can tell, the parser ParserError exceptions are
almost useless.  At least a line number could be given.  I'm not
sure how much work that is to fix though.

  Neil



From nascheme at enme.ucalgary.ca  Sun Aug 13 00:06:07 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sat, 12 Aug 2000 18:06:07 -0400
Subject: [Python-Dev] compiler package in Lib?
Message-ID: <20000812180607.A18938@acs.ucalgary.ca>

Shouldn't the compiler package go in Lib instead of Tools?  The
AST used by the compiler should be very useful to things like
lint checkers, optimizers, and "refactoring" tools.  

  Neil



From Vladimir.Marangozov at inrialpes.fr  Sun Aug 13 00:24:39 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sun, 13 Aug 2000 00:24:39 +0200 (CEST)
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <20000812145155.A7629@ActiveState.com> from "Trent Mick" at Aug 12, 2000 02:51:55 PM
Message-ID: <200008122224.AAA21816@python.inrialpes.fr>

Trent Mick wrote:
>
> [listobject.c/ins1()]
> ...
>     self->ob_item = items;
>     self->ob_size++;         <-------------- can this overflow?
>     return 0;
> }
> 
> 
> In the case of sizeof(int) < sizeof(void*), can this overflow. I have a small
> patch to text self->ob_size against INT_MAX and I was going to submit it but
> I am not so sure that overflow is not checked by some other mechanism for
> list insert.

+0.

It could overflow, but if it does, this is a bad sign about using a list
for such a huge amount of data.

And this is the second time in a week that I see an attempt to introduce
a bogus counter due to post-increments embedded in an if statement!

> Is it or was this relying on sizeof(ob_size) == sizeof(void*),
> hence a list being able to hold as many items as there is addressable memory?
> 
> scared-to-patch-ly yours,
> Trent

And you're right <wink>

> 
> 
> proposed patch:
> 
> *** python/dist/src/Objects/listobject.c Fri Aug 11 16:25:08 2000
> --- Python/dist/src/Objects/listobject.c Fri Aug 11 16:25:36 2000
> ***************
> *** 149,155 ****
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       self->ob_size++;
>         return 0;
>   }
> 
> --- 149,159 ----
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       if (self->ob_size++ == INT_MAX) {
> !               PyErr_SetString(PyExc_OverflowError,
> !                       "cannot add more objects to list");
> !               return -1;
> !       }
>         return 0;
>   }
> 
> 
> 
> 
> -- 
> Trent Mick
> TrentM at ActiveState.com
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From esr at thyrsus.com  Sun Aug 13 00:31:32 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 18:31:32 -0400
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000812180357.A18816@acs.ucalgary.ca>; from nascheme@enme.ucalgary.ca on Sat, Aug 12, 2000 at 06:03:57PM -0400
References: <20000812180357.A18816@acs.ucalgary.ca>
Message-ID: <20000812183131.A26660@thyrsus.com>

Neil Schemenauer <nascheme at enme.ucalgary.ca>:
> I'm trying to keep Jeremy's compiler up to date.  Modifying the
> parser module to understand list comprehensions seems to be
> non-trivial, however.

Last I checked, list comprehensions hadn't been accepted.  I think
there's at least one more debate waiting there...
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

If a thousand men were not to pay their tax-bills this year, that would
... [be] the definition of a peaceable revolution, if any such is possible.
	-- Henry David Thoreau



From trentm at ActiveState.com  Sun Aug 13 00:33:12 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 15:33:12 -0700
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <200008122224.AAA21816@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sun, Aug 13, 2000 at 12:24:39AM +0200
References: <20000812145155.A7629@ActiveState.com> <200008122224.AAA21816@python.inrialpes.fr>
Message-ID: <20000812153312.B7629@ActiveState.com>

On Sun, Aug 13, 2000 at 12:24:39AM +0200, Vladimir Marangozov wrote:
> +0.
> 
> It could overflow but if it does, this is a bad sign about using a list
> for such huge amounts of data.

Point taken.

> 
> And this is the second time in a week that I see an attempt to introduce
> a bogus counter due to post-increments embedded in an if statement!
>

If I read you correctly, I think you are mistaking my intention. Do you
mean that I am doing the comparison *before* the increment takes place
here:

> > !       if (self->ob_size++ == INT_MAX) {
> > !               PyErr_SetString(PyExc_OverflowError,
> > !                       "cannot add more objects to list");
> > !               return -1;
> > !       }

That is my intention. You can increment up to INT_MAX but not over.....

... heh heh actually my code *is* wrong. But for a slightly different reason.
I trash the value of self->ob_size on overflow. You are right, I made a
mistake trying to be cute with autoincrement in an 'if' statement. I should
do the check and *then* increment if okay.

Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From thomas at xs4all.net  Sun Aug 13 00:34:43 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 13 Aug 2000 00:34:43 +0200
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000812183131.A26660@thyrsus.com>; from esr@thyrsus.com on Sat, Aug 12, 2000 at 06:31:32PM -0400
References: <20000812180357.A18816@acs.ucalgary.ca> <20000812183131.A26660@thyrsus.com>
Message-ID: <20000813003443.J14470@xs4all.nl>

On Sat, Aug 12, 2000 at 06:31:32PM -0400, Eric S. Raymond wrote:
> Neil Schemenauer <nascheme at enme.ucalgary.ca>:
> > I'm trying to keep Jeremy's compiler up to date.  Modifying the
> > parser module to understand list comprehensions seems to be
> > non-trivial, however.

> Last I checked, list comprehensions hadn't been accepted.  I think
> there's at least one more debate waiting there...

Check again, they're already checked in. The implementation may change
later, but the syntax has been decided (by Guido):

[(x, y) for y in something for x in somewhere if y in x]

The parentheses around the leftmost expression are mandatory. It's currently
implemented something like this:

L = []
__x__ = [].append
for y in something:
	for x in somewhere:
		if y in x:
			__x__((x, y))
del __x__

(where 'x' is a number, chosen to *probably* not conflict with any other
local variables or other (nested) list comprehensions, and the result of the
expression is L, which isn't actually stored anywhere during evaluation.)

See the patches list archive and the SF patch info about the patch (#100654)
for more information on how and why.
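
For what it's worth, the expansion above can be checked with a small
runnable sketch (using visible names in place of the hidden numbered
temporary; the sample data is made up):

```python
something = ["a", "b"]
somewhere = ["ab", "bc"]

# List comprehension form (post-patch syntax).
result = [(x, y) for y in something for x in somewhere if y in x]

# Equivalent expansion with explicit loops, mirroring the generated code.
expanded = []
append = expanded.append
for y in something:
    for x in somewhere:
        if y in x:
            append((x, y))
del append

assert result == expanded
```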

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Sun Aug 13 01:01:54 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 12 Aug 2000 19:01:54 -0400
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000813003443.J14470@xs4all.nl>; from thomas@xs4all.net on Sun, Aug 13, 2000 at 12:34:43AM +0200
References: <20000812180357.A18816@acs.ucalgary.ca> <20000812183131.A26660@thyrsus.com> <20000813003443.J14470@xs4all.nl>
Message-ID: <20000812190154.B26719@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> > Last I checked, list comprehensions hadn't been accepted.  I think
> > there's at least one more debate waiting there...
> 
> Check again, they're already checked in. The implementation may change
> later, but the syntax has been decided (by Guido):
> 
> [(x, y) for y in something for x in somewhere if y in x]

Damn.  That's unfortunate.  With all due respect to the BDFL, I've come
to believe that having special syntax for this (rather than constructor
functions a la zip()) is a bad mistake.  I predict it's going to come
back to bite us hard in the future.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

I cannot undertake to lay my finger on that article of the
Constitution which grant[s] a right to Congress of expending, on
objects of benevolence, the money of their constituents.
	-- James Madison, 1794



From bckfnn at worldonline.dk  Sun Aug 13 01:29:14 2000
From: bckfnn at worldonline.dk (Finn Bock)
Date: Sat, 12 Aug 2000 23:29:14 GMT
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <20000812180357.A18816@acs.ucalgary.ca>
References: <20000812180357.A18816@acs.ucalgary.ca>
Message-ID: <3995dd8b.34665776@smtp.worldonline.dk>

[Neil Schemenauer]

>With all the recent proposed and accepted language changes, we
>should be careful to keep everything up to date.  The parser
>module, Jeremy's compiler, and I suspect JPython have been left
>behind by the recent changes. 

WRT JPython, the list comprehensions have not yet been added. Then
again, the feature was only recently checked in.

You raise a good point however. There are many compilers/parsers that
have to be updated before we can claim that a feature is fully
implemented. 


[Thomas Wouters]

>[(x, y) for y in something for x in somewhere if y in x]
>
>The parentheses around the leftmost expression are mandatory. It's currently
>implemented something like this:
>
>L = []
>__x__ = [].append
>for y in something:
>	for x in somewhere:
>		if y in x:
>			__x__((x, y))
>del __x__

Thank you for the fine example. At least I now think that I know what the
feature is about.

regards,
finn



From tim_one at email.msn.com  Sun Aug 13 01:37:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 19:37:14 -0400
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <20000812145155.A7629@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEHIGPAA.tim_one@email.msn.com>

[Trent Mick]
> from Objects/listobject.c:
>
> static int
> ins1(PyListObject *self, int where, PyObject *v)
> {
>     ...
>     self->ob_size++;         <-------------- can this overflow?
>     return 0;
> }

> ...
> Is it or was this relying on sizeof(ob_size) == sizeof(void*),
> hence a list being able to hold as many items as there is
> addressable memory?

I think it's more relying on the product of two other assumptions:  (a)
sizeof(int) >= 4, and (b) nobody is going to make a list with 2 billion
elements in Python.  But you're right, sooner or later that's going to bite
us.

> proposed patch:
>
> *** python/dist/src/Objects/listobject.c Fri Aug 11 16:25:08 2000
> --- Python/dist/src/Objects/listobject.c Fri Aug 11 16:25:36 2000
> ***************
> *** 149,155 ****
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       self->ob_size++;
>         return 0;
>   }
>
> --- 149,159 ----
>         Py_INCREF(v);
>         items[where] = v;
>         self->ob_item = items;
> !       if (self->ob_size++ == INT_MAX) {
> !               PyErr_SetString(PyExc_OverflowError,
> !                       "cannot add more objects to list");
> !               return -1;
> !       }
>         return 0;
>   }

+1 on catching it, -1 on this technique.  You noted later that this will
make trash out of ob_size if it triggers, but the list has already grown and
been shifted by this time too, so it's left in an insane state (to the user,
the last element will appear to vanish).

Suggest checking at the *start* of the routine instead:

       if (self->ob_size == INT_MAX) {
              PyErr_SetString(PyExc_OverflowError,
                      "cannot add more objects to list");
              return -1;
      }

Then the list isn't harmed.
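
The check-before-mutate principle behind that suggestion can be sketched
in Python with a hypothetical bounded container (a toy analogue, not the
C code itself):

```python
class BoundedList:
    """Toy analogue of the check-first fix: refuse *before* mutating."""

    def __init__(self, maxsize):
        self._items = []
        self._maxsize = maxsize

    def append(self, item):
        # Check first, so a failed append leaves the list untouched --
        # unlike the post-increment version, which trashes the size.
        if len(self._items) >= self._maxsize:
            raise OverflowError("cannot add more objects to list")
        self._items.append(item)

    def __len__(self):
        return len(self._items)

b = BoundedList(2)
b.append(1)
b.append(2)
try:
    b.append(3)
except OverflowError:
    pass
assert len(b) == 2  # the failed append did not corrupt the list
```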





From tim_one at email.msn.com  Sun Aug 13 01:57:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 19:57:29 -0400
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <200008122108.XAA21412@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEHJGPAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> This reminder comes JIT!
>
> Then please make coincide the above dates/instants with the status of
> the open patches and take a stance on them: assign them to people,
> postpone, whatever.
>
> I deliberately postponed my object malloc patch.

I don't know why.  It's been there quite a while, and had non-trivial
support for inclusion in 2.0.  A chance to consider the backlog of patches
as a stable whole is why the two weeks between "feature freeze" and 2.0b1
exists!

> PS: this is also JIT as per the stackless discussion -- I mentioned
> "consider for inclusion" which was interpreted as "inclusion for 2.0"
> <frown>. God knows that I tried to be very careful when writing my
> position statement... OTOH, there's still a valid deadline for 2.0!

I doubt any variant of Stackless has a real shot for 2.0 at this point,
although if a patch shows up before Sunday ends I won't Postpone it without
reason (like, say, Guido tells me to).

> PPS: is the pep-0200.html referenced above up to date? For instance,
> I see it mentions SET_LINENO pointing to old references, while a newer
> postponed patch is at SourceForge.
>
> A "last modified <date>" stamp would be nice.

I agree, but yaaaawn <wink>.  CVS says it was last modified before Jeremy
went on vacation.  It's not up to date.  The designated release manager in
Jeremy's absence apparently didn't touch it.  I can't gripe about that,
though, because he's my boss <wink>.  He sent me email today saying "tag,
now you're it!" (Guido will be gone all next week).  My plate is already
full, though, and I won't get around to updating it today.

Yes, this is no way to run a release, and so I don't have any confidence
that the release dates in pep200 will be met.  Still, I was arguing for
feature freeze two months ago, and so as long as "I'm it" I'm not inclined
to slip the schedule on *that* part.  I bet it will be at least 3 weeks
before 2.0b1 hits the streets, though.

in-any-case-feature-freeze-is-on-the-critical-path-so-the-sooner-
    the-better-ly y'rs  - tim





From tim_one at email.msn.com  Sun Aug 13 02:11:30 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 20:11:30 -0400
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <20000812235247.I14470@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEHKGPAA.tim_one@email.msn.com>

[Thomas Wouters]
> I asked similar questions about PEP 200, in particular on which
> new features were considered for 2.0 and what their status is
> (PEP 200 doesn't mention augmented assignment, which as far as I
> know has been on Guido's "2.0" list since 2.0 and 1.6 became
> different releases.)

Yes, augmented assignment is golden for 2.0.

> I apparently caught Jeremy just before he left for his holiday,
> and directed me towards Guido regarding those questions, and
> Guido has apparently been too busy (or he missed that email as
> well as some python-dev email.)

Indeed, we're never going to let Guido be Release Manager again <wink>.

> All my PEPs are in, though, unless I should write a PEP on 'import as',
which I really think should go in 2.0. I'd be surprised if 'import
> as' needs a PEP, since the worst vote on 'import as' was Eric's '+0',
> and there seems little concern wrt. syntax or implementation. It's
> more of a fix for overlooked syntax than it is a new feature<0.6 wink>.

Why does everyone flee from the idea of writing a PEP?  Think of it as a
chance to let future generations know what a cool idea you had.  I agree
this change is too simple and too widely loved to *need* a PEP, but if you
write one anyway you can add it to your resume under your list of
peer-reviewed publications <wink>.

> I just assumed the PyLabs team (or at least 4/5th of it) were too
> busy with getting 1.6 done and finished to be concerned with non-
> pressing 2.0 issues, and didn't want to press them on these issues
until 1.6 is truly finished.

Actually, Fred Drake has done almost everything in the 1.6 branch by
himself, while Guido has done all the installer and web-page work for that.
The rest of our time has been eaten away by largely invisible cruft, from
haggling over the license to haggling over where to put python.org next.
Lots of haggling!  You guys get to do the *fun* parts (btw, it's occurred to
me often that I did more work on Python proper when I had a speech
recognition job!).

> Pity 1.6-beta-cycle and 2.0-feature-freeze overlap :P

Ya, except it's too late to stop 1.6 now <wink>.

but-not-too-late-to-stop-2.0-ly y'rs  - tim





From MarkH at ActiveState.com  Sun Aug 13 03:02:36 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Sun, 13 Aug 2000 11:02:36 +1000
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000812190154.B26719@thyrsus.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com>

ESR, responding to 

[(x, y) for y in something for x in somewhere if y in x]

for list comprehension syntax:

> Damn.  That's unfortunate.  With all due respect to the BDFL, I've come
> to believe that having special syntax for this (rather than constructor
> functions a la zip()) is a bad mistake.  I predict it's going to come
> back to bite us hard in the future.

FWIW, these are my thoughts exactly (for this particular issue, anyway).

Wont-bother-voting-cos-nobody-is-counting ly,

Mark.




From trentm at ActiveState.com  Sun Aug 13 03:25:18 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 18:25:18 -0700
Subject: [Python-Dev] parsers and compilers for 2.0
In-Reply-To: <3995dd8b.34665776@smtp.worldonline.dk>; from bckfnn@worldonline.dk on Sat, Aug 12, 2000 at 11:29:14PM +0000
References: <20000812180357.A18816@acs.ucalgary.ca> <3995dd8b.34665776@smtp.worldonline.dk>
Message-ID: <20000812182518.B10528@ActiveState.com>

On Sat, Aug 12, 2000 at 11:29:14PM +0000, Finn Bock wrote:
> [Thomas Wouters]
> 
> >[(x, y) for y in something for x in somewhere if y in x]
> >
> >The parentheses around the leftmost expression are mandatory. It's currently
> >implemented something like this:
> >
> >L = []
> >__x__ = [].append
> >for y in something:
> >	for x in somewhere:
> >		if y in x:
> >			__x__((x, y))
> >del __x__
> 
> Thank you for the fine example. At least I now think that I know what the
> feature is about.
> 

Maybe that example should get in the docs for list comprehensions.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Sun Aug 13 03:30:02 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 12 Aug 2000 18:30:02 -0700
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEHKGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 08:11:30PM -0400
References: <20000812235247.I14470@xs4all.nl> <LNBBLJKPBEHFEDALKOLCAEHKGPAA.tim_one@email.msn.com>
Message-ID: <20000812183002.C10528@ActiveState.com>

On Sat, Aug 12, 2000 at 08:11:30PM -0400, Tim Peters wrote:
> You guys get to do the *fun* parts 

Go give Win64 a whirl for a while. <grumble>

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Sun Aug 13 03:33:43 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 12 Aug 2000 21:33:43 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com>

[ESR, responding to

  [(x, y) for y in something for x in somewhere if y in x]

 for list comprehension syntax:
]
> Damn.  That's unfortunate.  With all due respect to the BDFL, I've come
> to believe that having special syntax for this (rather than constructor
> functions a la zip()) is a bad mistake.  I predict it's going to come
> back to bite us hard in the future.

[Mark Hammond]
> FWIW, these are my thoughts exactly (for this particular issue,
> anyway).
>
> Wont-bother-voting-cos-nobody-is-counting ly,

Counting, no; listening, yes; persuaded, no.  List comprehensions are one of
the best-loved features of Haskell (really!), and Greg/Skip/Ping's patch
implements as exact a parallel to Haskell's syntax and semantics as is
possible in Python.  Predictions of doom thus need to make a plausible case
for why a rousing success in Haskell is going to be a disaster in Python.
The only basis I can see for such a claim (and I have to make one up myself
because nobody else has <wink>) is that Haskell is lazy, while Python is
eager.  I can't get from there to "disaster", though, or even "plausible
regret".

Beyond that, Guido dislikes the way Lisp spells most things, so it's this or
nothing.  I'm certain I'll use it, and with joy.  Do an update and try it.

C:\Code\python\dist\src\PCbuild>python
Python 2.0b1 (#0, Aug 12 2000, 14:57:27) [MSC 32 bit (Intel)] on win32
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> [x**2 for x in range(10)]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> [x**2 for x in range(10) if x & 1]
[1, 9, 25, 49, 81]
>>> [x**2 if 3]
[81]
>>>

Now even as a fan, I'll admit that last line sucks <wink>.

bug-in-the-grammar-ly y'rs  - tim





From thomas at xs4all.net  Sun Aug 13 09:53:57 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 13 Aug 2000 09:53:57 +0200
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 12, 2000 at 09:33:43PM -0400
References: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com> <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com>
Message-ID: <20000813095357.K14470@xs4all.nl>

On Sat, Aug 12, 2000 at 09:33:43PM -0400, Tim Peters wrote:

[ ESR and Mark griping about list comprehension syntax, which I can relate
to, so I'll bother to try and explain what bothers *me* wrt list
comprehensions. Needn't be the same as what bothers them, though ]

> List comprehensions are one of the best-loved features of Haskell
> (really!), and Greg/Skip/Ping's patch implements as an exact a parallel to
> Haskell's syntax and semantics as is possible in Python.

I don't see "it's cool in language X" as a particularly good reason to include
a feature... We don't add special syntax for regular expressions, support
for continuations or direct access to hardware because of that, do we ?

> Predictions of doom thus need to make a plausible case for why a rousing
> success in Haskell is going to be a disaster in Python. The only basis I
> can see for such a claim (and I have to make one up myself because nobody
> else has <wink>) is that Haskell is lazy, while Python is eager.  I can't
> get from there to "disaster", though, or even "plausible regret".

My main beef with the syntax is that it is, in my eyes, unpythonic. It has
an alien, forced feel to it, much more so than the 'evil' map/filter/reduce.
It doesn't 'fit' into Python the way most of the other features do; it's
simply syntactic sugar for a specific kind of for-loop. It doesn't add any
extra functionality, and for that large a syntactic change, I guess that
scares me.

Those doubts were why I was glad you were going to write the PEP. I was
looking forward to you explaining why I had those doubts and giving sound
arguments against them :-)

> Beyond that, Guido dislikes the way Lisp spells most things, so it's this or
> nothing.  I'm certain I'll use it, and with joy.  Do an update and try it.

Oh, I've tried it. It's not included in the 'heavily patched Python 2.0b1' I
have running on a couple of machines to impress my colleagues, (which
includes the obmalloc patch, augmented assignment, range literals, import
as, indexing-for, and extended-slicing-on-lists) but that's mostly
because I was expecting, like ESR, a huge debate on its syntax. Let's say
that most of my doubts arose after playing with it for a while. I fear people
will start using it in odd constructs, and in odd ways, expecting other
aspects of for-loops to be included in list comprehensions (break, else,
continue, etc.). And then there's the way it's hard to parse because of the
lack of punctuation in it:

[((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in S] for b in
[b for b in B if mean(b)] for b,c in C for a,d in D for e in [Egg(a, b, c,
d, e) for e in E]]

I hope anyone writing something like that (notice the shadowing of some of
the outer variables in the inner loops) will either add some newlines and
indentation themselves, or will be hunted down and shot (or at least
winged) by the PSU.

I'm not arguing to remove list comprehensions. I think they are cool
features that can replace map/filter; I just don't think they're that much
better than the use of map/filter.
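
For comparison, a minimal sketch of the two spellings side by side (the
`list()` wrapper is redundant in 2.0, where map already returns a list,
but harmless):

```python
nums = range(10)

# map/filter spelling
squares_mf = list(map(lambda x: x * x, filter(lambda x: x % 2, nums)))

# list comprehension spelling
squares_lc = [x * x for x in nums if x % 2]

assert squares_mf == squares_lc == [1, 9, 25, 49, 81]
```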

Write-that-PEP-Tim-it-will-look-good-on-your-resume-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at thyrsus.com  Sun Aug 13 10:13:40 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sun, 13 Aug 2000 04:13:40 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000813095357.K14470@xs4all.nl>; from thomas@xs4all.net on Sun, Aug 13, 2000 at 09:53:57AM +0200
References: <ECEPKNMJLHAPFFJHDOJBGEIEDEAA.MarkH@ActiveState.com> <LNBBLJKPBEHFEDALKOLCIEHMGPAA.tim_one@email.msn.com> <20000813095357.K14470@xs4all.nl>
Message-ID: <20000813041340.B27949@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> My main beef with the syntax is that it is, in my eyes, unpythonic. It has
> an alien, forced feel to it, much more so than the 'evil' map/filter/reduce.
> It doesn't 'fit' into Python the way most of the other features do; it's
> simply syntactic sugar for a specific kind of for-loop. It doesn't add any
> extra functionality, and for that large a syntactic change, I guess that
> scares me.

I agree 100% with all of this.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"This country, with its institutions, belongs to the people who
inhabit it. Whenever they shall grow weary of the existing government,
they can exercise their constitutional right of amending it or their
revolutionary right to dismember it or overthrow it."
	-- Abraham Lincoln, 4 April 1861



From moshez at math.huji.ac.il  Sun Aug 13 10:15:15 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 13 Aug 2000 11:15:15 +0300 (IDT)
Subject: [Python-Dev] *.dsp and *.dsw are treated by CVS as binary. Why?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEGOGPAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008131114190.20886-100000@sundial>

On Sat, 12 Aug 2000, Tim Peters wrote:

> [Trent Mick]
> > I think that putting them in binary mode is a misleading clue that
> > people should not muck with them. The *are* text files.
> 
> But you don't know that.  They're internal Microsoft files in an
> undocumented, proprietary format.  You'll find nothing in MS docs
> guaranteeing they're text files, but will find the direst warnings against
> attempting to edit them.  MS routinely changes *scads* of things about
> DevStudio-internal files across releases.

Hey, I parsed those beasts, and edited them by hand. 

of-course-my-co-workers-hated-me-for-that-ly y'rs, Z.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From Vladimir.Marangozov at inrialpes.fr  Sun Aug 13 11:16:50 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sun, 13 Aug 2000 11:16:50 +0200 (CEST)
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEHIGPAA.tim_one@email.msn.com> from "Tim Peters" at Aug 12, 2000 07:37:14 PM
Message-ID: <200008130916.LAA29139@python.inrialpes.fr>

Tim Peters wrote:
> 
> I think it's more relying on the product of two other assumptions:  (a)
> sizeof(int) >= 4, and (b) nobody is going to make a list with 2 billion
> elements in Python.  But you're right, sooner or later that's going to bite
> us.

+1 on your patch, but frankly, if we ever get bitten by this overflow,
chances are that we've already dumped core or will very soon -- billions
of objects means soon-to-overflow ob_refcnt integer counters. Py_None
looks like a fine candidate for this.

Now I'm sure you're going to suggest again making the ob_refcnt a long,
as you did before <wink>.


> Suggest checking at the *start* of the routine instead:
> 
>        if (self->ob_size == INT_MAX) {
>               PyErr_SetString(PyExc_OverflowError,
>                       "cannot add more objects to list");
>               return -1;
>       }
> 
> Then the list isn't harmed.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Sun Aug 13 11:32:25 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sun, 13 Aug 2000 11:32:25 +0200 (CEST)
Subject: [Python-Dev] Feature freeze!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEHJGPAA.tim_one@email.msn.com> from "Tim Peters" at Aug 12, 2000 07:57:29 PM
Message-ID: <200008130932.LAA29181@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Vladimir Marangozov]
> > This reminder comes JIT!
> >
> > Then please make coincide the above dates/instants with the status of
> > the open patches and take a stance on them: assign them to people,
> > postpone, whatever.
> >
> > I deliberately postponed my object malloc patch.
> 
> I don't know why.  It's been there quite a while, and had non-trivial
> support for inclusion in 2.0.  A chance to consider the backlog of patches
> as a stable whole is why the two weeks between "feature freeze" and 2.0b1
> exists!

Because the log message says that I'm late with the stat interface,
which shows what the situation is with and without it. If I want to
finish that part, I'll need to block out my Sunday afternoon. Given that
it's now 11am, I have an hour to decide what to do about it -- resurrect
it or leave it postponed.

> I doubt any variant of Stackless has a real shot for 2.0 at this point,
> although if a patch shows up before Sunday ends I won't Postpone it without
> reason (like, say, Guido tells me to).

I'm doubtful too, but if there's a clean & solid minimal implementation
which removes the stack dependency -- I'll have a look.

> 
> > PPS: is the pep-0200.html referenced above up to date? For instance,
> > I see it mentions SET_LINENO pointing to old references, while a newer
> > postponed patch is at SourceForge.
> >
> > A "last modified <date>" stamp would be nice.
> 
> I agree, but yaaaawn <wink>.  CVS says it was last modified before Jeremy
> went on vacation.  It's not up to date.  The designated release manager in
> Jeremy's absence apparently didn't touch it.  I can't gripe about that,
> though, because he's my boss <wink>.  He sent me email today saying "tag,
> now you're it!" (Guido will be gone all next week).  My plate is already
> full, though, and I won't get around to updating it today.

Okay - just wanted to make this point clear, since your reminder reads
"see the details there".

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From trentm at ActiveState.com  Sun Aug 13 20:04:49 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 13 Aug 2000 11:04:49 -0700
Subject: [Python-Dev] can this overflow (list insertion)?
In-Reply-To: <200008130916.LAA29139@python.inrialpes.fr>; from Vladimir.Marangozov@inrialpes.fr on Sun, Aug 13, 2000 at 11:16:50AM +0200
References: <LNBBLJKPBEHFEDALKOLCGEHIGPAA.tim_one@email.msn.com> <200008130916.LAA29139@python.inrialpes.fr>
Message-ID: <20000813110449.A23269@ActiveState.com>

On Sun, Aug 13, 2000 at 11:16:50AM +0200, Vladimir Marangozov wrote:
> Tim Peters wrote:
> > 
> > I think it's more relying on the product of two other assumptions:  (a)
> > sizeof(int) >= 4, and (b) nobody is going to make a list with 2 billion
> > elements in Python.  But you're right, sooner or later that's going to bite
> > us.
> 
> +1 on your patch, but frankly, if we ever get bitten by this overflow,

I'll check it in later today.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Mon Aug 14 01:08:43 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 13 Aug 2000 16:08:43 -0700
Subject: [Python-Dev] you may have some PCbuild hiccups
Message-ID: <20000813160843.A27104@ActiveState.com>

Hello all,

Recently I spearheaded a number of screw ups in the PCbuild/ directory.
PCbuild/*.dsp and *.dsw went from binary to text to binary again. These are
sticky CVS attributes on files in your checked out Python tree.

If you care about the PCbuild/ content (i.e. you build Python on Windows)
then you may need to completely delete the PCbuild directory and
re-get it from CVS. You can tell if you *need* to by doing a 'cvs status
*.dsw *.dsp'. If any of those files *don't* have the "Sticky Option: -kb",
they should. If they all do and MSDEV loads the project files okay, then you
are fine.

NOTE: You have to delete the *whole* PCbuild\ directory, not just its
contents. The PCbuild\CVS control directory is part of what you have to
re-get.


Sorry,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Mon Aug 14 02:08:45 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 13 Aug 2000 20:08:45 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000813095357.K14470@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com>

[Tim]
>> List comprehensions are one of the best-loved features of Haskell
>> (really!), and Greg/Skip/Ping's patch implements as exact a
>> parallel to Haskell's syntax and semantics as is possible in Python.

[Thomas Wouters]
> I don't see "it's cool in language X" as a particularly good reason
> to include a feature... We don't add special syntax for regular
> expressions, support for continuations or direct access to hardware
> because of that, do we ?

As Guido (overly!) modestly says, the only language idea he ever invented
was an "else" clause on loops.  He decided listcomps "were Pythonic" before
knowing anything about Haskell (or SETL, from which Haskell took the idea).
Given that he *already* liked them, the value in looking at Haskell is for
its actual experience with them.  It would be pretty stupid *not* to look at
experience with other languages that already have it!  And that's whether
you're pro or con.

So it's not "cool in language X" that drives it at all, it's "cool in
language X" that serves to confirm or refute the prior judgment that "it's
Pythonic, period".  When, e.g., Eric predicts it will bite us hard someday,
I can point to Haskell and legitimately ask "why here and not there?".

There was once a great push for adding some notion of "protected" class
members to Python.  Guido was initially opposed, but tempted to waffle
because proponents kept pointing to C++.  Luckily, Stroustrup had something
to say about this in his "Design and Evolution of C++" book, including that
he thought adding "protected" was a mistake, driven by relentless "good
arguments" that opposed his own initial intuition.  So in that case,
*really* looking at C++ may have saved Guido from making the same mistake.

As another example, few arguments are made more frequently than that Python
should add embedded assignments in conditionals.  *Lots* of other languages
have that -- but they mostly serve to tell us it's a bug factory in
practice!  The languages that avoid the bugs point to ways to get the effect
safely (although none yet Pythonically enough for Guido).

So this is a fact:  language design is very little about wholesale
invention, and that's especially true of Python.  It's a mystically
difficult blending of borrowed ideas, and it's rare as readable Perl <wink>
that an idea will get borrowed if it didn't even work well in its native
home.  listcomps work great where they came from, and that plus "hey, Guido
likes 'em!" makes it 99% a done deal.

> My main beef with the syntax is that it is, in my eyes, unpythonic.
> It has an alien, forced feel to it, much more so than the 'evil'
> map/filter/reduce.  It doesn't 'fit' into Python the way most of
> the other features do;

Guido feels exactly the opposite:  the business about "alien, forced feel,
not fitting" is exactly what he's said about map/filter/reduce/lambda on
many occasions.  listcomps strike him (me too, for that matter) as much more
Pythonic than those.

> it's simply syntactic sugar for a specific kind of for-loop. It
> doesn't add any extra functionality,

All overwhelmingly true of augmented assignments, and range literals, and
three-argument getattr, and list.pop, etc etc etc too.  Python has lots of
syntactic sugar -- making life pleasant is not a bad thing.

> and for that large a syntactic change, I guess that scares me.

The only syntactic change is to add a new form of list constructor.  It's
isolated and self-contained, and so "small" in that sense.

> Those doubts were why I was glad you were going to write the PEP. I
> was looking forward to you explaining why I had those doubts and
> giving sound arguments against them :-)

There is no convincing argument to be made either way on whether "it's
Pythonic", which I think is your primary worry.  People *never* reach
consensus on whether a given feature X "is Pythonic".  That's why it's
always Guido's job.  You've been here long enough to see that -1 and +1 are
about evenly balanced, except on (in recent memory) "import x as y" -- which
I conveniently neglected to mention had been dismissed as unPythonic by Guido
just a couple weeks ago <wink -- but he didn't really mean it then,
according to me>.

> ...
> but that's mostly because I was expecting, like ESR, a huge debate
> on its syntax.

Won't happen, because it already did.  This was debated to death long ago,
and more than once, and Guido likes what he likes now.  Greg Wilson made the
only new point on listcomps I've seen since two weeks after they were first
proposed by Greg Ewing (i.e., that the ";" notation *really* sucked).

> Let's say that most of my doubts arose after playing with it for
> a while. I fear people will start using it in odd constructs, and
> in odd ways,

Unlike, say, filter, map and reduce <wink>?

> expecting other aspects of for-loops to be included
> in list comprehensions (break, else, continue, etc.)

Those ideas were rejected long ago too (and that Haskell and SETL also
rejected them independently shows that, whether we can explain it or not,
they're simply bad ideas).

> And then there's the way it's hard to parse because of the
> lack of punctuation in it:
>
> [((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in
> S] for b in [b for b in B if mean(b)] for b,c in C for a,d in D
> for e in [Egg(a, b, c, d, e) for e in E]]

That isn't a serious argument, to my eyes.  Write that as a Lisp one-liner
and see what it looks like then -- nuts is nuts, and a "scare example" could
just as easily be concocted out of insane nesting of subscripts and/or
function calls and/or parenthesized arithmetic.  Idiotic nesting is a
possibility for any construct that nests!  BTW, you're missing the
possibility to nest listcomps in "the expression" position too, a la

>>> [[1 for i in range(n)] for n in range(10)]
[[],
 [1],
 [1, 1],
 [1, 1, 1],
 [1, 1, 1, 1],
 [1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1, 1, 1],
 [1, 1, 1, 1, 1, 1, 1, 1, 1]]
>>>

I know you missed that possibility above because, despite your claim of
being hard to parse, it's dead easy to spot where your listcomps begin:  "["
is easy for the eye to find.

> I hope anyone writing something like that (notice the shadowing of
> some of the outer vrbls in the inner loops)

You can find the same in nested lambdas littering map/reduce/etc today.

> will either add some newlines and indentation by themselves, or
> will be hunted down and shot (or at least winged) by the PSU.

Nope!  We just shun them.  Natural selection will rid the Earth of them
without violence <wink>.

> I'm not arguing to remove list comprehensions. I think they are cool
> features that can replace map/filter, I just don't think they're that
> much better than the use of map/filter.

Haskell programmers have map/filter too, and Haskell programmers routinely
favor using listcomps.  This says something about what people who have both
prefer.  I predict that once you're used to them, you'll find them much more
expressive:  "[" tells you immediately you're getting a list, then the next
thing you see is what the list is built out of, and then there's a bunch of
lower-level detail.  It's great.

> Write-that-PEP-Tim-it-will-look-good-on-your-resume-ly y'rs,

except-i'm-too-old-to-need-a-resume-anymore<wink>-ly y'rs  - tim





From tim_one at email.msn.com  Mon Aug 14 03:31:20 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 13 Aug 2000 21:31:20 -0400
Subject: [Python-Dev] you may have some PCbuild hiccups
In-Reply-To: <20000813160843.A27104@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEJHGPAA.tim_one@email.msn.com>

[Trent Mick]
> Recently I spearheaded a number of screw ups in the PCbuild/
> directory.

Let's say the intent was laudable but the execution a bit off the mark
<wink>.

[binary -> text -> binary again]
> ...
> NOTE: You have to delete the *whole* PCbuild\ directory, not just
> its contents. The PCbuild\CVS control directory is part of what you
> have to re-get.

Actually, I don't think you have to bother this time -- just do a regular
update.  The files *were* marked as text this time around, but there is no
"sticky bit" saying so in the local config, so a plain update replaces them
now.

OK, I made most of that up.  But a plain update did work fine for me ...





From nowonder at nowonder.de  Mon Aug 14 06:27:07 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Mon, 14 Aug 2000 04:27:07 +0000
Subject: [*].items() (was: Re: [Python-Dev] Lockstep iteration - eureka!)
References: Your message of "Wed, 09 Aug 2000 02:37:07 MST."            
			 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
			 <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org> <l03102802b5b71c40f9fc@[193.78.237.121]> <3993FD49.C7E71108@prescod.net>
Message-ID: <3997751B.9BB1D9FA@nowonder.de>

Paul Prescod wrote:
> 
> Just van Rossum wrote:
> >
> >        for <index> indexing <element> in <seq>:
> 
> Let me throw out another idea. What if sequences just had .items()
> methods?
> 
> j=range(0,10)
> 
> for index, element in j.items():

I like the idea and so I've uploaded a patch for this to SF:
https://sourceforge.net/patch/?func=detailpatch&patch_id=101178&group_id=5470

For ease of reading:
This patch adds a .items() method to the list object.
.items() returns a list of (index, value) tuples. E.g.:

  for index, value in ["a", "b", "c"].items(): 
      print index, ":", value 

will print: 

  0: a 
  1: b 
  2: c 

I think this is an easy way to achieve looping over
index AND elements in parallel. Semantically the
following two expressions should be equivalent: 

for index, value in zip(range(len(mylist)), mylist):

for index, value in mylist.items():
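For anyone without the patch, the proposed method is easy to emulate with a
plain helper function (a sketch only; the free-standing name is hypothetical,
not part of the patch):

```python
def items(seq):
    """Return a list of (index, element) tuples for seq."""
    return [(i, seq[i]) for i in range(len(seq))]

pairs = items(["a", "b", "c"])
# pairs == [(0, "a"), (1, "b"), (2, "c")]
```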

In opposition to patch #110138 I would call this: 
"Adding syntactic sugar without adding syntax (or sugar<wink>):"

this-doesn't-deserve-new-syntax-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From greg at cosc.canterbury.ac.nz  Mon Aug 14 06:01:35 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 16:01:35 +1200 (NZST)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug
 #111620] lots of use of send() without verifying amount of data sent.
In-Reply-To: <20000811184407.A14470@xs4all.nl>
Message-ID: <200008140401.QAA14955@s454.cosc.canterbury.ac.nz>

> ERRORS
>
>       EINTR   A signal occurred.

Different unices seem to have manpages which differ considerably
in these areas. The Solaris manpage says:

     EINTR     The operation was interrupted  by  delivery  of  a
               signal  before  any  data  could be buffered to be
               sent.

which suggests that you won't get EINTR if some data *has* been
sent before the signal arrives. It seems to me the only thing that
could possibly happen in this case is to return with fewer bytes
than requested, whether the socket is non-blocking or not.

So it seems that, in the presence of signals, neither write()
nor send() can be relied upon to either completely succeed
or completely fail. 

Perhaps the reason this hasn't caused anyone a problem is that the
combination of blocking sockets and signals that you want to handle
and then carry on after are fairly rare.
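In practice this means a robust writer has to loop until everything is out,
along these lines (a hedged sketch of the pattern; it is essentially what
socket.sendall() does for you):

```python
import socket

def send_all(sock, data):
    # send() may legitimately perform a short write (e.g. after a signal
    # or a full buffer), so keep calling it until all bytes are sent.
    sent = 0
    while sent < len(data):
        sent += sock.send(data[sent:])
```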

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Mon Aug 14 05:51:45 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 15:51:45 +1200 (NZST)
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <20000811103701.A25386@keymaster.enme.ucalgary.ca>
Message-ID: <200008140351.PAA14951@s454.cosc.canterbury.ac.nz>

> We don't limit the amount of memory you can allocate on all
> machines just because your program may run out of memory on some
> machine.

Legend has it that Steve Jobs tried to do something like that
with the original 128K Macintosh. He was against making the
machine expandable in any way, so that any program which ran
one Mac would run on all Macs.

Didn't stay that way for very long...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Mon Aug 14 06:17:30 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 16:17:30 +1200 (NZST)
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers
 for 2.0)
In-Reply-To: <20000813095357.K14470@xs4all.nl>
Message-ID: <200008140417.QAA14959@s454.cosc.canterbury.ac.nz>

Two reasons why list comprehensions fit better in Python
than the equivalent map/filter/lambda constructs:

1) Scoping. The expressions in the LC have direct access to the
   enclosing scope, which is not true of lambdas in Python.

2) Efficiency. An LC with if-clauses which weed out many potential
   list elements can be much more efficient than the equivalent
   filter operation, which must build the whole list first and
   then remove unwanted items.
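Both points are easy to see in a small example (written for a modern Python,
where filter returns an iterator; the efficiency claim above is Greg's and is
not timed here):

```python
n = 3
# (1) Scoping: the comprehension sees the enclosing scope's n directly;
#     an equivalent lambda in 1.x Python needed the n=n default-arg trick.
multiples = [n * i for i in range(4)]

# (2) The if-clause weeds out elements while the list is being built,
#     rather than via a per-element callback.
evens_lc = [x for x in range(10) if x % 2 == 0]
evens_f = list(filter(lambda x: x % 2 == 0, range(10)))
```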

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Mon Aug 14 06:24:43 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Aug 2000 16:24:43 +1200 (NZST)
Subject: [Python-Dev] noreply@sourceforge.net: [Python-bugs-list] [Bug
 #111620] lots of use of send() without verifying amount of data sent
In-Reply-To: <001301c00469$cb380fe0$f2a6b5d4@hagrid>
Message-ID: <200008140424.QAA14962@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <effbot at telia.com>:

> fwiw, I still haven't found a single reference (SUSv2 spec, man-
> pages, Stevens, the original BSD papers) that says that a blocking
> socket may do anything but sending all the data, or fail.

The Solaris manpage sort of seems to indirectly suggest that
it might conceivably be possible:

     EMSGSIZE  The socket requires that message  be  sent  atomi-
               cally, and the message was too long.

Which suggests that some types of socket may not require the
message to be sent atomically. (TCP/IP, for example.)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From thomas at xs4all.net  Mon Aug 14 07:38:55 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 14 Aug 2000 07:38:55 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/api api.tex,1.76,1.77
In-Reply-To: <200008140250.TAA31549@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Sun, Aug 13, 2000 at 07:50:23PM -0700
References: <200008140250.TAA31549@slayer.i.sourceforge.net>
Message-ID: <20000814073854.O14470@xs4all.nl>

On Sun, Aug 13, 2000 at 07:50:23PM -0700, Fred L. Drake wrote:

> In the section on the "Very High Level Layer", address concerns brought up
> by Edward K. Ream <edream at users.sourceforge.net> about FILE* values and
> incompatible C libraries in dynamically linked extensions.  It is not clear
> (to me) how realistic the issue is, but it is better documented than not.

> + Note also that several of these functions take \ctype{FILE*}
> + parameters.  One particular issue which needs to be handled carefully
> + is that the \ctype{FILE} structure for different C libraries can be
> + different and incompatible.  Under Windows (at least), it is possible
> + for dynamically linked extensions to actually use different libraries,
> + so care should be taken that \ctype{FILE*} parameters are only passed
> + to these functions if it is certain that they were created by the same
> + library that the Python runtime is using.

I saw a Jitterbug 'suggestion' bugthread, where Guido ended up liking the
idea of wrapping fopen() and fclose() in the Python library, so that you got
the right FILE structures when linking with another libc/compiler. Whatever
happened to that idea ? Or does it just await implementation ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Mon Aug 14 07:57:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 14 Aug 2000 07:57:13 +0200
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sun, Aug 13, 2000 at 08:08:45PM -0400
References: <20000813095357.K14470@xs4all.nl> <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com>
Message-ID: <20000814075713.P14470@xs4all.nl>

Well, Tim, thanx for that mini-PEP, if I can call your recap of years of
discussion that ;-) It did clear up my mind, though I have a few comments to
make. This is the last I have to say about it, though, I didn't intend to
drag you into a new long discussion ;)

On Sun, Aug 13, 2000 at 08:08:45PM -0400, Tim Peters wrote:

> Guido feels exactly the opposite:  the business about "alien, forced feel,
> not fitting" is exactly what he's said about map/filter/reduce/lambda on
> many occasions. 

Note that I didn't mention lambda, and did so purposely ;) Yes, listcomps
are much better than lambda. And I'll grant the special case of 'None' as
the function is unpythonic, in map/filter/reduce. Other than that, they're
just functions, which I hope aren't too unpythonic<wink>

> > [((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in
> > S] for b in [b for b in B if mean(b)] for b,c in C for a,d in D
> > for e in [Egg(a, b, c, d, e) for e in E]]

> That isn't a serious argument, to my eyes.

Well, it's at the core of my doubts :) 'for' and 'if' start out of thin air.
I don't think any other python statement or expression can be repeated and
glued together without any kind of separator, except string literals (which
I can see the point of, but scared me a little nonetheless.)

I don't know enough lisp to write this expression in that, but I assume you
could still match the parentheses to find out how they are grouped.

> I know you missed that possibility above because, despite your claim of
> being hard to parse, it's dead easy to spot where your listcomps begin:  "["
> is easy for the eye to find.

That's the start of a listcomp, but not of a specific listcomp-for or -if.

> > I hope anyone writing something like that (notice the shadowing of
> > some of the outer vrbls in the inner loops)

> You can find the same in nested lambdas littering map/reduce/etc today.

Yes, and wasn't the point to remove those ? <wink>

Like I said, I'm not arguing against list comprehensions, I'm just saying I'm
sorry we didn't get yet another debate on syntax ;) Having said that, I'll
step back and let Eric's predicted doom fall over Python; hopefully we are
wrong and you all are right :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jack at oratrix.nl  Mon Aug 14 11:44:39 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 14 Aug 2000 11:44:39 +0200
Subject: [Python-Dev] Preventing recursion core dumps 
In-Reply-To: Message by Guido van Rossum <guido@beopen.com> ,
	     Fri, 11 Aug 2000 09:28:09 -0500 , <200008111428.JAA04464@cj20424-a.reston1.va.home.com> 
Message-ID: <20000814094440.0BC7F303181@snelboot.oratrix.nl>

Isn't the solution to this problem to just implement PyOS_CheckStack() for 
unix?

I assume you can implement it fairly cheaply by having the first call compute 
a stack warning address and subsequent calls simply checking that the stack 
hasn't extended below the limit yet.

It might also be needed to add a few more PyOS_CheckStack() calls here and 
there, but I think most of them are in place.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From mal at lemburg.com  Mon Aug 14 13:27:27 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 14 Aug 2000 13:27:27 +0200
Subject: [Python-Dev] Doc-strings for class attributes ?!
Message-ID: <3997D79F.CC4A5A0E@lemburg.com>

I've been doing a lot of auto-doc style documenation lately
and have wondered how to include documentation for class attributes
in a nice and usable way.

Right now, we already have doc-strings for modules, classes,
functions and methods. Yet there is no way to assign doc-strings
to arbitrary class attributes.

I figured that it would be nice to have the doc-strings for
attributes use the same notation as for the other objects, e.g.

class C:
    " My class C "

    a = 1
    " This is the attribute a of class C, used for ..."

    b = 0
    " Setting b to 1 causes..."

The idea is to create an implicit second attribute for every
documented attribute, using a special name, e.g. for
attribute b:

    __doc__b__ = " Setting b to 1 causes..."

That way doc-strings would be able to use class inheritance
just like the attributes themselves. The extra attributes can
be created by the compiler. In -OO mode, these attributes would
not be created.
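Spelled out by hand, the proposal amounts to the following (a sketch only:
the compiler would generate the extra attributes, and the `__doc__b__`-style
naming is the suggestion above, not an implemented feature):

```python
class C:
    "My class C"

    a = 1
    __doc__a__ = "This is the attribute a of class C, used for ..."

    b = 0
    __doc__b__ = "Setting b to 1 causes..."

# The doc-strings then inherit along with the attributes themselves:
class D(C):
    pass
```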

What do you think about this idea ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bwarsaw at beopen.com  Mon Aug 14 16:13:21 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 10:13:21 -0400 (EDT)
Subject: [Python-Dev] 2nd thought: fully qualified host names
References: <3992DF9E.BF5A080C@nowonder.de>
	<200008101614.LAA28785@cj20424-a.reston1.va.home.com>
	<20000810174026.D17171@xs4all.nl>
	<39933AD8.B8EF5D59@nowonder.de>
	<20000811005013.F17171@xs4all.nl>
Message-ID: <14743.65153.264194.444209@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> Fine, the patch addresses that. When the hostname passed to
    TW> smtplib is "" (which is the default), it should be turned into
    TW> a FQDN. I agree. However, if someone passed in a name, we do
    TW> not know if they even *want* the name turned into a FQDN. In
    TW> the face of ambiguity, refuse the temptation to guess.

Just to weigh in after the fact, I agree with Thomas.  All this stuff
is primarily there to generate something sane for the default empty
string argument.  If the library client passes in their own name,
smtplib.py should use that as given.

-Barry



From fdrake at beopen.com  Mon Aug 14 16:46:17 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 14 Aug 2000 10:46:17 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test test_ntpath.py,1.2,1.3
In-Reply-To: <200008140621.XAA12890@slayer.i.sourceforge.net>
References: <200008140621.XAA12890@slayer.i.sourceforge.net>
Message-ID: <14744.1593.850598.411098@cj42289-a.reston1.va.home.com>

Mark Hammond writes:
 > Test for fix to bug #110673: os.abspath() now always returns
 > os.getcwd() on Windows, if an empty path is specified.  It
 > previously did not if an empty path was delegated to
 > win32api.GetFullPathName()
...
 > + tester('ntpath.abspath("")', os.getcwd())

  This doesn't work.  The test should pass on non-Windows platforms as
well; on Linux I get this:

cj42289-a(.../python/linux-beowolf); ./python ../Lib/test/test_ntpath.py
error!
evaluated: ntpath.abspath("")
should be: /home/fdrake/projects/python/linux-beowolf
 returned: \home\fdrake\projects\python\linux-beowolf\

1 errors.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From skip at mojam.com  Mon Aug 14 16:56:39 2000
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 14 Aug 2000 09:56:39 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0001.txt,1.4,1.5
In-Reply-To: <200008141448.HAA18067@slayer.i.sourceforge.net>
References: <200008141448.HAA18067@slayer.i.sourceforge.net>
Message-ID: <14744.2215.11395.695253@beluga.mojam.com>

    Barry> There are now three basic types of PEPs: informational, standards
    Barry> track, and technical.

Looking more like RFCs all the time... ;-)

Skip



From jim at interet.com  Mon Aug 14 17:25:59 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Mon, 14 Aug 2000 11:25:59 -0400
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <39980F87.85641FD2@interet.com>

Bill Tutt wrote:
> 
> This is an alternative approach that we should certainly consider. We could
> use ANTLR (www.antlr.org) as our parser generator, and have it generate Java

What about using Bison/Yacc?  I have been playing with a
lint tool for Python, and have been using it.

JimA



From trentm at ActiveState.com  Mon Aug 14 17:41:28 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Mon, 14 Aug 2000 08:41:28 -0700
Subject: Python lint tool (was: Re: [Python-Dev] Python keywords (was Lockstep iteration - eureka!))
In-Reply-To: <39980F87.85641FD2@interet.com>; from jim@interet.com on Mon, Aug 14, 2000 at 11:25:59AM -0400
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com> <39980F87.85641FD2@interet.com>
Message-ID: <20000814084128.A7537@ActiveState.com>

On Mon, Aug 14, 2000 at 11:25:59AM -0400, James C. Ahlstrom wrote:
> What about using Bison/Yacc?  I have been playing with a
> lint tool for Python, and have been using it.
> 
Oh yeah? What does the linter check? I would be interested in seeing that.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From bwarsaw at beopen.com  Mon Aug 14 17:46:50 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 11:46:50 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
References: <3992DF9E.BF5A080C@nowonder.de>
	<200008101614.LAA28785@cj20424-a.reston1.va.home.com>
	<20000810174026.D17171@xs4all.nl>
	<3993D570.7578FE71@nowonder.de>
Message-ID: <14744.5229.470633.973850@anthem.concentric.net>

>>>>> "PS" == Peter Schneider-Kamp <nowonder at nowonder.de> writes:

    PS> After sleeping over it, I noticed that at least
    PS> BaseHTTPServer and ftplib also use a similar
    PS> algorithm to get a fully qualified domain name.

    PS> Together with smtplib there are four occurences
    PS> of the algorithm (2 in BaseHTTPServer). I think
    PS> it would be good not to have four, but one
    PS> implementation.

    PS> First I thought it could be socket.get_fqdn(),
    PS> but it seems a bit troublesome to write it in C.

    PS> Should this go somewhere? If yes, where should
    PS> it go?

    PS> I'll happily prepare a patch as soon as I know
    PS> where to put it.

I wonder if we should move socket to _socket and write a Python
wrapper which would basically import * from _socket and add
make_fqdn().
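A sketch of what such a make_fqdn() in the Python wrapper might look like
(hedged: this mirrors the getfqdn() that later appeared in the socket module;
the exact fallback behaviour here is an assumption, not a spec):

```python
import socket

def make_fqdn(name=""):
    # Default to this host, then look for a fully qualified alias.
    name = name.strip() or socket.gethostname()
    try:
        hostname, aliases, _ = socket.gethostbyaddr(name)
    except socket.error:
        return name
    for candidate in [hostname] + aliases:
        if "." in candidate:
            return candidate
    return hostname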

-Barry



From thomas at xs4all.net  Mon Aug 14 17:48:37 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 14 Aug 2000 17:48:37 +0200
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <14744.5229.470633.973850@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 14, 2000 at 11:46:50AM -0400
References: <3992DF9E.BF5A080C@nowonder.de> <200008101614.LAA28785@cj20424-a.reston1.va.home.com> <20000810174026.D17171@xs4all.nl> <3993D570.7578FE71@nowonder.de> <14744.5229.470633.973850@anthem.concentric.net>
Message-ID: <20000814174837.S14470@xs4all.nl>

On Mon, Aug 14, 2000 at 11:46:50AM -0400, Barry A. Warsaw wrote:

> >>>>> "PS" == Peter Schneider-Kamp <nowonder at nowonder.de> writes:

>     PS> After sleeping over it, I noticed that at least
>     PS> BaseHTTPServer and ftplib also use a similar
>     PS> algorithm to get a fully qualified domain name.
> 
>     PS> Together with smtplib there are four occurences
>     PS> of the algorithm (2 in BaseHTTPServer). I think
>     PS> it would be good not to have four, but one
>     PS> implementation.
> 
>     PS> First I thought it could be socket.get_fqdn(),
>     PS> but it seems a bit troublesome to write it in C.
> 
>     PS> Should this go somewhere? If yes, where should
>     PS> it go?
> 
>     PS> I'll happily prepare a patch as soon as I know
>     PS> where to put it.
> 
> I wonder if we should move socket to _socket and write a Python
> wrapper which would basically import * from _socket and add
> make_fqdn().

+1 on that idea, especially since BeOS and Windows (I think ?) already have
that construction. If we are going to place this make_fqdn() function
anywhere, it should be the socket module or a 'dns' module. (And I mean a
real DNS module, not the low-level wrapper around raw DNS packets that Guido
wrote ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From bwarsaw at beopen.com  Mon Aug 14 17:56:15 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 11:56:15 -0400 (EDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <Pine.LNX.4.10.10008090216090.497-100000@skuld.lfw.org>
	<l03102802b5b71c40f9fc@[193.78.237.121]>
	<3993FD49.C7E71108@prescod.net>
Message-ID: <14744.5791.895030.893545@anthem.concentric.net>

>>>>> "PP" == Paul Prescod <paul at prescod.net> writes:

    PP> Let me throw out another idea. What if sequences just had
    PP> .items() methods?

Funny, I remember talking with Guido about this on a lunch trip
several years ago.  Tim will probably chime in that /he/ proposed it
in the Python 0.9.3 time frame.  :)

-Barry



From fdrake at beopen.com  Mon Aug 14 17:59:53 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 14 Aug 2000 11:59:53 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <14744.5229.470633.973850@anthem.concentric.net>
References: <3992DF9E.BF5A080C@nowonder.de>
	<200008101614.LAA28785@cj20424-a.reston1.va.home.com>
	<20000810174026.D17171@xs4all.nl>
	<3993D570.7578FE71@nowonder.de>
	<14744.5229.470633.973850@anthem.concentric.net>
Message-ID: <14744.6009.66009.888078@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > I wonder if we should move socket to _socket and write a Python
 > wrapper which would basically import * from _socket and add
 > make_fqdn().

  I think we could either do this or use PyRun_String() from
initsocket().


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Mon Aug 14 18:09:11 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 12:09:11 -0400 (EDT)
Subject: [Python-Dev] Cookie.py
References: <20000811122608.F20646@kronos.cnri.reston.va.us>
	<Pine.GSO.4.10.10008111936060.5259-100000@sundial>
Message-ID: <14744.6567.225562.458943@anthem.concentric.net>

>>>>> "MZ" == Moshe Zadka <moshez at math.huji.ac.il> writes:

    | a) SimpleCookie -- never uses pickle
    | b) SerializeCookie -- always uses pickle
    | c) SmartCookie -- uses pickle based on old heuristic.

Very cool.  The autopicklification really bugged me too (literally) in
Mailman.

-Barry



From bwarsaw at beopen.com  Mon Aug 14 18:12:45 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 12:12:45 -0400 (EDT)
Subject: [Python-Dev] Python keywords (was Lockstep iteration - eureka!)
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <14744.6781.535265.161119@anthem.concentric.net>

>>>>> "BT" == Bill Tutt <billtut at microsoft.com> writes:

    BT> This is an alternative approach that we should certainly
    BT> consider. We could use ANTLR (www.antlr.org) as our parser
    BT> generator, and have it generate Java for JPython, and C++ for
    BT> CPython.  This would be a good chunk of work, and it's
    BT> something I really don't have time to pursue. I don't even
    BT> have time to pursue the idea about moving keyword recognition
    BT> into the lexer.

    BT> I'm just not sure if you want to bother introducing C++ into
    BT> the Python codebase solely to only have one parser for CPython
    BT> and JPython.

We've talked about exactly those issues internally a while back, but
never came to a conclusion (IIRC) about the C++ issue for CPython.

-Barry



From jim at interet.com  Mon Aug 14 18:29:08 2000
From: jim at interet.com (James C. Ahlstrom)
Date: Mon, 14 Aug 2000 12:29:08 -0400
Subject: Python lint tool (was: Re: [Python-Dev] Python keywords (was 
 Lockstep iteration - eureka!))
References: <58C671173DB6174A93E9ED88DCB0883D0A6121@red-msg-07.redmond.corp.microsoft.com> <39980F87.85641FD2@interet.com> <20000814084128.A7537@ActiveState.com>
Message-ID: <39981E54.D50BD0B4@interet.com>

Trent Mick wrote:
> 
> On Mon, Aug 14, 2000 at 11:25:59AM -0400, James C. Ahlstrom wrote:
> > What about using Bison/Yacc?  I have been playing with a
> > lint tool for Python, and have been using it.
> >
> Oh yeah? What does the linter check? I would be interested in seeing that.

Actually I have better luck parsing Python than linting it.  My
initial naive approach, using C-language wisdom such as checking the
line numbers where variables are set and used, failed.  I now feel that
a Python lint tool must either use complete data flow analysis
(hard) or must actually interpret the code as Python does (hard).
All I can really do so far is get and check function signatures.
I can supply more details if you want, but remember it doesn't
work yet, and I may not have time to complete it.  I learned a
lot though.

To parse Python I first use Parser/tokenizer.c to return tokens,
then a Yacc grammar file.  This parses all of Lib/*.py in less
than two seconds on a modest machine.  The tokens returned by
tokenizer.c must be massaged a bit to be suitable for Yacc, but
nothing major.

All the Yacc actions are calls to Python methods, so the real
work is written in Python.  Yacc just represents the grammar.

The problem I have with the current grammar is the large number
of confusing shifts required.  The grammar can't specify operator
precedence, so it uses shift/reduce conflicts instead.  Yacc
eliminates this problem.
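(For concreteness, the first stage described above -- getting a token
stream from Python source before handing it to a grammar -- can be
sketched with the stdlib tokenize module, used here as a stand-in for
the Parser/tokenizer.c + Yacc pipeline; the sample source is
illustrative only:)

```python
# Pull a token stream from Python source, the way a lint tool's
# front end would, before feeding a grammar. Sketch only.
import io
import tokenize

source = "def f(x):\n    return x + 1\n"
tokens = [
    (tokenize.tok_name[tok.type], tok.string)
    for tok in tokenize.generate_tokens(io.StringIO(source).readline)
]
print(tokens[0])  # ('NAME', 'def')
```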

JimA



From tim_one at email.msn.com  Mon Aug 14 18:42:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 14 Aug 2000 12:42:14 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <14744.5791.895030.893545@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>

[Paul Prescod]
> Let me throw out another idea. What if sequences just had
> .items() methods?

[Barry A. Warsaw]
> Funny, I remember talking with Guido about this on a lunch trip
> several years ago.  Tim will probably chime in that /he/ proposed it
> in the Python 0.9.3 time frame.  :)

Not me, although *someone* proposed it at least that early, perhaps at 0.9.1
already.  IIRC, that was the very first time Guido used the term
"hypergeneralization" in a cluck-cluck kind of public way.  That is,
sequences and mappings are different concepts in Python, and intentionally
so.  Don't know how he feels now.

But if you add seq.items(), you had better add seq.keys() too, and
seq.values() as a synonym for seq[:].  I guess the perceived advantage of
adding seq.items() is that it supplies yet another incredibly slow and
convoluted way to get at the for-loop index?  "Ah, that's the ticket!  Let's
allocate gazillabytes of storage and compute all the indexes into a massive
data structure up front, and then we can use the loop index that's already
sitting there for free anyway to index into that and get back a redundant
copy of itself!" <wink>.
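(For concreteness, the hypergeneralization being mocked amounts to
something like this hypothetical helper -- not a real list method,
and `seq_items` is a made-up name:)

```python
# Hypothetical sketch of a seq.items(): build the full list of
# (index, element) pairs up front, just to recover the index the
# for loop already had for free. Illustration only.
def seq_items(seq):
    return [(i, seq[i]) for i in range(len(seq))]

letters = ["a", "b", "c"]
for i, ch in seq_items(letters):
    print(i, ch)
```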

not-a-good-sign-when-common-sense-is-offended-ly y'rs  - tim





From bwarsaw at beopen.com  Mon Aug 14 18:48:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 12:48:59 -0400 (EDT)
Subject: [Python-Dev] Python Announcements ???
References: <39951D69.45D01703@lemburg.com>
Message-ID: <14744.8955.35531.757406@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> Could someone at BeOpen please check what happened to the
    M> python-announce mailing list ?!

This is on my task list, but I was on vacation last week and have been
swamped with various other things.  My plan is to feed the
announcements to a Mailman list, where approval can happen using the
same moderator interface.  But I need to make a few changes to Mailman
to support this.

-Barry



From mal at lemburg.com  Mon Aug 14 18:54:05 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 14 Aug 2000 18:54:05 +0200
Subject: [Python-Dev] Python Announcements ???
References: <39951D69.45D01703@lemburg.com> <14744.8955.35531.757406@anthem.concentric.net>
Message-ID: <3998242D.A61010FB@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     M> Could someone at BeOpen please check what happened to the
>     M> python-announce mailing list ?!
> 
> This is on my task list, but I was on vacation last week and have been
> swamped with various other things.  My plan is to feed the
> announcements to a Mailman list, where approval can happen using the
> same moderator interface.  But I need to make a few changes to Mailman
> to support this.

Great :-)

BTW, doesn't SourceForge have some News channel for Python
as well (I have seen these for other projects) ? Would be
cool to channel the announcements there as well.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From ping at lfw.org  Mon Aug 14 20:58:11 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Mon, 14 Aug 2000 13:58:11 -0500 (CDT)
Subject: Python lint tool (was: Re: [Python-Dev] Python keywords (was 
 Lockstep iteration - eureka!))
In-Reply-To: <39981E54.D50BD0B4@interet.com>
Message-ID: <Pine.LNX.4.10.10008141345220.3988-100000@server1.lfw.org>

Trent Mick wrote:
> Oh yeah? What does the linter check? I would be interested in seeing that.

James C. Ahlstrom wrote:
> Actually I have better luck parsing Python than linting it.  [...]
> All I can really do so far is get and check function signatures.

Python is hard to lint-check because types and objects are so
dynamic.  Last time i remember visiting this issue, Tim Peters
came up with a lint program that was based on warning you if
you used a particular spelling of an identifier only once (thus
likely to indicate a typing mistake).

I enhanced this a bit to follow imports and the result is at

    http://www.lfw.org/python/

(look for "pylint").

The rule is pretty simplistic, but i've tweaked it a bit and it
has actually worked pretty well for me.
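(A toy version of that rule, sketched with the modern stdlib -- flag
identifier spellings that occur exactly once, since a one-off spelling
is often a typo. The real pylint at lfw.org also follows imports;
`lonely_names` and the sample code are illustrative names only:)

```python
# Count identifier spellings in a piece of source; report the ones
# used exactly once. Noisy by design, like the heuristic it sketches.
import collections
import io
import keyword
import tokenize

def lonely_names(source):
    counts = collections.Counter(
        tok.string
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string)
    )
    return sorted(name for name, n in counts.items() if n == 1)

code = "total = 0\nfor x in data:\n    total = totall + x\n"
print(lonely_names(code))  # ['data', 'totall'] -- 'totall' is the typo
```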

Anyway, feel free to give it a whirl.



-- ?!ng




From bwarsaw at beopen.com  Mon Aug 14 21:12:04 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 15:12:04 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
References: <3992DF9E.BF5A080C@nowonder.de>
	<200008101614.LAA28785@cj20424-a.reston1.va.home.com>
	<20000810174026.D17171@xs4all.nl>
	<3993D570.7578FE71@nowonder.de>
	<14744.5229.470633.973850@anthem.concentric.net>
	<14744.6009.66009.888078@cj42289-a.reston1.va.home.com>
Message-ID: <14744.17540.586064.729048@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake at beopen.com> writes:

    |   I think we could either do this or use PyRun_String() from
    | initsocket().

Ug.  -1 on using PyRun_String().  Doing the socket->_socket shuffle is
better for the long term.

-Barry



From nowonder at nowonder.de  Mon Aug 14 23:12:03 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Mon, 14 Aug 2000 21:12:03 +0000
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>
Message-ID: <399860A3.4E9A340E@nowonder.de>

Tim Peters wrote:
> 
> But if you add seq.items(), you had better add seq.keys() too, and
> seq.values() as a synonym for seq[:].  I guess the perceived advantage of
> adding seq.items() is that it supplies yet another incredibly slow and
> convoluted way to get at the for-loop index?  "Ah, that's the ticket!  Let's
> allocate gazillabytes of storage and compute all the indexes into a massive
> data structure up front, and then we can use the loop index that's already
> sitting there for free anyway to index into that and get back a redundant
> copy of itself!" <wink>.

That's a -1, right? <0.1 wink>

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From fdrake at beopen.com  Mon Aug 14 21:13:29 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 14 Aug 2000 15:13:29 -0400 (EDT)
Subject: [Python-Dev] xxx.get_fqdn() for the standard lib
In-Reply-To: <14744.17540.586064.729048@anthem.concentric.net>
References: <3992DF9E.BF5A080C@nowonder.de>
	<200008101614.LAA28785@cj20424-a.reston1.va.home.com>
	<20000810174026.D17171@xs4all.nl>
	<3993D570.7578FE71@nowonder.de>
	<14744.5229.470633.973850@anthem.concentric.net>
	<14744.6009.66009.888078@cj42289-a.reston1.va.home.com>
	<14744.17540.586064.729048@anthem.concentric.net>
Message-ID: <14744.17625.935969.667720@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > Ug.  -1 on using PyRun_String().  Doing the socket->_socket shuffle is
 > better for the long term.

  I'm inclined to agree, simply because it allows at least a slight
simplification in socketmodule.c since the conditional naming of the
module init function can be removed.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Mon Aug 14 21:24:10 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 15:24:10 -0400 (EDT)
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <14744.5791.895030.893545@anthem.concentric.net>
	<LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>
Message-ID: <14744.18266.840173.466719@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> But if you add seq.items(), you had better add seq.keys() too,
    TP> and seq.values() as a synonym for seq[:].  I guess the
    TP> perceived advantage of adding seq.items() is that it supplies
    TP> yet another incredibly slow and convoluted way to get at the
    TP> for-loop index?  "Ah, that's the ticket!  Let's allocate
    TP> gazillabytes of storage and compute all the indexes into a
    TP> massive data structure up front, and then we can use the loop
    TP> index that's already sitting there for free anyway to index
    TP> into that and get back a redundant copy of itself!" <wink>.

Or create a generator.  <oops, slap>

-Barry



From bwarsaw at beopen.com  Mon Aug 14 21:25:07 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 14 Aug 2000 15:25:07 -0400 (EDT)
Subject: [Python-Dev] Python Announcements ???
References: <39951D69.45D01703@lemburg.com>
	<14744.8955.35531.757406@anthem.concentric.net>
	<3998242D.A61010FB@lemburg.com>
Message-ID: <14744.18323.499501.115700@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> BTW, doesn't SourceForge have some News channel for Python
    M> as well (I have seen these for other projects) ? Would be
    M> cool to channel the announcements there as well.

Yes, but it's a bit clunky.

-Barry



From esr at thyrsus.com  Tue Aug 15 00:57:18 2000
From: esr at thyrsus.com (esr at thyrsus.com)
Date: Mon, 14 Aug 2000 18:57:18 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <200008140417.QAA14959@s454.cosc.canterbury.ac.nz>
References: <20000813095357.K14470@xs4all.nl> <200008140417.QAA14959@s454.cosc.canterbury.ac.nz>
Message-ID: <20000814185718.A2509@thyrsus.com>

Greg Ewing <greg at cosc.canterbury.ac.nz>:
> Two reasons why list comprehensions fit better in Python
> than the equivalent map/filter/lambda constructs:
> 
> 1) Scoping. The expressions in the LC have direct access to the
>    enclosing scope, which is not true of lambdas in Python.

This is a bug in lambdas, not a feature of syntax.
 
> 2) Efficiency. An LC with if-clauses which weed out many potential
>    list elements can be much more efficient than the equivalent
>    filter operation, which must build the whole list first and
>    then remove unwanted items.

A better argument.  To refute it, I'd need to open a big can of worms
labeled "lazy evaluation".
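(Greg's efficiency point, sketched in the two spellings; note that in
the Python of the day, filter() built the entire intermediate list
before map() ran over it, whereas the comprehension tests each element
as it goes:)

```python
# Two spellings of the same computation: squares of the even numbers.
data = range(20)

via_lc = [x * x for x in data if x % 2 == 0]
via_mf = list(map(lambda x: x * x, filter(lambda x: x % 2 == 0, data)))

print(via_lc == via_mf)  # True
```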
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

Freedom, morality, and the human dignity of the individual consists
precisely in this; that he does good not because he is forced to do
so, but because he freely conceives it, wants it, and loves it.
	-- Mikhail Bakunin 



From esr at thyrsus.com  Tue Aug 15 00:59:08 2000
From: esr at thyrsus.com (esr at thyrsus.com)
Date: Mon, 14 Aug 2000 18:59:08 -0400
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers for 2.0)
In-Reply-To: <20000814075713.P14470@xs4all.nl>
References: <20000813095357.K14470@xs4all.nl> <LNBBLJKPBEHFEDALKOLCMEJDGPAA.tim_one@email.msn.com> <20000814075713.P14470@xs4all.nl>
Message-ID: <20000814185908.B2509@thyrsus.com>

Thomas Wouters <thomas at xs4all.net>:
> Like I said, I'm not arguing against listcomprehensions, I'm just saying I'm
> sorry we didn't get yet another debate on syntax ;) Having said that, I'll
> step back and let Eric's predicted doom fall over Python; hopefully we are
> wrong and you all are right :-)

Now, now.  I'm not predicting the doom of Python as a whole, just that 
listcomp syntax will turn out to have been a bad, limiting mistake.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

It is proper to take alarm at the first experiment on our
liberties. We hold this prudent jealousy to be the first duty of
citizens and one of the noblest characteristics of the late
Revolution. The freemen of America did not wait till usurped power had
strengthened itself by exercise and entangled the question in
precedents. They saw all the consequences in the principle, and they
avoided the consequences by denying the principle. We revere this
lesson too much ... to forget it
	-- James Madison.



From MarkH at ActiveState.com  Tue Aug 15 02:46:56 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 15 Aug 2000 10:46:56 +1000
Subject: [Python-Dev] WindowsError repr
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEMDDEAA.MarkH@ActiveState.com>

I have just checked in a fix for: [ Bug #110670 ] Win32 os.listdir raises
confusing errors
http://sourceforge.net/bugs/?group_id=5470&func=detailbug&bug_id=110670

In a nutshell:
>>> os.listdir('/cow')
...
OSError: [Errno 3] No such process: '/cow'
>>>

The solution here was to use the new WindowsError object that was defined
back in February
(http://www.python.org/pipermail/python-dev/2000-February/008803.html)  As
this is a sub-class of OSError, nothing will break.

However, the _look_ of the error does change.  After my fix, it now looks
like:

>>> os.listdir('/cow')
...
WindowsError: [Errno 3] The system cannot find the path specified: '/cow'
>>>

AGAIN - I stress - catching "OSError" or "os.error" _will_ continue to
work, as WindowsError derives from OSError.  It just worries me that people
will start explicitly catching "WindowsError", regardless of whatever
documentation we might write on the subject.
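(Why existing handlers keep working, sketched: an exception that
subclasses OSError is caught by `except OSError`. FakeWindowsError is
a stand-in defined here for illustration, since the real WindowsError
exists only on Windows builds:)

```python
# A subclass of OSError is caught by a plain `except OSError` clause.
class FakeWindowsError(OSError):
    pass

def list_missing_dir():
    raise FakeWindowsError(3, "The system cannot find the path specified")

try:
    list_missing_dir()
except OSError as e:
    caught = e

print(type(caught).__name__)  # FakeWindowsError
```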

Does anyone see this as a problem?  Should a WindowsError masquerade as
"OSError", or maybe just look a little more like it - eg, "OSError
(windows)" ??

Thoughts,

Mark.




From tim_one at email.msn.com  Tue Aug 15 03:01:55 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 14 Aug 2000 21:01:55 -0400
Subject: [Python-Dev] WindowsError repr
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEMDDEAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEMIGPAA.tim_one@email.msn.com>

[Mark Hammond]
> ...
> However, the _look_ of the error does change.  After my fix, it now looks
> like:
>
> >>> os.listdir('/cow')
> ...
> WindowsError: [Errno 3] The system cannot find the path specified: '/cow'
> >>>

Thank you!

> AGAIN - I stress - catching "OSError" or "os.error" _will_ continue to
> work, as WindowsError derives from OSError.  It just worries me
> that people will start explicitly catching "WindowsError", regardless
> of whatever documentation we might write on the subject.
>
> Does anyone see this as a problem?  Should a WindowsError masquerade as
> "OSError", or maybe just look a little more like it - eg, "OSError
> (windows)" ??

I can assure you that nobody running on a Unix(tm) derivative is going to
catch WindowsError as such on purpose, so the question is how stupid are
Windows users?  I say leave it alone and let them tell us <wink>.





From greg at cosc.canterbury.ac.nz  Tue Aug 15 03:08:00 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 15 Aug 2000 13:08:00 +1200 (NZST)
Subject: [Python-Dev] RE: list comprehensions (was parsers and compilers
 for 2.0)
In-Reply-To: <20000814075713.P14470@xs4all.nl>
Message-ID: <200008150108.NAA15067@s454.cosc.canterbury.ac.nz>

> > [((a,b)*c, (spam(d)%34)^e) for a in [(x, y) for x in L for y in
> > S] for b in [b for b in B if mean(b)] for b,c in C for a,d in D
> > for e in [Egg(a, b, c, d, e) for e in E]]

Note that shadowing of the local variables like that in
an LC is NOT allowed, because, like the variables in a
normal for loop, they're all at the same scope level.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Tue Aug 15 06:43:44 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 00:43:44 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <399860A3.4E9A340E@nowonder.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCOENAGPAA.tim_one@email.msn.com>

[Tim]
> But if you add seq.items(), you had better add seq.keys() too, and
> seq.values() as a synonym for seq[:].  I guess the perceived
> advantage of adding seq.items() is that it supplies yet another
> incredibly slow and convoluted way to get at the for-loop index?
> "Ah, that's the ticket!  Let's allocate gazillabytes of storage and
> compute all the indexes into a massive data structure up front, and
> then we can use the loop index that's already sitting there for
> free anyway to index into that and get back a redundant copy of
> itself!" <wink>.

[Peter Schneider-Kamp]]
> That's a -1, right? <0.1 wink>

-0 if you also add .keys() and .values() (if you're going to
hypergeneralize, don't go partway nuts -- then it's both more general than
it should be yet still not as general as people will expect).

-1 if it's *just* seq.items().

+1 on an "indexing" clause (the BDFL liked that enough to implement it a few
years ago, but it didn't go in then because he found some random putz who
had used "indexing" as a vrbl name; but if it doesn't need to be a keyword,
even that lame (ask Just <wink>) objection goes away).

sqrt(-1) on Barry's generator tease, because that's an imaginary proposal at
this stage of the game.





From effbot at telia.com  Tue Aug 15 07:33:03 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 07:33:03 +0200
Subject: [Python-Dev] WindowsError repr
References: <ECEPKNMJLHAPFFJHDOJBEEMDDEAA.MarkH@ActiveState.com>
Message-ID: <003d01c0067a$4aa6dc40$f2a6b5d4@hagrid>

mark wrote:
> AGAIN - I stress - catching "OSError" or "os.error" _will_ continue to
> work, as WindowsError derives from OSError.  It just worries me that people
> will start explicitly catching "WindowsError", regardless of whatever
> documentation we might write on the subject.
> 
> Does anyone see this as a problem?

I've seen bigger problems -- but I think it's a problem.

any reason you cannot just use a plain OSError?  is the extra
"this is not a generic OSError" information bit actually used by
anyone?

</F>




From effbot at telia.com  Tue Aug 15 08:14:42 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 08:14:42 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.7,1.8
References: <200008150558.WAA26703@slayer.i.sourceforge.net>
Message-ID: <006d01c00680$1c4469c0$f2a6b5d4@hagrid>

tim wrote:
> !     test_popen2       Win32      X X    26-Jul-2000
>           [believe this was fix by /F]
> !         [still fails 15-Aug-2000 for me, on Win98 - tim
> !          test test_popen2 crashed -- exceptions.WindowsError :
> !          [Errno 2] The system cannot find the file specified
> !         ]

do you have w9xpopen.exe in place?

(iirc, mark just added the build files)

</F>




From tim_one at email.msn.com  Tue Aug 15 08:30:40 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 02:30:40 -0400
Subject: [Python-Dev] [PEP 200] Help!
Message-ID: <LNBBLJKPBEHFEDALKOLCEENEGPAA.tim_one@email.msn.com>

I took a stab at updating PEP200 (the 2.0 release plan), but if you know
more about any of it that should be recorded or changed, please just do so!
There's no reason to funnel updates thru me.  Jeremy may feel differently
when he gets back, but in the meantime this is just more time-consuming
stuff I hadn't planned on needing to do.

Windows geeks:  what's going on with test_winreg2 and test_popen2?  Those
tests have been failing forever (at least on Win98 for me), and the grace
period has more than expired.  Fredrik, if you're still waiting for me to do
something with popen2 (rings a vague bell), please remind me -- I've
forgotten what it was!

thrashingly y'rs  - tim





From tim_one at email.msn.com  Tue Aug 15 08:43:06 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 02:43:06 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.7,1.8
In-Reply-To: <006d01c00680$1c4469c0$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIENFGPAA.tim_one@email.msn.com>

> tim wrote:
> > !     test_popen2       Win32      X X    26-Jul-2000
> >           [believe this was fix by /F]
> > !         [still fails 15-Aug-2000 for me, on Win98 - tim
> > !          test test_popen2 crashed -- exceptions.WindowsError :
> > !          [Errno 2] The system cannot find the file specified
> > !         ]

[/F]
> do you have w9xpopen.exe in place?
>
> (iirc, mark just added the build files)

Ah, thanks!  This is coming back to me now ... kinda ... will pursue.





From tim_one at email.msn.com  Tue Aug 15 09:07:49 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 03:07:49 -0400
Subject: [Python-Dev] test_popen2 on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIENFGPAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENGGPAA.tim_one@email.msn.com>

[/F]
> do you have w9xpopen.exe in place?
>
> (iirc, mark just added the build files)

Heh -- yes, and I wasn't building them.

Now test_popen2 fails for a different reason:

def _test():
    teststr = "abc\n"
    print "testing popen2..."
    r, w = popen2('cat')
    ...

Ain't no "cat" on Win98!  The test is specific to Unix derivatives.  Other
than that, popen2 is working for me now.

Mumble.





From MarkH at ActiveState.com  Tue Aug 15 10:08:33 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 15 Aug 2000 18:08:33 +1000
Subject: [Python-Dev] test_popen2 on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKENGGPAA.tim_one@email.msn.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCENBDEAA.MarkH@ActiveState.com>

> Ain't no "cat" on Win98!  The test is specific to Unix
> derivatives.  Other than that, popen2 is working for me

heh - I noticed that yesterday, then lumped it in the too hard basket.

What I really wanted was for test_popen2 to use python itself for the
sub-process.  This way, commands like 'python -c "import sys;sys.exit(2)"'
could test the handle close semantics, for example.  I gave up when I
realized I would probably need to create temp files with the mini-programs.
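(That idea, sketched with the modern subprocess module: drive the test
with Python itself as the child process, so nothing depends on a Unix
`cat` being present:)

```python
# Use the running interpreter as the subprocess under test.
import subprocess
import sys

proc = subprocess.run([sys.executable, "-c", "import sys; sys.exit(2)"])
print(proc.returncode)  # 2
```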

I was quite confident that if I attempted this, I would surely break the
test suite on a few platforms.  I wasn't brave enough to risk those
testicles of wrath at this stage in the game <wink>

Mark.





From thomas at xs4all.net  Tue Aug 15 13:15:42 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 13:15:42 +0200
Subject: [Python-Dev] New PEP for import-as
Message-ID: <20000815131542.B14470@xs4all.nl>


I wrote a quick PEP describing the 'import as' proposal I posted a patch for
last week. Mostly since I was bored in the train to work (too many kids running
around to play Diablo II or any other game -- I hate it when those brats go
'oh cool' and keep hanging around looking over my shoulder ;-) but also a
bit because Tim keeps insisting it should be easy to write a PEP. Perhaps
lowering the standard by providing a few *small* PEPs helps with that ;)
Just's 'indexing-for' PEP would be a good one, too, in that case.

Anyway, the proto-PEP is attached. It's in draft status as far as I'm
concerned, but the PEP isn't really necessary if the feature is accepted by
acclamation.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
-------------- next part --------------
PEP: 2XX
Title: Import As
Version: $Revision: 1.0 $
Owner: thomas at xs4all.net (Thomas Wouters)
Python-Version: 2.0
Status: Draft


Introduction

    This PEP describes the `import as' proposal for Python 2.0. This
    PEP tracks the status and ownership of this feature. It contains a
    description of the feature and outlines changes necessary to
    support the feature. The CVS revision history of this file
    contains the definitive historical record.


Rationale

    This PEP proposes a small extension of current Python syntax
    regarding the `import' and `from <module> import' statements.
    These statements load in a module, and either bind that module to
    a local name, or bind objects from that module to a local name.
    However, it is sometimes desirable to bind those objects to a
    different name, for instance to avoid name clashes. Currently, a
    round-about way has to be used to achieve this:

    import os
    real_os = os
    del os
    
    And similar for the `from ... import' statement:
    
    from os import fdopen, exit, stat
    os_fdopen = fdopen
    os_stat = stat
    del fdopen, stat
    
    The proposed syntax change would add an optional `as' clause to
    both these statements, as follows:

    import os as real_os
    from os import fdopen as os_fdopen, exit, stat as os_stat
    
    The `as' name is not intended to be a keyword, and some trickery
    has to be used to convince the CPython parser it isn't one. For
    more advanced parsers/tokenizers, however, this should not be a
    problem.
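    (At runtime the proposal is only a different binding of the same
    module object, roughly as this sketch using the builtin
    __import__ shows:)

```python
# `import os as real_os` would bind the same module object that a
# plain `import os` binds, just under another name. Sketch only.
real_os = __import__("os")
import os
print(real_os is os)  # True
```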


Implementation details

    A proposed implementation of this new clause can be found in the
    SourceForge patch manager[1]. The patch uses a NAME field in the
    grammar rather than a bare string, to avoid the keyword issue. It
    also introduces a new bytecode, IMPORT_FROM_AS, which loads an
    object from a module and pushes it onto the stack, so it can be
    stored by a normal STORE_NAME opcode.
    
    The special case of `from module import *' remains a special case,
    in that it cannot accommodate an `as' clause. Also, the current
    implementation does not use IMPORT_FROM_AS for the old form of
    from-import, even though it would make sense to do so. The reason
    for this is that the current IMPORT_FROM bytecode loads objects
    directly from a module into the local namespace, in one bytecode
    operation, and also handles the special case of `*'. As a result
    of moving to the IMPORT_FROM_AS bytecode, two things would happen:
    
    - Two bytecode operations would have to be performed, per symbol,
      rather than one.
      
    - The names imported through `from-import' would be susceptible to
      the `global' keyword, which they currently are not. This means
      that `from-import' outside of the `*' special case behaves more
      like the normal `import' statement, which already follows the
      `global' keyword. It also means, however, that the `*' special
      case is even more special, compared to the ordinary form of
      `from-import'

    However, for consistency and for simplicity of implementation, it
    is probably best to split off the special case entirely, making a
    separate bytecode `IMPORT_ALL' that handles the special case of
    `*', and handle all other forms of `from-import' the way the
    proposed `IMPORT_FROM_AS' bytecode does.

    This dilemma does not apply to the normal `import' statement,
    because this is already split into two opcodes, a `LOAD_MODULE' and a
    `STORE_NAME' opcode. Supporting the `import as' syntax is a slight
    change to the compiler only.


Copyright

    This document has been placed in the Public Domain.


References

    [1]
http://sourceforge.net/patch/?func=detailpatch&patch_id=101135&group_id=5470



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

From nowonder at nowonder.de  Tue Aug 15 17:32:50 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Tue, 15 Aug 2000 15:32:50 +0000
Subject: [Python-Dev] IDLE development - Call for participation
Message-ID: <399962A2.D53A048F@nowonder.de>

To (hopefully) speed up the development of IDLE a temporary
fork has been created as a separate project at SourceForge:

  http://idlefork.sourceforge.net
  http://sourceforge.net/projects/idlefork

The CVS version represents the enhanced IDLE version
used by David Scherer in his VPython. Besides other
improvements this version executes threads in a
separate process.

The spanish inquisition invites everybody interested in
IDLE (and not keen to participate in any witch trials)
to contribute to the project.

Any kind of contribution (discussion of new features,
bug reports, patches) will be appreciated.

If we can get the new IDLE version stable and Python's
benevolent dictator for life blesses our lines of code,
the improved IDLE may go back into Python's source
tree proper.

at-least-it'll-be-part-of-Py3K-<wink>-ly y'rs
Peter

P.S.: You do not have to be a member of the Flying Circus.
P.P.S.: There is no Spanish inquisition <0.5 wink>!
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Tue Aug 15 17:56:46 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 17:56:46 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python pythonrun.c,2.105,2.106
In-Reply-To: <200008151549.IAA25722@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Tue, Aug 15, 2000 at 08:49:06AM -0700
References: <200008151549.IAA25722@slayer.i.sourceforge.net>
Message-ID: <20000815175646.A376@xs4all.nl>

On Tue, Aug 15, 2000 at 08:49:06AM -0700, Fred L. Drake wrote:

> + #include "osdefs.h"			/* SEP */

This comment is kind of cryptic... I know of only one SEP, and that's in "a
SEP field", a construct we use quite often at work ;-) Does this comment
mean the same ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Tue Aug 15 18:09:34 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 12:09:34 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python pythonrun.c,2.105,2.106
In-Reply-To: <20000815175646.A376@xs4all.nl>
References: <200008151549.IAA25722@slayer.i.sourceforge.net>
	<20000815175646.A376@xs4all.nl>
Message-ID: <14745.27454.815489.456310@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > On Tue, Aug 15, 2000 at 08:49:06AM -0700, Fred L. Drake wrote:
 > 
 > > + #include "osdefs.h"			/* SEP */
 > 
 > This comment is kind of cryptic... I know of only one SEP, and that's in "a
 > SEP field", a construct we use quite often at work ;-) Does this comment
 > mean the same ?

  Very cryptic indeed!  It meant I was including osdefs.h to get the
SEP #define from there, but then I didn't need it in the final version
of the code, so the #include can be removed.
  I'll remove those pronto!  Thanks for pointing out my sloppiness!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From trentm at ActiveState.com  Tue Aug 15 19:47:23 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Tue, 15 Aug 2000 10:47:23 -0700
Subject: [Python-Dev] segfault in sre on 64-bit plats
Message-ID: <20000815104723.A27306@ActiveState.com>

Fredrik,

The sre module currently segfaults on one of the tests suite tests on both
Win64 and 64-bit linux:

    [trentm at nickel src]$ ./python -c "import sre; sre.match('(x)*', 50000*'x')" > srefail.out
    Segmentation fault (core dumped)

I know that I can't expect you to debug this completely, as you don't have
the hardware, but I was hoping you might be able to shed some light on the
subject for me.

This test on Win32 and Linux32 hits the recursion limit check of 10000 in
SRE_MATCH(). However, on Linux64 the segfault occurs at a recursion depth of
7500. I don't want to just willy-nilly drop the recursion limit down to make
the problem go away.
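A back-of-the-envelope sketch (the frame sizes below are made up for
illustration; real SRE_MATCH frame sizes depend on the compiler and
platform) of why doubling the pointer width roughly halves the attainable
recursion depth:

```python
# Illustrative numbers only: real SRE_MATCH frame sizes depend on
# the compiler and platform.
STACK = 8 * 1024 * 1024              # assume an 8 MB thread stack

def max_depth(frame_bytes, stack_bytes=STACK):
    # how many nested C frames of this size fit on the stack
    return stack_bytes // frame_bytes

depth32 = max_depth(560)             # hypothetical 32-bit frame size
depth64 = max_depth(1120)            # same frame with 8-byte pointers
```

With these (made-up) sizes the 32-bit build reaches the 10000
recursion-limit check before exhausting the stack, while the 64-bit build
runs out of stack near 7500 first, which is consistent with the observed
crash.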

Do you have any idea why the segfault may be occurring on 64-bit platforms?

Mark (Favas), have you been having any problems with sre on your 64-bit plats?


In the example above I turned VERBOSE on in _sre.c. Would the trace help you?
Here is the last of it (the whole thing is 2MB so I am not sending it all):

    copy 0:1 to 15026 (2)
    |0x600000000020b90c|0x6000000000200d72|ENTER 7517
    |0x600000000020b90e|0x6000000000200d72|MARK 0
    |0x600000000020b912|0x6000000000200d72|LITERAL 120
    |0x600000000020b916|0x6000000000200d73|MARK 1
    |0x600000000020b91a|0x6000000000200d73|MAX_UNTIL 7515
    copy 0:1 to 15028 (2)
    |0x600000000020b90c|0x6000000000200d73|ENTER 7518
    |0x600000000020b90e|0x6000000000200d73|MARK 0
    |0x600000000020b912|0x6000000000200d73|LITERAL 120
    |0x600000000020b916|0x6000000000200d74|MARK 1
    |0x600000000020b91a|0x6000000000200d74|MAX_UNTIL 7516
    copy 0:1 to 15030 (2)
    |0x600000000020b90c|0x6000000000200d74|ENTER 7519
    |0x600000000020b90e|0x6000000000200d74|MARK 0
    |0x600000000020b912|0x6000000000200d74|LITERAL 120
    |0x600000000020b916|0x6000000000200d75|MARK 1
    |0x600000000020b91a|0x6000000000200d75|MAX_UNTIL 7517
    copy 0:1 to 15032 (2)
    |0x600000000020b90c|0x6000000000200d75|ENTER 7520
    |0x600000000020b90e|0x6000000000200d75|MARK 0
    |0x600000000020b912|0x6000000000200d75|LITERAL 120
    |0x600000000020b916|0x6000000000200d76|MARK 1
    |0x600000000020b91a|0x6000000000200d76|MAX_UNTIL 7518
    copy 0:1 to 15034 (2)
    |0x600000000020b90c|0x6000000000200d76|ENTER 7521
    |0x600000000020b90e|0x6000000000200d76|MARK 0
    |0x600000000020b912|0x6000000000200d76|LITERAL 120
    |0x600000000020b916|0x6000000000200d77|MARK 1
    |0x600000000020b91a|0x6000000000200d77|MAX_UNTIL 7519
    copy 0:1 to 15036 (2)
    |0x600000000020b90c|0x600



Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From thomas at xs4all.net  Tue Aug 15 20:24:14 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 20:24:14 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008151746.KAA06454@bush.i.sourceforge.net>; from noreply@sourceforge.net on Tue, Aug 15, 2000 at 10:46:39AM -0700
References: <200008151746.KAA06454@bush.i.sourceforge.net>
Message-ID: <20000815202414.B376@xs4all.nl>

On Tue, Aug 15, 2000 at 10:46:39AM -0700, noreply at sourceforge.net wrote:

[ About my slight fix to ref5.tex, on list comprehensions syntax ]

> Comment by tim_one:

> Reassigned to Fred, because it's a simple doc change.  Fred, accept this
> <wink> and check it in.  Note that the grammar has a bug, though, so this
> will need to be changed again (and so will the implementation).  That is,
> [x if 6] should not be a legal expression but the grammar allows it today.

A comment by someone (?!ng ?) who forgot to login, at the original
list-comprehensions patch suggests that Skip forgot to include the
documentation patch to listcomps he provided. Ping, Skip, can you sort this
out and check in the rest of that documentation (which supposedly includes a
tutorial section as well) ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Tue Aug 15 20:27:38 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 20:27:38 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/ref ref5.tex,1.32,1.33
In-Reply-To: <200008151754.KAA19233@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Tue, Aug 15, 2000 at 10:54:51AM -0700
References: <200008151754.KAA19233@slayer.i.sourceforge.net>
Message-ID: <20000815202737.C376@xs4all.nl>

On Tue, Aug 15, 2000 at 10:54:51AM -0700, Fred L. Drake wrote:

> Index: ref5.tex
> diff -C2 -r1.32 -r1.33
> *** ref5.tex	2000/08/12 18:09:50	1.32
> --- ref5.tex	2000/08/15 17:54:49	1.33
> ***************
> *** 153,157 ****
>   
>   \begin{verbatim}
> ! list_display:   "[" [expression_list [list_iter]] "]"
>   list_iter:   list_for | list_if
>   list_for:    "for" expression_list "in" testlist [list_iter]
> --- 153,158 ----
>   
>   \begin{verbatim}
> ! list_display:   "[" [listmaker] "]"
> ! listmaker:   expression_list ( list_iter | ( "," expression)* [","] )

Uhm, this is wrong, and I don't think it was what I submitted either
(though, if I did, I apologize :) The first element of listmaker is an
expression, not an expression_list. I'll change that, unless Ping and Skip
wake up and fix it in a much better way instead.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Tue Aug 15 20:32:07 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 14:32:07 -0400 (EDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <20000815202414.B376@xs4all.nl>
References: <200008151746.KAA06454@bush.i.sourceforge.net>
	<20000815202414.B376@xs4all.nl>
Message-ID: <14745.36007.423378.87635@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > A comment by someone (?!ng ?) who forgot to login, at the original
 > list-comprehensions patch suggests that Skip forgot to include the
 > documentation patch to listcomps he provided. Ping, Skip, can you sort this
 > out and check in the rest of that documentation (which supposedly includes a
 > tutorial section as well) ?

  I've not been tracking the list comprehensions discussion, but there
is a (minimal) entry in the tutorial.  It could use some fleshing out.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Tue Aug 15 20:34:43 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 14:34:43 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/ref ref5.tex,1.32,1.33
In-Reply-To: <20000815202737.C376@xs4all.nl>
References: <200008151754.KAA19233@slayer.i.sourceforge.net>
	<20000815202737.C376@xs4all.nl>
Message-ID: <14745.36163.362268.388275@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Uhm, this is wrong, and I don't think it was what I submitted either
 > (though, if I did, I apologize :) The first element of listmaker is an
 > expression, not an expression_list. I'll change that, unless Ping and Skip
 > wake up and fix it in a much better way instead.

  You're right; that's what I get for applying it manually (trying to
avoid all the machinery of saving/patching from SF...).
  Fixed in a sec!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Aug 15 21:11:11 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 21:11:11 +0200
Subject: [Python-Dev] PyOS_CheckStack for windows
References: <20000815104723.A27306@ActiveState.com>
Message-ID: <005401c006ec$a95a74a0$f2a6b5d4@hagrid>

trent wrote:
> This test on Win32 and Linux32 hits the recursion limit check of 10000 in
> SRE_MATCH(). However, on Linux64 the segfault occurs at a recursion depth of
> 7500. I don't want to just willy-nilly drop the recursion limit down to make
> the problem go away.

SRE is overflowing the stack, of course :-(

:::

I spent a little time surfing around on the MSDN site, and came
up with the following little PyOS_CheckStack implementation for
Visual C (and compatibles):

#include <malloc.h>

int __inline
PyOS_CheckStack()
{
    __try {
        /* alloca throws a stack overflow exception if there's less
           than 2k left on the stack */
        alloca(2000);
        return 0;
    } __except (1) {
        /* just ignore the error */
    }
    return 1;
}

a quick look at the resulting assembler code indicates that this
should be pretty efficient (some exception-related stuff, and a
call to an internal stack probe function), but I haven't added it
to the interpreter (and I don't have time to dig deeper into this
before the weekend).

maybe someone else has a little time to spare?

it shouldn't be that hard to figure out 1) where to put this, 2) what
ifdef's to use around it, and 3) what "2000" should be changed to...

(and don't forget to set USE_STACKCHECK)
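At the Python level, the point of a working PyOS_CheckStack is that
runaway recursion becomes a catchable exception instead of a segfault. A
tiny probe of that behaviour (my sketch, nothing to do with the actual
patch):

```python
def probe_depth(n=0):
    # Recurse until the interpreter's guard fires, then report how
    # deep we got (in modern Python, RecursionError is a subclass
    # of RuntimeError, so this catch works across versions).
    try:
        return probe_depth(n + 1)
    except RuntimeError:
        return n
```

An unguarded C-level recursion like SRE_MATCH's gets no such exception,
which is why it needs its own limit or a stack probe like the one above.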

</F>




From effbot at telia.com  Tue Aug 15 21:17:49 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 21:17:49 +0200
Subject: [Python-Dev] PyOS_CheckStack for windows
References: <20000815104723.A27306@ActiveState.com> <005401c006ec$a95a74a0$f2a6b5d4@hagrid>
Message-ID: <008601c006ed$8100c120$f2a6b5d4@hagrid>

I wrote:
>     } __except (1) {

should probably be:

    } __except (EXCEPTION_EXECUTE_HANDLER) {

</F>




From effbot at telia.com  Tue Aug 15 21:19:32 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 15 Aug 2000 21:19:32 +0200
Subject: [Python-Dev] PyOS_CheckStack for windows
Message-ID: <009501c006ed$be40afa0$f2a6b5d4@hagrid>

I wrote:
>     } __except (EXCEPTION_EXECUTE_HANDLER) {

which is defined in "excpt.h"...

</F>




From tim_one at email.msn.com  Tue Aug 15 21:19:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 15:19:23 -0400
Subject: [Python-Dev] Call for reviewer!
Message-ID: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>

There are 5 open & related patches to getopt.py:  101106 thru 101110
inclusive.  Who wants to review these?  Fair warning in advance that Guido
usually hates adding stuff to getopt, and the patch comment

    I examined the entire 1.6b1 tarball for incompatibilities,
    and found only 2 in 90+ modules using getopt.py.

probably means it's dead on arrival (2+% is infinitely more than 0% <0.1
wink>).

On that basis alone, my natural inclination is to reject them for lack of
backward compatibility.  So let's get some votes and see whether there's
sufficient support to overcome that.





From trentm at ActiveState.com  Tue Aug 15 21:53:46 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Tue, 15 Aug 2000 12:53:46 -0700
Subject: [Python-Dev] Call for reviewer!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Tue, Aug 15, 2000 at 03:19:23PM -0400
References: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>
Message-ID: <20000815125346.I30086@ActiveState.com>

On Tue, Aug 15, 2000 at 03:19:23PM -0400, Tim Peters wrote:
> There are 5 open & related patches to getopt.py:  101106 thru 101110
> inclusive.  Who wants to review these?  Fair warning in advance that Guido
> usually hates adding stuff to getopt, and the patch comment
> 
>     I examined the entire 1.6b1 tarball for incompatibilities,
>     and found only 2 in 90+ modules using getopt.py.
> 
> probably means it's dead on arrival (2+% is infinitely more than 0% <0.1
> wink>).
> 
> On that basis alone, my natural inclination is to reject them for lack of
> backward compatibility.  So let's get some votes and see whether there's
> sufficient support to overcome that.
> 

-0 (too timid to use -1)

getopt is a nice, simple, quick, useful module. Rather than extending it I
would rather see a separate getopt-like module for those who need some more
heavy-duty option processing. One that supports Windows '/' switch markers.
One where each option is maybe a class instance, with methods that do the
processing and record state for that option, and with attributes for help
strings, the number of arguments accepted, and argument validation
methods. One that supports abstraction of options to capabilities (e.g. two
compiler interfaces, same capability, different option to specify it, share
option processing). One that supports different algorithms for parsing the
command line (some current apps like to run through and grab *all* the
options, some like to stop option processing at the first non-option).

Call it 'supergetopt' and whoever wants it can 'import supergetopt as getopt'.
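A toy sketch of the class-per-option idea (every name here is invented;
this is not a proposal for an actual API):

```python
class Option:
    """One command-line option: records state, carries its own help text."""
    def __init__(self, name, help='', takes_arg=False, validate=None):
        self.name, self.help = name, help
        self.takes_arg = takes_arg
        self.validate = validate or (lambda v: v)
        self.value = None
        self.seen = False

    def process(self, arg=None):
        # record that the option occurred, validating any argument
        self.seen = True
        if self.takes_arg:
            self.value = self.validate(arg)

def parse(argv, options):
    """Stop-at-first-non-option parsing (one of several possible policies)."""
    byname = {o.name: o for o in options}
    args = list(argv)
    while args and args[0].startswith('-'):
        opt = byname[args.pop(0)]
        opt.process(args.pop(0) if opt.takes_arg else None)
    return args
```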

Keep getopt the way it is. Mind you, I haven't looked at the proposed patches
so my opinion might be unfair.

Trent


-- 
Trent Mick
TrentM at ActiveState.com



From akuchlin at mems-exchange.org  Tue Aug 15 22:01:56 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 15 Aug 2000 16:01:56 -0400
Subject: [Python-Dev] Call for reviewer!
In-Reply-To: <20000815125346.I30086@ActiveState.com>; from trentm@ActiveState.com on Tue, Aug 15, 2000 at 12:53:46PM -0700
References: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com> <20000815125346.I30086@ActiveState.com>
Message-ID: <20000815160156.D16506@kronos.cnri.reston.va.us>

On Tue, Aug 15, 2000 at 12:53:46PM -0700, Trent Mick wrote:
>Call it 'supergetopt' and whoever cam 'import supergetopt as getopt'.

Note that there's Lib/distutils/fancy_getopt.py.  The docstring reads:

Wrapper around the standard getopt module that provides the following
additional features:
  * short and long options are tied together
  * options have help strings, so fancy_getopt could potentially
    create a complete usage summary
  * options set attributes of a passed-in object

--amk



From bwarsaw at beopen.com  Tue Aug 15 22:30:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 15 Aug 2000 16:30:59 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
Message-ID: <14745.43139.834290.323136@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <twouters at users.sourceforge.net> writes:

    TW> Apply SF patch #101151, by Peter S-K, which fixes smtplib's
    TW> passing of the 'helo' and 'ehlo' message, and exports the
    TW> 'make_fqdn' function. This function should be moved to
    TW> socket.py, if that module ever gets a Python wrapper.

Should I work on this for 2.0?  Specifically 1) moving socketmodule to
_socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
socket.py instead of smtplib.

It makes no sense for make_fqdn to live in smtplib.

I'd be willing to do this.

-Barry



From gstein at lyra.org  Tue Aug 15 22:42:02 2000
From: gstein at lyra.org (Greg Stein)
Date: Tue, 15 Aug 2000 13:42:02 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <14745.43139.834290.323136@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 15, 2000 at 04:30:59PM -0400
References: <200008151930.MAA10234@slayer.i.sourceforge.net> <14745.43139.834290.323136@anthem.concentric.net>
Message-ID: <20000815134202.K19525@lyra.org>

On Tue, Aug 15, 2000 at 04:30:59PM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TW" == Thomas Wouters <twouters at users.sourceforge.net> writes:
> 
>     TW> Apply SF patch #101151, by Peter S-K, which fixes smtplib's
>     TW> passing of the 'helo' and 'ehlo' message, and exports the
>     TW> 'make_fqdn' function. This function should be moved to
>     TW> socket.py, if that module ever gets a Python wrapper.
> 
> Should I work on this for 2.0?  Specifically 1) moving socketmodule to
> _socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
> socket.py instead of smtplib.
> 
> It makes no sense for make_fqdn to live in smtplib.
> 
> I'd be willing to do this.

Note that Windows already has a socket.py module (under plat-win or
somesuch). You will want to integrate that with any new socket.py that you
implement.

Also note that Windows does some funny stuff in socketmodule.c to export
itself as _socket. (the *.dsp files already build it as _socket.dll)


+1

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From bwarsaw at beopen.com  Tue Aug 15 22:46:15 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 15 Aug 2000 16:46:15 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
	<14745.43139.834290.323136@anthem.concentric.net>
	<20000815134202.K19525@lyra.org>
Message-ID: <14745.44055.15573.283903@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> Note that Windows already has a socket.py module (under
    GS> plat-win or somesuch). You will want to integrate that with
    GS> any new socket.py that you implement.

    GS> Also note that Windows does some funny stuff in socketmodule.c
    GS> to export itself as _socket. (the *.dsp files already build it
    GS> as _socket.dll)

    GS> +1

Should we have separate plat-*/socket.py files or does it make more
sense to try to integrate them into one shared socket.py?  From a quick
glance it certainly looks like there's Windows-specific stuff in
plat-win/socket.py (big surprise, huh?)

-Barry



From nowonder at nowonder.de  Wed Aug 16 00:47:24 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Tue, 15 Aug 2000 22:47:24 +0000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib 
 libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net> <14745.43139.834290.323136@anthem.concentric.net>
Message-ID: <3999C87C.24A0DF82@nowonder.de>

"Barry A. Warsaw" wrote:
> 
> Should I work on this for 2.0?  Specifically 1) moving socketmodule to
> _socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
> socket.py instead of smtplib.

+1 on you doing that. I'd volunteer, but I am afraid ...

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Tue Aug 15 23:04:11 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 15 Aug 2000 23:04:11 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <14745.44055.15573.283903@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 15, 2000 at 04:46:15PM -0400
References: <200008151930.MAA10234@slayer.i.sourceforge.net> <14745.43139.834290.323136@anthem.concentric.net> <20000815134202.K19525@lyra.org> <14745.44055.15573.283903@anthem.concentric.net>
Message-ID: <20000815230411.D376@xs4all.nl>

On Tue, Aug 15, 2000 at 04:46:15PM -0400, Barry A. Warsaw wrote:

>     GS> Note that Windows already has a socket.py module (under
>     GS> plat-win or somesuch). You will want to integrate that with
>     GS> any new socket.py that you implement.

BeOS also has its own socket.py wrapper, to provide some functions BeOS
itself is missing (dup, makefile, fromfd, ...). I'm not sure if that's still
necessary, though; perhaps BeOS decided to implement those functions in a
later version?

> Should we have separate plat-*/socket.py files or does it make more
> sense to try to integrate them into one shared socket.py?  From a quick
> glance it certainly looks like there's Windows-specific stuff in
> plat-win/socket.py (big surprise, huh?)

And don't forget the BeOS stuff ;P This is the biggest reason I didn't do it
myself: it takes some effort and a lot of grokking to fix this up properly,
without spreading socket.py out in every plat-dir. Perhaps it needs to be
split up like the os module ?
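The os-module approach boils down to one public module that picks a
platform backend at import time; schematically (the platform strings and
wrapper descriptions below are just placeholders, not real module names):

```python
def pick_socket_backend(platform):
    # Mimic os.py's dispatch: one public name, per-platform guts.
    if platform.startswith('win'):
        return 'plat-win wrapper over _socket'   # adds makefile() etc.
    if platform.startswith('beos'):
        return 'BeOS wrapper (emulates dup, makefile, fromfd)'
    return '_socket C module used directly'
```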

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Wed Aug 16 00:25:33 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 15 Aug 2000 18:25:33 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <20000815230411.D376@xs4all.nl>
References: <14745.44055.15573.283903@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 15, 2000 at 04:46:15PM -0400
Message-ID: <1245744161-146360088@hypernet.com>

Thomas Wouters wrote:
> On Tue, Aug 15, 2000 at 04:46:15PM -0400, Barry A. Warsaw wrote:
> 
> >     GS> Note that Windows already has a socket.py module (under
> >     GS> plat-win or somesuch). You will want to integrate that
> >     with GS> any new socket.py that you implement.
> 
> BeOS also has its own socket.py wrapper, to provide some
> functions BeOS itself is missing (dup, makefile, fromfd, ...) I'm
> not sure if that's still necessary, though, perhaps BeOS decided
> to implement those functions in a later version ?

Sounds very close to what Windows left out. As for *nixen, 
there are some differences between BSD and SysV sockets, 
but they're really, really arcane.
 


- Gordon



From fdrake at beopen.com  Wed Aug 16 01:06:15 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 15 Aug 2000 19:06:15 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
In-Reply-To: <14745.43139.834290.323136@anthem.concentric.net>
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
	<14745.43139.834290.323136@anthem.concentric.net>
Message-ID: <14745.52455.487734.450253@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > Should I work on this for 2.0?  Specifically 1) moving socketmodule to
 > _socket and writing a socket.py wrapper; 2) exporting make_fqdn() in
 > socket.py instead of smtplib.
 > 
 > It makes no sense for make_fqdn to live in smtplib.

  I've started, but am momentarily interrupted.  Watch for it late
tonight.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Wed Aug 16 01:19:42 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 15 Aug 2000 19:19:42 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Doc/lib libsocket.tex,1.46,1.47
References: <200008151930.MAA10234@slayer.i.sourceforge.net>
	<14745.43139.834290.323136@anthem.concentric.net>
	<14745.52455.487734.450253@cj42289-a.reston1.va.home.com>
Message-ID: <14745.53262.605601.806635@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake at beopen.com> writes:

    Fred>   I've started, but am momentarily interupted.  Watch for it
    Fred> late tonight.  ;)

Okay fine.  I'll hold off on socket module then, and will take a look
at whatever you come up with.

-Barry



From gward at mems-exchange.org  Wed Aug 16 01:57:51 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Tue, 15 Aug 2000 19:57:51 -0400
Subject: [Python-Dev] Winreg update
In-Reply-To: <3993FEC7.4E38B4F1@prescod.net>; from paul@prescod.net on Fri, Aug 11, 2000 at 08:25:27AM -0500
References: <3993FEC7.4E38B4F1@prescod.net>
Message-ID: <20000815195751.A16100@ludwig.cnri.reston.va.us>

On 11 August 2000, Paul Prescod said:
> This is really easy so I want
> some real feedback this time. Distutils people, this means you! Mark! I
> would love to hear Bill Tutt, Greg Stein and anyone else who claims some
> knowledge of Windows!

All I know is that the Distutils only use the registry for one thing:
finding the MSVC binaries (in distutils/msvccompiler.py).  The registry
access is coded in such a way that we can use either the
win32api/win32con modules ("old way") or _winreg ("new way", but still
the low-level interface).

I'm all in favour of high-level interfaces, and I'm also in favour of
speaking the local tongue -- when in Windows, follow the Windows API (at
least for features that are totally Windows-specific, like the
registry).  But I know nothing about all this stuff, and as far as I
know the registry access in distutils/msvccompiler.py works just fine as
is.

        Greg



From tim_one at email.msn.com  Wed Aug 16 02:28:10 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 20:28:10 -0400
Subject: [Python-Dev] Release Satii
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAAHAAA.tim_one@email.msn.com>

1.6:  Is done, but being held back (by us -- two can play at this game
<wink>) pending resolution of license issues.  Since 2.0 will be a
derivative work of 1.6, the license that goes out with 1.6 affects us
forever after.  Can't say more about that because I don't know more; and
Guido is out of town this week.

2.0:  Full steam ahead!  Just finished going thru every patch on
SourceForge.  What's Open at this instant is *it* for new 2.0 features.
More accurately, they're the only new features that will still be
*considered* for 2.0 (not everything in Open now will necessarily be
accepted).  The only new patches that won't be instantly Postponed from now
until 2.0 final ships are bugfixes.  Some oddities:

+ 8 patches remain unassigned.  7 of those are part of a single getopt
crusade (well, two getopt crusades, since as always happens when people go
extending getopt, they can't agree about what to do), and if nobody speaks
in their favor they'll probably get gently rejected.  The eighth is a CGI
patch from Ping that looks benign to me but is incomplete (missing doc
changes).

+ /F's Py_ErrFormat patch got moved back from Rejected to Open so we can
find all the putative 2.0 patches in one SF view (i.e., Open).

I've said before that I have no faith in the 2.0 release schedule.  Here's
your chance to make a fool of me -- and in public too <wink>!

nothing-would-make-me-happier-ly y'rs  - tim





From greg at cosc.canterbury.ac.nz  Wed Aug 16 02:57:18 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 16 Aug 2000 12:57:18 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <20000815202414.B376@xs4all.nl>
Message-ID: <200008160057.MAA15191@s454.cosc.canterbury.ac.nz>

Thomas Wouters <thomas at xs4all.net>:

> Comment by tim_one:

> [x if 6] should not be a legal expression but the grammar allows it today.

Why shouldn't it be legal?

The meaning is quite clear (either a one-element list or an empty
list). It's something of a degenerate case, but I don't think
degenerate cases should be excluded simply because they're
degenerate.
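The reading described above is easy to pin down: with no 'for' clause,
the condition simply selects between a one-element list and an empty one.
As a plain-function sketch (since [x if 6] itself is not accepted syntax):

```python
def degenerate_listcomp(value, condition):
    # the semantics argued for here: [x if c] == ([x] if c else [])
    return [value] if condition else []
```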

Excluding it will make both the implementation and documentation
more complicated, with no benefit that I can see.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Wed Aug 16 03:26:36 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 21:26:36 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160057.MAA15191@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEADHAAA.tim_one@email.msn.com>

[Tim]
> [x if 6] should not be a legal expression but the grammar
> allows it today.

[Greg Ewing]
> Why shouldn't it be legal?

Because Guido hates it.  It's almost certainly an error on the part of the
user; really the same reason that zip() without arguments raises an
exception.

> ...
> Excluding it will make both the implementation and documentation
> more complicated,

Of course, but marginally so.  "The first clause must be an iterator"; end
of doc changes.

> with no benefit that I can see.

Catching likely errors is a benefit for the user.  I realize that Haskell
does allow it -- although that would be a surprise to most Haskell users
<wink>.





From dgoodger at bigfoot.com  Wed Aug 16 04:36:02 2000
From: dgoodger at bigfoot.com (David Goodger)
Date: Tue, 15 Aug 2000 22:36:02 -0400
Subject: [Python-Dev] Re: Call for reviewer!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>
Message-ID: <B5BF7652.7B39%dgoodger@bigfoot.com>

I thought the "backwards compatibility" issue might be a sticking point ;>
And I *can* see why.

So, if I were to rework the patch to remove the incompatibility, would it
fly or still be shot down? Here's the change, in a nutshell:

Added a function getoptdict(), which returns the same data as getopt(), but
instead of a list of [(option, optarg)], it returns a dictionary of
{option:optarg}, enabling random/direct access.

getoptdict() turns this:

    if __name__ == '__main__':
        import getopt
        opts, args = getopt.getopt(sys.argv[1:], 'a:b')
        if len(args) <> 2:
            raise getopt.error, 'Exactly two arguments required.'
        options = {'a': [], 'b': 0}  # default option values
        for opt, optarg in opts:
            if opt == '-a':
                options['a'].append(optarg)
            elif opt == '-b':
                options['b'] = 1
        main(args, options)

into this:

    if __name__ == '__main__':
        import getopt
        opts, args = getopt.getoptdict(sys.argv[1:], 'a:b',
                                       repeatedopts=APPEND)
        if len(args) <> 2:
            raise getopt.error, 'Exactly two arguments required.'
        options = {'a': opts.get('-a', [])}
        options['b'] = opts.has_key('-b')
        main(args, options)

(Notice how the defaults get subsumed into the option processing, which goes
from 6 lines to 2 for this short example. A much higher-level interface,
IMHO.)
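For what it's worth, a fully backward-compatible getoptdict() can be a
thin wrapper over the existing getopt() (this is my sketch of the idea,
not the submitted patch; APPEND stands in for the patch's repeatedopts
constant):

```python
import getopt

APPEND = 'append'   # stand-in for the patch's repeatedopts constant

def getoptdict(args, shortopts, longopts=(), repeatedopts=None):
    """Like getopt.getopt(), but return the options as a dictionary."""
    opts, rest = getopt.getopt(args, list(longopts) and shortopts or shortopts, list(longopts))
    d = {}
    for opt, val in opts:
        if repeatedopts == APPEND:
            d.setdefault(opt, []).append(val)   # collect repeats
        else:
            d[opt] = val                        # last occurrence wins
    return d, rest
```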

BUT WAIT, THERE'S MORE! As part of the deal, you get a free test_getopt.py
regression test module! Act now; vote +1! (Actually, you'll get that no
matter what you vote. I'll remove the getoptdict-specific stuff and resubmit
it if this patch is rejected.)

The incompatibility was introduced because the current getopt() returns an
empty string as the optarg (second element of the tuple) for an argumentless
option. I changed it to return None. Otherwise, it's impossible to
differentiate between an argumentless option '-a' and an empty string
argument '-a ""'. But I could rework it to remove the incompatibility.

Again: If the patch were to become 100% backwards-compatible, with just the
addition of getoptdict(), would it still be rejected, or does it have a
chance?

Eagerly awaiting your judgement...

-- 
David Goodger    dgoodger at bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)




From amk at s205.tnt6.ann.va.dialup.rcn.com  Wed Aug 16 05:13:08 2000
From: amk at s205.tnt6.ann.va.dialup.rcn.com (A.M. Kuchling)
Date: Tue, 15 Aug 2000 23:13:08 -0400
Subject: [Python-Dev] Fate of Include/my*.h
Message-ID: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>

The now-redundant Include/my*.h files in Include should either be
deleted, or at least replaced with empty files containing only a "This
file is obsolete" comment.  I don't think they were ever part of the
public API (Python.h always included them), so deleting them shouldn't
break anything.   

--amk



From greg at cosc.canterbury.ac.nz  Wed Aug 16 05:04:33 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 16 Aug 2000 15:04:33 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEADHAAA.tim_one@email.msn.com>
Message-ID: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>

Tim Peters:

> Because Guido hates it.  It's almost certainly an error on the part
> of the user

Guido doesn't like it, therefore it must be an error. Great
piece of logic there.

> Catching likely errors is a benefit for the user.

What evidence is there that this particular "likely error" is
going to be prevalent enough to justify outlawing a potentially
useful construct? Where are the hordes of Haskell users falling
into this trap and clamouring for it to be disallowed?

> really the same reason that zip() without arguments raises an
> exception.

No, I don't think it's the same reason. It's not clear what
zip() without arguments should return. There's no such difficulty
in this case.

For the most part, Python is free of artificial restrictions, and I
like it that way. Imposing a restriction of this sort seems
un-Pythonic.

This is the second gratuitous change that's been made to my
LC syntax without any apparent debate. While I acknowledge the
right of the BDFL to do this, I'm starting to feel a bit
left out...

> I realize that Haskell does allow it -- although that would be a
> surprise to most Haskell users

Which suggests that they don't trip over this feature very
often, otherwise they'd soon find out about it!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From gstein at lyra.org  Wed Aug 16 05:20:13 2000
From: gstein at lyra.org (Greg Stein)
Date: Tue, 15 Aug 2000 20:20:13 -0700
Subject: [Python-Dev] Fate of Include/my*.h
In-Reply-To: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>; from amk@s205.tnt6.ann.va.dialup.rcn.com on Tue, Aug 15, 2000 at 11:13:08PM -0400
References: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>
Message-ID: <20000815202013.H17689@lyra.org>

On Tue, Aug 15, 2000 at 11:13:08PM -0400, A.M. Kuchling wrote:
> The now-redundant Include/my*.h files in Include should either be
> deleted, or at least replaced with empty files containing only a "This
> file is obsolete" comment.  I don't think they were ever part of the
> public API (Python.h always included them), so deleting them shouldn't
> break anything.   

+1 on deleting them.

-- 
Greg Stein, http://www.lyra.org/



From tim_one at email.msn.com  Wed Aug 16 05:23:44 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 23:23:44 -0400
Subject: [Python-Dev] Nasty new bug in test_longexp
Message-ID: <LNBBLJKPBEHFEDALKOLCEEAGHAAA.tim_one@email.msn.com>

Fred, I vaguely recall you touched something here recently, so you're top o'
the list.  Smells like an uninitialized variable.

1 of 4:  test_longexp fails in release build:

C:\PCbuild>python ..\lib\test\regrtest.py test_longexp
test_longexp
test test_longexp failed -- Writing: '\012', expected: ' '
1 test failed: test_longexp

2 of 4:  but passes in verbose mode, despite that the output doesn't appear
to match what's expected (missing " (line 1)"):

C:\PCbuild>python ..\lib\test\regrtest.py -v test_longexp
test_longexp
test_longexp
Caught SyntaxError for long expression: expression too long
1 test OK.

3 of 4:  but passes in debug build:

C:\PCbuild>python_d ..\lib\test\regrtest.py test_longexp
Adding parser accelerators ...
Done.
test_longexp
1 test OK.
[3962 refs]

4 of 4: and verbose debug output does appear to match what's expected:

C:\PCbuild>python_d ..\lib\test\regrtest.py -v test_longexp

Adding parser accelerators ...
Done.
test_longexp
test_longexp
Caught SyntaxError for long expression: expression too long (line 1)
1 test OK.
[3956 refs]

C:\PCbuild>





From tim_one at email.msn.com  Wed Aug 16 05:24:44 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 15 Aug 2000 23:24:44 -0400
Subject: [Python-Dev] Fate of Include/my*.h
In-Reply-To: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAHHAAA.tim_one@email.msn.com>

[A.M. Kuchling]
> The now-redundant Include/my*.h files in Include should either be
> deleted, or at least replaced with empty files containing only a "This
> file is obsolete" comment.  I don't think they were ever part of the
> public API (Python.h always included them), so deleting them shouldn't
> break anything.   

+1





From tim_one at email.msn.com  Wed Aug 16 06:13:00 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 00:13:00 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>

[Tim]
>> Because Guido hates it.  It's almost certainly an error on the part
>> of the user

[Greg Ewing]
> Guido doesn't like it, therefore it must be an error. Great
> piece of logic there.

Perhaps I should have used a colon:  Guido hates it *because* it's almost
certainly an error.  I expect the meaning was plain enough without that,
though.

>> Catching likely errors is a benefit for the user.

> What evidence is there that this particular "likely error" is

Nobody said it was likely.  Scare quotes don't work unless you quote
something that was actually said <wink>.  Likeliness has nothing to do with
whether Python calls something an error anyway, here or anywhere else.

> going to be prevalent enough to justify outlawing a potentially
> useful construct?

Making a list that's either empty or a singleton is useful?  Fine, here you
go:

   (boolean and [x] or [])

We don't need listcomps for that.  listcomps are a concrete implementation
of mathematical set-builder notation, and without iterators to supply a
universe of elements to build *on*, it may make *accidental* sense thanks to
this particular implementation -- but about as much *logical* sense as
map(None, seq1, seq2, ...) makes now.  SETL is the original computer
language home for comprehensions (both set and list), and got this part
right (IMO; Guido just hates it for his own inscrutable reasons <wink>).
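The and/or idiom Tim cites works only because `[x]` is a one-element list
and therefore always true; a minimal sketch (function name invented for
illustration):

```python
def maybe_singleton(x, condition):
    # Pre-ternary-operator idiom: a true condition selects [x],
    # a false one falls through to [].
    return (condition and [x] or [])

print(maybe_singleton(7, 1))  # [7]
print(maybe_singleton(7, 0))  # []
```

Note the idiom is safe here precisely because the middle operand `[x]` can
never be false; with an arbitrary middle operand it would silently misfire.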

> Where are the hordes of Haskell users falling into this trap and
> clamouring for it to be disallowed?

I'd look over on comp.lang.haskell -- provided anyone is still hanging out
there.

>> really the same reason that zip() without arguments raises an
>> exception.

> No, I don't think it's the same reason. It's not clear what
> zip() without arguments should return. There's no such difficulty
> in this case.

A zip with no arguments has no universe to zip over; a listcomp without
iterators has no universe to build on.  I personally don't want syntax
that's both a floor wax and a dessert topping.  The *intent* here is to
supply a flexible and highly expressive way to build lists out of other
sequences; no other sequences, use something else.

> For the most part, Python is free of artificial restrictions, and I
> like it that way. Imposing a restriction of this sort seems
> un-Pythonic.
>
> This is the second gratuitous change that's been made to my
> LC syntax without any apparent debate.

The syntax hasn't been changed yet -- this *is* the debate.  I won't say any
more about it, let's hear what others think.

As to being upset over changes to your syntax, I offered you ownership of
the PEP the instant it was dumped on me (26-Jul), but didn't hear back.
Perhaps you didn't get the email.  BTW, what was the other gratuitous
change?  Requiring parens around tuple targets?  That was discussed here
too, but the debate was brief as consensus seemed clearly to favor requiring
them.  That, plus Guido suggested it at a PythonLabs mtg, and agreement was
unanimous on that point.  Or are you talking about some other change (I
can't recall any other one)?

> While I acknowledge the right of the BDFL to do this, I'm starting
> to feel a bit left out...

Well, Jeez, Greg -- Skip took over the patch, Ping made changes to it after,
I got stuck with the PEP and the Python-Dev rah-rah stuff, and you just sit
back and snipe.  That's fine, you're entitled, but if you choose not to do
the work anymore, you took yourself out of the loop.

>> I realize that Haskell does allow it -- although that would be a
>> surprise to most Haskell users

> Which suggests that they don't trip over this feature very
> often, otherwise they'd soon find out about it!

While also suggesting it's useless to allow it.





From paul at prescod.net  Wed Aug 16 06:30:06 2000
From: paul at prescod.net (Paul Prescod)
Date: Wed, 16 Aug 2000 00:30:06 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <LNBBLJKPBEHFEDALKOLCCELAGPAA.tim_one@email.msn.com>
Message-ID: <399A18CE.6CFFCAB9@prescod.net>

Tim Peters wrote:
> 
> ...
> 
> But if you add seq.items(), you had better add seq.keys() too, and
> seq.values() as a synonym for seq[:].  I guess the perceived advantage of
> adding seq.items() is that it supplies yet another incredibly slow and
> convoluted way to get at the for-loop index?  "Ah, that's the ticket!  Let's
> allocate gazillabytes of storage and compute all the indexes into a massive
> data structure up front, and then we can use the loop index that's already
> sitting there for free anyway to index into that and get back a redundant
> copy of itself!" <wink>.
> 
> not-a-good-sign-when-common-sense-is-offended-ly y'rs  - tim

.items(), .keys(), .values() and range() all offended my common sense
when I started using Python in the first place. I got over it. 

I really don't see this "indexing" issue to be common enough either for
special syntax OR to worry a lot about efficiency. Nobody is forcing
anyone to use .items(). If you want a more efficient way to do it, it's
available (just not as syntactically beautiful -- same as range/xrange).

That isn't the case for dictionary .items(), .keys() and .values().

Also, if .keys() returns a range object then theoretically the
interpreter could recognize that it is looping over a range and optimize
it at runtime. That's an alternate approach to optimizing range literals
through new byte-codes. I don't have time to think about what that would
entail right now.... :(

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From fdrake at beopen.com  Wed Aug 16 06:51:34 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 00:51:34 -0400 (EDT)
Subject: [Python-Dev] socket module changes
Message-ID: <14746.7638.870650.747281@cj42289-a.reston1.va.home.com>

  This is a brief description of what I plan to check in to make the
changes we've discussed regarding the socket module.  I'll make the
checkins tomorrow morning, allowing you all night to scream if you
think that'll help.  ;)
  Windows and BeOS both use a wrapper module, but these are
essentially identical; the Windows variant has evolved a bit more, but
that evolution is useful for BeOS as well, aside from the errorTab
table (which gives messages for Windows-specific error numbers).  I
will be moving the sharable portions to a new module, _dupless_socket,
which the new socket module will import on Windows and BeOS.  (That
name indicates why they use a wrapper in the first place!)  The
errorTab definition will be moved to the new socket module and will
only be defined on Windows.  The existing wrappers, plat-beos/socket.py
and plat-win/socket.py, will be removed.
  socketmodule.c will only build as _socket, allowing much
simplification of the conditional compilation at the top of the
initialization function.
  The socket module will include the make_fqdn() implementation,
adjusted to make local references to the socket module facilities it
requires and to use string methods instead of using the string
module.  It is documented.
  The docstring in _socket will be moved to socket.py.
  If the screaming doesn't wake me, I'll check this in in the
morning.  The regression test isn't complaining!  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Wed Aug 16 07:12:21 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 01:12:21 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
In-Reply-To: <399A18CE.6CFFCAB9@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com>

[Paul Prescod]
> ...
> I really don't see this "indexing" issue to be common enough

A simple grep (well, findstr under Windows) finds over 300 instances of

    for ... in range(len(...

in the .py files on my laptop.  I don't recall exactly what the percentages
were when I looked over a very large base of Python code several years ago,
but I believe it was about 1 in 7 for loops.

> for special syntax OR to worry a lot about efficiency.

1 in 7 is plenty.  range(len(seq)) is a puzzler to newbies, too -- it's
*such* an indirect way of saying what they say directly in other languages.

> Nobody is forcing anyone to use .items().

Agreed, but since seq.items() doesn't exist now <wink>.

> If you want a more efficient way to do it, it's available (just not as
> syntactically beautiful -- same as range/xrange).

Which way would that be?  I don't know of one, "efficient" either in the
sense of runtime speed or of directness of expression.  xrange is at least a
storage-efficient way, and isn't it grand that we index the xrange object
with the very integer we're (usually) trying to get it to return <wink>?
The "loop index" isn't an accident of the way Python happens to implement
"for" today, it's the very basis of Python's thing.__getitem__(i)/IndexError
iteration protocol.  Exposing it is natural, because *it* is natural.
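The __getitem__(i)/IndexError protocol Tim refers to can be seen in a small
sketch (the class name is invented for illustration):

```python
class Squares:
    # A for-loop feeds indices 0, 1, 2, ... to __getitem__
    # until an IndexError terminates the iteration.
    def __getitem__(self, i):
        if i >= 5:
            raise IndexError(i)
        return i * i

result = [x for x in Squares()]
print(result)  # [0, 1, 4, 9, 16]
```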

> ...
> Also, if .keys() returns a range object then theoretically the
> interpreter could recognize that it is looping over a range and optimize
> it at runtime.

Sorry, but seq.keys() just makes me squirm.  It's a little step down the
Lispish path of making everything look the same.  I don't want to see
float.write() either <wink>.

although-that-would-surely-be-more-orthogonal-ly y'rs  - tim





From thomas at xs4all.net  Wed Aug 16 07:34:29 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 16 Aug 2000 07:34:29 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 16, 2000 at 12:13:00AM -0400
References: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz> <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
Message-ID: <20000816073429.E376@xs4all.nl>

On Wed, Aug 16, 2000 at 12:13:00AM -0400, Tim Peters wrote:

> > This is the second gratuitous change that's been made to my
> > LC syntax without any apparent debate.
> 
> The syntax hasn't been changed yet -- this *is* the debate.  I won't say any
> more about it, let's hear what others think.

It'd be nice to hear *what* the exact syntax issue is. At first I thought
you meant forcing parentheses around all forms of iterator expressions, but
apparently you mean requiring at least a single 'for' statement in a
listcomp?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Wed Aug 16 07:36:24 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 01:36:24 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAMHAAA.tim_one@email.msn.com>

Clarification:

[Tim]
>>> Catching likely errors is a benefit for the user.

[Greg Ewing]
>> What evidence is there that this particular "likely error" is ..

[Tim]
> Nobody said it was likely. ...

Ha!  I did!  But not in Greg's sense.  It was originally in the sense of "if
we see it, it's almost certainly an error on the part of the user", not that
"it's likely we'll see this".  This is in the same sense that Python
considers

    x = float(i,,)
or
    x = for i [1,2,3]

to be likely errors -- you don't see 'em often, but they're most likely
errors on the part of the user when you do.

back-to-the-more-mundane-confusions-ly y'rs  - tim





From greg at cosc.canterbury.ac.nz  Wed Aug 16 08:02:23 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 16 Aug 2000 18:02:23 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
Message-ID: <200008160602.SAA15239@s454.cosc.canterbury.ac.nz>

> Guido hates it *because* it's almost certainly an error.

Yes, I know what you meant. I was just trying to point out
that, as far as I can see, it's only Guido's *opinion* that
it's almost certainly an error.

Let n1 be the number of times that [x if y] appears in some
program and the programmer actually meant to write something
else. Let n2 be the number of times [x if y] appears and
the programmer really meant it.

Now, I agree that n1+n2 will probably be a very small number.
But from that alone it doesn't follow that a given instance
of [x if y] is probably an error. That is only true if
n1 is much greater than n2, and in the absence of any
experience, there's no reason to believe that.

> A zip with no arguments has no universe to zip over; a listcomp without
> iterators has no universe to build on... The *intent* here is to
> supply a flexible and highly expressive way to build lists out of other
> sequences; no other sequences, use something else.

That's a reasonable argument. It might even convince me if
I think about it some more. I'll think about it some more.

> if you choose not to do the work anymore, you took yourself out of the
> loop.

You're absolutely right. I'll shut up now.

(By the way, I think your mail must have gone astray, Tim --
I don't recall ever being offered ownership of a PEP, whatever
that might entail.)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Wed Aug 16 08:18:30 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 02:18:30 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <20000816073429.E376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEAOHAAA.tim_one@email.msn.com>

[Thomas Wouters]
> It'd be nice to hear *what* the exact syntax issue is. At first I thought
> you meant forcing parentheses around all forms of iterator
> expressions,

No, that's an old one, and was requiring parens around a target expression
iff it's a tuple.  So

    [x, y for x in s for y in t]  # BAD
    [(x, y) for x in s for y in t]  # good
    [(((x, y))) for x in s for y in t]  # good, though silly
    [x+y for x in s for y in t] # good
    [(x+y) for x in s for y in t] # good
    [x , for x in s] # BAD
    [(x ,) for x in s] # good

That much is already implemented in the patch currently on SourceForge.

> but apparently you mean requiring at least a single 'for' statement
> in a listcomp ?

No too <wink>, but closer:  it's that the leftmost ("first") clause must be
a "for".  So, yes, at least one for, but also that an "if" can't precede
*all* the "for"s:

   [x for x in s if x & 1] # good
   [x if x & 1 for x in s] # BAD
   [x for x in s]  # good
   [x if y & 1] # BAD

Since the leftmost clause can't refer to any bindings established "to its
right", an "if" as the leftmost clause can't act to filter the elements
generated by the iterators, and so Guido (me too) feels it's almost
certainly an error on the user's part if Python sees an "if" in the leftmost
position.  The current patch allows all of these, though.

In (mathematical) set-builder notation, you certainly see things like

    odds = {x | mod(x, 2) = 1}

That is, with "just a condition".  But in such cases the universe over which
x ranges is implicit from context (else the expression is not
well-defined!), and can always be made explicit; e.g., perhaps the above is
in a text where it's assumed everything is a natural number, and then it can
be made explicit via

    odds = {x in Natural | mod(x, 2) = 1}

In the concrete implementation afforded by listcomps, there is no notion of
an implicit universal set, so (as in SETL too, where this all came from
originally) explicit naming of the universe is required.

The way listcomps are implemented *can* make

   [x if y]

"mean something" (namely one of [x] or [], depending on y's value), but that
has nothing to do with its set-builder heritage.  Looks to me like the user
is just confused!  To Guido too.  Hence the desire not to allow this form at
all.







From ping at lfw.org  Wed Aug 16 08:23:57 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 15 Aug 2000 23:23:57 -0700 (PDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in
 the Ref manual docs on listcomprehensions
In-Reply-To: <200008160057.MAA15191@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008152212170.416-100000@skuld.lfw.org>

On Wed, 16 Aug 2000, Greg Ewing wrote:
> > [x if 6] should not be a legal expression but the grammar allows it today.
> 
> Why shouldn't it be legal?
[...]
> Excluding it will make both the implementation and documentation
> more complicated, with no benefit that I can see.

I don't have a strong opinion on this either way, but i can state
pretty confidently that the change would be tiny and simple: just
replace "list_iter" in the listmaker production with "list_for",
and you are done.
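Concretely (reconstructed from the grammar of that era, not quoted from the
patch itself), the productions involved look roughly like:

```
listmaker: test ( list_iter | (',' test)* [','] )   # patch as it stands
listmaker: test ( list_for | (',' test)* [','] )    # with the change above
list_iter: list_for | list_if
list_for: 'for' exprlist 'in' testlist_safe [list_iter]
list_if: 'if' test [list_iter]
```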


-- ?!ng

"I'm not trying not to answer the question; i'm just not answering it."
    -- Lenore Snell





From tim_one at email.msn.com  Wed Aug 16 08:59:06 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 02:59:06 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <200008160602.SAA15239@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEAPHAAA.tim_one@email.msn.com>

[Tim]
>> Guido hates it *because* it's almost certainly an error.

[Greg Ewing]
> Yes, I know what you meant. I was just trying to point out
> that, as far as I can see, it's only Guido's *opinion* that
> it's almost certainly an error.

Well, it's mine too, but I always yield to him on stuff like that anyway;
and I guess I *have* to now, because he's my boss <wink>.

> Let n1 be the number of times that [x if y] appears in some
> program and the programmer actually meant to write something
> else. Let n2 be the number of times [x if y] appears and
> the programmer really meant it.
>
> Now, I agree that n1+n2 will probably be a very small number.
> But from that alone it doesn't follow that a given instance
> of [x if y] is probably an error. That is only true if
> n1 is much greater than n2, and in the absence of any
> experience, there's no reason to believe that.

I argued that one all I'm going to -- I think there is.

>> ... The *intent* here is to supply a flexible and highly expressive
> way to build lists out of other sequences; no other sequences, use
> something else.

> That's a reasonable argument. It might even convince me if
> I think about it some more. I'll think about it some more.

Please do, because ...

>> if you choose not to do the work anymore, you took yourself out
>> of the loop.

> You're absolutely right. I'll shut up now.

Please don't!  This patch is not without opposition, and while consensus is
rarely reached on Python-Dev, I think that's partly because "the BDFL ploy"
is overused to avoid the pain of principled compromise.  If this ends in a
stalemate among the strongest proponents, it may not be such a good idea
after all.

> (By the way, I think your mail must have gone astray, Tim --
> I don't recall ever being offered ownership of a PEP, whatever
> that might entail.)

All explained at

    http://python.sourceforge.net/peps/

Although in this particular case, I haven't done anything with the PEP
except argue in favor of what I haven't yet written!  Somebody else filled
in the skeletal text that's there now.  If you still want it, it's yours;
I'll attach the email in question.

ok-that's-16-hours-of-python-today-in-just-a-few-more-i'll-
    have-to-take-a-pee<wink>-ly y'rs  - tim


-----Original Message-----

From: Tim Peters [mailto:tim_one at email.msn.com]
Sent: Wednesday, July 26, 2000 1:25 AM
To: Greg Ewing <greg at cosc.canterbury.ac.nz>
Subject: RE: [Python-Dev] PEP202


Greg, nice to see you on Python-Dev!  I became the PEP202 shepherd because
nobody else volunteered, and I want to see the patch get into 2.0.  That's
all there was to it, though:  if you'd like to be its shepherd, happy to
yield to you.  You've done the most to make this happen!  Hmm -- but maybe
that also means you don't *want* to do more.  That's OK too.





From bwarsaw at beopen.com  Wed Aug 16 15:21:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 09:21:59 -0400 (EDT)
Subject: [Python-Dev] Re: Call for reviewer!
References: <LNBBLJKPBEHFEDALKOLCOEOMGPAA.tim_one@email.msn.com>
	<B5BF7652.7B39%dgoodger@bigfoot.com>
Message-ID: <14746.38263.433927.239480@anthem.concentric.net>

I used to think getopt needed a lot of changes, but I'm not so sure
anymore.  getopt's current API works fine for me and I use it in all
my scripts.  However,

>>>>> "DG" == David Goodger <dgoodger at bigfoot.com> writes:

    DG> The incompatibility was introduced because the current
    DG> getopt() returns an empty string as the optarg (second element
    DG> of the tuple) for an argumentless option. I changed it to
    DG> return None. Otherwise, it's impossible to differentiate
    DG> between an argumentless option '-a' and an empty string
    DG> argument '-a ""'. But I could rework it to remove the
    DG> incompatibility.

I don't think that's necessary.  In my own use, if I /know/ -a doesn't
have an argument (because I didn't specify as "a:"), then I never
check the optarg.  And it's bad form for a flag to take an optional
argument; it either does or it doesn't and you know that in advance.

-Barry



From bwarsaw at beopen.com  Wed Aug 16 15:23:45 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 09:23:45 -0400 (EDT)
Subject: [Python-Dev] Fate of Include/my*.h
References: <20000815231308.A1157@207-172-36-205.s205.tnt6.ann.va.dialup.rcn.com>
Message-ID: <14746.38369.116212.875999@anthem.concentric.net>

>>>>> "AMK" == A M Kuchling <amk at s205.tnt6.ann.va.dialup.rcn.com> writes:

    AMK> The now-redundant Include/my*.h files in Include should
    AMK> either be deleted

+1

-Barry



From fdrake at beopen.com  Wed Aug 16 16:26:29 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 10:26:29 -0400 (EDT)
Subject: [Python-Dev] socket module changes
Message-ID: <14746.42133.355087.417895@cj42289-a.reston1.va.home.com>

  The changes to the socket module are now complete.  Note two changes
to yesterday's plan:
  - there is no _dupless_socket; I just merged that into socket.py
  - make_fqdn() got renamed to getfqdn() for consistency with the rest
of the module.
  I also remembered to update smtplib.  ;)
  I'll be away from email during the day; Windows & BeOS users, please
test this!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From bwarsaw at beopen.com  Wed Aug 16 16:46:26 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 10:46:26 -0400 (EDT)
Subject: [Python-Dev] socket module changes
References: <14746.42133.355087.417895@cj42289-a.reston1.va.home.com>
Message-ID: <14746.43330.134066.238781@anthem.concentric.net>

    >> - there is no _dupless_socket; I just merged that into socket.py -

Thanks, that's the one thing I was going to complain about. :)

-Barry



From bwarsaw at beopen.com  Wed Aug 16 17:11:57 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 11:11:57 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
Message-ID: <14746.44861.78992.343012@anthem.concentric.net>

After channeling and encouragement by Tim Peters, I've updated PEP
214, the extended print statement.  Text is included below, but is
also available at

    http://python.sourceforge.net/peps/pep-0214.html

SourceForge patch #100970 contains the patch to apply against the
current CVS tree if you want to play with it

    http://sourceforge.net/patch/download.php?id=100970

-Barry

-------------------- snip snip --------------------
PEP: 214
Title: Extended Print Statement
Version: $Revision: 1.3 $
Author: bwarsaw at beopen.com (Barry A. Warsaw)
Python-Version: 2.0
Status: Draft
Created: 24-Jul-2000
Post-History: 16-Aug-2000


Introduction

    This PEP describes a syntax to extend the standard `print'
    statement so that it can be used to print to any file-like object,
    instead of the default sys.stdout.  This PEP tracks the status and
    ownership of this feature.  It contains a description of the
    feature and outlines changes necessary to support the feature.
    This PEP summarizes discussions held in mailing list forums, and
    provides URLs for further information, where appropriate.  The CVS
    revision history of this file contains the definitive historical
    record.


Proposal

    This proposal introduces a syntax extension to the print
    statement, which allows the programmer to optionally specify the
    output file target.  An example usage is as follows:

        print >> mylogfile, 'this message goes to my log file'

    Formally, the syntax of the extended print statement is
    
        print_stmt: ... | '>>' test [ (',' test)+ [','] ]

    where the ellipsis indicates the original print_stmt syntax
    unchanged.  In the extended form, the expression just after >>
    must yield an object with a write() method (i.e. a file-like
    object).  Thus these two statements are equivalent:

	print 'hello world'
        print >> sys.stdout, 'hello world'

    As are these two statements:

        print
        print >> sys.stdout

    These two statements are syntax errors:

        print ,
        print >> sys.stdout,


Justification

    `print' is a Python keyword and introduces the print statement as
    described in section 6.6 of the language reference manual[1].
    The print statement has a number of features:

    - it auto-converts the items to strings
    - it inserts spaces between items automatically
    - it appends a newline unless the statement ends in a comma

    The formatting that the print statement performs is limited; for
    more control over the output, a combination of sys.stdout.write(),
    and string interpolation can be used.

    The print statement by definition outputs to sys.stdout.  More
    specifically, sys.stdout must be a file-like object with a write()
    method, but it can be rebound to redirect output to files other
    than specifically standard output.  A typical idiom is

        sys.stdout = mylogfile
	try:
	    print 'this message goes to my log file'
	finally:
	    sys.stdout = sys.__stdout__

    The problem with this approach is that the binding is global, and
    so affects every statement inside the try: clause.  For example,
    if we added a call to a function that actually did want to print
    to stdout, this output too would get redirected to the logfile.

    This approach is also very inconvenient for interleaving prints to
    various output streams, and complicates coding in the face of
    legitimate try/except or try/finally clauses.


Reference Implementation

    A reference implementation, in the form of a patch against the
    Python 2.0 source tree, is available on SourceForge's patch
    manager[2].  This approach adds two new opcodes, PRINT_ITEM_TO and
    PRINT_NEWLINE_TO, which simply pop the file-like object off the
    top of the stack and use it instead of sys.stdout as the output
    stream.


Alternative Approaches

    An alternative to this syntax change has been proposed (originally
    by Moshe Zadka) which requires no syntax changes to Python.  A
    writeln() function could be provided (possibly as a builtin), that
    would act much like extended print, with a few additional
    features.

	def writeln(*args, **kws):
	    import sys
	    file = sys.stdout
	    sep = ' '
	    end = '\n'
	    if kws.has_key('file'):
		file = kws['file']
		del kws['file']
	    if kws.has_key('nl'):
		if not kws['nl']:
		    end = ' '
		del kws['nl']
	    if kws.has_key('sep'):
		sep = kws['sep']
		del kws['sep']
	    if kws:
		raise TypeError('unexpected keywords')
	    file.write(sep.join(map(str, args)) + end)

    writeln() takes three optional keyword arguments.  In the
    context of this proposal, the relevant argument is `file' which
    can be set to a file-like object with a write() method.  Thus

        print >> mylogfile, 'this goes to my log file'

    would be written as

        writeln('this goes to my log file', file=mylogfile)

    writeln() has the additional functionality that the keyword
    argument `nl' is a flag specifying whether to append a newline,
    and the argument `sep' specifies the separator to output between
    items.
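    To exercise those two arguments, here is a compact, self-contained
    re-sketch of writeln() (using dict.pop() and a conditional
    expression, both of which post-date the original code; behavior per
    the proposal above):

```python
import sys

def writeln(*args, **kws):
    # Same semantics as the proposed writeln(): `file`, `nl` and `sep`
    # keywords, TypeError on anything else.
    file = kws.pop('file', sys.stdout)
    end = '\n' if kws.pop('nl', True) else ' '
    sep = kws.pop('sep', ' ')
    if kws:
        raise TypeError('unexpected keywords')
    file.write(sep.join(map(str, args)) + end)
```

    So writeln(1, 2, 3, sep='-', nl=0, file=mylogfile) writes '1-2-3 '
    (trailing space, no newline) to the log.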


References

    [1] http://www.python.org/doc/current/ref/print.html
    [2] http://sourceforge.net/patch/download.php?id=100970



From gvwilson at nevex.com  Wed Aug 16 17:49:06 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Wed, 16 Aug 2000 11:49:06 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
Message-ID: <Pine.LNX.4.10.10008161146170.25725-100000@akbar.nevex.com>

> Barry Warsaw wrote:
> [extended print PEP]

+1 --- it'll come in handy when teaching newbies on Windows and Unix
simultaneously.

Greg




From skip at mojam.com  Wed Aug 16 18:33:30 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 11:33:30 -0500 (CDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <20000815202414.B376@xs4all.nl>
References: <200008151746.KAA06454@bush.i.sourceforge.net>
	<20000815202414.B376@xs4all.nl>
Message-ID: <14746.49754.697401.684106@beluga.mojam.com>

    Thomas> A comment by someone (?!ng ?) who forgot to login, at the
    Thomas> original list-comprehensions patch suggests that Skip forgot to
    Thomas> include the documentation patch to listcomps he provided. Ping,
    Thomas> Skip, can you sort this out and check in the rest of that
    Thomas> documentation (which supposedly includes a tutorial section as
    Thomas> well) ?

Ping & I have already taken care of this off-list.  His examples should be
checked in shortly, if not already.

Skip



From skip at mojam.com  Wed Aug 16 18:43:44 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 11:43:44 -0500 (CDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
References: <200008160304.PAA15215@s454.cosc.canterbury.ac.nz>
	<LNBBLJKPBEHFEDALKOLCAEAIHAAA.tim_one@email.msn.com>
Message-ID: <14746.50368.982239.813435@beluga.mojam.com>

    Tim> Well, Jeez, Greg -- Skip took over the patch, Ping made changes to
    Tim> it after, I got stuck with the PEP and the Python-Dev rah-rah
    Tim> stuff, and you just sit back and snipe.  That's fine, you're
    Tim> entitled, but if you choose not to do the work anymore, you took
    Tim> yourself out of the loop.

Tim,

I think that's a bit unfair to Greg.  Ages ago Greg offered up a prototype
implementation of list comprehensions based upon a small amount of
discussion on c.l.py.  I took over the patch earlier because I wanted to see
it added to Python (originally 1.7, which is now 2.0).  I knew it would
languish or die if someone on python-dev didn't shepherd it.  I was just
trying to get the thing out there for discussion, and I knew that Greg
wasn't on
python-dev to do it himself, which is where most of the discussion about
list comprehensions has taken place.  When I've remembered to, I've tried to
at least CC him on threads I've started so he could participate.  My
apologies to Greg for not being more consistent in that regard.  I don't
think we can fault him for not having been privy to all the discussion.

Skip




From gward at mems-exchange.org  Wed Aug 16 19:34:02 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Wed, 16 Aug 2000 13:34:02 -0400
Subject: [Python-Dev] Python 1.6 & Distutils 0.9.1: success
Message-ID: <20000816133401.F16672@ludwig.cnri.reston.va.us>

[oops, screwed up the cc: python-dev when I sent this to Fred.  let's
 try again, shall we?]

Hi Fred --

I went ahead and tried out the current cnri-16-start branch on Solaris
2.6.  (I figured you guys are all using Linux by now, so you might want
to hear back how it works on Solaris.)

In short: no problem!  It built, tested, and installed just fine.

Oops, just noticed that my configure.in fix from late May didn't make
the cut:

  revision 1.124
  date: 2000/05/26 12:22:54;  author: gward;  state: Exp;  lines: +6 -2
  When building on Solaris and the compiler is GCC, use '$(CC) -G' to
  create shared extensions rather than 'ld -G'.  This ensures that shared
  extensions link against libgcc.a, in case there are any functions in the
  GCC runtime not already in the Python core.

Oh well.  This means that Robin Dunn's bsddb extension won't work with
Python 1.6 under Solaris.

So then I tried Distutils 0.9.1 with the new build: again, it worked
just fine.  I was able to build and install the Distutils proper, and
then NumPy.  And I made a NumPy source dist.  Looks like it works just
fine, although this is hardly a rigorous test (sigh).

I'd say go ahead and release Distutils 0.9.1 with Python 1.6...

        Greg
-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From thomas at xs4all.net  Wed Aug 16 22:55:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 16 Aug 2000 22:55:53 +0200
Subject: [Python-Dev] Pending patches for 2.0
Message-ID: <20000816225552.H376@xs4all.nl>

I have a small problem with the number of pending patches that I wrote, and
haven't fully finished yet: I'm going to be away on vacation from about
September 1st until about October 1st or so ;P I'll try to finish them as
much as possible before then (they mostly need just documentation anyway)
but if Guido decides to go for a different approach for one or more of them
(like allowing floats and/or longs in range literals) someone will have to
take them over to finish them in time for 2.0.

I'm not sure when I'll be leaving my internet connection behind, where we'll
be going or when I'll be back, but I won't be able to do too much rewriting
in the next two weeks either -- work is killing me. (Which is one of the
reasons I'm going to try to be as far away from work as possible, on
September 2nd ;) However, if a couple of patches are rejected/postponed and
others don't require substantial changes, and if those decisions are made
before, say, August 30th, I think I can move them into the CVS tree before
leaving and just shove the responsibility for them on the entire dev team ;)

This isn't a push to get them accepted!  Just a warning that if they aren't
accepted before then, someone will have to take over the breastfeeding ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Wed Aug 16 23:07:35 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 17:07:35 -0400 (EDT)
Subject: [Python-Dev] Pending patches for 2.0
In-Reply-To: <20000816225552.H376@xs4all.nl>
References: <20000816225552.H376@xs4all.nl>
Message-ID: <14747.663.260950.537440@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > much as possible before then (they mostly need just documentation anyway)
                                                  ^^^^^^^^^^^^^^^^^^

  Don't underestimate that requirement!

 > This isn't a push to get them accepted ! Just a warning that if they aren't
 > accepted before then, someone will have to take over the breastfeeding ;)

  I don't think I want to know too much about your development tools!
;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Wed Aug 16 23:24:19 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 17:24:19 -0400 (EDT)
Subject: [Python-Dev] Python 1.6 & Distutils 0.9.1: success
In-Reply-To: <20000816133401.F16672@ludwig.cnri.reston.va.us>
References: <20000816133401.F16672@ludwig.cnri.reston.va.us>
Message-ID: <14747.1667.252426.489530@cj42289-a.reston1.va.home.com>

Greg Ward writes:
 > I went ahead and tried out the current cnri-16-start branch on Solaris
 > 2.6.  (I figured you guys are all using Linux by now, so you might want
 > to hear back how it works on Solaris.)

  Great!  I've just updated 1.6 to include the Distutils-0_9_1 tagged
version of the distutils package and the documentation.  I'm
rebuilding our release candidates now.

 > In short: no problem!  It built, tested, and installed just fine.

  Great!  Thanks!

 > Oops, just noticed that my configure.in fix from late May didn't make
 > the cut:
...
 > Oh well.  This means that Robin Dunn's bsddb extension won't work with
 > Python 1.6 under Solaris.

  That's unfortunate.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Thu Aug 17 00:22:05 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 00:22:05 +0200
Subject: [Python-Dev] Pending patches for 2.0
In-Reply-To: <14747.663.260950.537440@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Wed, Aug 16, 2000 at 05:07:35PM -0400
References: <20000816225552.H376@xs4all.nl> <14747.663.260950.537440@cj42289-a.reston1.va.home.com>
Message-ID: <20000817002205.I376@xs4all.nl>

On Wed, Aug 16, 2000 at 05:07:35PM -0400, Fred L. Drake, Jr. wrote:

> Thomas Wouters writes:
>  > much as possible before then (they mostly need just documentation anyway)
>                                                   ^^^^^^^^^^^^^^^^^^
>   Don't underestimate that requirement!

I'm not, especially since the things that need documentation (if they are in
principle acceptable to Guido) are range literals (tutorials and existing
code examples), 'import as' (ref, tut), augmented assignment (ref, tut, lib,
api, ext, existing examples), the getslice->getitem change (tut, lib, all
other references to getslice/extended slices and existing example code) and
possibly the 'indexing for' patch (ref, tut, a large selection of existing
example code.)

Oh, and I forgot, some patches would benefit from more library changes, too,
like augmented assignment and getslice-to-getitem. That can always be done
after the patches are in, by other people (if they can't, the patch
shouldn't go in in the first place!)

I guess I'll be doing one large, intimate pass over all documentation, do
everything at once, and later split it up. I also think I'm going to post
them separately, to allow for easier proofreading. I also think I'm in need
of some sleep, and will think about this more tomorrow, after I get
LaTeX2HTML working on my laptop, so I can at least review my own changes ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From trentm at ActiveState.com  Thu Aug 17 01:55:42 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 16 Aug 2000 16:55:42 -0700
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
Message-ID: <20000816165542.D29260@ActiveState.com>

Hello autoconf-masters,

I am currently trying to port Python to Monterey (64-bit AIX) and I need to
add a couple of Monterey-specific options to CFLAGS and LDFLAGS (or to
whatever variables are appropriate for all 'cc' and 'ld' invocations), but
it is not obvious *at all* how to do that in configure.in.  Can anybody
help me with that?
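For what it's worth, one common pattern is to append platform-specific
flags in configure.in once the platform is known, or to seed the
variables from the environment when running configure. A sketch only --
the Monterey flag names here are hypothetical, not tested:

```shell
# Inside configure.in, after the platform (MACHDEP) is determined,
# append rather than overwrite the user's flags:
#
#   case $MACHDEP in
#   monterey*)
#       CFLAGS="$CFLAGS -q64"      # hypothetical 64-bit compile flag
#       LDFLAGS="$LDFLAGS -q64"    # hypothetical 64-bit link flag
#       ;;
#   esac

# Or, without touching configure.in at all, seed the variables in the
# environment so every cc/ld invocation inherits them:
#
#   CFLAGS="-q64" LDFLAGS="-q64" ./configure
```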

Another issue I am having: this is how the python executable is linked
on Linux with gcc:

gcc  -Xlinker -export-dynamic python.o ../libpython2.0.a -lpthread -ldl  -lutil -lm  -o python
          
It, of course, works fine, but shouldn't the proper (read "portable")
invocation to include the python2.0 library be

gcc  -Xlinker -export-dynamic python.o -L.. -lpython2.0 -lpthread -ldl  -lutil -lm  -o python

That invocation form (i.e. with '-L.. -lpython2.0') works on Linux, and
is *required* on Monterey.  Does this problem not show up with other Unix
compilers?  My hunch is that simply listing library (*.a) arguments on the
gcc command line is a GNU gcc/ld shortcut for the more portable usage of -L
and -l.  Any opinions?  I would either like to change the form to the latter
or I'll have to special-case the invocation for Monterey.  Any opinions on
which is worse?


Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Thu Aug 17 02:24:25 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 16 Aug 2000 17:24:25 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
Message-ID: <20000816172425.A32338@ActiveState.com>

I am porting Python to Monterey (64-bit AIX) and have a small (hopefully)
question about POSIX threads. I have Monterey building and passing the
threads test suite using Python/thread_pthread.h with just one issue:


-------------- snipped from current thread_pthread.h ---------------
long
PyThread_get_thread_ident(void)
{
    volatile pthread_t threadid;
    if (!initialized)
        PyThread_init_thread();
    /* Jump through some hoops for Alpha OSF/1 */
    threadid = pthread_self();
    return (long) *(long *) &threadid;
}
-------------------------------------------------------------------

Does the POSIX threads spec specify a C type or minimum size for
pthread_t? Or can someone point me to the appropriate resource to look
this up. On Linux (mine at least):
  /usr/include/bits/pthreadtypes.h:120:typedef unsigned long int pthread_t;

On Monterey:
  typedef unsigned int pthread_t;
 
That is fine; they are both 32 bits.  However, Monterey is an LP64 platform
(sizeof(long)==8, sizeof(int)==4), which brings up the question:

WHAT IS UP WITH THAT return STATEMENT?
  return (long) *(long *) &threadid;

My *guess* is that this is an attempt to just cast 'threadid' (a pthread_t)
to a long while jumping through hoops to avoid compiler warnings.  I don't
know what else it could be.  Is that what the "Alpha OSF/1" comment is
about?  Anybody have an Alpha OSF/1 hanging around?  The problem is that
when sizeof(pthread_t) != sizeof(long), this line is just broken.

Could this be changed to
  return threadid;
safely?
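The breakage can be simulated from Python with ctypes. In this sketch,
two adjacent 32-bit values stand in for a 4-byte pthread_t and whatever
happens to follow it in memory on an LP64 box:

```python
import ctypes
import sys

# Pretend pthread_t is a 32-bit unsigned int (as on Monterey), and that
# some unrelated value sits next to it in memory.
pair = (ctypes.c_uint32 * 2)(0xDEADBEEF, 0x12345678)

# The C expression `*(long *) &threadid` with a 64-bit long reads both
# words, fusing the thread id with its neighbour:
fused = ctypes.cast(pair, ctypes.POINTER(ctypes.c_uint64)).contents.value

if sys.byteorder == 'little':
    # The "thread id" read back is not 0xDEADBEEF at all.
    assert fused != 0xDEADBEEF
```

(A plain `return threadid;`, relying on the implicit integer widening,
avoids the reinterpretation entirely.)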


Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From greg at cosc.canterbury.ac.nz  Thu Aug 17 02:33:40 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 12:33:40 +1200 (NZST)
Subject: [Python-Dev] Re: [Patches] [Patch #101175] Fix slight bug in the
 Ref manual docs on listcomprehensions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEAPHAAA.tim_one@email.msn.com>
Message-ID: <200008170033.MAA15351@s454.cosc.canterbury.ac.nz>

> If this ends in a stalemate among the strongest proponents, it may not
> be such a good idea after all.

Don't worry, I really don't have any strong objection to
either of these changes. They're only cosmetic, after all.
It's still a good idea.

Just one comment: even if the first clause *is* a 'for',
there's no guarantee that the rest of the clauses have
to have anything to do with what it produces. E.g.

   [x for y in [1] if z]

The [x if y] case is only one of an infinite number of
possible abuses. Do you still think it's worth taking
special steps to catch that particular one?
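Greg's example can be run as-is (variable names arbitrary, bound
beforehand):

```python
x = 'anything'
z = True

# The for clause only supplies iteration; the condition and the result
# expression are free to ignore the loop variable y entirely.
result = [x for y in [1] if z]          # -> ['anything']
empty = [x for y in [1] if False]       # -> []
```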

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 17 03:17:53 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 13:17:53 +1200 (NZST)
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <Pine.LNX.4.10.10008161146170.25725-100000@akbar.nevex.com>
Message-ID: <200008170117.NAA15360@s454.cosc.canterbury.ac.nz>

Looks reasonably good. Not entirely sure I like the look
of >> though -- a bit too reminiscent of C++.

How about

   print to myfile, x, y, z

with 'to' as a non-reserved keyword. Or even

   print to myfile: x, y, z

but that might be a bit too radical!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From m.favas at per.dem.csiro.au  Thu Aug 17 03:17:42 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 17 Aug 2000 09:17:42 +0800
Subject: [Python-Dev] [Fwd: segfault in sre on 64-bit plats]
Message-ID: <399B3D36.6921271@per.dem.csiro.au>

 
-------------- next part --------------
An embedded message was scrubbed...
From: Mark Favas <m.favas at per.dem.csiro.au>
Subject: Re: segfault in sre on 64-bit plats
Date: Thu, 17 Aug 2000 09:15:22 +0800
Size: 2049
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000817/4238d330/attachment-0001.eml>

From greg at cosc.canterbury.ac.nz  Thu Aug 17 03:26:59 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 13:26:59 +1200 (NZST)
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000816165542.D29260@ActiveState.com>
Message-ID: <200008170126.NAA15363@s454.cosc.canterbury.ac.nz>

> My hunch is that simply listing library (*.a) arguments on the gcc
> command line is a GNU gcc/ld shortcut to the more portable usage of -L
> and -l.

I've never encountered a Unix that wouldn't let you explicitly
give .a files to cc or ld. It's certainly not a GNU invention.

Sounds like Monterey is the odd one out here. ("Broken" is
another word that comes to mind.)

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+




From Vladimir.Marangozov at inrialpes.fr  Thu Aug 17 03:41:48 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 03:41:48 +0200 (CEST)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000816172425.A32338@ActiveState.com> from "Trent Mick" at Aug 16, 2000 05:24:25 PM
Message-ID: <200008170141.DAA17229@python.inrialpes.fr>

Trent Mick wrote:
> 
> I am porting Python to Monterey (64-bit AIX) and have a small (hopefully)
> question about POSIX threads. I have Monterey building and passing the
> threads test suite using Python/thread_pthread.h with just one issue:
> 
> -------------- snipped from current thread_pthread.h ---------------
> long
> PyThread_get_thread_ident(void)
> {
>     volatile pthread_t threadid;
>     if (!initialized)
>         PyThread_init_thread();
>     /* Jump through some hoops for Alpha OSF/1 */
>     threadid = pthread_self();
>     return (long) *(long *) &threadid;
> }
> -------------------------------------------------------------------
> 
> ...
> 
> WHAT IS UP WITH THAT return STATEMENT?
>   return (long) *(long *) &threadid;

I don't know and I had the same question at the time when there was some
obscure bug on my AIX combo at this location. I remember that I had played
with the debugger and the only workaround at the time which solved the
mystery was to add the 'volatile' qualifier. So if you're asking yourself
what that 'volatile' is for, you have one question less...

> 
> My *guess* is that this is an attempt to just cast 'threadid' (a pthread_t)
> to a long and go through hoops to avoid compiler warnings. I dont' know what
> else it could be. Is that what the "Alpha OSF/1" comment is about? Anybody
> have an Alpha OSF/1 hanging around. The problem is that when
> sizeof(pthread_t) != sizeof(long) this line is just broken.
> 
> Could this be changed to
>   return threadid;
> safely?

I have the same question. If Guido can't answer this straight, we need
to dig the CVS logs.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From Vladimir.Marangozov at inrialpes.fr  Thu Aug 17 03:43:33 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 03:43:33 +0200 (CEST)
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000816165542.D29260@ActiveState.com> from "Trent Mick" at Aug 16, 2000 04:55:42 PM
Message-ID: <200008170143.DAA17238@python.inrialpes.fr>

Trent Mick wrote:
> 
> Hello autoconf-masters,
> 
> I am currently trying to port Python to Monterey (64-bit AIX) and I need to
> add a couple of Monterey specific options to CFLAGS and LDFLAGS (or to
> whatever appropriate variables for all 'cc' and 'ld' invocations) but it is
> not obvious *at all* how to do that in configure.in. Can anybody helpme on
> that?

How can we help?  What do you want to do, exactly?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From fdrake at beopen.com  Thu Aug 17 03:40:32 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 16 Aug 2000 21:40:32 -0400 (EDT)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <200008170141.DAA17229@python.inrialpes.fr>
References: <20000816172425.A32338@ActiveState.com>
	<200008170141.DAA17229@python.inrialpes.fr>
Message-ID: <14747.17040.968927.914435@cj42289-a.reston1.va.home.com>

Vladimir Marangozov writes:
 > I have the same question. If Guido can't answer this straight, we need
 > to dig the CVS logs.

  Guido is out of town right now, and doesn't have his usual email
tools with him, so he may not respond this week.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From Vladimir.Marangozov at inrialpes.fr  Thu Aug 17 04:12:18 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 04:12:18 +0200 (CEST)
Subject: [Python-Dev] shm + Win32 + docs (was: Adding library modules to the core)
In-Reply-To: <20000808114655.C29686@thyrsus.com> from "Eric S. Raymond" at Aug 08, 2000 11:46:55 AM
Message-ID: <200008170212.EAA17523@python.inrialpes.fr>

Eric S. Raymond wrote:
> 
> Vladimir, I suggest that the most useful thing you could do to advance
> the process at this point would be to document shm in core-library style.

Eric, I'm presently suffering from chronic lack of time (as you probably
do too) so if you could write the docs for me and take all associated
credits for them, please do so (shouldn't be that hard, after all -- the
web page and the comments are self-explanatory :-). I'm willing to "unblock"
you on this, but I can hardly make the time for it -- it's low-priority on
my dynamic task schedule. :(

I'd also love to assemble the win32 bits on the matter (what's in win32event
for the semaphore interface + my Windows book) to add shm and sem for
Windows and rewrite the interface, but I have no idea on when this could
happen.

I will disappear from the face of the world sometime soon, and it's
unclear when I'll be able to reappear (or how soon I'll disappear, for
that matter).  So be aware of that.  I hope to be back before 2.1, so if we can
wrap up a Unix + win32 shm, that would be much appreciated!

> 
> At the moment, core Python has nothing (with the weak and nonportable 
> exception of open(..., O_EXCL)) that can do semaphores properly.  Thus
> shm would address a real gap in the language.

Indeed. This is currently being discussed on the french Python list,
where Richard Gruet (rgruet at ina.fr) posted the following code for
inter-process locks: glock.py

I don't have the time to look at it in detail, just relaying here
for food and meditation :-)

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252

------------------------------[ glock.py ]----------------------------
#!/usr/bin/env python
#----------------------------------------------------------------------------
# glock.py: 				Global mutex
#
# Prerequisites:
#    - Python 1.5.2 or newer (www.python.org)
#    - On windows: win32 extensions installed
#			(http://www.python.org/windows/win32all/win32all.exe)
#    - OS: Unix, Windows.
#
# History:
#	-22 Jan 2000 (R.Gruet): creation
#
# Limitations:
# TODO:
#----------------------------------------------------------------------------
'''	This module defines the class GlobalLock that implements a global
	(inter-process) mutex that works on Windows and Unix, using
	file-locking on Unix (I also tried this approach on Windows but got
	some tricky problems, so I ended up using a Win32 Mutex).
	See class GlobalLock for more details.
'''
__version__ = 1,0,2
__author__ = ('Richard Gruet', 'rgruet at ina.fr')

# Imports:
import sys, string, os

# System-dependent imports for locking implementation:
_windows = (sys.platform == 'win32')

if _windows:
	try:
		import win32event, win32api, pywintypes
	except ImportError:
		sys.stderr.write('The win32 extensions need to be installed!')
else:	# assume Unix
	try:
		import fcntl
	except ImportError:
		sys.stderr.write("On what kind of OS am I ? (Mac?) I should be on "
						 "Unix but can't import fcntl.\n")
		raise
	import threading

# Exceptions :
# ----------
class GlobalLockError(Exception):
	''' Error raised by the glock module.
	'''
	pass

class NotOwner(GlobalLockError):
	''' Attempt to release somebody else's lock.
	'''
	pass


# Constants
# ---------:
true=-1
false=0

#----------------------------------------------------------------------------
class GlobalLock:
#----------------------------------------------------------------------------
	''' A global mutex.

		*Specification:
		 -------------
		 The lock must act as a global mutex, i.e. block between different
		 candidate processes, but ALSO between different candidate
		 threads of the same process.
		 It must NOT block in case of recursive lock request issued by
		 the SAME thread.
		 Extraneous unlocks should be ideally harmless.

		*Implementation:
		 --------------
		 In Python there is no portable global lock AFAIK.
		 There is only a LOCAL/ in-process Lock mechanism
		 (threading.RLock), so we have to implement our own solution.

		Unix: use fcntl.flock(). Recursive calls OK. Different process OK.
			  But <> threads, same process don't block so we have to
			  use an extra threading.RLock to fix that point.
		Win: We use WIN32 mutex from Python Win32 extensions. Can't use
			 std module msvcrt.locking(), because global lock is OK, but
			 blocks also for 2 calls from the same thread!
	'''
	def __init__(self, fpath, lockInitially=false):
		'''	Creates (or opens) a global lock.

			@param fpath Path of the file used as lock target. This is also
						 the global id of the lock. The file will be created
						 if nonexistent.
			@param lockInitially if true locks initially.
		'''
		if _windows:
			self.name = string.replace(fpath, '\\', '_')
			self.mutex = win32event.CreateMutex(None, lockInitially, self.name)
		else: # Unix
			self.name = fpath
			self.flock = open(fpath, 'w')
			self.fdlock = self.flock.fileno()
			self.threadLock = threading.RLock()
		if lockInitially:
			self.acquire()

	def __del__(self):
		#print '__del__ called' ##
		try: self.release()
		except: pass
		if _windows:
			win32api.CloseHandle(self.mutex)
		else:
			try: self.flock.close()
			except: pass

	def __repr__(self):
		return '<Global lock @ %s>' % self.name

	def acquire(self):
		''' Locks. Suspends caller until done.

			On windows an IOError is raised after ~10 sec if the lock
			can't be acquired.
			@exception GlobalLockError if lock can't be acquired (timeout)
		'''
		if _windows:
			r = win32event.WaitForSingleObject(self.mutex, win32event.INFINITE)
			if r == win32event.WAIT_FAILED:
				raise GlobalLockError("Can't acquire mutex.")
		else:
			# Acquire 1st the global (inter-process) lock:
			try:
				fcntl.flock(self.fdlock, fcntl.LOCK_EX)	# blocking
			except IOError:	#(errno 13: perm. denied,
							#		36: Resource deadlock avoided)
				raise GlobalLockError('Cannot acquire lock on "file" %s\n' %
										self.name)
			#print 'got file lock.' ##
			# Then acquire the local (inter-thread) lock:
			self.threadLock.acquire()
			#print 'got thread lock.' ##

	def release(self):
		''' Unlocks. (caller must own the lock!)

			@return The lock count.
			@exception IOError if file lock can't be released
			@exception NotOwner Attempt to release somebody else's lock.
		'''
		if _windows:
			try:
				win32event.ReleaseMutex(self.mutex)
			except pywintypes.error, e:
				errCode, fctName, errMsg =  e.args
				if errCode == 288:
					raise NotOwner("Attempt to release somebody else's lock")
				else:
					raise GlobalLockError('%s: err#%d: %s' % (fctName, errCode,
															  errMsg))
		else:
			# Acquire 1st the local (inter-thread) lock:
			try:
				self.threadLock.release()
			except AssertionError:
				raise NotOwner("Attempt to release somebody else's lock")

			# Then release the global (inter-process) lock:
			try:
				fcntl.flock(self.fdlock, fcntl.LOCK_UN)
			except IOError:	# (errno 13: permission denied)
				raise GlobalLockError('Unlock of file "%s" failed\n' %
															self.name)

#----------------------------------------------------------------------------
#		M A I N
#----------------------------------------------------------------------------
def main():
	# unfortunately can't test inter-process lock here!
	lockName = 'myFirstLock'
	l = GlobalLock(lockName)
	if not _windows:
		assert os.path.exists(lockName)
	l.acquire()
	l.acquire()	# reentrant lock, must not block
	l.release()
	l.release()
	if _windows:
		try: l.release()
		except NotOwner: pass
		else: raise Exception('should have raised a NotOwner exception')

	# Check that <> threads of same process do block:
	import threading, time
	thread = threading.Thread(target=threadMain, args=(l,))
	print 'main: locking...',
	l.acquire()
	print ' done.'
	thread.start()
	time.sleep(3)
	print '\nmain: unlocking...',
	l.release()
	print ' done.'
	time.sleep(0.1)
	del l	# to close file
	print 'tests OK.'

def threadMain(lock):
	print 'thread started(%s).' % lock
	print 'thread: locking (should stay blocked for ~ 3 sec)...',
	lock.acquire()
	print 'thread: locking done.'
	print 'thread: unlocking...',
	lock.release()
	print ' done.'
	print 'thread ended.'

if __name__ == "__main__":
	main()
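The core of glock.py's Unix strategy -- fcntl.flock() for inter-process
exclusion plus a threading.RLock for inter-thread exclusion -- can be
condensed into a minimal sketch (Unix-only; the class name is
hypothetical, and it inherits glock.py's limitation that a single
release drops the file lock even under nested acquires):

```python
import fcntl
import threading

class MiniGlobalLock:
    """Condensed sketch of glock.py's Unix approach."""
    def __init__(self, path):
        self._file = open(path, 'w')       # lock target file
        self._tlock = threading.RLock()    # blocks other threads in-process

    def acquire(self):
        # flock() blocks other processes; re-locking the same fd does
        # not block, so recursive acquires by one process succeed.
        fcntl.flock(self._file.fileno(), fcntl.LOCK_EX)
        self._tlock.acquire()

    def release(self):
        self._tlock.release()
        fcntl.flock(self._file.fileno(), fcntl.LOCK_UN)
```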



From bwarsaw at beopen.com  Thu Aug 17 05:17:23 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 23:17:23 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
References: <Pine.LNX.4.10.10008161146170.25725-100000@akbar.nevex.com>
	<200008170117.NAA15360@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.22851.266303.28877@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    GE> Looks reasonably good. Not entirely sure I like the look
    GE> of >> though -- a bit too reminiscent of C++.

    GE> How about

    GE>    print to myfile, x, y, z

Not bad at all.  Seems quite Pythonic to me.

    GE> with 'to' as a non-reserved keyword. Or even

    GE>    print to myfile: x, y, z

    GE> but that might be a bit too radical!

Definitely so.

-Barry



From bwarsaw at beopen.com  Thu Aug 17 05:19:25 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 16 Aug 2000 23:19:25 -0400 (EDT)
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
References: <20000816165542.D29260@ActiveState.com>
	<200008170126.NAA15363@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.22973.502494.739270@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    >> My hunch is that simply listing library (*.a) arguments on the
    >> gcc command line is a GNU gcc/ld shortcut to the more portable
    >> usage of -L and -l.

    GE> I've never encountered a Unix that wouldn't let you explicitly
    GE> give .a files to cc or ld. It's certainly not a GNU invention.

That certainly jibes with my experience.  All the other non-gcc C
compilers I've used (admittedly only on *nix) have always accepted
explicit .a files on the command line.

-Barry



From MarkH at ActiveState.com  Thu Aug 17 05:32:25 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 17 Aug 2000 13:32:25 +1000
Subject: [Python-Dev] os.path.commonprefix breakage
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>

Hi,
	I believe that Skip recently made a patch to os.path.commonprefix to only
return the portion of the common prefix that corresponds to a directory.

I have just discovered some code breakage from this change.  On 1.5.2, the
behaviour was:

>>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
'../foo/'

While since the change we have:
'../foo'

Note that the trailing slash has been dropped.

The code this broke did something similar to:

prefix = os.path.commonprefix(files)
for file in files:
  tail_portion = file[len(prefix):]

In 1.6, the "tail_portion" result looks like an absolute path "/bar" and
"/spam", respectively.  The intent was obviously to get relative path names
back ("bar" and "spam").

The code that broke is not mine, so you can safely be horrified at how
broken it is :-)  The point, however, is that code like this does exist out
there.
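For concreteness, the idiom under discussion can be sketched as follows (run against a character-based commonprefix, which behaves like 1.5.2 on this input):

```python
import os.path

files = ["../foo/bar", "../foo/spam"]

# A character-based commonprefix keeps the trailing slash here, because
# both strings share the '/' character before they diverge:
prefix = os.path.commonprefix(files)
print(prefix)  # '../foo/' under the character-based algorithm

# The fragile idiom from the report: slicing off len(prefix) characters.
tails = [f[len(prefix):] for f in files]
print(tails)   # ['bar', 'spam'] -- but becomes ['/bar', '/spam'] if
               # prefix comes back as '../foo' without the slash
```

The slicing only produces clean relative names when the prefix happens to end in a separator, which is exactly the assumption the patch broke.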

I'm obviously going to change the code that broke, and don't have time to
look into the posixpath.py code - but is this level of possible breakage
acceptable?

Thanks,

Mark.





From tim_one at email.msn.com  Thu Aug 17 05:34:12 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 23:34:12 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000816172425.A32338@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>

[Trent Mick]
> I am porting Python to Monterey (64-bit AIX) and have a small
> (hopefully) question about POSIX threads.

POSIX threads. "small question".  HAHAHAHAHAHA.  Thanks, that felt good
<wink>.

> I have Monterey building and passing the threads test suite using
> Python/thread_pthread.h with just one issue:
>
>
> -------------- snipped from current thread_pthread.h ---------------
> long
> PyThread_get_thread_ident(void)
> {
>     volatile pthread_t threadid;
>     if (!initialized)
>         PyThread_init_thread();
>     /* Jump through some hoops for Alpha OSF/1 */
>     threadid = pthread_self();
>     return (long) *(long *) &threadid;
> }
> -------------------------------------------------------------------
>
> Does the POSIX threads spec specify a C type or minimum size for
> pthread_t?

Which POSIX threads spec?  There are so very many (it went thru many
incompatible changes).  But, to answer your question, I don't know but doubt
it.  In practice, some implementations return pointers into kernel space,
others pointers into user space, others small integer indices into kernel-
or user-space arrays of structs.  So I think it's *safe* to assume it will
always fit in an integral type large enough to hold a pointer, but not
guaranteed.  Plain "long" certainly isn't safe in theory.

> Or can someone point me to the appropriate resource to look
> this up. On Linux (mine at least):
>   /usr/include/bits/pthreadtypes.h:120:typedef unsigned long int
> pthread_t;

And this is a 32- or 64-bit Linux?

> On Monterey:
>   typedef unsigned int pthread_t;
>
> That is fine, they are both 32-bits, however Monterey is an LP64 platform
> (sizeof(long)==8, sizeof(int)=4), which brings up the question:
>
> WHAT IS UP WITH THAT return STATEMENT?
>   return (long) *(long *) &threadid;

Heh heh.  Thanks for the excuse!  I contributed the pthreads implementation
originally, and that eyesore sure as hell wasn't in it when I passed it on.
That's easy for me to be sure of, because that entire function was added by
somebody after me <wink>.  I've been meaning to track down where that crap
line came from for *years*, but never had a good reason before.

So, here's the scoop:

+ The function was added in revision 2.3, more than 6 years ago.  At that
time, the return had a direct cast to long.

+ The "Alpha OSF/1" horror was the sole change made to get revision 2.5.

Back in those days, the "patches list" was Guido's mailbox, and *all* CVS
commits were done by him.  So he checked in everything anyone could
convince him they needed, and sometimes without knowing exactly why.  So I
strongly doubt he'll even remember this change, and am certain it's not his
code.

> My *guess* is that this is an attempt to just cast 'threadid' (a
> pthread_t) to a long and go through hoops to avoid compiler warnings. I
> dont' know what else it could be.

Me neither.

> Is that what the "Alpha OSF/1" comment is about?

That comment was introduced by the commit that added the convoluted casting,
so yes, that's what the comment is talking about.

> Anybody have an Alpha OSF/1 hanging around. The problem is that when
> sizeof(pthread_t) != sizeof(long) this line is just broken.
>
> Could this be changed to
>   return threadid;
> safely?

Well, that would return it to exactly the state it was in at revision 2.3,
except with the cast to long left implicit.  Apparently that "didn't work"!

Something else is broken here, too, and has been forever:  the thread docs
claim that thread.get_ident() returns "a nonzero integer".  But across all
the thread implementations, there's nothing that guarantees that!  It's a
goof, based on the first thread implementation in which it just happened to
be true for that platform.

So thread.get_ident() is plain braindead:  if Python wants to return a
unique non-zero long across platforms, the current code doesn't guarantee
any of that.

So one of two things can be done:

1. Bite the bullet and do it correctly.  For example, maintain a static
   dict mapping the native pthread_self() return value to Python ints,
   and return the latter as Python's thread.get_ident() value.  Much
   better would be to implement an x-platform thread-local storage
   abstraction, and use that to hold a Python-int ident value.

2. Continue in the tradition already established <wink>, and #ifdef the
   snot out of it for Monterey.

In favor of #2, the code is already so hosed that making it hosier won't be
a significant relative increase in its inherent hosiness.
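Option 1 is easy to sketch at the Python level (an illustration of the idea only, not the actual C fix; `threading.get_ident` is the modern spelling of `thread.get_ident`, and `stable_ident` is a hypothetical name):

```python
import threading

_ident_map = {}                 # native thread id -> small stable int
_ident_lock = threading.Lock()

def stable_ident():
    # Map whatever the platform hands back as a thread id to a small,
    # guaranteed-nonzero Python int, as in option 1 above.
    native = threading.get_ident()
    with _ident_lock:
        return _ident_map.setdefault(native, len(_ident_map) + 1)
```

Each distinct native id gets the next small integer, so the result is portable regardless of whether pthread_t is a pointer, an int, or a struct.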

spoken-like-a-true-hoser-ly y'rs  - tim





From tim_one at email.msn.com  Thu Aug 17 05:47:04 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 16 Aug 2000 23:47:04 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.22851.266303.28877@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDGHAAA.tim_one@email.msn.com>

[Greg Ewing]
> Looks reasonably good. Not entirely sure I like the look
> of >> though -- a bit too reminiscent of C++.
>
> How about
>
>    print to myfile, x, y, z

[Barry Warsaw]
> Not bad at all.  Seems quite Pythonic to me.

Me too!  +1 on changing ">>" to "to" here.  Then we can introduce

   x = print from myfile, 3

as a synonym for

   x = myfile.read(3)

too <wink>.

People should know that Guido doesn't seem to like the idea of letting print
specify the output target at all.  "Why not?"  "Because people say print is
pretty useless anyway, for example, when they want to write to something
other than stdout."  "But that's the whole point of this change!  To make
print more useful!"  "Well, but then ...".  After years of channeling, you
get a feel for when to change the subject and bring it up again later as if
it were brand new <wink>.
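(For reference, the ">>" spelling is the one PEP 214 proposed; the same redirection is spelled with a keyword argument in Python 3:)

```python
import io

# PEP 214 form:             print >> myfile, x, y, z
# Python 3 spelling of the same redirection:
myfile = io.StringIO()
print("x", "y", "z", file=myfile)
print(myfile.getvalue())    # 'x y z\n'
```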

half-of-channeling-is-devious-persuasion-ly y'rs  - tim





From skip at mojam.com  Thu Aug 17 06:04:54 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 23:04:54 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
Message-ID: <14747.25702.435148.549678@beluga.mojam.com>

>>>>> "Mark" == Mark Hammond <MarkH at ActiveState.com> writes:

    Mark> I believe that Skip recently made a patch to os.path.commonprefix
    Mark> to only return the portion of the common prefix that corresponds
    Mark> to a directory.

    Mark> I have just discovered some code breakage from this change.  On
    Mark> 1.5.2, the behaviour was:

    >>>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
    Mark> '../foo/'

    Mark> While since the change we have:
    Mark> '../foo'

I'm sure it can be argued that the slash should be there.  The previous
behavior was clearly broken, however, because it was advancing
character-by-character instead of directory-by-directory.  Consequently,
calling 

    os.path.commonprefix(["/home/swen", "/home/swenson"])

would yield the most likely invalid path "/home/sw" as the common prefix.

It would be easy enough to append the appropriate path separator to the
result before returning.  I have no problem with that.  Others with more
knowledge of path semantics should chime in.  Also, should the behavior be
consistent across platforms or should it do what is correct for each
platform on which it's implemented (dospath, ntpath, macpath)?

Skip




From tim_one at email.msn.com  Thu Aug 17 06:05:12 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 00:05:12 -0400
Subject: [Python-Dev] os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEDIHAAA.tim_one@email.msn.com>

I agree this is Bad Damage, and should be fixed before 2.0b1 goes out.  Can
you enter a bug report?

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Mark Hammond
> Sent: Wednesday, August 16, 2000 11:32 PM
> To: python-dev at python.org
> Subject: [Python-Dev] os.path.commonprefix breakage
>
>
> Hi,
> 	I believe that Skip recently made a patch to
> os.path.commonprefix to only
> return the portion of the common prefix that corresponds to a directory.
>
> I have just discovered some code breakage from this change.  On 1.5.2, the
> behaviour was:
>
> >>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
> '../foo/'
>
> While since the change we have:
> '../foo'
>
> Note that the trailing slash has been dropped.
>
> The code this broke did something similar to:
>
> prefix = os.path.commonprefix(files)
> for file in files:
>   tail_portion = file[len(prefix):]
>
> In 1.6, the "tail_portion" result looks like an absolute path "/bar" and
> "/spam", respectively.  The intent was obviously to get relative
> path names
> back ("bar" and "spam").
>
> The code that broke is not mine, so you can safely be horrified at how
> broken it is :-)  The point, however, is that code like this does
> exist out
> there.
>
> I'm obviously going to change the code that broke, and don't have time to
> look into the posixpath.py code - but is this level of possible breakage
> acceptable?
>
> Thanks,
>
> Mark.





From greg at cosc.canterbury.ac.nz  Thu Aug 17 06:11:51 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:11:51 +1200 (NZST)
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEDGHAAA.tim_one@email.msn.com>
Message-ID: <200008170411.QAA15381@s454.cosc.canterbury.ac.nz>

tim_one:

> +1 on changing ">>" to "to" here.

Your +1 might be a bit hasty. I've just realised that
a non-reserved word in that position would be ambiguous,
as can be seen by considering

   print to(myfile), x, y, z

> Then we can introduce
>
>   x = print from myfile, 3

Actually, for the sake of symmetry, I was going to suggest

    input from myfile, x, y ,z

except that the word 'input' is already taken. Bummer.

But wait a moment, we could have

    from myfile input x, y, z

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From fdrake at beopen.com  Thu Aug 17 06:11:44 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 00:11:44 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
	<14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <14747.26112.609255.338170@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:
 > I'm sure it can be argued that the slash should be there.  The previous
 > behavior was clearly broken, however, because it was advancing
 > character-by-character instead of directory-by-directory.  Consequently,
 > calling 
 > 
 >     os.path.commonprefix(["/home/swen", "/home/swenson"])
 > 
 > would yield the most likely invalid path "/home/sw" as the common prefix.

  You have a typo in there... ;)

 > It would be easy enough to append the appropriate path separator to the
 > result before returning.  I have no problem with that.  Others with more
 > knowledge of path semantics should chime in.  Also, should the behavior be

  I'd guess that the path separator should only be appended if it's
part of the passed-in strings; that would make it a legitimate part of
the prefix.  If it isn't present for all of them, it shouldn't be part
of the result:

>>> os.path.commonprefix(["foo", "foo/bar"])
'foo'


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From skip at mojam.com  Thu Aug 17 06:23:37 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 23:23:37 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
	<14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <14747.26825.977663.599413@beluga.mojam.com>

    Skip> os.path.commonprefix(["/home/swen", "/home/swenson"])

    Skip> would yield the most likely invalid path "/home/sw" as the common
    Skip> prefix.

Ack!  I meant to use this example:

    os.path.commonprefix(["/home/swen", "/home/swanson"])

which would yield "/home/sw"...

S



From m.favas at per.dem.csiro.au  Thu Aug 17 06:27:20 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 17 Aug 2000 12:27:20 +0800
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
Message-ID: <399B69A8.4A94337C@per.dem.csiro.au>

[Trent]
-------------- snipped from current thread_pthread.h ---------------
long
PyThread_get_thread_ident(void)
{
    volatile pthread_t threadid;
    if (!initialized)
        PyThread_init_thread();
    /* Jump through some hoops for Alpha OSF/1 */
    threadid = pthread_self();
    return (long) *(long *) &threadid;
}
-------------------------------------------------------------------
WHAT IS UP WITH THAT return STATEMENT?
  return (long) *(long *) &threadid;

My *guess* is that this is an attempt to just cast 'threadid' (a
pthread_t) to a long and go through hoops to avoid compiler warnings. I
don't know what else it could be. Is that what the "Alpha OSF/1" comment
is about? Anybody have an Alpha OSF/1 hanging around? The problem is
that when sizeof(pthread_t) != sizeof(long) this line is just broken.

Could this be changed to
  return threadid;
safely?

This is a DEC-threads thing... (and I'm not a DEC-threads savant). 
Making the suggested change gives the compiler warning:

cc -O -Olimit 1500 -I./../Include -I.. -DHAVE_CONFIG_H -c thread.c -o thread.o
cc: Warning: thread_pthread.h, line 182: In this statement, "threadid"
of type "pointer to struct __pthread_t", is being converted to "long".
(cvtdiftypes)
        return threadid;
---------------^

The threads test still passes with this change.



From greg at cosc.canterbury.ac.nz  Thu Aug 17 06:28:19 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:28:19 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>

Skip:

> Also, should the behavior be
> consistent across platforms or should it do what is correct for each
> platform on which it's implemented (dospath, ntpath, macpath)?

Obviously it should do what's correct for each platform,
although more than one thing can be correct for a
given platform -- e.g. Unix doesn't care whether there's a
trailing slash on a pathname.

In the Unix case it's probably less surprising if the trailing
slash is removed, because it's redundant.

The "broken" code referred to in the original message highlights
another problem, however: there is no platform-independent way
provided to remove a prefix from a pathname, given the prefix
as returned by one of the other platform-independent path
munging functions.

So maybe there should be an os.path.removeprefix(prefix, path)
function.

While we're on the subject, another thing that's missing is
a platform-independent way of dealing with the notion of
"up one directory".

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+




From greg at cosc.canterbury.ac.nz  Thu Aug 17 06:34:01 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:34:01 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <200008170434.QAA15389@s454.cosc.canterbury.ac.nz>

Skip:

> The previous behavior was clearly broken, however, because it was
> advancing character-by-character instead of directory-by-directory.

I've just looked at the 1.5.2 docs and realised that this is
what it *says* it does! So it's right according to the docs,
although it's obviously useless as a pathname manipulating
function.

The question now is, do we change both the specification and the
behaviour, which could break existing code, or leave it be and
add a new function which does the right thing?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From skip at mojam.com  Thu Aug 17 06:41:59 2000
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 16 Aug 2000 23:41:59 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.26112.609255.338170@cj42289-a.reston1.va.home.com>
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
	<14747.25702.435148.549678@beluga.mojam.com>
	<14747.26112.609255.338170@cj42289-a.reston1.va.home.com>
Message-ID: <14747.27927.170223.873328@beluga.mojam.com>

    Fred> I'd guess that the path separator should only be appended if it's
    Fred> part of the passed-in strings; that would make it a legitimate
    Fred> part of the prefix.  If it isn't present for all of them, it
    Fred> shouldn't be part of the result:

    >>> os.path.commonprefix(["foo", "foo/bar"])
    'foo'

Hmmm... I think you're looking at it character-by-character again.  I see
three possibilities:

    * it's invalid to have a path with a trailing separator

    * it's okay to have a path with a trailing separator

    * it's required to have a path with a trailing separator

In the first and third cases, you have no choice.  In the second you have to
decide which would be best.

On Unix my preference would be to not include the trailing "/" for aesthetic
reasons.  The shell's pwd command, the os.getcwd function and the
os.path.normpath function all return directories without the trailing slash.
Also, while Python may not have this problem (and os.path.join seems to
normalize things), some external tools will interpret doubled "/" characters
as single characters while others (most notably Emacs), will treat the
second slash as "erase the prefix and start from /".  

In fact, the more I think of it, the more I think that Mark's reliance on
the trailing slash is a bug waiting to happen (in fact, it just happened
;-).  There's certainly nothing wrong (on Unix anyway) with paths that don't
contain a trailing slash, so if you're going to join paths together, you
ought to be using os.path.join.  To whack off prefixes, perhaps we need
something more general than os.path.split, so instead of

    prefix = os.path.commonprefix(files)
    for file in files:
       tail_portion = file[len(prefix):]

Mark would have used

    prefix = os.path.commonprefix(files)
    for file in files:
       tail_portion = os.path.splitprefix(prefix, file)[1]

The assumption being that

    os.path.splitprefix("/home", "/home/beluga/skip")

would return

    ["/home", "beluga/skip"]

Alternatively, how about os.path.suffixes?  It would work similar to
os.path.commonprefix, but instead of returning the prefix of a group of
files, return a list of the suffixes resulting in the application of the
common prefix:

    >>> files = ["/home/swen", "/home/swanson", "/home/jules"]
    >>> prefix = os.path.commonprefix(files)
    >>> print prefix
    "/home"
    >>> suffixes = os.path.suffixes(prefix, files)
    >>> print suffixes
    ["swen", "swanson", "jules"]
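A minimal sketch of the proposed helpers (splitprefix and suffixes are hypothetical names from this message, not real os.path functions):

```python
import os

def splitprefix(prefix, path):
    # Hypothetical os.path.splitprefix: split 'path' into the given
    # prefix and the remainder, dropping any separator between them.
    if (path == prefix or path.startswith(prefix + os.sep)
            or prefix.endswith(os.sep)):
        rest = path[len(prefix):].lstrip(os.sep)
        return (prefix, rest)
    return ("", path)

def suffixes(prefix, paths):
    # Hypothetical os.path.suffixes: remainder of each path after the
    # common prefix.
    return [splitprefix(prefix, p)[1] for p in paths]
```

Because splitprefix eats the separator itself, the caller no longer cares whether commonprefix returns "/home" or "/home/".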

Skip




From fdrake at beopen.com  Thu Aug 17 06:49:24 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 00:49:24 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170434.QAA15389@s454.cosc.canterbury.ac.nz>
References: <14747.25702.435148.549678@beluga.mojam.com>
	<200008170434.QAA15389@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.28372.771170.783868@cj42289-a.reston1.va.home.com>

Greg Ewing writes:
 > I've just looked at the 1.5.2 docs and realised that this is
 > what it *says* it does! So it's right according to the docs,
 > although it's obviously useless as a pathname manipulating
 > function.

  I think we should now fix the docs; Skip's right about the desired
functionality.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From greg at cosc.canterbury.ac.nz  Thu Aug 17 06:53:05 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 17 Aug 2000 16:53:05 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.27927.170223.873328@beluga.mojam.com>
Message-ID: <200008170453.QAA15394@s454.cosc.canterbury.ac.nz>

Skip:

> Alternatively, how about os.path.suffixes?  It would work similar to
> os.path.commonprefix, but instead of returning the prefix of a group of
> files, return a list of the suffixes resulting in the application of the
> common prefix:

To avoid duplication of effort, how about a single function
that does both:

   >>> files = ["/home/swen", "/home/swanson", "/home/jules"]
   >>> os.path.factorize(files)
   ("/home", ["swen", "swanson", "jules"])
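That combined helper could be sketched as below (factorize is a hypothetical name from this message; '/' separators are assumed for brevity, so POSIX paths only):

```python
def factorize(paths):
    # Hypothetical os.path.factorize: directory-wise common prefix plus
    # each path's remainder, computed component by component rather than
    # character by character.
    parts = [p.split("/") for p in paths]
    common = []
    for components in zip(*parts):
        if len(set(components)) != 1:
            break
        common.append(components[0])
    prefix = "/".join(common)
    tails = ["/".join(p[len(common):]) for p in parts]
    return prefix, tails
```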

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From nowonder at nowonder.de  Thu Aug 17 09:13:08 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 17 Aug 2000 07:13:08 +0000
Subject: [Python-Dev] timeout support for socket.py? (was: [ANN] TCP socket timeout module -->
 timeoutsocket.py)
References: <300720002054234840%timo@alum.mit.edu>
Message-ID: <399B9084.C068DCE3@nowonder.de>

As the socketmodule is now exported as _socket and a separate socket.py
file wrapping _socket is now available in the standard library, wouldn't
it be possible to include timeout capabilities like in
  http://www.timo-tasi.org/python/timeoutsocket.py ?

If the default behaviour were "no timeout", I would think this would
not break any code.  But it would give an easy (and documentable)
solution to people who e.g. have their
  urllib.urlopen("http://spam.org").read()
hang on them.  (Actually the approach should work for all streaming
socket connections, as far as I understand it.)

Are there any efficiency concerns? If so, would it be possible to
include a second socket class timeoutsocket in socket.py, so that
this could be used instead of the normal socket class? [In this case
a different default timeout than "None" could be chosen.]
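The shim idea is easy to sketch against today's socket API (settimeout exists on modern socket objects; the class name and the default_timeout hook here are illustrative, not timeoutsocket.py's actual interface):

```python
import socket

class TimeoutSocket:
    # Illustrative shim: construct a real socket, apply a default
    # timeout, and delegate everything else to the wrapped object.
    default_timeout = None      # None keeps the "block forever" default

    def __init__(self, *args, **kwargs):
        self._sock = socket.socket(*args, **kwargs)
        if self.default_timeout is not None:
            self._sock.settimeout(self.default_timeout)

    def __getattr__(self, name):
        return getattr(self._sock, name)
```

Setting TimeoutSocket.default_timeout = 20 before creating sockets would give the opt-in behaviour asked about above, without changing code that uses the plain socket class.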

Peter

P.S.: For your convenience a quote of the announcement on c.l.py,
      for module documentation (== endlessly long doc string) look in
        http://www.timo-tasi.org/python/timeoutsocket.py

Timothy O'Malley wrote:
> 
> Numerous times I have seen people request a solution for TCP socket
> timeouts in conjunction with urllib.  Recently, I found myself in the
> same boat.  I wrote a server that would connect to skytel.com and send
> automated pages.  Periodically, the send thread in the server would
> hang for a long(!) time.  Yup -- I was bit by a two hour timeout built
> into tcp sockets.
> 
> Thus, I wrote timeoutsocket.py
> 
> With timeoutsocket.py, you can force *all* TCP sockets to have a
> timeout.  And, this is all accomplished without interfering with the
> standard python library!
> 
> Here's how to put a 20 second timeout on all TCP sockets for urllib:
> 
>    import timeoutsocket
>    import urllib
>    timeoutsocket.setDefaultSocketTimeout(20)
> 
> Just like that, any TCP connection made by urllib will have a 20 second
> timeout.  If a connect(), read(), or write() blocks for more than 20
> seconds, then a socket.Timeout error will be raised.
> 
> Want to see how to use this in ftplib?
> 
>    import ftplib
>    import timeoutsocket
>    timeoutsocket.setDefaultSocketTimeout(20)
> 
> Wasn't that easy!
> The timeoutsocket.py module acts as a shim on top of the standard
> socket module.  Whenever a TCP socket is requested, an instance of
> TimeoutSocket is returned.  This wrapper class adds timeout support to
> the standard TCP socket.
> 
> Where can you get this marvel of modern engineering?
> 
>    http://www.timo-tasi.org/python/timeoutsocket.py
> 
> And it will very soon be found on the Vaults of Parnassus.
> 
> Good Luck!
> 
> --
> --
> happy daze
>   -tim O
> --
> http://www.python.org/mailman/listinfo/python-list

-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From moshez at math.huji.ac.il  Thu Aug 17 08:16:29 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 17 Aug 2000 09:16:29 +0300 (IDT)
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.22851.266303.28877@anthem.concentric.net>
Message-ID: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>

On Wed, 16 Aug 2000, Barry A. Warsaw wrote:

> 
> >>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:
> 
>     GE> Looks reasonably good. Not entirely sure I like the look
>     GE> of >> though -- a bit too reminiscent of C++.
> 
>     GE> How about
> 
>     GE>    print to myfile, x, y, z
> 
> Not bad at all.  Seems quite Pythonic to me.

Ummmmm......

print to myfile  (print a newline on myfile)
print to, myfile (print to+" "+myfile to stdout)

Perl has similar syntax, and I always found it horrible.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Thu Aug 17 08:30:23 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 08:30:23 +0200
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 17, 2000 at 09:16:29AM +0300
References: <14747.22851.266303.28877@anthem.concentric.net> <Pine.GSO.4.10.10008170915050.24783-100000@sundial>
Message-ID: <20000817083023.J376@xs4all.nl>

On Thu, Aug 17, 2000 at 09:16:29AM +0300, Moshe Zadka wrote:
> On Wed, 16 Aug 2000, Barry A. Warsaw wrote:

> > >>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

> >     GE> How about
> >     GE>    print to myfile, x, y, z

> > Not bad at all.  Seems quite Pythonic to me.

> print to myfile  (print a newline on myfile)
> print to, myfile (print to+" "+myfile to stdout)

> Perl has similar syntax, and I always found it horrible.

Agreed. It might be technically unambiguous, but I think it's too hard for a
*human* to parse this correctly. The '>>' version might seem more C++ish and
less pythonic, but it also stands out a lot more. The 'print from' statement
could easily (and more consistently, IMHO ;) be written as 'print <<' (not
that I like the 'print from' idea, though.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Thu Aug 17 08:41:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 02:41:29 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.28372.771170.783868@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>

[Greg Ewing]
> I've just looked at the 1.5.2 docs and realised that this is
> what it *says* it does! So it's right according to the docs,
> although it's obviously useless as a pathname manipulating
> function.

[Fred Drake]
>   I think we should now fix the docs; Skip's right about the desired
> functionality.

Oddly enough, I don't:  commonprefix worked exactly as documented for at
least 6 years and 5 months (which is when CVS shows Guido checking in
ntpath.py with the character-based functionality), and got out of synch with
the docs about 5 weeks ago when Skip changed to this other algorithm.  Since
the docs *did* match the code, there's no reason to believe the original
author was confused, and no reason to believe users aren't relying on it
(they've had over 6 years to gripe <wink>).

I think it's wrong to change what released code or docs do or say in
non-trivial ways when they weren't ever in conflict.  We have no idea who
may be relying on the old behavior!  Bitch all you like about MarkH's test
case, it used to work, it doesn't now, and that sucks for the user.

I appreciate that some other behavior may be more useful more often, but if
you can ever agree on what that is across platforms, it should be spelled
via a new function name ("commonpathprefix" comes to mind), or optional flag
(defaulting to "old behavior") on commonprefix (yuck!).  BTW, the presence
or absence of a trailing path separator makes a *big* difference to many
cmds on Windows, and you can't tell me nobody isn't currently doing e.g.

    commonprefix(["blah.o", "blah", "blah.cpp"])

on Unix either.
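Tim's claim about the character-based behavior is easy to check; a quick sketch (these results match the character-wise longest-common-prefix algorithm he is defending, which is also what the 1.5.2 docs describe):

```python
import os.path

# Character-wise longest common prefix: no path semantics at all.
blah = os.path.commonprefix(["blah.o", "blah", "blah.cpp"])
foo = os.path.commonprefix(["../foo/bar", "../foo/spam"])
print(blah)  # blah
print(foo)   # ../foo/
```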





From thomas at xs4all.net  Thu Aug 17 08:55:41 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 08:55:41 +0200
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000816165542.D29260@ActiveState.com>; from trentm@ActiveState.com on Wed, Aug 16, 2000 at 04:55:42PM -0700
References: <20000816165542.D29260@ActiveState.com>
Message-ID: <20000817085541.K376@xs4all.nl>

On Wed, Aug 16, 2000 at 04:55:42PM -0700, Trent Mick wrote:

> I am currently trying to port Python to Monterey (64-bit AIX) and I need
> to add a couple of Monterey specific options to CFLAGS and LDFLAGS (or to
> whatever appropriate variables for all 'cc' and 'ld' invocations) but it
> is not obvious *at all* how to do that in configure.in. Can anybody help me
> on that?

You'll have to write a shell 'case' for AIX Monterey, checking to make sure
it is monterey, and setting LDFLAGS accordingly. If you look around in
configure.in, you'll see a few other 'special cases', all to tune the
way the compiler is called. Depending on what you need to do to detect
monterey, you could fit it in one of those. Just search for 'Linux' or
'bsdos' to find a couple of those cases.
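A sketch of such a special case follows; note that the variable name and the `-q64` flags here are placeholders, not the actual Monterey spellings, which would need to be checked against the real configure.in:

```shell
# Placeholder values standing in for what configure would detect.
MACHDEP=monterey64
CFLAGS=""
LDFLAGS=""

case $MACHDEP in
    monterey*)
        # Add the platform-specific compile and link options.
        CFLAGS="$CFLAGS -q64"
        LDFLAGS="$LDFLAGS -q64"
        ;;
esac

echo "CFLAGS:$CFLAGS LDFLAGS:$LDFLAGS"
```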

> ANother issue that I am having. This is how the python executable is linked
> on Linux with gcc:

> gcc  -Xlinker -export-dynamic python.o ../libpython2.0.a -lpthread -ldl  -lutil -lm  -o python

> It, of course, works fine, but shouldn't the proper (read "portable")
> invocation to include the python2.0 library be

> gcc  -Xlinker -export-dynamic python.o -L.. -lpython2.0 -lpthread -ldl  -lutil -lm  -o python

> That invocation form (i.e. with the '-L.. -lpython2.0') works on Linux, and
> is *required* on Monterey. Does this problem not show up with other Unix
> compilers? My hunch is that simply listing library (*.a) arguments on the gcc
> command line is a GNU gcc/ld shortcut for the more portable usage of -L and
> -l. Any opinions? I would either like to change the form to the latter or
> I'll have to special-case the invocation for Monterey. Any opinions on which
> is worse?

Well, as far as I know, '-L.. -lpython2.0' does something *different* than
just '../libpython2.0.a' ! When supplying the static library on the command
line, the library is always statically linked. When using -L/-l, it is
usually dynamically linked, unless a dynamic library doesn't exist. We
currently don't have a libpython2.0.so, but a patch to add it is on Barry's
plate. Also, I'm not entirely sure about the search order in such a case:
gcc's docs seem to suggest that the systemwide library directories are
searched before the -L directories. I'm not sure on that, though.

Also, listing the library on the command line is not a gcc shortcut, but
other people already said that :) I'd be surprised if AIX removed it (but not
especially so; my girlfriend works with AIX machines a lot, and she already
showed me some surprising things ;) but perhaps there is another workaround ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gstein at lyra.org  Thu Aug 17 09:01:22 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 00:01:22 -0700
Subject: [Python-Dev] os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Thu, Aug 17, 2000 at 01:32:25PM +1000
References: <ECEPKNMJLHAPFFJHDOJBMEDGDFAA.MarkH@ActiveState.com>
Message-ID: <20000817000122.L17689@lyra.org>

>>> os.path.split('/foo/bar/')
('/foo/bar', '')
>>> 

Jamming a trailing slash on the end is a bit wonky. I'm with Skip on saying
that the slash should probably *not* be appended. It gives funny behavior
with the split. Users should use .join() to combine the resulting with
something else.
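A sketch of the funny split behavior, plus join() as the recombination step (posixpath is used directly so the results don't depend on the host platform):

```python
import posixpath  # os.path on Unix

# A trailing slash gives split() an empty tail:
print(posixpath.split('/foo/bar/'))  # ('/foo/bar', '')

# Without it, the tail is the last component:
print(posixpath.split('/foo/bar'))   # ('/foo', 'bar')

# join() recombines safely either way:
print(posixpath.join('/foo/bar', 'baz'))  # /foo/bar/baz
```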

The removal of a prefix is an interesting issue. No opinions there.

Cheers,
-g

On Thu, Aug 17, 2000 at 01:32:25PM +1000, Mark Hammond wrote:
> Hi,
> 	I believe that Skip recently made a patch to os.path.commonprefix to only
> return the portion of the common prefix that corresponds to a directory.
> 
> I have just discovered some code breakage from this change.  On 1.5.2, the
> behaviour was:
> 
> >>> os.path.commonprefix(["../foo/bar", "../foo/spam"])
> '../foo/'
> 
> While since the change we have:
> '../foo'
> 
> Note that the trailing slash has been dropped.
> 
> The code this broke did something similar to:
> 
> prefix = os.path.commonprefix(files)
> for file in files:
>   tail_portion = file[len(prefix):]
> 
> In 1.6, the "tail_portion" results look like absolute paths ("/bar" and
> "/spam", respectively).  The intent was obviously to get relative path names
> back ("bar" and "spam")
> 
> The code that broke is not mine, so you can safely be horrified at how
> broken it is :-)  The point, however, is that code like this does exist out
> there.
> 
> I'm obviously going to change the code that broke, and don't have time to
> look into the posixpath.py code - but is this level of possible breakage
> acceptable?
> 
> Thanks,
> 
> Mark.
> 
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
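The breakage Mark describes can be reproduced without touching the filesystem; a sketch of the 1.5.2 result versus the post-change result:

```python
files = ["../foo/bar", "../foo/spam"]

# 1.5.2 behavior: the character-based common prefix keeps the slash.
prefix = "../foo/"
old_tails = [f[len(prefix):] for f in files]
print(old_tails)  # ['bar', 'spam']

# Post-change behavior: the slash is gone, so the tails gain a leading '/'.
prefix = "../foo"
new_tails = [f[len(prefix):] for f in files]
print(new_tails)  # ['/bar', '/spam']
```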

-- 
Greg Stein, http://www.lyra.org/



From thomas at xs4all.net  Thu Aug 17 09:09:42 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 09:09:42 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>; from greg@cosc.canterbury.ac.nz on Thu, Aug 17, 2000 at 04:28:19PM +1200
References: <14747.25702.435148.549678@beluga.mojam.com> <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>
Message-ID: <20000817090942.L376@xs4all.nl>

On Thu, Aug 17, 2000 at 04:28:19PM +1200, Greg Ewing wrote:

> given platform -- e.g Unix doesn't care whether there's a
> trailing slash on a pathname.

Bzzzt. This is unfortunately not true. Observe:

daemon2:~/python > mkdir perl
daemon2:~/python > rm perl/
rm: perl/: is a directory
daemon2:~/python > rmdir perl/
rmdir: perl/: Is a directory
daemon2:~/python > rm -rf perl/
rm: perl/: Is a directory
daemon2:~/python > su
# rmdir perl/
rmdir: perl/: Is a directory
# rm -rf perl/
rm: perl/: Is a directory
# ^D
daemon2:~/python > rmdir perl
daemon2:~/python >

Note that the trailing slash is added by all tab-completing shells that I
know. And the problem *really* is that trailing slash, I shit you not.
Needless to say, every one of us ran into this at one time or another, and
spent an hour figuring out *why* the rmdir wouldn't remove a directory.

Consequently, I'm all for removing trailing slashes, but not enough to break
existing code. I wonder how much breakage there really is, though.
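Stripping the slash in user code is a one-liner anyway; a hypothetical helper (not part of os.path) that leaves the root directory alone:

```python
def strip_trailing_slash(path):
    # Hypothetical helper: remove trailing '/' characters, but never
    # reduce the root "/" to an empty string.
    while len(path) > 1 and path.endswith('/'):
        path = path[:-1]
    return path

print(strip_trailing_slash('perl/'))  # perl
print(strip_trailing_slash('/'))      # /
```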

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Thu Aug 17 09:49:33 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 03:49:33 -0400
Subject: [Python-Dev] Pending patches for 2.0
In-Reply-To: <20000816225552.H376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEBHAAA.tim_one@email.msn.com>

[Thomas Wouters, needs a well-earned vacation!]
> ...
> and if those decisions are made before, say, August 30th, I think
> I can move them into the CVS tree before leaving and just shove
> the responsibility for them on the entire dev team ;)
>
> This isn't a push to get them accepted ! Just a warning that if
> they aren't accepted before then, someone will have to take over
> the breastfeeding ;)

Guido will be back from his travels next week, and PythonLabs will have an
intense 2.0 release meeting on Tuesday or Wednesday (depending also on
exactly when Jeremy gets back).  I expect all go/nogo decisions will be made
then.  Part of deciding on a patch that isn't fully complete is deciding
whether others can take up the slack in time.  That's just normal release
business as usual -- nothing to worry about.  Well, not for *you*, anyway.

BTW, there's a trick few have learned:  get the doc patches in *first*, and
then we look like idiots if we release without code to implement it.  And
since this trick has hardly ever been tried, I bet you can sneak it by Fred
Drake for at least a year before anyone at PythonLabs catches on to it <0.9
wink>.

my-mouth-is-sealed-ly y'rs  - tim





From tim_one at email.msn.com  Thu Aug 17 09:29:05 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 03:29:05 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>

[Moshe Zadka]
> Ummmmm......
>
> print to myfile  (print a newline on myfile)
> print to, myfile (print to+" "+myfile to stdout)

Like I (and Greg too) clearly said all along, -1 on changing ">>" to "to"!





From mal at lemburg.com  Thu Aug 17 09:31:55 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 17 Aug 2000 09:31:55 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <ECEPKNMJLHAPFFJHDOJBEEDGDFAA.MarkH@ActiveState.com>
		<14747.25702.435148.549678@beluga.mojam.com>
		<14747.26112.609255.338170@cj42289-a.reston1.va.home.com> <14747.27927.170223.873328@beluga.mojam.com>
Message-ID: <399B94EB.E95260EE@lemburg.com>

Skip Montanaro wrote:
> 
>     Fred> I'd guess that the path separator should only be appended if it's
>     Fred> part of the passed-in strings; that would make it a legitimate
>     Fred> part of the prefix.  If it isn't present for all of them, it
>     Fred> shouldn't be part of the result:
> 
>     >>> os.path.commonprefix(["foo", "foo/bar"])
>     'foo'
> 
> Hmmm... I think you're looking at it character-by-character again.  I see
> three possibilities:
> 
>     * it's invalid to have a path with a trailing separator
> 
>     * it's okay to have a path with a trailing separator
> 
>     * it's required to have a path with a trailing separator
> 
> In the first and third cases, you have no choice.  In the second you have to
> decide which would be best.
> 
> On Unix my preference would be to not include the trailing "/" for aesthetic
> reasons.

Wait, Skip :-) By dropping the trailing slash from the path
you are removing important information from the path.

This information can only be regained by performing an .isdir()
check, and then only if the directory exists somewhere. If it
doesn't, you are losing valid information here.

Another aspect: 
Since posixpath is also used by URL handling code,
I would suspect that this results in some nasty problems too.
You'd have to actually ask the web server to give you back the
information you already had.

Note that most web servers send back a redirect when a
directory is queried without a trailing slash in the URL. They
do this for exactly the reason stated above: to add the
.isdir() information to the path itself.

Conclusion:
Please don't remove the slash -- at least not in posixpath.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Thu Aug 17 11:54:16 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 11:54:16 +0200
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 17, 2000 at 03:29:05AM -0400
References: <Pine.GSO.4.10.10008170915050.24783-100000@sundial> <LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>
Message-ID: <20000817115416.M376@xs4all.nl>

On Thu, Aug 17, 2000 at 03:29:05AM -0400, Tim Peters wrote:
> [Moshe Zadka]
> > Ummmmm......
> >
> > print to myfile  (print a newline on myfile)
> > print to, myfile (print to+" "+myfile to stdout)
> 
> Like I (and Greg too) clearly said all along, -1 on changing ">>" to "to"!

Really ? Hmmmm...

[Greg Ewing]
> Looks reasonably good. Not entirely sure I like the look
> of >> though -- a bit too reminiscent of C++.
>
> How about
>
>    print to myfile, x, y, z

[Barry Warsaw]
> Not bad at all.  Seems quite Pythonic to me.

[Tim Peters]
> Me too!  +1 on changing ">>" to "to" here.  Then we can introduce
[print from etc]

I guessed I missed the sarcasm ;-P

Gullib-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Thu Aug 17 13:58:26 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Thu, 17 Aug 2000 07:58:26 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170428.QAA15385@s454.cosc.canterbury.ac.nz>
References: <14747.25702.435148.549678@beluga.mojam.com>
Message-ID: <1245608987-154490918@hypernet.com>

Greg Ewing wrote:
[snip]
> While we're on the subject, another thing that's missing is
> a platform-independent way of dealing with the notion of
> "up one directory".

os.chdir(os.pardir)

- Gordon



From paul at prescod.net  Thu Aug 17 14:56:23 2000
From: paul at prescod.net (Paul Prescod)
Date: Thu, 17 Aug 2000 08:56:23 -0400
Subject: [Python-Dev] Winreg update
References: <3993FEC7.4E38B4F1@prescod.net> <20000815195751.A16100@ludwig.cnri.reston.va.us>
Message-ID: <399BE0F7.F00765DA@prescod.net>

Greg Ward wrote:
> 
> ...
> I'm all in favour of high-level interfaces, and I'm also in favour of
> speaking the local tongue -- when in Windows, follow the Windows API (at
> least for features that are totally Windows-specific, like the
> registry).

At this point, the question is not whether to follow the Microsoft API
or not (any more). It is whether to follow the early 1990s Microsoft API
for C programmers or the new Microsoft API for Visual Basic, C#, Eiffel
and Javascript programmers.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html





From paul at prescod.net  Thu Aug 17 14:57:08 2000
From: paul at prescod.net (Paul Prescod)
Date: Thu, 17 Aug 2000 08:57:08 -0400
Subject: [Python-Dev] Lockstep iteration - eureka!
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com>
Message-ID: <399BE124.9920B0B6@prescod.net>

Tim Peters wrote:
> 
> ...
> > If you want a more efficient way to do it, it's available (just not as
> > syntactically beautiful -- same as range/xrangel).
> 
> Which way would that be?  I don't know of one, "efficient" either in the
> sense of runtime speed or of directness of expression.  

One of the reasons for adding range literals was for efficiency.

So

for x in [:len(seq)]:
  ...

should be efficient.
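For comparison, the plain-range spelling of the same lockstep loop, using no new syntax:

```python
seq = ['a', 'b', 'c']
pairs = []
# The loop index is explicit; the range literal [:len(seq)] would be
# shorthand for exactly this range() call.
for i in range(len(seq)):
    pairs.append((i, seq[i]))
print(pairs)  # [(0, 'a'), (1, 'b'), (2, 'c')]
```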

> The "loop index" isn't an accident of the way Python happens to implement
> "for" today, it's the very basis of Python's thing.__getitem__(i)/IndexError
> iteration protocol.  Exposing it is natural, because *it* is natural.

I don't think of iterators as indexing in terms of numbers. Otherwise I
could do this:

>>> a={0:"zero",1:"one",2:"two",3:"three"}
>>> for i in a:
...     print i
...

So from a Python user's point of view, for-looping has nothing to do
with integers. From a Python class/module creator's point of view it
does have to do with integers. I wouldn't be either surprised or
disappointed if that changed one day.

> Sorry, but seq.keys() just makes me squirm.  It's a little step down the
> Lispish path of making everything look the same.  I don't want to see
> float.write() either <wink>.

You'll have to explain your squeamishness better if you expect us to
channel you in the future. Why do I use the same syntax for indexing
sequences and dictionaries and for deleting sequence and dictionary
items? Is the rule: "syntax can work across types but method names
should never be shared"?

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html





From paul at prescod.net  Thu Aug 17 14:58:00 2000
From: paul at prescod.net (Paul Prescod)
Date: Thu, 17 Aug 2000 08:58:00 -0400
Subject: [Python-Dev] Winreg update
References: <3993FEC7.4E38B4F1@prescod.net> <045901c00414$27a67010$8119fea9@neil>
Message-ID: <399BE158.C2216D34@prescod.net>

Neil Hodgson wrote:
> 
> ...
> 
>    The registry is just not important enough to have this much attention or
> work.

I remain unreconstructed. My logic is as follows:

 * The registry is important enough to be in the standard library ...
unlike, let's say, functions to operate the Remote Access Service.

 * The registry is important enough that the interface to it is
documented (partially)

 * Therefore, the registry is important enough to have a decent API with
complete documentation.

You know the old adage: "anything worth doing..."

If the registry is just supposed to expose one or two functions for
distutils then it could expose one or two functions for distutils, be
called _distreg and be undocumented and formally unsupported.

>    The Microsoft.Win32.Registry* API appears to be a hacky legacy API to me.
> Its there for compatibility during the transition to the
> System.Configuration API. Read the blurb for ConfigManager to understand the
> features of System.Configuration. Its all based on XML files. What a
> surprise.

Nobody on Windows is going to migrate to XML configuration files this
year or next year. The change-over is going to be too difficult.
Predicting Microsoft configuration ideology in 2002 is highly risky. If
we need to do the registry today then we can do the registry right
today.

-- 
 Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it. 
	- http://www.cs.yale.edu/homes/perlis-alan/quotes.html





From skip at mojam.com  Thu Aug 17 14:50:28 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 17 Aug 2000 07:50:28 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>
References: <14747.28372.771170.783868@cj42289-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>
Message-ID: <14747.57236.264324.165612@beluga.mojam.com>

    Tim> Oddly enough, I don't: commonprefix worked exactly as documented
    Tim> for at least 6 years and 5 months (which is when CVS shows Guido
    Tim> checking in ntpath.py with the character-based functionality), and
    Tim> got out of synch with the docs about 5 weeks ago when Skip changed
    Tim> to this other algorithm.  Since the docs *did* match the code,
    Tim> there's no reason to believe the original author was confused, and
    Tim> no reason to believe users aren't relying on it (they've had over 6
    Tim> years to gripe <wink>).

I don't accept that a bug's having gone unnoticed for a long time is any
reason not to fix it.  Guido was also involved in the repair of the bug, and
had no objections to the fix I eventually arrived at.  Also, when I
announced my original patch the subject of the message was

    patch for os.path.commonprefix (changes semantics - pls review)

In the body of the message I said

    Since my patch changes the semantics of the function, I submitted a
    patch via SF that implements what I believe to be the correct behavior
    instead of just checking in the change, so people could comment on it.

I don't think I could have done much more to alert people to the change than
I did.  I didn't expect the patch to go into 1.6.  (Did it?  It shouldn't
have.)  I see nothing wrong with correcting the semantics of a function that
is broken when we increment the major version number of the code.

    Tim> I appreciate that some other behavior may be more useful more
    Tim> often, but if you can ever agree on what that is across platforms,
    Tim> it should be spelled via a new function name ("commonpathprefix"
    Tim> comes to mind), or optional flag (defaulting to "old behavior") on
    Tim> commonprefix (yuck!).  BTW, the presence or absence of a trailing
    Tim> path separator makes a *big* difference to many cmds on Windows,
    Tim> and you can't tell me nobody isn't currently doing e.g.

    Tim>     commonprefix(["blah.o", "blah", "blah.cpp"])

    Tim> on Unix either.

Fine.  Let's preserve the broken implementation and not break any broken
usage.  Switch it back then.

Taking a look at the copious documentation for posixpath.commonprefix:

    Return the longest string that is a prefix of all strings in
    list.  If list is empty, return the empty string ''.

I see no mention of anything in this short bit of documentation taken
completely out of context that suggests that posixpath.commonprefix has
anything to do with paths, so maybe we should move it to some other module
that has no directory path implications.  That way nobody can make the
mistake of trying to assume it operates on paths.  Perhaps string?  Oh,
that's deprecated.  Maybe we should undeprecate it or make commonprefix a
string method.  Maybe I'll just reopen the patch and assign it to Barry
since he's the string methods guru.

On a more realistic note, perhaps I should submit a patch that corrects the
documentation.

Skip



From skip at mojam.com  Thu Aug 17 14:19:46 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 17 Aug 2000 07:19:46 -0500 (CDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008170453.QAA15394@s454.cosc.canterbury.ac.nz>
References: <14747.27927.170223.873328@beluga.mojam.com>
	<200008170453.QAA15394@s454.cosc.canterbury.ac.nz>
Message-ID: <14747.55394.783997.167234@beluga.mojam.com>

    Greg> To avoid duplication of effort, how about a single function that
    Greg> does both:

    >>> files = ["/home/swen", "/home/swanson", "/home/jules"]
    >>> os.path.factorize(files)
    ("/home", ["swen", "swanson", "jules"])

Since we already have os.path.commonprefix and it's not going away, it
seemed to me that just adding a complementary function to return the
suffixes made sense.  Also, there's nothing in the name factorize that
suggests that it would split the paths at the common prefix.

It could easily be written in terms of the two:

    def factorize(files):
        pfx = os.path.commonprefix(files)
        suffixes = os.path.suffixes(pfx, files)
        return (pfx, suffixes)
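Note that os.path.suffixes in the snippet above is the proposed complement, not an existing function; a self-contained sketch of the pair, assuming the character-based commonprefix:

```python
import os.path

def suffixes(prefix, files):
    # Hypothetical complement to commonprefix: strip the common prefix,
    # and any separator it left behind, from each path.
    return [f[len(prefix):].lstrip('/') for f in files]

def factorize(files):
    pfx = os.path.commonprefix(files)
    return (pfx, suffixes(pfx, files))

result = factorize(["/home/swen", "/home/swanson", "/home/jules"])
print(result)  # ('/home/', ['swen', 'swanson', 'jules'])
```

Note the character-based commonprefix keeps the trailing slash here, so this differs slightly from Greg's desired ("/home", [...]) result.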

Skip




From bwarsaw at beopen.com  Thu Aug 17 16:35:03 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 10:35:03 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
References: <14747.22851.266303.28877@anthem.concentric.net>
	<Pine.GSO.4.10.10008170915050.24783-100000@sundial>
	<20000817083023.J376@xs4all.nl>
Message-ID: <14747.63511.725610.771162@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> Agreed. It might be technically unambiguous, but I think it's
    TW> too hard for a *human* to parse this correctly. The '>>'
    TW> version might seem more C++ish and less pythonic, but it also
    TW> stands out a lot more. The 'print from' statement could easily
    TW> (and more consistently, IMHO ;) be written as 'print <<' (not
    TW> that I like the 'print from' idea, though.)

I also played around with trying to get the grammar and parser to
recognize 'print to' and variants, and it seemed difficult and
complicated.  So I'm back to -0 on 'print to' and +1 on 'print >>'.

-Barry



From bwarsaw at beopen.com  Thu Aug 17 16:43:02 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 10:43:02 -0400 (EDT)
Subject: [Python-Dev] PEP 214, extended print statement
References: <Pine.GSO.4.10.10008170915050.24783-100000@sundial>
	<LNBBLJKPBEHFEDALKOLCEEEBHAAA.tim_one@email.msn.com>
	<20000817115416.M376@xs4all.nl>
Message-ID: <14747.63990.296049.566791@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> Really ? Hmmmm...

    TW> [Tim Peters]
    >> Me too!  +1 on changing ">>" to "to" here.  Then we can
    >> introduce

    TW> I guessed I missed the sarcasm ;-P

No, Tim just forgot to twist the blue knob while he was pressing the
shiny pedal on Guido's time machine.  I've made the same mistake
myself before -- the VRTM can be as inscrutable as the BDFL himself at
times.  Sadly, changing those opinions now would cause an irreparable
time paradox, the outcome of which would force Python to be called
Bacon and require you to type `albatross' instead of colons to start
every block.

good-thing-tim-had-the-nose-plugs-in-or-Python-would-only-work-on-
19-bit-architectures-ly y'rs,
-Barry



From Vladimir.Marangozov at inrialpes.fr  Thu Aug 17 17:09:44 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 17 Aug 2000 17:09:44 +0200 (CEST)
Subject: [Python-Dev] PyErr_NoMemory
Message-ID: <200008171509.RAA20891@python.inrialpes.fr>

The current PyErr_NoMemory() function reads:

PyObject *
PyErr_NoMemory(void)
{
        /* raise the pre-allocated instance if it still exists */
        if (PyExc_MemoryErrorInst)
                PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
        else
                /* this will probably fail since there's no memory and hee,
                   hee, we have to instantiate this class
                */
                PyErr_SetNone(PyExc_MemoryError);

        return NULL;
}

thus overriding any previous exceptions unconditionally. This is a
problem when the current exception already *is* PyExc_MemoryError,
notably when we have a chain (cascade) of memory errors. It is a
problem because the original memory error and eventually its error
message is lost.

I suggest to make this code look like:

PyObject *
PyErr_NoMemory(void)
{
	if (PyErr_ExceptionMatches(PyExc_MemoryError))
		/* already current */
		return NULL;

        /* raise the pre-allocated instance if it still exists */
        if (PyExc_MemoryErrorInst)
                PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
...


If nobody sees a problem with this, I'm very tempted to check it in.
Any objections?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From gmcm at hypernet.com  Thu Aug 17 17:22:27 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Thu, 17 Aug 2000 11:22:27 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.63990.296049.566791@anthem.concentric.net>
Message-ID: <1245596748-155226852@hypernet.com>

> No, Tim just forgot to twist the blue knob while he was pressing
> the shiny pedal on Guido's time machine.  I've made the same
> mistake myself before -- the VRTM can be as inscrutable as the
> BDFL himself at times.  Sadly, changing those opinions now would
> cause an irreparable time paradox, the outcome of which would
> force Python to be called Bacon and require you to type
> `albatross' instead of colons to start every block.

That accounts for the strange python.ba (mtime 1/1/70) I 
stumbled across this morning:

#!/usr/bin/env bacon
# released to the public domain at least one Tim Peters
import sys, os, string, tempfile
txt = string.replace(open(sys.argv[1]).read(), ':', ' albatross')
fnm = tempfile.mktemp() + '.ba'
open(fnm, 'w').write(txt)
os.system('bacon %s %s' % (fnm, string.join(sys.argv[2:])))



- Gordon



From nowonder at nowonder.de  Thu Aug 17 21:30:13 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Thu, 17 Aug 2000 19:30:13 +0000
Subject: [Python-Dev] PyErr_NoMemory
References: <200008171509.RAA20891@python.inrialpes.fr>
Message-ID: <399C3D45.95ED79D8@nowonder.de>

Vladimir Marangozov wrote:
> 
> If nobody sees a problem with this, I'm very tempted to check it in.
> Any objections?

This change makes sense to me. I can't see any harm in checking
it in.

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From tim_one at email.msn.com  Thu Aug 17 19:58:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 13:58:25 -0400
Subject: [Python-Dev] PEP 214, extended print statement
In-Reply-To: <14747.63990.296049.566791@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEFDHAAA.tim_one@email.msn.com>

> >>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:
>
>     TW> Really ? Hmmmm...
>
>     TW> [Tim Peters]
>     >> Me too!  +1 on changing ">>" to "to" here.  Then we can
>     >> introduce
>     TW> I guessed I missed the sarcasm ;-P

[Barry A. Warsaw]
> No, Tim just forgot to twist the blue knob while he was pressing the
> shiny pedal on Guido's time machine.  I've made the same mistake
> myself before -- the VRTM can be as inscrutable as the BDFL himself at
> times.  Sadly, changing those opinions now would cause an irreparable
> time paradox, the outcome of which would force Python to be called
> Bacon and require you to type `albatross' instead of colons to start
> every block.
>
> good-thing-tim-had-the-nose-plugs-in-or-Python-would-only-work-on-
> 19-bit-architectures-ly y'rs,

I have no idea what this is about.  I see an old msg from Barry voting "-1"
on changing ">>" to "to", but don't believe any such suggestion was ever
made.  And I'm sure that had such a suggestion ever been made, it would have
been voted down at once by everyone.

OTOH, there is *some* evidence that an amateur went mucking with the time
machine! No 19-bit architectures, but somewhere in a reality distortion
field around Vancouver, it appears that AIX actually survived long enough to
see the 64-bit world, and that some yahoo vendor decided to make a version
of C where sizeof(void*) > sizeof(long).  There's no way either of those
could have happened naturally.

even-worse-i-woke-up-today-*old*!-ly y'rs  - tim





From trentm at ActiveState.com  Thu Aug 17 20:21:22 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 11:21:22 -0700
Subject: screwin' with the time machine in Canada, eh (was: Re: [Python-Dev] PEP 214, extended print statement)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEFDHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 17, 2000 at 01:58:25PM -0400
References: <14747.63990.296049.566791@anthem.concentric.net> <LNBBLJKPBEHFEDALKOLCGEFDHAAA.tim_one@email.msn.com>
Message-ID: <20000817112122.A27284@ActiveState.com>

On Thu, Aug 17, 2000 at 01:58:25PM -0400, Tim Peters wrote:
> 
> OTOH, there is *some* evidence that an amateur went mucking with the time
> machine! No 19-bit architectures, but somewhere in a reality distortion
> field around Vancouver, it appears that AIX actually survived long enough to
> see the 64-bit world, and that some yahoo vendor decided to make a version
> of C where sizeof(void*) > sizeof(long).  There's no way either of those
> could have happened naturally.
> 

And though this place is supposed to be one of the more successful pot havens on
the planet I just can't seem to compete with the stuff those "vendors" in
Austin (AIX) and Seattle must have been smokin'.

<puff>-<inhale>-if-i-wasn't-seeing-flying-bunnies-i-would-swear-that-compiler
is-from-SCO-ly-y'rs - trent


> even-worse-i-woke-up-today-*old*!-ly y'rs  - tim

Come on up for a visit and we'll make you feel young again. :)


Trent

-- 
Trent Mick



From akuchlin at mems-exchange.org  Thu Aug 17 22:43:45 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 17 Aug 2000 16:43:45 -0400
Subject: [Python-Dev] Cookie.py module, and Web PEP
Message-ID: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us>

Tim O'Malley finally mailed me the correct URL for the latest version
of the cookie module: http://www.timo-tasi.org/python/Cookie.py 

*However*...  I think the Web support in Python needs more work
generally, and certainly more than can be done for 2.0.  One of my
plans for the not-too-distant future is to start writing a Python/CGI
guide, and the process of writing it is likely to shake out more
ugliness that should be fixed.

I'd like to propose a 'Web Library Enhancement PEP', and offer to
champion and write it.  Its goal would be to identify missing features
and specify them, and list other changes to improve Python as a
Web/CGI language.  Possibly the PEP would also drop
backward-compatibility cruft.

(Times like this I wish the Web-SIG hadn't been killed...)

--amk



From bwarsaw at beopen.com  Thu Aug 17 23:05:10 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 17:05:10 -0400 (EDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us>
Message-ID: <14748.21382.305979.784637@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin at mems-exchange.org> writes:

    AK> Tim O'Malley finally mailed me the correct URL for the latest
    AK> version of the cookie module:
    AK> http://www.timo-tasi.org/python/Cookie.py

    AK> *However*...  I think the Web support in Python needs more
    AK> work generally, and certainly more than can be done for
    AK> 2.0.

I agree, but I still think Cookie.py should go in the stdlib for 2.0.

-Barry



From akuchlin at mems-exchange.org  Thu Aug 17 23:13:52 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 17 Aug 2000 17:13:52 -0400
Subject: [Python-Dev] Cookie.py module, and Web PEP
In-Reply-To: <14748.21382.305979.784637@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 05:05:10PM -0400
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us> <14748.21382.305979.784637@anthem.concentric.net>
Message-ID: <20000817171352.B26730@kronos.cnri.reston.va.us>

On Thu, Aug 17, 2000 at 05:05:10PM -0400, Barry A. Warsaw wrote:
>I agree, but I still think Cookie.py should go in the stdlib for 2.0.

Fine.  Shall I just add it as-is?  (Opinion was generally positive as
I recall, unless the BDFL wants to exercise his veto for some reason.)

--amk



From thomas at xs4all.net  Thu Aug 17 23:19:42 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 23:19:42 +0200
Subject: [Python-Dev] 'import as'
Message-ID: <20000817231942.O376@xs4all.nl>

I have two remaining issues regarding the 'import as' statement, which I'm
just about ready to commit. The first one is documentation: I have
documentation patches, to the ref and the libdis sections, but I can't
really test them :P I *think* they are fine, though, and they aren't really
complicated. Should I upload a patch for them, so Fred or someone else can
look at them, or just check them in ?

The other issue is the change in semantics for 'from-import'. Currently,
'IMPORT_FROM' is a single operation that retrieves a name (possibly '*')
from the module object at TOS, and stores it directly in the local
namespace. This is contrary to 'import <module>', which pushes it onto the
stack and uses a normal STORE operation to store it. It's also necessary for
'from ... import *', which can load any number of objects.

After the patch, 'IMPORT_FROM' is only used to load normal names, and a new
opcode, 'IMPORT_STAR' (with no argument) is used for 'from module import *'.
'IMPORT_FROM' pushes the result on the stack, instead of modifying the local
namespace directly, so that it's possible to store it to a different name.
This also means that a 'global' statement now has effect on objects
'imported from' a module, *except* those imported by '*'.
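Under the proposed semantics this becomes expressible, since the aliased name
is bound through an ordinary STORE. A hedged sketch ('renamed_path' is just an
illustrative name, and this assumes the patch behaves as described):

```python
# Hedged sketch of what the patch enables: because 'from ... import ... as ...'
# now binds through a normal STORE operation, the target can be declared
# global first.  ('renamed_path' and 'bind_alias' are illustrative names.)
import os

def bind_alias():
    global renamed_path
    from os import path as renamed_path  # binds the module-level name

bind_alias()
assert renamed_path is os.path
```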

I don't think that's a big issue. 'global' is not that heavily used, and old
code mixing 'import from' and 'global' statements on the same identifier
would not have been doing what the programmer intended. However, if it *is*
a big issue, I can revert to an older version of the patch, that added a new
bytecode to handle 'from x import y as z', and leave the bytecode for the
currently valid cases unchanged. That would mean that only the '... as z'
would be affected by 'global' statements. 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From trentm at ActiveState.com  Thu Aug 17 23:22:07 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 14:22:07 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 16, 2000 at 11:34:12PM -0400
References: <20000816172425.A32338@ActiveState.com> <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>
Message-ID: <20000817142207.A5592@ActiveState.com>

On Wed, Aug 16, 2000 at 11:34:12PM -0400, Tim Peters wrote:
> [Trent Mick]
> > I am porting Python to Monterey (64-bit AIX) and have a small
> > (hopefully) question about POSIX threads.
> 
> POSIX threads. "small question".  HAHAHAHAHAHA.  Thanks, that felt good
> <wink>.

Happy to provide you with cheer. <grumble>



> > Does the POSIX threads spec specify a C type or minimum size for
> > pthread_t?
> 
> or user-space arrays of structs.  So I think it's *safe* to assume it will
> always fit in an integral type large enough to hold a pointer, but not
> guaranteed.  Plain "long" certainly isn't safe in theory.

Not for pthread ports to Win64 anyway. But that is not my concern right now.
I'll let the pthreads-on-Windows fans worry about that when the time comes.


> > this up. On Linux (mine at least):
> >   /usr/include/bits/pthreadtypes.h:120:typedef unsigned long int
> > pthread_t;
> 
> And this is a 32- or 64-bit Linux?

That was 32-bit Linux. My 64-bit Linux box is down right now; I can tell
you later if you really want to know.


> > WHAT IS UP WITH THAT return STATEMENT?
> >   return (long) *(long *) &threadid;
> 
<snip>
> 
> So, here's the scoop:
> 
<snip>

Thanks for trolling the cvs logs, Tim!

> 
> So one of two things can be done:
> 
> 1. Bite the bullet and do it correctly.  For example, maintain a static
>    dict mapping the native pthread_self() return value to Python ints,
>    and return the latter as Python's thread.get_ident() value.  Much
>    better would be to implement an x-platform thread-local storage
>    abstraction, and use that to hold a Python-int ident value.
> 
> 2. Continue in the tradition already established <wink>, and #ifdef the
>    snot out of it for Monterey.
> 
> In favor of #2, the code is already so hosed that making it hosier won't be
> a significant relative increase in its inherent hosiness.
> 
> spoken-like-a-true-hoser-ly y'rs  - tim
> 

I'm all for being a hoser then. #ifdef's a-comin' down the pipe. One thing:
the only #define that I know I have a handle on for Monterey is '_LP64'. Do
you have an objection to that (seeing as it is kind of misleading)? I will
accompany it with an explanatory comment, of course.


take-off-you-hoser-ly y'rs - wannabe Bob & Doug fan

-- 
Trent Mick
TrentM at ActiveState.com



From fdrake at beopen.com  Thu Aug 17 23:35:14 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 17:35:14 -0400 (EDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817231942.O376@xs4all.nl>
References: <20000817231942.O376@xs4all.nl>
Message-ID: <14748.23186.372772.48426@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > really test them :P I *think* they are fine, though, and they aren't really
 > complicated. Should I upload a patch for them, so Fred or someone else can
 > look at them, or just check them in ?

  Just check them in; I'll catch problems before anyone else tries to
format the stuff at any rate.
  With regard to your semantics question, I think your proposed
solution is fine.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Thu Aug 17 23:38:21 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 23:38:21 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817231942.O376@xs4all.nl>; from thomas@xs4all.net on Thu, Aug 17, 2000 at 11:19:42PM +0200
References: <20000817231942.O376@xs4all.nl>
Message-ID: <20000817233821.P376@xs4all.nl>

On Thu, Aug 17, 2000 at 11:19:42PM +0200, Thomas Wouters wrote:

> This also means that a 'global' statement now has effect on objects
> 'imported from' a module, *except* those imported by '*'.

And while I was checking my documentation patches, I found this:

Names bound by \keyword{import} statements may not occur in
\keyword{global} statements in the same scope.
\stindex{global}

But there doesn't seem to be anything to prevent it ! On my RedHat supplied
Python 1.5.2:

>>> def test():
...     global sys
...     import sys
... 
>>> test()
>>> sys
<module 'sys' (built-in)>

And on a few weeks old CVS Python:

>>> def test():
...     global sys
...     import sys
...
>>> test()
>>> sys
<module 'sys' (built-in)>

Also, mixing 'global' and 'from-import' wasn't illegal, it was just
ineffective. (That is, it didn't make the variable 'global', but it didn't
raise an exception either!)

How about making 'from module import *' a special case in this regard, and
letting 'global' operate fine on normal 'import' and 'from-import'
statements ? I can definitely see a use for it, anyway. Is this workable
(and relevant) for JPython / #Py ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From trentm at ActiveState.com  Thu Aug 17 23:41:04 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 14:41:04 -0700
Subject: [Python-Dev] [Fwd: segfault in sre on 64-bit plats]
In-Reply-To: <399B3D36.6921271@per.dem.csiro.au>; from m.favas@per.dem.csiro.au on Thu, Aug 17, 2000 at 09:17:42AM +0800
References: <399B3D36.6921271@per.dem.csiro.au>
Message-ID: <20000817144104.B7658@ActiveState.com>

On Thu, Aug 17, 2000 at 09:17:42AM +0800, Mark Favas wrote:
> [Trent]
> > This test on Win32 and Linux32 hits the recursion limit check of 10000 in
> > SRE_MATCH(). However, on Linux64 the segfault occurs at a recursion depth of
> > 7500. I don't want to just willy-nilly drop the recursion limit down to make
> > the problem go away.
> > 
> 
> Sorry for the delay - yes, I had these segfaults due to exceeding the
> stack size on Tru64 Unix (which, by default, is 2048 kbytes) before
> Fredrik introduced the recursion limit of 10000 in _sre.c. You'd expect
> a 64-bit OS to use a bit more bytes of the stack when handling recursive
> calls, but your 7500 down from 10000 sounds a bit much - unless the

Actually with pointers being twice the size the stack will presumably get
consumed more quickly (right?), so all other things being equal the earlier
stack overflow is expected.

> stack size limit you're using on Linux64 is smaller than that for
> Linux32 - what are they?

------------------- snip --------- snip ----------------------
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    struct rlimit lims;
    if (getrlimit(RLIMIT_STACK, &lims) != 0) {
        printf("error in getrlimit\n");
        exit(1);
    }
    printf("cur stack limit = %ld, max stack limit = %ld\n",
        (long)lims.rlim_cur, (long)lims.rlim_max);
    return 0;
}
------------------- snip --------- snip ----------------------

On Linux32:

    cur stack limit = 8388608, max stack limit = 2147483647

On Linux64:

    cur stack limit = 8388608, max stack limit = -1


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From cgw at fnal.gov  Thu Aug 17 23:43:38 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 16:43:38 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <20000817125903.2C29E1D0F5@dinsdale.python.org>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
Message-ID: <14748.23690.632808.944375@buffalo.fnal.gov>

This has probably been noted by somebody else already - somehow a
config.h showed up in the Include directory when I did a cvs update
today.  I assume this is an error.  It certainly keeps Python from
building on my system!



From thomas at xs4all.net  Thu Aug 17 23:46:07 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 17 Aug 2000 23:46:07 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817233821.P376@xs4all.nl>; from thomas@xs4all.net on Thu, Aug 17, 2000 at 11:38:21PM +0200
References: <20000817231942.O376@xs4all.nl> <20000817233821.P376@xs4all.nl>
Message-ID: <20000817234607.Q376@xs4all.nl>

On Thu, Aug 17, 2000 at 11:38:21PM +0200, Thomas Wouters wrote:
> On Thu, Aug 17, 2000 at 11:19:42PM +0200, Thomas Wouters wrote:
> 
> > This also means that a 'global' statement now has effect on objects
> > 'imported from' a module, *except* those imported by '*'.
> 
> And while I was checking my documentation patches, I found this:

> Names bound by \keyword{import} statements may not occur in
> \keyword{global} statements in the same scope.
> \stindex{global}

And about five lines lower, I saw this:

(The current implementation does not enforce the latter two
restrictions, but programs should not abuse this freedom, as future
implementations may enforce them or silently change the meaning of the
program.)

My only excuse is that all that TeX stuff confuzzles my eyes ;) In any case,
my point still stands: 1) can we change this behaviour even if it's
documented to be impossible, and 2) should it be documented differently,
allowing mixing of 'global' and 'import' ?

Multip-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Thu Aug 17 23:52:28 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 17:52:28 -0400 (EDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.23690.632808.944375@buffalo.fnal.gov>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
Message-ID: <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>

Charles G Waldman writes:
 > This has probably been noted by somebody else already - somehow a
 > config.h showed up in the Include directory when I did a cvs update
 > today.  I assume this is an error.  It certainly keeps Python from
 > building on my system!

  This doesn't appear to be in CVS.  If you delete the file and then do
a CVS update, does it reappear?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From cgw at fnal.gov  Thu Aug 17 23:56:55 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 16:56:55 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
	<14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
Message-ID: <14748.24487.903334.663705@buffalo.fnal.gov>


And it's not that sticky date, either (no idea how that got set!)

buffalo:Include$  cvs update -A
cvs server: Updating .
U config.h

buffalo:Include$ cvs status config.h 
===================================================================
File: config.h          Status: Up-to-date

   Working revision:    2.1
   Repository revision: 2.1     /cvsroot/python/python/dist/src/Include/Attic/config.h,v
   Sticky Tag:          (none)
   Sticky Date:         (none)
   Sticky Options:      (none)




From cgw at fnal.gov  Thu Aug 17 23:58:40 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 16:58:40 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
	<14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
Message-ID: <14748.24592.448009.515511@buffalo.fnal.gov>


Fred L. Drake, Jr. writes:
 > 
 >   This doesn't appear to be in CVS.  If you delete the file and the do
 > a CVS update, does it reappear?
 > 

Yes.

buffalo:src$ pwd
/usr/local/src/Python-CVS/python/dist/src

buffalo:src$ cd Include/

buffalo:Include$ cvs update
cvs server: Updating .
U config.h

buffalo:Include$ cvs status config.h
===================================================================
File: config.h          Status: Up-to-date

   Working revision:    2.1
   Repository revision: 2.1     /cvsroot/python/python/dist/src/Include/Attic/config.h,v
   Sticky Tag:          (none)
   Sticky Date:         2000.08.17.05.00.00
   Sticky Options:      (none)




From fdrake at beopen.com  Fri Aug 18 00:02:39 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 18:02:39 -0400 (EDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24487.903334.663705@buffalo.fnal.gov>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
	<14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
	<14748.24487.903334.663705@buffalo.fnal.gov>
Message-ID: <14748.24831.313742.340896@cj42289-a.reston1.va.home.com>

Charles G Waldman writes:
 > And it's not that sticky date, either (no idea how that got set!)

  -sigh-  Is there an entry for config.h in the CVS/entries file?  If
so, surgically remove it, then delete the config.h, then try the
update again.
  *This* is getting mysterious.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From cgw at fnal.gov  Fri Aug 18 00:07:28 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 17 Aug 2000 17:07:28 -0500 (CDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.24831.313742.340896@cj42289-a.reston1.va.home.com>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
	<14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
	<14748.24487.903334.663705@buffalo.fnal.gov>
	<14748.24831.313742.340896@cj42289-a.reston1.va.home.com>
Message-ID: <14748.25120.807735.628798@buffalo.fnal.gov>

Fred L. Drake, Jr. writes:

 >   -sigh-  Is there an entry for config.h in the CVS/entries file?  If
 > so, surgically remove it, then delete the config.h, then try the
 > update again.

Yes, this entry was present, I removed it as you suggested.

Now, when I do cvs update the config.h doesn't reappear, but I still
see "needs checkout" if I ask for cvs status:


buffalo:Include$ cvs status config.h
===================================================================
File: no file config.h          Status: Needs Checkout

   Working revision:    No entry for config.h
   Repository revision: 2.1     /cvsroot/python/python/dist/src/Include/Attic/config.h,v

I keep my local CVS tree updated daily, I never use any kind of sticky
tags, and haven't seen this sort of problem at all, up until today.
Today I also noticed the CVS server responding very slowly, so I
suspect that something may be wrong with the server.





From fdrake at beopen.com  Fri Aug 18 00:13:28 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 18:13:28 -0400 (EDT)
Subject: [Python-Dev] Include/config.h in CVS
In-Reply-To: <14748.25120.807735.628798@buffalo.fnal.gov>
References: <20000817125903.2C29E1D0F5@dinsdale.python.org>
	<14748.23690.632808.944375@buffalo.fnal.gov>
	<14748.24220.666086.9128@cj42289-a.reston1.va.home.com>
	<14748.24487.903334.663705@buffalo.fnal.gov>
	<14748.24831.313742.340896@cj42289-a.reston1.va.home.com>
	<14748.25120.807735.628798@buffalo.fnal.gov>
Message-ID: <14748.25480.976849.825016@cj42289-a.reston1.va.home.com>

Charles G Waldman writes:
 > Now, when I do cvs update the config.h doesn't reappear, but I still
 > see "needs checkout" if I ask for cvs status:
[...output elided...]

  I get exactly the same output from "cvs status", and "cvs update"
doesn't produce the file.
  Now, if I say "cvs update config.h", it shows up and doesn't get
deleted by "cvs update", but after removing the line from CVS/Entries
and removing the file, it doesn't reappear.  So you're probably set
for now.

 > I keep my local CVS tree updated daily, I never use any kind of sticky
 > tags, and haven't seen this sort of problem at all, up until today.
 > Today I also noticed the CVS server responding very slowly, so I
 > suspect that something may be wrong with the server.

  This is weird, but that doesn't sound like the problem; the SF
servers can be very slow some days, but we suspect it's just load.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From trentm at ActiveState.com  Fri Aug 18 00:15:08 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 15:15:08 -0700
Subject: [Python-Dev] autoconf question: howto add to CFLAGS and LDFLAGS?
In-Reply-To: <20000817085541.K376@xs4all.nl>; from thomas@xs4all.net on Thu, Aug 17, 2000 at 08:55:41AM +0200
References: <20000816165542.D29260@ActiveState.com> <20000817085541.K376@xs4all.nl>
Message-ID: <20000817151508.C7658@ActiveState.com>

On Thu, Aug 17, 2000 at 08:55:41AM +0200, Thomas Wouters wrote:
> On Wed, Aug 16, 2000 at 04:55:42PM -0700, Trent Mick wrote:
> 
> > I am currently trying to port Python to Monterey (64-bit AIX) and I need
> > to add a couple of Monterey specific options to CFLAGS and LDFLAGS (or to
> > whatever appropriate variables for all 'cc' and 'ld' invocations) but it
> > is not obvious *at all* how to do that in configure.in. Can anybody helpme
> > on that?
> 
> You'll have to write a shell 'case' for AIX Monterey, checking to make sure
> it is monterey, and setting LDFLAGS accordingly. If you look around in
> configure.in, you'll see a few other 'special cases', all to tune the
> way the compiler is called. Depending on what you need to do to detect
> monterey, you could fit it in one of those. Just search for 'Linux' or
> 'bsdos' to find a couple of those cases.

Right, thanks. I was looking at first to modify CFLAGS and LDFLAGS (as I
thought would be cleaner) but I have got it working by just modifying CC and
LINKCC instead (following the crowd on that one).



[Trent blames placing *.a on the cc command line for his problems and Thomas
and Barry, etc. tell Trent that that cannot be]

Okay, I don't know what I was on. I think I was flailing for things to blame.
I have got it working with simply listing the .a on the command line.



Thanks,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From bwarsaw at beopen.com  Fri Aug 18 00:26:42 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 18:26:42 -0400 (EDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us>
	<14748.21382.305979.784637@anthem.concentric.net>
	<20000817171352.B26730@kronos.cnri.reston.va.us>
Message-ID: <14748.26274.949428.733639@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin at mems-exchange.org> writes:

    AK> Fine.  Shall I just add it as-is?  (Opinion was generally
    AK> positive as I recall, unless the BDFL wants to exercise his
    AK> veto for some reason.)

Could you check and see if there are any substantial differences
between the version you've got and the version in the Mailman tree?
If there are none, then I'm +1.

Let me know if you want me to email it to you.
-Barry



From MarkH at ActiveState.com  Fri Aug 18 01:07:38 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 18 Aug 2000 09:07:38 +1000
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.57236.264324.165612@beluga.mojam.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEFODFAA.MarkH@ActiveState.com>

> I don't see that a bug's going unnoticed for a long time was any
> reason not to fix it.  Guido was also involved in the repair of
> the bug, and

I think most people agreed that the new semantics were preferable to the
old.  I believe Tim was just having a dig at the fact the documentation was
not changed, and also wearing his grumpy-conservative hat (well, it is
election fever in the US!)

But remember - the original question was if the new semantics should return
the trailing "\\" as part of the common prefix, due to the demonstrated
fact that at least _some_ code out there depends on it.

Tim wanted a bug filed, but a few other people have chimed in saying
nothing needs fixing.

So what is it?  Do I file the bug as Tim requested?   Maybe I should just
do it, and assign the bug to Guido - at least that way he can make a quick
decision?

At-least-my-code-works-again ly,

Mark.




From akuchlin at cnri.reston.va.us  Fri Aug 18 01:27:06 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 17 Aug 2000 19:27:06 -0400
Subject: [Python-Dev] Cookie.py module, and Web PEP
In-Reply-To: <14748.26274.949428.733639@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 06:26:42PM -0400
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us> <14748.21382.305979.784637@anthem.concentric.net> <20000817171352.B26730@kronos.cnri.reston.va.us> <14748.26274.949428.733639@anthem.concentric.net>
Message-ID: <20000817192706.A28225@newcnri.cnri.reston.va.us>

On Thu, Aug 17, 2000 at 06:26:42PM -0400, Barry A. Warsaw wrote:
>Could you check and see if there are any substantial differences
>between the version you've got and the version in the Mailman tree?
>If there are none, then I'm +1.

If you're referring to misc/Cookie.py in Mailman, the two files are
vastly different (though not necessarily incompatible).  The Mailman
version derives from a version of Cookie.py dating from 1998,
according to the CVS tree.  Timo's current version has three different
flavours of cookie and the Mailman version doesn't, so you wind up with a
1000-line diff between the two.

--amk




From tim_one at email.msn.com  Fri Aug 18 01:29:16 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 19:29:16 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEFODFAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEGGHAAA.tim_one@email.msn.com>

[Skip, as quoted by MarkH]
> I don't see that a bug's going unnoticed for a long
> time was any reason not to fix it.  Guido was also involved in the
> repair of the bug, and

[MarkH]
> I think most people agreed that the new semantics were preferable to the
> old.  I believe Tim was just having a dig at the fact the  documentation
> was not changed, and also wearing his grumpy-conservative hat (well, it is
> election fever in the US!)

Not at all, I meant it.  When the code and the docs have matched for more
than 6 years, there is no bug by any rational definition of the term, and
you can be certain that changing the library semantics then will break
existing code.  Presuming to change it anyway is developer arrogance of the
worst kind, no matter how many developers cheer it on.  The docs are a
contract, and if they were telling the truth, we have a responsibility to
stand by them, whether we like it or not (granted, I am overly
sensitive to contractual matters these days <0.3 wink>).

The principled solution is to put the new functionality in a new function.
Then nobody's code breaks, no user feels abused, and everyone gets what they
want.  If you despise what the old function did, that's fine too, deprecate
it -- but don't screw people who were using it happily for what it was
documented to do.
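The principled split can be illustrated with a hedged sketch: leave the
documented character-wise commonprefix alone and put the component-wise
behaviour in a new function (commonpathprefix is an invented name here, not
an API of the time):

```python
# Hedged sketch: os.path.commonprefix is documented as character-wise;
# a *new* function (hypothetical name) provides the component-wise variant,
# so existing callers of the old contract keep working.
import os.path

def commonpathprefix(paths):
    split = [p.split("/") for p in paths]
    prefix = []
    for parts in zip(*split):
        if len(set(parts)) != 1:   # components disagree: stop here
            break
        prefix.append(parts[0])
    return "/".join(prefix)

print(os.path.commonprefix(["/usr/lib", "/usr/local"]))  # character-wise: '/usr/l'
print(commonpathprefix(["/usr/lib", "/usr/local"]))      # component-wise: '/usr'
```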

> But remember - the original question was if the new semantics
> should return the trailing "\\" as part of the common prefix, due
> to the demonstrated fact that at least _some_ code out there
> depends on it.
>
> Tim wanted a bug filed, but a few other people have chimed in saying
> nothing needs fixing.
>
> So what is it?  Do I file the bug as Tim requested?   Maybe I should just
> do it, and assign the bug to Guido - at least that way he can make a quick
> decision?

By my count, Unix and Windows people have each voted for both answers, and
the Mac contingent is silently laughing <wink>.

hell-stick-in-fifty-new-functions-if-that's-what-it-takes-but-leave-
    the-old-one-alone-ly y'rs  - tim





From gstein at lyra.org  Fri Aug 18 01:41:37 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 16:41:37 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 16, 2000 at 11:34:12PM -0400
References: <20000816172425.A32338@ActiveState.com> <LNBBLJKPBEHFEDALKOLCGEDEHAAA.tim_one@email.msn.com>
Message-ID: <20000817164137.U17689@lyra.org>

On Wed, Aug 16, 2000 at 11:34:12PM -0400, Tim Peters wrote:
>...
> So one of two things can be done:
> 
> 1. Bite the bullet and do it correctly.  For example, maintain a static
>    dict mapping the native pthread_self() return value to Python ints,
>    and return the latter as Python's thread.get_ident() value.  Much
>    better would be to implement an x-platform thread-local storage
>    abstraction, and use that to hold a Python-int ident value.
> 
> 2. Continue in the tradition already established <wink>, and #ifdef the
>    snot out of it for Monterey.
> 
> In favor of #2, the code is already so hosed that making it hosier won't be
> a significant relative increase in its inherent hosiness.

The x-plat thread-local storage idea is the best thing to do. That will be
needed for some of the free-threading work in Python.

IOW, an x-plat TLS is going to be done at some point. If you need it now,
then please do it now. That will help us immeasurably in the long run.
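Tim's option 1 above (a static dict mapping the native thread id to small
ints) can be sketched in Python terms; names here are hypothetical and the
real fix would live in C, but the data structure is the same:

```python
# Hedged sketch of option 1: map the native thread id (standing in for
# pthread_self(), however wide it is) to small, stable ints handed out
# in order of first appearance.
import threading

_ident_map = {}
_ident_lock = threading.Lock()

def get_ident():
    native = threading.get_native_id()  # plays the role of pthread_self()
    with _ident_lock:
        return _ident_map.setdefault(native, len(_ident_map))
```

Repeated calls in one thread always return the same small int, so the value
fits a plain integer regardless of the width of the native id.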

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From MarkH at ActiveState.com  Fri Aug 18 01:59:18 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 18 Aug 2000 09:59:18 +1000
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817164137.U17689@lyra.org>
Message-ID: <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>

> IOW, an x-plat TLS is going to be done at some point. If you need it now,
> then please do it now. That will help us immeasurably in the long run.

I just discovered the TLS code in the Mozilla source tree.  This could be a
good place to start.

The definitions are in mozilla/nsprpub/pr/include/prthread.h, and I include
some of this file below...  I can confirm this code works _with_ Python -
but I have no idea how hard it would be to distill it _into_ Python!

Then-we-just-need-Tim-to-check-the-license-for-us<wink> ly,

Mark.

/*
 * The contents of this file are subject to the Netscape Public License
 * Version 1.1 (the "NPL"); you may not use this file except in
 * compliance with the NPL.  You may obtain a copy of the NPL at
 * http://www.mozilla.org/NPL/
 *

[MarkH - it looks good to me - very open license]
...

/*
** This routine returns a new index for the per-thread-private data
** table. The index is visible to all threads within a process. This
** index can be used with the PR_SetThreadPrivate() and
** PR_GetThreadPrivate() routines to save and retrieve data associated
** with the index for a thread.
**
** Each index is associated with a destructor function ('dtor'). The
** function may be specified as NULL when the index is created. If it
** is not NULL, the function will be called when:
**      - the thread exits and the private data for the associated index
**        is not NULL,
**      - new thread private data is set and the current private data is
**        not NULL.
**
** The index independently maintains specific values for each binding
** thread. A thread can only get access to its own thread-specific-data.
**
** Upon a new index return the value associated with the index for all
** threads is NULL, and upon thread creation the value associated with
** all indices for that thread is NULL.
**
** Returns PR_FAILURE if the total number of indices will exceed the
** maximum allowed.
*/
typedef void (PR_CALLBACK *PRThreadPrivateDTOR)(void *priv);

NSPR_API(PRStatus) PR_NewThreadPrivateIndex(
    PRUintn *newIndex, PRThreadPrivateDTOR destructor);

/*
** Define some per-thread-private data.
**     "tpdIndex" is an index into the per-thread private data table
**     "priv" is the per-thread-private data
**
** If the per-thread private data table has a previously registered
** destructor function and a non-NULL per-thread-private data value,
** the destructor function is invoked.
**
** This can return PR_FAILURE if the index is invalid.
*/
NSPR_API(PRStatus) PR_SetThreadPrivate(PRUintn tpdIndex, void *priv);

/*
** Recover the per-thread-private data for the current thread.
** "tpdIndex" is the index into the per-thread private data table.
**
** The returned value may be NULL which is indistinguishable from an error
** condition.
**
** A thread can only get access to its own thread-specific-data.
*/
NSPR_API(void*) PR_GetThreadPrivate(PRUintn tpdIndex);
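The three-call pattern above (allocate an index, set a value keyed by the calling thread, get it back) is easy to prototype at the Python level. A toy sketch for illustration only (modern syntax; `threading.get_ident` and the class name `ThreadPrivateTable` are my inventions, and NSPR's destructor machinery is omitted):

```python
import threading

class ThreadPrivateTable:
    """Toy per-thread private-data table, loosely modeled on NSPR's
    PR_NewThreadPrivateIndex / PR_SetThreadPrivate / PR_GetThreadPrivate."""

    def __init__(self):
        self._data = {}         # thread id -> {index: value}
        self._next_index = 0
        self._lock = threading.Lock()

    def new_index(self):
        # Analogue of PR_NewThreadPrivateIndex (no 'dtor' support here).
        with self._lock:
            index = self._next_index
            self._next_index += 1
            return index

    def set_private(self, index, value):
        # Analogue of PR_SetThreadPrivate: data is keyed by the calling thread.
        tid = threading.get_ident()
        with self._lock:
            self._data.setdefault(tid, {})[index] = value

    def get_private(self, index):
        # Analogue of PR_GetThreadPrivate: None plays the role of NULL,
        # so a fresh thread sees None for every index.
        tid = threading.get_ident()
        with self._lock:
            return self._data.get(tid, {}).get(index)
```

As in NSPR, the index is process-wide but each thread sees only the value it stored itself.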




From gstein at lyra.org  Fri Aug 18 02:19:17 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 17:19:17 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 18, 2000 at 09:59:18AM +1000
References: <20000817164137.U17689@lyra.org> <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
Message-ID: <20000817171917.V17689@lyra.org>

On Fri, Aug 18, 2000 at 09:59:18AM +1000, Mark Hammond wrote:
> > IOW, an x-plat TLS is going to be done at some point. If you need it now,
> > then please do it now. That will help us immeasurably in the long run.
> 
> I just discovered the TLS code in the Mozilla source tree.  This could be a
> good place to start.
> 
> The definitions are in mozilla/nsprpub/pr/include/prthread.h, and I include
> some of this file below...  I can confirm this code works _with_ Python -
> but I have no idea how hard it would be to distill it _into_ Python!
> 
> Then-we-just-need-Tim-to-check-the-license-for-us<wink> ly,

The NPL is not compatible with the Python license. While we could use their
API as a guide for our own code, we cannot use their code.


The real question is whether somebody has the time/inclination to sit down
now and write an x-plat TLS for Python. Always the problem :-)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From tim_one at email.msn.com  Fri Aug 18 02:18:08 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 20:18:08 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGIHAAA.tim_one@email.msn.com>

[MarkH]
> I just discovered the TLS code in the Mozilla source tree.  This
> could be a good place to start.
> ...
> Then-we-just-need-Tim-to-check-the-license-for-us<wink> ly,

Jesus, Mark, I haven't even been able to figure out what the license means by
"you" yet:

    1. Definitions
    ...
    1.12. "You'' (or "Your") means an individual or a legal entity
    exercising rights under, and complying with all of the terms of,
    this License or a future version of this License issued under
    Section 6.1. For legal entities, "You'' includes any entity which
    controls, is controlled by, or is under common control with You.
    For purposes of this definition, "control'' means (a) the power,
    direct or indirect, to cause the direction or management of such
    entity, whether by contract or otherwise, or (b) ownership of more
    than fifty percent (50%) of the outstanding shares or beneficial
    ownership of such entity.

at-least-they-left-little-doubt-about-the-meaning-of-"fifty-percent"-ly
    y'rs  - tim (tee eye em, and neither you nor You.  I think.)





From bwarsaw at beopen.com  Fri Aug 18 02:18:34 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 20:18:34 -0400 (EDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
References: <E13PWVt-0006wC-00@kronos.cnri.reston.va.us>
	<14748.21382.305979.784637@anthem.concentric.net>
	<20000817171352.B26730@kronos.cnri.reston.va.us>
	<14748.26274.949428.733639@anthem.concentric.net>
	<20000817192706.A28225@newcnri.cnri.reston.va.us>
Message-ID: <14748.32986.835733.255687@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin at cnri.reston.va.us> writes:

    >> Could you check and see if there are any substantial
    >> differences between the version you've got and the version in
    >> the Mailman tree?  If there are none, then I'm +1.

    AK> If you're referring to misc/Cookie.py in Mailman,

That's the one.
    
    AK> the two files are vastly different (though not necessarily
    AK> incompatible).  The Mailman version derives from a version of
    AK> Cookie.py dating from 1998, according to the CVS tree.  Timo's
    AK> current version has three different flavours of cookie, the
    AK> Mailman version doesn't, so you wind up with a 1000-line long
    AK> diff between the two.

Okay, don't sweat it.  If the new version makes sense to you, I'll
just be sure to make any Mailman updates that are necessary.  I'll
take a look once it's been checked in.

-Barry



From tim_one at email.msn.com  Fri Aug 18 02:24:04 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 20:24:04 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817171917.V17689@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEGIHAAA.tim_one@email.msn.com>

[Greg Stein]
> The NPL is not compatible with the Python license.

Or human comprehensibility either, far as I can tell.

> While we could use their API as a guide for our own code, we cannot
> use their code.
>
> The real question is whether somebody has the time/inclination to sit
> down now and write an x-plat TLS for Python. Always the problem :-)

The answer to Trent's original question is determined by whether he wants to
get a Monterey hack in as a bugfix for 2.0, or can wait a few years <0.9
wink> (the 2.0 feature set is frozen now).

If somebody wants to *buy* the time/inclination to get x-plat TLS, I'm sure
BeOpen or ActiveState would be keen to cash the check.  Otherwise ... don't
know.

all-it-takes-is-50-people-to-write-50-one-platform-packages-and-
    then-50-years-to-iron-out-their-differences-ly y'rs  - tim





From bwarsaw at beopen.com  Fri Aug 18 02:26:10 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 20:26:10 -0400 (EDT)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
References: <20000817164137.U17689@lyra.org>
	<ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
	<20000817171917.V17689@lyra.org>
Message-ID: <14748.33442.7609.588513@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> The NPL is not compatible with the Python license. While we
    GS> could use their API as a guide for our own code, we cannot use
    GS> their code.

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> Jesus, Mark, I haven't even been able to figure what the
    TP> license means by "you" yet:

Is the NPL compatible with /anything/? :)

-Barry



From trentm at ActiveState.com  Fri Aug 18 02:41:37 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 17 Aug 2000 17:41:37 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <14748.33442.7609.588513@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 08:26:10PM -0400
References: <20000817164137.U17689@lyra.org> <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com> <20000817171917.V17689@lyra.org> <14748.33442.7609.588513@anthem.concentric.net>
Message-ID: <20000817174137.B18811@ActiveState.com>

On Thu, Aug 17, 2000 at 08:26:10PM -0400, Barry A. Warsaw wrote:
> 
> Is the NPL compatible with /anything/? :)
> 


Mozilla will be dual licenced with the GPL. But you already read that.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From gstein at lyra.org  Fri Aug 18 02:55:56 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 17 Aug 2000 17:55:56 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <14748.33442.7609.588513@anthem.concentric.net>; from bwarsaw@beopen.com on Thu, Aug 17, 2000 at 08:26:10PM -0400
References: <20000817164137.U17689@lyra.org> <ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com> <20000817171917.V17689@lyra.org> <14748.33442.7609.588513@anthem.concentric.net>
Message-ID: <20000817175556.Y17689@lyra.org>

On Thu, Aug 17, 2000 at 08:26:10PM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "GS" == Greg Stein <gstein at lyra.org> writes:
> 
>     GS> The NPL is not compatible with the Python license. While we
>     GS> could use their API as a guide for our own code, we cannot use
>     GS> their code.
> 
> >>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:
> 
>     TP> Jesus, Mark, I haven't even been able to figure what the
>     TP> license means by "you" yet:
> 
> Is the NPL compatible with /anything/? :)

All kinds of stuff. It is effectively a non-viral GPL. Any changes to the
NPL/MPL licensed stuff must be released. It does not affect the stuff that
it is linked/dist'd with.

However, I was talking about the Python source code base. The Python license
and the NPL/MPL are definitely compatible. I mean that we don't want both
licenses in the Python code base.

Hmm. Should have phrased that differently.

And one nit: the NPL is very different from the MPL. NPL x.x is nasty, while
MPL 1.1 is very nice.

Note the whole MPL/GPL dual-license stuff that you see (Expat and now
Mozilla) is not because they are trying to be nice, but because they are
trying to compensate for the GPL's nasty viral attitude. You cannot use MPL
code in a GPL product because the *GPL* says so. The MPL would be perfectly
happy, but no... Therefore, people dual-license so that you can choose the
GPL when linking with GPL code.

Ooops. I'll shut up now. :-)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From bwarsaw at beopen.com  Fri Aug 18 02:49:17 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 17 Aug 2000 20:49:17 -0400 (EDT)
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
References: <20000817164137.U17689@lyra.org>
	<ECEPKNMJLHAPFFJHDOJBKEGADFAA.MarkH@ActiveState.com>
	<20000817171917.V17689@lyra.org>
	<14748.33442.7609.588513@anthem.concentric.net>
	<20000817174137.B18811@ActiveState.com>
Message-ID: <14748.34829.130052.124407@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

    TM> Mozilla will be dual licenced with the GPL. But you already
    TM> read that.

Yup, but it'll still be a big hurdle to include any GPL'd code in
Python.

-Barry



From MarkH at ActiveState.com  Fri Aug 18 02:55:02 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 18 Aug 2000 10:55:02 +1000
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817175556.Y17689@lyra.org>
Message-ID: <ECEPKNMJLHAPFFJHDOJBIEGDDFAA.MarkH@ActiveState.com>

[Greg]
> However, I was talking about the Python source code base. The
> Python license
> and the NPL/MPL are definitely compatible.

Phew.  Obviously IANAL, but I thought I was going senile.  I didn't seek
clarification for fear of further demonstrating my ignorance :-)

> I mean that we don't want both licenses in the Python code base.

That makes much more sense to me!

Thanks for the clarification.

Mark.




From greg at cosc.canterbury.ac.nz  Fri Aug 18 03:01:17 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 13:01:17 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <20000817090942.L376@xs4all.nl>
Message-ID: <200008180101.NAA15496@s454.cosc.canterbury.ac.nz>

Thomas Wouters:

> Bzzzt. This is unfortunately not true. Observe:
>
> daemon2:~/python > rmdir perl/
> rmdir: perl/: Is a directory

I'd say that's a bug in rmdir in whatever Unix you're using.
Solaris doesn't have the problem:

s454% cd ~/tmp
s454% mkdir foo
s454% rmdir foo/
s454% 

There's always room for a particular program to screw up.  However,
the usual principle in Unices is that trailing slashes are optional.

> Note that the trailing slash is added by all tab-completing shells that I
> know.

This is for the convenience of the user, who is probably going to type
another pathname component, and also to indicate that the object found
is a directory. It makes sense in an interactive tool, but not
necessarily in other places.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Fri Aug 18 03:27:33 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 13:27:33 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <1245608987-154490918@hypernet.com>
Message-ID: <200008180127.NAA15502@s454.cosc.canterbury.ac.nz>

Gordon:

> os.chdir(os.pardir)

Ah, I missed that somehow. Probably I was looking in os.path
instead of os.

Shouldn't everything to do with pathname semantics be in os.path?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Fri Aug 18 03:52:32 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 13:52:32 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.55394.783997.167234@beluga.mojam.com>
Message-ID: <200008180152.NAA15507@s454.cosc.canterbury.ac.nz>

Skip:

> Since we already have os.path.commonprefix and it's not going away,

If it's to stay the way it is, we need another function to
do what it should have been designed to do in the first place.
That means two new functions, one to find a common prefix,
and one to remove a given prefix.

But it's not clear exactly what a function such as

   removeprefix(prefix, path)

should do. What happens, for instance, if 'prefix' is not actually a
prefix of 'path', or only part of it is a prefix?

A reasonable definition might be that however much of 'prefix' is
a prefix of 'path' is removed. But that requires finding the common
prefix of the prefix and the path, which is intruding on commonprefix's 
territory!

This is what led me to think of combining the two operations
into one, which would have a clear, unambiguous definition
covering all cases.

> there's nothing in the name factorize that suggests that it would
> split the paths at the common prefix.

I'm not particularly attached to that name. Call it
'splitcommonprefix' or something if you like.
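For concreteness, here is one way such a combined operation could look. This is a hypothetical sketch only (the name comes from the thread; '/' is hard-wired as the separator for simplicity, and it splits on path components, unlike the character-by-character os.path.commonprefix being discussed):

```python
def splitcommonprefix(paths):
    """Split each path at the component-wise common prefix.

    Returns (prefix, remainders): one operation with a clear definition
    covering all cases, instead of separate commonprefix/removeprefix
    steps that can disagree about partial matches.
    """
    parts = [p.split("/") for p in paths]
    common = []
    for components in zip(*parts):
        # Stop at the first component on which the paths disagree.
        if any(c != components[0] for c in components):
            break
        common.append(components[0])
    n = len(common)
    return "/".join(common), ["/".join(p[n:]) for p in parts]
```

For example, `splitcommonprefix(["/usr/local/bin", "/usr/lib"])` yields `("/usr", ["local/bin", "lib"])` — note that "/usr/l" is never reported as a prefix, which is exactly the complaint about the character-wise version.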

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Fri Aug 18 04:02:09 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 14:02:09 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14747.57236.264324.165612@beluga.mojam.com>
Message-ID: <200008180202.OAA15511@s454.cosc.canterbury.ac.nz>

Skip:

> maybe we should move it to some other module
> that has no directory path implications.

I agree!

> Perhaps string?  Oh, that's deprecated.

Is the whole string module deprecated, or only those parts
which are now available as string methods? I think trying to
eliminate the string module altogether would be a mistake,
since it would leave nowhere for string operations that don't
make sense as methods of a string.

The current version of commonprefix is a case in point,
since it operates symmetrically on a collection of strings.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+





From gmcm at hypernet.com  Fri Aug 18 04:07:04 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Thu, 17 Aug 2000 22:07:04 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000817231942.O376@xs4all.nl>
Message-ID: <1245558070-157553278@hypernet.com>

Thomas Wouters wrote:

> The other issue is the change in semantics for 'from-import'.

Um, maybe I'm not seeing something, but isn't the effect of 
"import goom.bah as snarf" the same as "from goom import 
bah as snarf"? Both forms mean that we don't end up looking 
for (the aliased) bah in another namespace, (thus both forms 
fall prey to the circular import problem).

Why not just disallow "from ... import ... as ..."?



- Gordon



From fdrake at beopen.com  Fri Aug 18 04:13:25 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 22:13:25 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180202.OAA15511@s454.cosc.canterbury.ac.nz>
References: <14747.57236.264324.165612@beluga.mojam.com>
	<200008180202.OAA15511@s454.cosc.canterbury.ac.nz>
Message-ID: <14748.39877.3411.744665@cj42289-a.reston1.va.home.com>

Skip:
 > Perhaps string?  Oh, that's deprecated.

Greg Ewing writes:
 > Is the whole string module deprecated, or only those parts
 > which are now available as string methods? I think trying to

  I wasn't aware of any actual deprecation, just a shift of
preference.  There's not a notice of the deprecation in the docs.  ;)
In fact, there are things that are in the module that are not
available as string methods.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Fri Aug 18 04:38:06 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 22:38:06 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180127.NAA15502@s454.cosc.canterbury.ac.nz>
References: <1245608987-154490918@hypernet.com>
	<200008180127.NAA15502@s454.cosc.canterbury.ac.nz>
Message-ID: <14748.41358.61606.202184@cj42289-a.reston1.va.home.com>

Greg Ewing writes:
 > Gordon:
 > 
 > > os.chdir(os.pardir)
 > 
 > Ah, I missed that somehow. Probably I was looking in os.path
 > instead of os.
 > 
 > Shouldn't everything to do with pathname semantics be in os.path?

  Should be, yes.  I'd vote that curdir, pardir, sep, altsep, and
pathsep be added to the *path modules, and os could pick them up from
there.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From amk at s154.tnt3.ann.va.dialup.rcn.com  Fri Aug 18 04:46:32 2000
From: amk at s154.tnt3.ann.va.dialup.rcn.com (A.M. Kuchling)
Date: Thu, 17 Aug 2000 22:46:32 -0400
Subject: [Python-Dev] Request for help w/ bsddb module
Message-ID: <20000817224632.A525@207-172-146-154.s154.tnt3.ann.va.dialup.rcn.com>

[CC'ed to python-dev, python-list]

I've started writing a straight C version of Greg Smith's BSDDB3
module (http://electricrain.com/greg/python/bsddb3/), which currently
uses SWIG.  The code isn't complete enough to do anything yet, though
it does compile.  

Now I'm confronted with writing around 60 different methods for 3
different types; the job doesn't look difficult, but it does look
tedious and lengthy.  Since the task will parallelize well, I'm asking
if anyone wants to help out by writing the code for one of the types.

If you want to help, grab Greg's code from the above URL, and my
incomplete module from
ftp://starship.python.net/pub/crew/amk/new/_bsddb.c.  Send me an
e-mail telling me which set of methods (those for the DB, DBC, DB_Env
types) you want to implement before starting to avoid duplicating
work.  I'll coordinate, and will debug the final product.

(Can this get done in time for Python 2.0?  Probably.  Can it get
tested in time for 2.0?  Ummm....)

--amk









From greg at cosc.canterbury.ac.nz  Fri Aug 18 04:45:46 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 14:45:46 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEGGHAAA.tim_one@email.msn.com>
Message-ID: <200008180245.OAA15517@s454.cosc.canterbury.ac.nz>

Tim Peters:

> The principled solution is to put the new functionality in a new
> function.

I agree with that.

> By my count, Unix and Windows people have each voted for both answers, and
> the Mac contingent is silently laughing <wink>.

The Mac situation is somewhat complicated. Most of the time
a single trailing colon makes no difference, but occasionally
it does. For example, "abc" is a relative pathname, but
"abc:" is an absolute pathname!

The best way to resolve this, I think, is to decree that it
should do the same as what os.path.split does, on all
platforms. That function seems to know how to deal with 
all the tricky cases correctly.
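To illustrate the suggestion (a modern-Python demo, using the platform-specific spellings of os.path so the behavior is the same wherever you run it): os.path.split already treats a trailing separator consistently, returning an empty last component rather than losing information.

```python
import posixpath
import ntpath

# No trailing slash: split off the last component.
print(posixpath.split("/usr/local"))    # ('/usr', 'local')

# Trailing slash: the last component is empty, head keeps the path.
print(posixpath.split("/usr/local/"))   # ('/usr/local', '')

# Same convention on Windows-style paths.
print(ntpath.split("C:\\foo\\bar\\"))   # ('C:\\foo\\bar', '')
```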

Don't-even-think-of-asking-about-VMS-ly,

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From fdrake at beopen.com  Fri Aug 18 04:55:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 17 Aug 2000 22:55:59 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180245.OAA15517@s454.cosc.canterbury.ac.nz>
References: <LNBBLJKPBEHFEDALKOLCAEGGHAAA.tim_one@email.msn.com>
	<200008180245.OAA15517@s454.cosc.canterbury.ac.nz>
Message-ID: <14748.42431.165537.946022@cj42289-a.reston1.va.home.com>

Greg Ewing writes:
 > Don't-even-think-of-asking-about-VMS-ly,

  Really!  I looked at some docs for the path names on that system,
and didn't come away so much as convinced DEC/Compaq knew what they
looked like.  Or where they stopped.  Or started.
  I think a fully general path algebra will be *really* hard to do,
but it's something I've thought about a little.  Don't know when I'll
have time to dig back into it.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From greg at cosc.canterbury.ac.nz  Fri Aug 18 04:57:34 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 14:57:34 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <399B94EB.E95260EE@lemburg.com>
Message-ID: <200008180257.OAA15523@s454.cosc.canterbury.ac.nz>

M.-A. Lemburg:

> By dropping the trailing slash from the path
> you are removing important information from the path information.

No, you're not. A trailing slash on a Unix pathname doesn't
tell you anything about whether it refers to a directory.
Actually, it doesn't tell you anything at all. Slashes
simply delimit pathname components, nothing more.

A demonstration of this:

s454% cat > foo/
asdf
s454% cat foo/
asdf
s454% 

A few utilities display pathnames with trailing slashes in
order to indicate that they refer to directories, but that's
a special convention confined to those tools. It doesn't
apply in general.

The only sure way to find out whether a given pathname refers 
to a directory or not is to ask the filesystem. And if the 
object referred to doesn't exist, the question of whether it's 
a directory is meaningless.
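In code, "ask the filesystem" is just os.path.isdir, and on POSIX the trailing slash indeed makes no difference to the answer (a modern-syntax sketch; the temporary directory is created only so the example is self-contained):

```python
import os
import tempfile

d = tempfile.mkdtemp()
try:
    # Only the filesystem knows whether this is a directory;
    # the trailing slash changes nothing here.
    print(os.path.isdir(d))          # True
    print(os.path.isdir(d + "/"))    # True on POSIX
finally:
    os.rmdir(d)
```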

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Fri Aug 18 05:34:57 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 18 Aug 2000 15:34:57 +1200 (NZST)
Subject: [Python-Dev] 'import as'
In-Reply-To: <1245558070-157553278@hypernet.com>
Message-ID: <200008180334.PAA15543@s454.cosc.canterbury.ac.nz>

Gordon McMillan <gmcm at hypernet.com>:

> isn't the effect of "import goom.bah as snarf" the same as "from goom
> import bah as snarf"?

Only if goom.bah is a submodule or subpackage, I think.
Otherwise "import goom.bah" doesn't work in the first place.

I'm not sure that "import goom.bah as snarf" should
be allowed, even if goom.bah is a module. Should the
resulting object be referred to as snarf, snarf.bah
or goom.snarf?
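(Hedged with hindsight: in the Python that eventually shipped this feature, the answer is "snarf" — `import goom.bah as snarf` binds the submodule object itself to the single name snarf, so for submodules the two forms coincide. Demonstrated with a standard-library package:)

```python
import os.path as p        # binds the submodule os.path to the one name 'p'
from os import path as q   # binds the 'path' attribute of os, same object

# Neither 'p.bah'-style nor 'goom.snarf'-style naming is involved:
# both statements create exactly one new binding.
print(p is q)              # True
```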

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Fri Aug 18 05:39:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 23:39:29 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817142207.A5592@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHCHAAA.tim_one@email.msn.com>

[Trent Mick]
> ...
> I'm all for being a hoser then.

Canadian <wink>.

> #ifdef's a-comin' down the pipe.
> One thing, the only #define that I know I have a handle on for
> Monterey is '_LP64'. Do you have an objection to that (seeing as
> it is kind of misleading)? I will accompany it with an explicative
> comment of course.

Hmm!  I hate "mystery #defines", even when they do make sense.  In my last
commercial project, we had a large set of #defines in its equivalent of
pyport.h, along the lines of Py_COMPILER_MSVC, Py_COMPILER_GCC, Py_ARCH_X86,
Py_ARCH_KATMAI, etc etc.  Over time, *nobody* can remember what goofy
combinations of mystery preprocessor symbols vendors define, and vendors
come and go, and you're left with piles of code you can't make head or tail
of.  "#ifdef __SC__" -- what?

So there was A Rule that vendor-supplied #defines could *only* appear in
(that project's version of) pyport.h, used there to #define symbols whose
purpose was clear from extensive comments and naming conventions.  That
proved to be an excellent idea over years of practice!

So I'll force Python to do that someday too.  In the meantime, _LP64 is a
terrible name for this one, because its true *meaning* (the way you want to
use it) appears to be "sizeof(pthread_t) < sizeof(long)", and that's
certainly not a property of all LP64 platforms.  So how about a runtime test
for what's actually important (and it's not Monterey!)?

	if (sizeof(threadid) <= sizeof(long))
		return (long)threadid;

End of problem, right?  It's a cheap runtime test in a function whose speed
isn't critical anyway.  And it will leave the God-awful casting to the one
platform where it appears to be needed -- while also (I hope) making it
clearer that that's absolutely the wrong thing to be doing on that platform
(throwing away half the bits in the threadid value is certain to make
get_ident return the same value for two distinct threads sooner or later
...).

less-preprocessor-more-sense-ly y'rs  - tim





From tim_one at email.msn.com  Fri Aug 18 05:58:13 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 17 Aug 2000 23:58:13 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180257.OAA15523@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHFHAAA.tim_one@email.msn.com>

[Greg Ewing]
> ...
> A trailing slash on a Unix pathname doesn't tell you anything
> about whether it refers to a directory.

It does if it's also the only character in the pathname <0.5 wink>.  The
same thing bites people on Windows, except even worse, because in UNC
pathnames the leading

   \\machine\volume

"acts like a root", and the presence or absence of a trailing backslash
there makes a world of difference too.

> ...
> The only sure way to find out whether a given pathname refers
> to a directory or not is to ask the filesystem.

On Windows again,

>>> from os import path
>>> path.exists("/python16")
1
>>> path.exists("/python16/")
0
>>>

This insane behavior is displayed by the MS native APIs too, but isn't
documented (at least not last time I peed away hours looking for it).

just-more-evidence-that-windows-weenies-shouldn't-get-a-vote!-ly
    y'rs  - tim





From moshez at math.huji.ac.il  Fri Aug 18 06:39:18 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 18 Aug 2000 07:39:18 +0300 (IDT)
Subject: [Python-Dev] Cookie.py module, and Web PEP
In-Reply-To: <E13PWSp-0006w9-00@kronos.cnri.reston.va.us>
Message-ID: <Pine.GSO.4.10.10008180738100.23483-100000@sundial>

On Thu, 17 Aug 2000, Andrew Kuchling wrote:

> Tim O'Malley finally mailed me the correct URL for the latest version
> of the cookie module: http://www.timo-tasi.org/python/Cookie.py 
> 
> *However*...  I think the Web support in Python needs more work in
> generally, and certainly more than can be done for 2.0.

This is certainly true, but is that reason enough to keep Cookie.py 
out of 2.0?

(+1 on enhancing the Python standard library, of course)
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Fri Aug 18 07:26:51 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 01:26:51 -0400
Subject: indexing, indices(), irange(), list.items() (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <399BE124.9920B0B6@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>

Note that Guido rejected all the loop-gimmick proposals ("indexing",
indices(), irange(), and list.items()) on Thursday, so let's stifle this
debate until after 2.0 (or, even better, until after I'm dead <wink>).

hope-springs-eternal-ly y'rs  - tim





From tim_one at email.msn.com  Fri Aug 18 07:43:14 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 01:43:14 -0400
Subject: [Python-Dev] PyErr_NoMemory
In-Reply-To: <200008171509.RAA20891@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEHJHAAA.tim_one@email.msn.com>

[Vladimir Marangozov]
> The current PyErr_NoMemory() function reads:
>
> PyObject *
> PyErr_NoMemory(void)
> {
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
>         else
>                 /* this will probably fail since there's no
> memory and hee,
>                    hee, we have to instantiate this class
>                 */
>                 PyErr_SetNone(PyExc_MemoryError);
>
>         return NULL;
> }
>
> thus overriding any previous exceptions unconditionally. This is a
> problem when the current exception already *is* PyExc_MemoryError,
> notably when we have a chain (cascade) of memory errors. It is a
> problem because the original memory error and eventually its error
> message is lost.
>
> I suggest to make this code look like:
>
> PyObject *
> PyErr_NoMemory(void)
> {
> 	if (PyErr_ExceptionMatches(PyExc_MemoryError))
> 		/* already current */
> 		return NULL;
>
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
> ...
>
>
> If nobody sees a problem with this, I'm very tempted to check it in.
> Any objections?

Looks good to me.  And if it breaks something, it will be darned hard to
tell <wink>.





From nowonder at nowonder.de  Fri Aug 18 10:06:23 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Fri, 18 Aug 2000 08:06:23 +0000
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>
Message-ID: <399CEE7F.F2B865D2@nowonder.de>

Tim Peters wrote:
> 
> Note that Guido rejected all the loop-gimmick proposals ("indexing",
> indices(), irange(), and list.items()) on Thursday, so let's stifle this
> debate until after 2.0 (or, even better, until after I'm dead <wink>).

That's sad. :-/

One of the reasons I implemented .items() is that I wanted
to increase the probability that at least *something* is
available instead of:

  for i in range(len(list)):
      e = list[i]
      ...

or

  for i, e in zip(range(len(list)), list):
      ...

I'm going to teach Python to a lot of newbies (ca. 30) in
October. From my experience (I already tried my luck on two
individuals from that group) 'range(len(list))' is one
of the harder concepts to get across. Even indices(list)
would help here.
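
For what it's worth, the helpers under discussion are tiny; a sketch of what they might look like (the names are the proposals from this thread, not actual builtins):

```python
def indices(seq):
    # The proposed indices() helper: the valid indexes for seq.
    return list(range(len(seq)))

def items(seq):
    # The proposed list.items() analogue: (index, element) pairs.
    return list(zip(range(len(seq)), seq))
```
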

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From moshez at math.huji.ac.il  Fri Aug 18 08:12:39 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 18 Aug 2000 09:12:39 +0300 (IDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <200008180127.NAA15502@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.GSO.4.10.10008180911430.23483-100000@sundial>

On Fri, 18 Aug 2000, Greg Ewing wrote:

> Gordon:
> 
> > os.chdir(os.pardir)
> 
> Ah, I missed that somehow. Probably I was looking in os.path
> instead of os.
> 
> Shouldn't everything to do with pathname semantics be in os.path?

Nope, os.path is just for "abstract pathname algebra". Anything
dealing with real paths on a real machine belongs elsewhere (os, probably)
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Fri Aug 18 08:30:40 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 02:30:40 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000817164137.U17689@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEHNHAAA.tim_one@email.msn.com>

[Greg Stein]
> ...
> IOW, an x-plat TLS is going to be done at some point. If you need it now,
> then please do it now. That will help us immeasurably in the long run.

It appears that a correct thread.get_ident() for DEC threads needed it 6
years ago (or at least that would have been-- and remains --the most elegant
way to solve it).  Trent doesn't need it to fix Monterey, though -- his only
problem there is that the Alpha hack doesn't work on his platform, due to
the former's utter bogosity.  From Trent's POV, I bet the one-liner
workaround sounds more appealing.





From cgw at fnal.gov  Fri Aug 18 09:01:59 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 18 Aug 2000 02:01:59 -0500 (CDT)
Subject: [Python-Dev] [Bug #111620] lots of use of send() without verifying amount
 of data sent
Message-ID: <14748.57191.25642.168078@buffalo.fnal.gov>

I'm jumping in late to this discussion to mention that even
for sockets in blocking mode, you can do sends with the MSG_DONTWAIT
flag:

sock.send(msg, socket.MSG_DONTWAIT)

and this will send only as much data as can be written immediately.
I.e., a per-message non-blocking write, without putting the socket
into non-blocking mode.
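
A minimal sketch of the technique (hypothetical helper name; MSG_DONTWAIT is a platform-specific extension, so the sketch falls back to an ordinary send where it is missing):

```python
import socket

def send_nowait(sock, data):
    """Attempt one non-blocking send on an otherwise-blocking socket.

    Returns the number of bytes actually queued, which may be less
    than len(data).  MSG_DONTWAIT is not available on all platforms;
    where it is missing this degrades to an ordinary blocking send.
    """
    flags = getattr(socket, "MSG_DONTWAIT", 0)
    return sock.send(data, flags)
```
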

So if somebody decides to raise an exception on short TCP writes, they
need to be aware of this.  Personally I think it's a bad idea to be
raising an exception at all for short writes.




From thomas at xs4all.net  Fri Aug 18 09:07:43 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 09:07:43 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <1245558070-157553278@hypernet.com>; from gmcm@hypernet.com on Thu, Aug 17, 2000 at 10:07:04PM -0400
References: <20000817231942.O376@xs4all.nl> <1245558070-157553278@hypernet.com>
Message-ID: <20000818090743.S376@xs4all.nl>

On Thu, Aug 17, 2000 at 10:07:04PM -0400, Gordon McMillan wrote:
> Thomas Wouters wrote:

> > The other issue is the change in semantics for 'from-import'.

> Um, maybe I'm not seeing something, but isn't the effect of "import
> goom.bah as snarf" the same as "from goom import bah as snarf"?

I don't understand what you're saying here. 'import goom.bah' imports goom,
then bah, and the resulting module in the local namespace is 'goom'. That's
existing behaviour (which I find perplexing, but I've never run into before
;) which has changed in a reliable way: the local name being stored,
whatever it would have been in a normal import, is changed into the
"as-name" by "as <name>".

If you're saying that 'import goom.bah.baz as b' won't do what people
expect, I agree. (But neither does 'import goom.bah.baz', I think :-)

> Both forms mean that we don't end up looking for (the aliased) bah in
> another namespace, (thus both forms fall prey to the circular import
> problem).

Maybe it's the early hour, but I really don't understand the problem here.
Of course we end up looking up 'bah' in the other namespace, we have to import
it. And I don't know what it has to do with circular import either ;P

> Why not just disallow "from ... import ... as ..."?

That would kind of defeat the point of this change. I don't see any
unexpected behaviour with 'from .. import .. as ..'; the object mentioned
after 'import' and before 'as' is the object stored with the local name
which follows 'as'. Why would we want to disallow that ?
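
For what it's worth, here is how the two forms behave in today's Python, demonstrated with a stdlib package (both end up binding the submodule, exactly the object named before 'as'):

```python
import os
import os.path as p           # binds the submodule os.path, not os
from os import path as p2     # binds the same object under another name

assert p is os.path
assert p2 is os.path
```
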

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug 18 09:17:03 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 09:17:03 +0200
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEHCHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 17, 2000 at 11:39:29PM -0400
References: <20000817142207.A5592@ActiveState.com> <LNBBLJKPBEHFEDALKOLCCEHCHAAA.tim_one@email.msn.com>
Message-ID: <20000818091703.T376@xs4all.nl>

On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:

> So how about a runtime test for what's actually important (and it's not
> Monterey!)?
> 
> 	if (sizeof(threadid) <= sizeof(long))
> 		return (long)threadid;
> 
> End of problem, right?  It's a cheap runtime test in a function whose speed
> isn't critical anyway.

Note that this is what autoconf is for. It also helps to group all that
behaviour-testing code together, in one big lump no one pretends to
understand ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From effbot at telia.com  Fri Aug 18 09:35:17 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 09:35:17 +0200
Subject: [Python-Dev] 'import as'
References: <20000817231942.O376@xs4all.nl>
Message-ID: <001901c008e6$dc222760$f2a6b5d4@hagrid>

thomas wrote:
> I have two remaining issues regarding the 'import as' statement, which I'm
> just about ready to commit.

has this been tested with import hooks?

what's passed to the __import__ function's fromlist argument
if you do "from spam import egg as bacon"?

</F>




From thomas at xs4all.net  Fri Aug 18 09:30:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 09:30:49 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <001901c008e6$dc222760$f2a6b5d4@hagrid>; from effbot@telia.com on Fri, Aug 18, 2000 at 09:35:17AM +0200
References: <20000817231942.O376@xs4all.nl> <001901c008e6$dc222760$f2a6b5d4@hagrid>
Message-ID: <20000818093049.I27945@xs4all.nl>

On Fri, Aug 18, 2000 at 09:35:17AM +0200, Fredrik Lundh wrote:
> thomas wrote:
> > I have two remaining issues regarding the 'import as' statement, which I'm
> > just about ready to commit.

> has this been tested with import hooks?

Not really, I'm afraid. I don't know how to use import hooks ;-P But nothing
substantial changed, and I took care to make sure 'find_from_args' gave the
same information, still. For what it's worth, the test suite passed fine,
but I don't know if there's a test for import hooks in there.

> what's passed to the __import__ function's fromlist argument
> if you do "from spam import egg as bacon"?

The same as 'from spam import egg', currently. Better ideas are welcome, of
course, especially if you know how to use import hooks, and how they
generally are used. Pointers towards the right sections are also welcome.
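
For reference, the fromlist behaviour can be observed directly in today's Python: a non-empty fromlist makes __import__ return the submodule rather than the top-level package:

```python
import os.path

top = __import__("os.path")                         # empty fromlist: returns os
sub = __import__("os.path", fromlist=["basename"])  # non-empty: returns os.path

assert top is __import__("os")
assert sub is os.path
```
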

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Fri Aug 18 10:02:40 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 04:02:40 -0400
Subject: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]  Lockstep iteration - eureka!)
In-Reply-To: <399CEE7F.F2B865D2@nowonder.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>

I'm stifling it, but, FWIW, I've been trying to sell "indexing" for most of
my adult life <wink -- but yes, in my experience too range(len(seq)) is
extraordinarily hard to get across to newbies at first; and I *expect*
[:len(seq)] to be at least as hard>.


> -----Original Message-----
> From: nowonder at stud.ntnu.no [mailto:nowonder at stud.ntnu.no]On Behalf Of
> Peter Schneider-Kamp
> Sent: Friday, August 18, 2000 4:06 AM
> To: Tim Peters
> Cc: python-dev at python.org
> Subject: Re: indexing, indices(), irange(), list.items() (was RE:
> [Python-Dev] Lockstep iteration - eureka!)
>
>
> Tim Peters wrote:
> >
> > Note that Guido rejected all the loop-gimmick proposals ("indexing",
> > indices(), irange(), and list.items()) on Thursday, so let's stifle this
> > debate until after 2.0 (or, even better, until after I'm dead <wink>).
>
> That's sad. :-/
>
> One of the reasons I implemented .items() is that I wanted
> to increase the probability that at least *something* is
> available instead of:
>
>   for i in range(len(list)):
>       e = list[i]
>       ...
>
> or
>
>   for i, e in zip(range(len(list)), list):
>       ...
>
> I'm going to teach Python to a lot of newbies (ca. 30) in
> October. From my experience (I already tried my luck on two
> individuals from that group) 'range(len(list))' is one
> of the harder concepts to get across. Even indices(list)
> would help here.
>
> Peter
> --
> Peter Schneider-Kamp          ++47-7388-7331
> Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
> N-7050 Trondheim              http://schneider-kamp.de





From mal at lemburg.com  Fri Aug 18 10:05:30 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 10:05:30 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <200008180101.NAA15496@s454.cosc.canterbury.ac.nz>
Message-ID: <399CEE49.8E646DC3@lemburg.com>

Greg Ewing wrote:
> 
> > Note that the trailing slash is added by all tab-completing shells that I
> > know.
> 
> This is for the convenience of the user, who is probably going to type
> another pathname component, and also to indicate that the object found
> is a directory. It makes sense in an interactive tool, but not
> necessarily in other places.

Oh, C'mon Greg... haven't you read my reply to this ?

The trailing slash contains important information which might
otherwise not be regainable or only using explicit queries to
the storage system.

The "/" tells the program that the last path component is
a directory. Removing the slash will also remove that information
from the path (and yes: files without extension are legal).

Now, since e.g. posixpath is also used as basis for fiddling
with URLs and other tools using Unix style paths, removing
the slash will result in problems... just look at what your
browser does when you request http://www.python.org/search ...
the server redirects you to search/ to make sure that the 
links embedded in the page are relative to search/ and not
www.python.org/.
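
The information argument is easy to see with posixpath (today's behaviour): the trailing slash changes what basename/dirname report.

```python
import posixpath

# With the trailing slash, the last component is known to be a directory:
assert posixpath.basename("/python16/") == ""
assert posixpath.dirname("/python16/") == "/python16"

# Without it, "python16" could be a file or a directory:
assert posixpath.basename("/python16") == "python16"
assert posixpath.dirname("/python16") == "/"
```
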

Skip, have you already undone that change in CVS ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Fri Aug 18 10:10:01 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 04:10:01 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818091703.T376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com>

-0 on autoconf for this.

I doubt that Trent ever needs to know more than in this one place the
relative sizes of threadid and a long, and this entire function is braindead
(hence will be gutted someday) anyway.  Using the explicit test makes it
obvious to everyone; winding back thru layers of autoconf crust makes it A
Project and yet another goofy preprocessor symbol cluttering the code.

> -----Original Message-----
> From: Thomas Wouters [mailto:thomas at xs4all.net]
> Sent: Friday, August 18, 2000 3:17 AM
> To: Tim Peters
> Cc: Trent Mick; python-dev at python.org
> Subject: Re: [Python-Dev] pthreads question: typedef ??? pthread_t and
> hacky return statements
>
>
> On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:
>
> > So how about a runtime test for what's actually important (and it's not
> > Monterey!)?
> >
> > 	if (sizeof(threadid) <= sizeof(long))
> > 		return (long)threadid;
> >
> > End of problem, right?  It's a cheap runtime test in a function
> > whose speed isn't critical anyway.
>
> Note that this is what autoconf is for. It also helps to group all that
> behaviour-testing code together, in one big lump no one pretends to
> understand ;)
>
> --
> Thomas Wouters <thomas at xs4all.net>
>
> Hi! I'm a .signature virus! copy me into your .signature file to
> help me spread!





From mal at lemburg.com  Fri Aug 18 10:30:51 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 10:30:51 +0200
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>
Message-ID: <399CF43A.478D7165@lemburg.com>

Tim Peters wrote:
> 
> Note that Guido rejected all the loop-gimmick proposals ("indexing",
> indices(), irange(), and list.items()) on Thursday, so let's stifle this
> debate until after 2.0 (or, even better, until after I'm dead <wink>).

Hey, we still have mxTools which gives you most of those goodies 
and lots more ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From nowonder at nowonder.de  Fri Aug 18 13:07:43 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Fri, 18 Aug 2000 11:07:43 +0000
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
Message-ID: <399D18FF.BD807ED5@nowonder.de>

What about 'indexing' xor 'in' ? Like this:

for i indexing sequence:      # good
for e in sequence:            # good
for i indexing e in sequence: # BAD!

This might help Guido to understand what it does in the
'indexing' case. I admit that the third one may be a
bit harder to parse, so why not *leave it out*?

But then I'm sure this has also been discussed before.
Nevertheless I'll mail Barry and volunteer for a PEP
on this.

[Tim Peters about his life]
> I've been trying to sell "indexing" for most of my adult life

then-I'll-have-to-spend-another-life-on-it-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From sjoerd at oratrix.nl  Fri Aug 18 11:42:38 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Fri, 18 Aug 2000 11:42:38 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
Message-ID: <20000818094239.A3A1931047C@bireme.oratrix.nl>

Your changes for the import X as Y feature introduced a serious bug:
I can no longer run Python at all.

The problem is that in line 2150 of compile.c com_addopname is called
with a NULL last argument, and the first thing com_addopname does is
indirect off of that very argument.  On my machine (and on many other
machines) that results in a core dump.

In case it helps, here is the stack trace.  The crash happens when
importing site.py.  I have not made any changes to my site.py.

>  0 com_addopname(c = 0x7fff1e20, op = 90, n = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":738, 0x1006cb58]
   1 com_import_stmt(c = 0x7fff1e20, n = 0x101e2ad0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2150, 0x10071ecc]
   2 com_node(c = 0x7fff1e20, n = 0x101e2ad0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2903, 0x10074764]
   3 com_node(c = 0x7fff1e20, n = 0x101eaf68) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2855, 0x10074540]
   4 com_node(c = 0x7fff1e20, n = 0x101e2908) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2864, 0x100745b0]
   5 com_node(c = 0x7fff1e20, n = 0x1020d450) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":2855, 0x10074540]
   6 com_file_input(c = 0x7fff1e20, n = 0x101e28f0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3137, 0x10075324]
   7 compile_node(c = 0x7fff1e20, n = 0x101e28f0) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3241, 0x100759c0]
   8 jcompile(n = 0x101e28f0, filename = 0x7fff2430 = "./../Lib/site.py", base = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3400, 0x10076058]
   9 PyNode_Compile(n = 0x101e28f0, filename = 0x7fff2430 = "./../Lib/site.py") ["/ufs/sjoerd/src/Python/dist/src/Python/compile.c":3378, 0x10075f7c]
   10 parse_source_module(pathname = 0x7fff2430 = "./../Lib/site.py", fp = 0xfb563c8) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":632, 0x100151a4]
   11 load_source_module(name = 0x7fff28d8 = "site", pathname = 0x7fff2430 = "./../Lib/site.py", fp = 0xfb563c8) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":722, 0x100154c8]
   12 load_module(name = 0x7fff28d8 = "site", fp = 0xfb563c8, buf = 0x7fff2430 = "./../Lib/site.py", type = 1) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1199, 0x1001629c]
   13 import_submodule(mod = 0x101b8478, subname = 0x7fff28d8 = "site", fullname = 0x7fff28d8 = "site") ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1727, 0x10017dc4]
   14 load_next(mod = 0x101b8478, altmod = 0x101b8478, p_name = 0x7fff2d04, buf = 0x7fff28d8 = "site", p_buflen = 0x7fff28d0) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1583, 0x100174c0]
   15 import_module_ex(name = (nil), globals = (nil), locals = (nil), fromlist = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1434, 0x10016d04]
   16 PyImport_ImportModuleEx(name = 0x101d9450 = "site", globals = (nil), locals = (nil), fromlist = (nil)) ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1475, 0x10016fe0]
   17 PyImport_ImportModule(name = 0x101d9450 = "site") ["/ufs/sjoerd/src/Python/dist/src/Python/import.c":1408, 0x10016c64]
   18 initsite() ["/ufs/sjoerd/src/Python/dist/src/Python/pythonrun.c":429, 0x10053148]
   19 Py_Initialize() ["/ufs/sjoerd/src/Python/dist/src/Python/pythonrun.c":166, 0x100529c8]
   20 Py_Main(argc = 1, argv = 0x7fff2ec4) ["/ufs/sjoerd/src/Python/dist/src/Modules/main.c":229, 0x10013690]
   21 main(argc = 1, argv = 0x7fff2ec4) ["/ufs/sjoerd/src/Python/dist/src/Modules/python.c":10, 0x10012f24]
   22 __start() ["/xlv55/kudzu-apr12/work/irix/lib/libc/libc_n32_M4/csu/crt1text.s":177, 0x10012ec8]

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From fredrik at pythonware.com  Fri Aug 18 12:42:54 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 12:42:54 +0200
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
References: <20000816172425.A32338@ActiveState.com>
Message-ID: <003001c00901$11fd8ae0$0900a8c0@SPIFF>

trent mick wrote:
>     return (long) *(long *) &threadid;

from what I can tell, pthread_t is a pointer under OSF/1.

I've been using OSF/1 since the early days, and as far as I can
remember, you've never needed to use stupid hacks like that
to convert a pointer to a long integer. an ordinary (long) cast
should be sufficient.

> Could this be changed to
>   return threadid;
> safely?

safely, yes.  but since it isn't a long on all platforms, you might
get warnings from the compiler (see Mark's mail).

:::

from what I can tell, it's compatible with a long on all sane
platforms (Win64 doesn't support pthreads anyway ;-), so I guess the
right thing here is to remove volatile and simply use:

    return (long) threadid;

(Mark: can you try this out on your box?  setting up a Python 2.0
environment on our alphas would take more time than I can spare
right now...)

</F>




From gstein at lyra.org  Fri Aug 18 13:00:34 2000
From: gstein at lyra.org (Greg Stein)
Date: Fri, 18 Aug 2000 04:00:34 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 18, 2000 at 04:10:01AM -0400
References: <20000818091703.T376@xs4all.nl> <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com>
Message-ID: <20000818040034.F17689@lyra.org>

That is a silly approach, Tim. This is exactly what autoconf is for. Using
run-time logic to figure out something that is compile-time is Badness.

And the "but it will eventually be fixed" rationale is bogus. Gee, should we
just start loading bogus patches into Python, knowing that everything will
be fixed in the next version? Whoops. We forgot some. Oh, we can't change
those now. Well, gee. Maybe Py3K will fix it.

I realize that you're only -0 on this, but it should be at least +0...

Cheers,
-g

On Fri, Aug 18, 2000 at 04:10:01AM -0400, Tim Peters wrote:
> -0 on autoconf for this.
> 
> I doubt that Trent ever needs to know more than in this one place the
> relative sizes of threadid and a long, and this entire function is braindead
> (hence will be gutted someday) anyway.  Using the explicit test makes it
> obvious to everyone; winding back thru layers of autoconf crust makes it A
> Project and yet another goofy preprocessor symbol cluttering the code.
> 
> > -----Original Message-----
> > From: Thomas Wouters [mailto:thomas at xs4all.net]
> > Sent: Friday, August 18, 2000 3:17 AM
> > To: Tim Peters
> > Cc: Trent Mick; python-dev at python.org
> > Subject: Re: [Python-Dev] pthreads question: typedef ??? pthread_t and
> > hacky return statements
> >
> >
> > On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:
> >
> > > So how about a runtime test for what's actually important (and it's not
> > > Monterey!)?
> > >
> > > 	if (sizeof(threadid) <= sizeof(long))
> > > 		return (long)threadid;
> > >
> > > End of problem, right?  It's a cheap runtime test in a function
> > > whose speed isn't critical anyway.
> >
> > Note that this is what autoconf is for. It also helps to group all that
> > behaviour-testing code together, in one big lump no one pretends to
> > understand ;)
> >
> > --
> > Thomas Wouters <thomas at xs4all.net>
> >
> > Hi! I'm a .signature virus! copy me into your .signature file to
> > help me spread!
> 
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/



From gmcm at hypernet.com  Fri Aug 18 14:35:42 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 08:35:42 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818090743.S376@xs4all.nl>
References: <1245558070-157553278@hypernet.com>; from gmcm@hypernet.com on Thu, Aug 17, 2000 at 10:07:04PM -0400
Message-ID: <1245520353-159821909@hypernet.com>

Thomas Wouters wrote:
> On Thu, Aug 17, 2000 at 10:07:04PM -0400, Gordon McMillan wrote:

> > Um, maybe I'm not seeing something, but isn't the effect of
> > "import goom.bah as snarf" the same as "from goom import bah as
> > snarf"?
> 
> I don't understand what you're saying here. 'import goom.bah'
> imports goom, then bah, and the resulting module in the local
> namespace is 'goom'. That's existing behaviour (which I find
> perplexing, but I've never ran into before ;) which has changed
> in a reliable way: the local name being stored, whatever it would
> have been in a normal import, is changed into the "as-name" by
> "as <name>".

A whole lot rides on what you mean by "resulting" above. If by 
"resulting" you mean "goom", then "import goom.bah as snarf" 
would result in my namespace having "snarf" as an alias for 
"goom", and I would use "bah" as "snarf.bah". In which case 
Greg Ewing is right, and it's "import <dotted name> as ..." 
that should be outlawed, (since that's not what anyone would 
expect).

OTOH, if by "resulting" you meant "bah", things are much 
worse, because it means you must have patched code you didn't 
understand ;-b.

> If you're saying that 'import goom.bah.baz as b' won't do what
> people expect, I agree. (But neither does 'import goom.bah.baz',
> I think :-)

I disagree with the parenthetical comment. Maybe some don't 
understand the first time they see it, but it has precedent 
(Java), and it's the only form that works in circular imports.
 
> Maybe it's the early hour, but I really don't understand the
> problem here. Ofcourse we end up looking 'bah' in the other
> namespace, we have to import it. And I don't know what it has to
> do with circular import either ;P

"goom.bah" ends up looking in "goom" when *used*. If, in a 
circular import situation, "goom" is already being imported, an 
"import goom.bah" will succeed, even though it can't access 
"bah" in "goom". The code sees it in sys.modules, sees that 
it's being imported, and says, "Oh heck, lets keep going, it'll 
be there before it gets used".

But "from goom import bah" will fail with an import error 
because "goom" is an empty shell, so there's no way to grab 
"bah" and bind it into the local namespace.
 


- Gordon



From bwarsaw at beopen.com  Fri Aug 18 15:27:05 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 09:27:05 -0400 (EDT)
Subject: [Python-Dev] 'import as'
References: <1245558070-157553278@hypernet.com>
	<1245520353-159821909@hypernet.com>
Message-ID: <14749.14761.275554.898385@anthem.concentric.net>

>>>>> "Gordo" == Gordon McMillan <gmcm at hypernet.com> writes:

    Gordo> A whole lot rides on what you mean by "resulting" above. If
    Gordo> by "resulting" you mean "goom", then "import goom.bah as
    Gordo> snarf" would result in my namespace having "snarf" as an
    Gordo> alias for "goom", and I would use "bah" as "snarf.bah". In
    Gordo> which case Greg Ewing is right, and it's "import <dotted
    Gordo> name> as ..."  that should be outlawed, (since that's not
    Gordo> what anyone would expect).

Right.

    Gordo> OTOH, if by "resulting" you meant "bah", things are much 
    Gordo> worse, because it means you must have patched code you didn't 
    Gordo> understand ;-b.

But I think it /is/ useful behavior for "import <dotted name> as" to
bind the rightmost attribute to the local name.  I agree though that
if that can't be done in a sane way, it has to raise an exception.
But that will frustrate users.

-Barry



From gmcm at hypernet.com  Fri Aug 18 15:28:00 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 09:28:00 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <399CEE49.8E646DC3@lemburg.com>
Message-ID: <1245517214-160010723@hypernet.com>

M.-A. Lemburg wrote:

> ... just look at what your browser does
> when you request http://www.python.org/search ... the server
> redirects you to search/ to make sure that the links embedded in
> the page are relative to search/ and not www.python.org/.

While that seems to be what Apache does, I get 40x's from 
IIS and Netscape server. Greg Ewing's demonstrated a Unix 
where the trailing slash indicates nothing useful, Tim's 
demonstrated that Windows gets confused by a trailing slash 
unless we're talking about the root directory on a drive (and 
BTW, same results if you use backslash).

On Windows, os.path.commonprefix doesn't use normcase 
and normpath, so it's completely useless anyway. (That is, it's 
really a "string" function and has nothing to do with paths).
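
That "string function" behaviour is easy to demonstrate: commonprefix compares character by character, not component by component. (Its later, component-aware successor, os.path.commonpath, was added in Python 3.4.)

```python
import posixpath

# Character-wise: the result is not even a real directory here.
assert posixpath.commonprefix(["/usr/lib", "/usr/local"]) == "/usr/l"

# The component-aware alternative added later:
assert posixpath.commonpath(["/usr/lib", "/usr/local"]) == "/usr"
```
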

- Gordon



From nascheme at enme.ucalgary.ca  Fri Aug 18 15:55:41 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Fri, 18 Aug 2000 07:55:41 -0600
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev] Lockstep iteration - eureka!)
In-Reply-To: <399CF43A.478D7165@lemburg.com>; from M.-A. Lemburg on Fri, Aug 18, 2000 at 10:30:51AM +0200
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com> <399CF43A.478D7165@lemburg.com>
Message-ID: <20000818075541.A14919@keymaster.enme.ucalgary.ca>

On Fri, Aug 18, 2000 at 10:30:51AM +0200, M.-A. Lemburg wrote:
> Hey, we still have mxTools which gives you most of those goodies 
> and lots more ;-)

Yes, I don't understand what's wrong with a function.  It would be nice
if it was a builtin.  IMHO, all this new syntax is a bad idea.

  Neil



From fdrake at beopen.com  Fri Aug 18 16:12:24 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 10:12:24 -0400 (EDT)
Subject: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]  Lockstep iteration - eureka!)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
References: <399CEE7F.F2B865D2@nowonder.de>
	<LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
Message-ID: <14749.17480.153311.549655@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > I'm stifling it, but, FWIW, I've been trying to sell "indexing" for most of
 > my adult life <wink -- but yes, in my experience too range(len(seq)) is
 > extraordinarily hard to get across to newbies at first; and I *expect*
 > [:len(seq)] to be at least as hard>.

  And "for i indexing o in ...:" is the best proposal I've seen to
resolve the whole problem in what *I* would describe as a Pythonic
way.  And it's not a new proposal.
  I haven't read the specific patch, but bugs can be fixed.  I guess a
lot of us will just have to disagree with the Guido on this one.  ;-(
  Linguistic coup, anyone?  ;-)
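For reference, the existing idiom all of these proposals are trying to improve on (plain Python, no new syntax):

```python
# Lockstep iteration over index and element -- the idiom that is
# reportedly hard to teach to newbies.
seq = ["a", "b", "c"]
for i in range(len(seq)):
    print(i, seq[i])
```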


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Fri Aug 18 16:17:46 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 16:17:46 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000818094239.A3A1931047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Fri, Aug 18, 2000 at 11:42:38AM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>
Message-ID: <20000818161745.U376@xs4all.nl>

On Fri, Aug 18, 2000 at 11:42:38AM +0200, Sjoerd Mullender wrote:

> Your changes for the import X as Y feature introduced a serious bug:
> I can no longer run Python at all.

> The problem is that in line 2150 of compile.c com_addopname is called
> with a NULL last argument, and the first thing com_addopname does is
> indirect off of that very argument.  On my machine (and on many other
> machines) that results in a core dump.

Hm. That's very strange. Line 2150 of compile.c calls com_addopname with
'CHILD(CHILD(subn, 0), 0)' as argument. 'subn' is supposed to be a
'dotted_as_name', which always has at least one child (a dotted_name), which
also always has at least one child (a NAME). I don't see how dotted_as_name
and dotted_name can be valid nodes, but the first child of dotted_name be
NULL.

Can you confirm that the tree is otherwise unmodified ? If you have local
patches, can you try to compile and test a 'clean' tree ? I can't reproduce
this on the machines I have access to, so if you could find out what
statement exactly is causing this behaviour, I'd be very grateful. Something
like this should do the trick, changing:

                        } else
                                com_addopname(c, STORE_NAME,
                                              CHILD(CHILD(subn, 0),0));

into

                        } else {
                                if (CHILD(CHILD(subn, 0), 0) == NULL) {
                                        com_error(c, PyExc_SyntaxError,
                                                  "NULL name for import");
                                        return;
                                }
                                com_addopname(c, STORE_NAME,
                                              CHILD(CHILD(subn, 0),0));
                        }

And then recompile, and remove site.pyc if there is one. (Unlikely, if a
crash occurred while compiling site.py, but possible.) This should raise a
SyntaxError on or about the appropriate line, at least identifying what the
problem *could* be ;)

If that doesn't yield anything obvious, and you have the time for it (and
want to do it), some 'print' statements in the debugger might help. (I'm
assuming it works more or less like GDB, in which case 'print n', 'print
n->n_child[1]', 'print subn', 'print subn->n_child[0]' and 'print
subn->n_child[1]' would be useful. I'm also assuming there isn't an easier
way to debug this, like you sending me a corefile, because corefiles
normally aren't very portable :P If it *is* portable, that'd be great.)

> In case it helps, here is the stack trace.  The crash happens when
> importing site.py.  I have not made any changes to my site.py.

Oh, it's probably worth it to re-make the Grammar (just to be sure) and
remove Lib/*.pyc. The bytecode magic changes in the patch, so that last
measure should be unnecessary, but who knows :P

breaky-breaky-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug 18 16:21:20 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 10:21:20 -0400 (EDT)
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev]
 Lockstep iteration - eureka!)
In-Reply-To: <399D18FF.BD807ED5@nowonder.de>
References: <LNBBLJKPBEHFEDALKOLCIEIAHAAA.tim_one@email.msn.com>
	<399D18FF.BD807ED5@nowonder.de>
Message-ID: <14749.18016.323403.295212@cj42289-a.reston1.va.home.com>

Peter Schneider-Kamp writes:
 > What about 'indexing' xor 'in' ? Like that:
 > 
 > for i indexing sequence:      # good
 > for e in sequence:            # good
 > for i indexing e in sequence: # BAD!
 > 
 > This might help Guido to understand what it does in the
 > 'indexing' case. I admit that the third one may be a
 > bit harder to parse, so why not *leave it out*?

  I hadn't considered *not* using an "in" clause, but that is actually
pretty neat.  I'd like to see all of these allowed; disallowing "for i
indexing e in ...:" reduces the intended functionality substantially.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gmcm at hypernet.com  Fri Aug 18 16:42:20 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 10:42:20 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <14749.14761.275554.898385@anthem.concentric.net>
Message-ID: <1245512755-160278942@hypernet.com>

Barry "5 String" Warsaw wrote:

> But I think it /is/ useful behavior for "import <dotted name> as"
> to bind the rightmost attribute to the local name. 

That is certainly what I would expect (and thus the confusion 
created by my original post).

> I agree
> though that if that can't be done in a sane way, it has to raise
> an exception. But that will frustrate users.

"as" is minorly useful in dealing with name clashes between 
packages, and with reallyreallylongmodulename.

Unfortunately, it's also yet another way of screwing up circular 
imports and reloads (unless you go whole hog and use Jim 
Fulton's idea of an association object).

Then there's all the things that can go wrong with relative 
imports (loading the same module twice; masking anything 
outside the package with the same name).

It's not surprising that most import problems posted to c.l.py 
get more wrong answers than right. Unfortunately, there's no 
way to fix it in a backwards compatible way.

So I'm -0: it just adds complexity to an already overly complex 
area.

- Gordon



From bwarsaw at beopen.com  Fri Aug 18 16:55:18 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 10:55:18 -0400 (EDT)
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items() (was RE: [Python-Dev] Lockstep iteration - eureka!)
References: <LNBBLJKPBEHFEDALKOLCEEHHHAAA.tim_one@email.msn.com>
	<399CF43A.478D7165@lemburg.com>
	<20000818075541.A14919@keymaster.enme.ucalgary.ca>
Message-ID: <14749.20054.495550.467507@anthem.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

    NS> On Fri, Aug 18, 2000 at 10:30:51AM +0200, M.-A. Lemburg wrote:
    >> Hey, we still have mxTools which gives you most of those
    >> goodies and lots more ;-)

    NS> Yes, I don't understand what's wrong with a function.  It
    NS> would be nice if it was a builtin.  IMHO, all this new syntax
    NS> is a bad idea.

I agree, but Guido nixed even the builtin.  Let's move on; there's
always Python 2.1.

-Barry



From akuchlin at mems-exchange.org  Fri Aug 18 17:00:37 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 11:00:37 -0400
Subject: [Python-Dev] Adding insint() function
Message-ID: <20000818110037.C27419@kronos.cnri.reston.va.us>

Four modules define insint() functions to insert an integer into a
dictionary in order to initialize constants in their module
dictionaries:

kronos Modules>grep -l insint *.c
pcremodule.c
shamodule.c
socketmodule.c
zlibmodule.c
kronos Modules>          

(Hm... I was involved with 3 of them...)  Other modules don't use a
helper function, but just do PyDict_SetItemString(d, "foo",
PyInt_FromLong(...)) directly.  

This duplication bugs me.  Shall I submit a patch to add an API
convenience function to do this, and change the modules to use it?
Suggested prototype and name: PyDict_InsertInteger( dict *, string,
long)

--amk






From bwarsaw at beopen.com  Fri Aug 18 17:06:11 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 11:06:11 -0400 (EDT)
Subject: [Python-Dev] 'import as'
References: <14749.14761.275554.898385@anthem.concentric.net>
	<1245512755-160278942@hypernet.com>
Message-ID: <14749.20707.347217.763385@anthem.concentric.net>

>>>>> "Gordo" == Gordon "Punk Cellist" McMillan <gmcm at hypernet.com> writes:

    Gordo> So I'm -0: it just adds complexity to an already overly
    Gordo> complex area.

I agree, -0 from me too.



From sjoerd at oratrix.nl  Fri Aug 18 17:06:37 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Fri, 18 Aug 2000 17:06:37 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: Your message of Fri, 18 Aug 2000 16:17:46 +0200.
             <20000818161745.U376@xs4all.nl> 
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> 
            <20000818161745.U376@xs4all.nl> 
Message-ID: <20000818150639.6685C31047C@bireme.oratrix.nl>

Ok, problem solved.

The problem was that because of your (I think it was you :-) earlier
change to have a Makefile in Grammar, I had an old graminit.c lying
around in my build directory.  I don't build in the source directory
and the changes for a Makefile in Grammar resulted in a file
graminit.c in the wrong place.  My subsequent change to this part of
the build process resulted in a different place for graminit.c and I
never removed the bogus graminit.c (because I didn't know about it).
However, the compiler found the bogus one, which is why Python
crashed.

On Fri, Aug 18 2000 Thomas Wouters wrote:

> On Fri, Aug 18, 2000 at 11:42:38AM +0200, Sjoerd Mullender wrote:
> 
> > Your changes for the import X as Y feature introduced a serious bug:
> > I can no longer run Python at all.
> 
> > The problem is that in line 2150 of compile.c com_addopname is called
> > with a NULL last argument, and the first thing com_addopname does is
> > indirect off of that very argument.  On my machine (and on many other
> > machines) that results in a core dump.
> 
> Hm. That's very strange. Line 2150 of compile.c calls com_addopname with
> 'CHILD(CHILD(subn, 0), 0)' as argument. 'subn' is supposed to be a
> 'dotted_as_name', which always has at least one child (a dotted_name), which
> also always has at least one child (a NAME). I don't see how dotted_as_name
> and dotted_name can be valid nodes, but the first child of dotted_name be
> NULL.
> 
> Can you confirm that the tree is otherwise unmodified ? If you have local
> patches, can you try to compile and test a 'clean' tree ? I can't reproduce
> this on the machines I have access to, so if you could find out what
> statement exactly is causing this behaviour, I'd be very grateful. Something
> like this should do the trick, changing:
> 
>                         } else
>                                 com_addopname(c, STORE_NAME,
>                                               CHILD(CHILD(subn, 0),0));
> 
> into
> 
>                         } else {
>                                 if (CHILD(CHILD(subn, 0), 0) == NULL) {
>                                         com_error(c, PyExc_SyntaxError,
>                                                   "NULL name for import");
>                                         return;
>                                 }
>                                 com_addopname(c, STORE_NAME,
>                                               CHILD(CHILD(subn, 0),0));
>                         }
> 
> And then recompile, and remove site.pyc if there is one. (Unlikely, if a
> crash occurred while compiling site.py, but possible.) This should raise a
> SyntaxError on or about the appropriate line, at least identifying what the
> problem *could* be ;)
> 
> If that doesn't yield anything obvious, and you have the time for it (and
> want to do it), some 'print' statements in the debugger might help. (I'm
> assuming it works more or less like GDB, in which case 'print n', 'print
> n->n_child[1]', 'print subn', 'print subn->n_child[0]' and 'print
> subn->n_child[1]' would be useful. I'm also assuming there isn't an easier
> way to debug this, like you sending me a corefile, because corefiles
> normally aren't very portable :P If it *is* portable, that'd be great.)
> 
> > In case it helps, here is the stack trace.  The crash happens when
> > importing site.py.  I have not made any changes to my site.py.
> 
> Oh, it's probably worth it to re-make the Grammar (just to be sure) and
> remove Lib/*.pyc. The bytecode magic changes in the patch, so that last
> measure should be unnecessary, but who knows :P
> 
> breaky-breaky-ly y'rs,
> -- 
> Thomas Wouters <thomas at xs4all.net>
> 
> Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
> 

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From bwarsaw at beopen.com  Fri Aug 18 17:27:41 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 11:27:41 -0400 (EDT)
Subject: [Python-Dev] Adding insint() function
References: <20000818110037.C27419@kronos.cnri.reston.va.us>
Message-ID: <14749.21997.872741.463566@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin at mems-exchange.org> writes:

    AK> Four modules define insint() functions to insert an integer
    AK> into a dictionary in order to initialize constants in their
    AK> module dictionaries:

    | kronos Modules>grep -l insint *.c
    | pcremodule.c
    | shamodule.c
    | socketmodule.c
    | zlibmodule.c
    | kronos Modules>          

    AK> (Hm... I was involved with 3 of them...)  Other modules don't
    AK> use a helper function, but just do PyDict_SetItemString(d,
    AK> "foo", PyInt_FromLong(...)) directly.

    AK> This duplication bugs me.  Shall I submit a patch to add an
    AK> API convenience function to do this, and change the modules to
    AK> use it?  Suggested prototype and name: PyDict_InsertInteger(
    AK> dict *, string, long)

+0, but it should probably be called PyDict_SetItemSomething().  It
seems more related to the other PyDict_SetItem*() functions, even
though in those cases the `*' refers to the type of the key, not the
value.

-Barry



From akuchlin at mems-exchange.org  Fri Aug 18 17:29:47 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 11:29:47 -0400
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <14749.21997.872741.463566@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 11:27:41AM -0400
References: <20000818110037.C27419@kronos.cnri.reston.va.us> <14749.21997.872741.463566@anthem.concentric.net>
Message-ID: <20000818112947.F27419@kronos.cnri.reston.va.us>

On Fri, Aug 18, 2000 at 11:27:41AM -0400, Barry A. Warsaw wrote:
>+0, but it should probably be called PyDict_SetItemSomething().  It
>seems more related to the other PyDict_SetItem*() functions, even
>though in those cases the `*' refers to the type of the key, not the
>value.

PyDict_SetItemInteger seems misleading; PyDict_SetItemStringToInteger 
is simply too long.  PyDict_SetIntegerItem, maybe?  :)

Anyway, I'll start working on a patch and change the name later once
there's a good suggestion.

--amk



From mal at lemburg.com  Fri Aug 18 17:41:14 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 17:41:14 +0200
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <1245517214-160010723@hypernet.com>
Message-ID: <399D591A.F909CF9D@lemburg.com>

Gordon McMillan wrote:
> 
> M.-A. Lemburg wrote:
> 
> > ... just look at what your browser does
> > when you request http://www.python.org/search ... the server
> > redirects you to search/ to make sure that the links embedded in
> > the page are relative to search/ and not www.python.org/.
> 
> While that seems to be what Apache does, I get 40x's from
> IIS and Netscape server. Greg Ewing's demonstrated a Unix
> where the trailing slash indicates nothing useful, Tim's
> demonstrated that Windows gets confused by a trailing slash
> unless we're talking about the root directory on a drive (and
> BTW, same results if you use backslash).
> 
> On Windows, os.path.commonprefix doesn't use normcase
> and normpath, so it's completely useless anyway. (That is, it's
> really a "string" function and has nothing to do with paths).

I still don't get it: what's the point in carelessly dropping
valid and useful information for no obvious reason at all ?

Besides the previous behaviour was documented and most probably
used in some apps. Why break those ?

And last not least: what if the directory in question doesn't
even exist anywhere and is only encoded in the path by the fact
that there is a slash following it ?

Puzzled by needless discussions ;-),
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Fri Aug 18 17:42:59 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 11:42:59 -0400 (EDT)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <14749.21997.872741.463566@anthem.concentric.net>
References: <20000818110037.C27419@kronos.cnri.reston.va.us>
	<14749.21997.872741.463566@anthem.concentric.net>
Message-ID: <14749.22915.717712.613834@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > +0, but it should probably be called PyDict_SetItemSomething().  It
 > seems more related to the other PyDict_SetItem*() functions, even
 > though in those cases the `*' refers to the type of the key, not the
 > value.

  Hmm... How about PyDict_SetItemStringInt() ?  It's still long, but I
don't think that's actually a problem.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Fri Aug 18 18:22:46 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 18:22:46 +0200
Subject: [Python-Dev] 'import as'
In-Reply-To: <14749.20707.347217.763385@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 11:06:11AM -0400
References: <14749.14761.275554.898385@anthem.concentric.net> <1245512755-160278942@hypernet.com> <14749.20707.347217.763385@anthem.concentric.net>
Message-ID: <20000818182246.V376@xs4all.nl>

On Fri, Aug 18, 2000 at 11:06:11AM -0400, Barry A. Warsaw wrote:

>     Gordo> So I'm -0: it just adds complexity to an already overly
>     Gordo> complex area.

> I agree, -0 from me too.

What are we voting on, here ?

import <name> as <localname> (in general)

or

import <name1>.<nameN>+ as <localname> (where localname turns out to be an alias
for name1)

or

import <name1>.<nameN>*.<nameX> as <localname> (where localname turns out to be an
alias for nameX, that is, the last part of the dotted name that's being
imported)

? 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Fri Aug 18 18:28:49 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 12:28:49 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <399D591A.F909CF9D@lemburg.com>
Message-ID: <1245506365-160663281@hypernet.com>

M.-A. Lemburg wrote:
> Gordon McMillan wrote:
> > 
> > M.-A. Lemburg wrote:
> > 
> > > ... just look at what your browser does
> > > when you request http://www.python.org/search ... the server
> > > redirects you to search/ to make sure that the links embedded
> > > in the page are relative to search/ and not www.python.org/.
> > 
> > While that seems to be what Apache does, I get 40x's from
> > IIS and Netscape server. Greg Ewing's demonstrated a Unix
> > where the trailing slash indicates nothing useful, Tim's
> > demonstrated that Windows gets confused by a trailing slash
> > unless we're talking about the root directory on a drive (and
> > BTW, same results if you use backslash).
> > 
> > On Windows, os.path.commonprefix doesn't use normcase
> > and normpath, so it's completely useless anyway. (That is, it's
> > really a "string" function and has nothing to do with paths).
> 
> I still don't get it: what's the point in carelessly dropping
> valid and useful information for no obvious reason at all ?

I wasn't advocating anything. I was pointing out that it's not 
necessarily "valid" or "useful" information in all contexts.
 
> Besides the previous behaviour was documented and most probably
> used in some apps. Why break those ?

I don't think commonprefix should be changed, precisely 
because it might break apps. I also think it should not live in 
os.path, because it is not an abstract path operation. It's just 
a string operation. But it's there, so the best I can advise is 
not to use it.
 
> And last not least: what if the directory in question doesn't
> even exist anywhere and is only encoded in the path by the fact
> that there is a slash following it ?

If it doesn't exist, it's not a directory with or without a slash 
following it. The fact that Python largely successfully reuses 
os.path code to deal with URLs does not mean that the 
syntax of URLs should be mistaken for the syntax of file 
systems, even at an abstract level. At the level where 
commonprefix operates, abstraction isn't even a concept.

- Gordon



From gmcm at hypernet.com  Fri Aug 18 18:33:12 2000
From: gmcm at hypernet.com (Gordon McMillan)
Date: Fri, 18 Aug 2000 12:33:12 -0400
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818182246.V376@xs4all.nl>
References: <14749.20707.347217.763385@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 11:06:11AM -0400
Message-ID: <1245506103-160679086@hypernet.com>

Thomas Wouters wrote:
> On Fri, Aug 18, 2000 at 11:06:11AM -0400, Barry A. Warsaw wrote:
> 
> >     Gordo> So I'm -0: it just adds complexity to an already
> >     overly Gordo> complex area.
> 
> > I agree, -0 from me too.
> 
> What are we voting on, here ?
> 
> import <name> as <localname> (in general)

 -0

> import <name1>.<nameN>+ as <localname> (where localname turns out to be
> an alias for name1)

-1000

> import <name1>.<nameN>*.<nameX> as <localname> (where localname
> turns out to be an alias for nameX, that is, the last part of the
> dotted name that's being imported)

-0



- Gordon



From thomas at xs4all.net  Fri Aug 18 18:41:31 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 18:41:31 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000818150639.6685C31047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Fri, Aug 18, 2000 at 05:06:37PM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl>
Message-ID: <20000818184131.W376@xs4all.nl>

On Fri, Aug 18, 2000 at 05:06:37PM +0200, Sjoerd Mullender wrote:
> Ok, problem solved.

> The problem was that because of your (I think it was you :-) earlier
> change to have a Makefile in Grammar, I had an old graminit.c lying
> around in my build directory. 

Right. That patch was mine, and I think we should remove it again :P We
aren't changing Grammar *that* much, and we'll just have to 'make Grammar'
individually. Grammar now also gets re-made much too often (though that
doesn't really present any problems, it's just a tad sloppy.) Do we really
want that in the released package ?

The Grammar dependency can't really be solved until dependencies in general
are handled better (or at all), especially between directories. It'll only
create more Makefile spaghetti :P
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug 18 18:41:35 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 12:41:35 -0400 (EDT)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <1245506365-160663281@hypernet.com>
References: <399D591A.F909CF9D@lemburg.com>
	<1245506365-160663281@hypernet.com>
Message-ID: <14749.26431.198802.970572@cj42289-a.reston1.va.home.com>

Gordon McMillan writes:
 > I don't think commonprefix should be changed, precisely 
 > because it might break apps. I also think it should not live in 
 > os.path, because it is not an abstract path operation. It's just 
 > a string operation. But it's there, so the best I can advise is 
 > not to use it.

  This works.  Let's accept (some variant of) Skip's desired
functionality as os.path.splitprefix(); this avoids breaking existing
code and uses a name that's consistent with the others.  The result
can be (prefix, [list of suffixes]).  Trailing slashes should be
handled so that os.path.join(prefix, suffix) does the "right thing".
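A hypothetical sketch of what such a splitprefix() could look like (the name, return shape, and semantics are only proposed in this thread, and Skip's version may well differ); the key difference from commonprefix is that it splits on path components rather than characters:

```python
import os.path

def splitprefix(paths, sep="/"):
    """Hypothetical sketch of the proposed os.path.splitprefix():
    return (prefix, [suffixes]) where prefix is the longest common
    *component-wise* prefix, so that os.path.join(prefix, suffix)
    rebuilds each original path."""
    split = [p.split(sep) for p in paths]
    prefix_parts = []
    for parts in zip(*split):
        if len(set(parts)) != 1:    # components differ: stop here
            break
        prefix_parts.append(parts[0])
    prefix = sep.join(prefix_parts)
    suffixes = [p[len(prefix):].lstrip(sep) for p in paths]
    return prefix, suffixes

print(splitprefix(["/usr/lib/python", "/usr/local/lib"]))
# -> ('/usr', ['lib/python', 'local/lib'])
```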


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Fri Aug 18 18:37:02 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 12:37:02 -0400 (EDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818182246.V376@xs4all.nl>
References: <14749.14761.275554.898385@anthem.concentric.net>
	<1245512755-160278942@hypernet.com>
	<14749.20707.347217.763385@anthem.concentric.net>
	<20000818182246.V376@xs4all.nl>
Message-ID: <14749.26158.777771.458507@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > What are we voting on, here ?

  We should be really clear about this, since it is confusing.

 > import <name> as <localname> (in general)

  +1 for this basic usage.

 > import <name1>.<nameN>+ as <localname> (where localname turns out to be an alias
 > for name1)

  -1, because it's confusing for users

 > import <name1>.<nameN>*.<nameX> as <localname> (where localname turns out to be an
 > alias for nameX, that is, the last part of the dotted name that's being
 > imported)

  +1 on the idea, but the circular import issue is very real and I'm
not sure of the best way to solve it.
  For now, let's support:

	import name1 as localname

where neither name1 nor localname can be dotted.  The dotted-name1
case can be added when the circular import issue can be dealt with.
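A minimal illustration of that basic, non-dotted form (the stdlib string module is just an arbitrary example):

```python
# Bind a module under a different (non-dotted) local name.
import string as s

print(s.ascii_lowercase[:3])   # -> 'abc'
```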


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From trentm at ActiveState.com  Fri Aug 18 18:54:12 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 09:54:12 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEHNHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 18, 2000 at 02:30:40AM -0400
References: <20000817164137.U17689@lyra.org> <LNBBLJKPBEHFEDALKOLCGEHNHAAA.tim_one@email.msn.com>
Message-ID: <20000818095412.C11316@ActiveState.com>

On Fri, Aug 18, 2000 at 02:30:40AM -0400, Tim Peters wrote:
> [Greg Stein]
> > ...
> > IOW, an x-plat TLS is going to be done at some point. If you need it now,
> > then please do it now. That will help us immeasurably in the long run.
> 
> the former's utter bogosity.  From Trent's POV, I bet the one-liner
> workaround sounds more appealing.
> 

Yes.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From trentm at ActiveState.com  Fri Aug 18 18:56:24 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 09:56:24 -0700
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818040034.F17689@lyra.org>; from gstein@lyra.org on Fri, Aug 18, 2000 at 04:00:34AM -0700
References: <20000818091703.T376@xs4all.nl> <LNBBLJKPBEHFEDALKOLCCEIBHAAA.tim_one@email.msn.com> <20000818040034.F17689@lyra.org>
Message-ID: <20000818095624.D11316@ActiveState.com>

On Fri, Aug 18, 2000 at 04:00:34AM -0700, Greg Stein wrote:
> That is a silly approach, Tim. This is exactly what autoconf is for. Using
> run-time logic to figure out something that is compile-time is Badness.
> 
> > > On Thu, Aug 17, 2000 at 11:39:29PM -0400, Tim Peters wrote:
> > >
> > > > So how about a runtime test for what's actually important (and it's not
> > > > Monterey!)?
> > > >
> > > > 	if (sizeof(threadid) <= sizeof(long))
> > > > 		return (long)threadid;
> > > >
> > > > End of problem, right?  It's a cheap runtime test in a function
> > > > whose speed isn't critical anyway.
> > >

I am inclined to agree with Thomas and Greg on this one. Why not check for
sizeof(pthread_t) if pthread.h exists and test:

#if SIZEOF_PTHREAD_T < SIZEOF_LONG
    return (long)threadid;
#endif


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Fri Aug 18 19:09:05 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 13:09:05 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818040034.F17689@lyra.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEIPHAAA.tim_one@email.msn.com>

[Greg Stein]
> That is a silly approach, Tim. This is exactly what autoconf is for.
> Using run-time logic to figure out something that is compile-time
> is Badness.

I remain -0.  autoconf may work slick as snot on Unix derivatives, but each
new symbol it introduces also serves to make people on other platforms
scratch their heads about what it means and what they're supposed to do with
it in their manual config files.  In this case, the alternative is an
obvious and isolated 1-liner that's transparent on inspection regardless of
the reader's background.  You haven't noted a *downside* to that approach
that I can see, and your technical objection is incorrect:  sizeof is not a
compile-time operation (try it in an #if, but make very sure it does what
you think it does <wink>).





From tim_one at email.msn.com  Fri Aug 18 19:09:07 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 13:09:07 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <003001c00901$11fd8ae0$0900a8c0@SPIFF>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIPHAAA.tim_one@email.msn.com>

[/F]
> from what I can tell, it's compatible with a long on all sane plat-
> forms (Win64 doesn't support pthreads anyway ;-), so I guess the
> right thing here is to remove volatile and simply use:
>
>     return (long) threadid;

That's what the code originally did, and the casting was introduced in
version 2.5.  As for the "volatile", Vladimir reported that he needed that.

This isn't worth the brain cell it's getting.  Put in the hack and move on
already!





From trentm at ActiveState.com  Fri Aug 18 19:23:39 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 10:23:39 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
In-Reply-To: <200008180501.WAA28237@slayer.i.sourceforge.net>; from bwarsaw@users.sourceforge.net on Thu, Aug 17, 2000 at 10:01:22PM -0700
References: <200008180501.WAA28237@slayer.i.sourceforge.net>
Message-ID: <20000818102339.E11316@ActiveState.com>

On Thu, Aug 17, 2000 at 10:01:22PM -0700, Barry Warsaw wrote:
> Update of /cvsroot/python/python/dist/src/Objects
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv28173
> 
> Modified Files:
> 	object.c 
> Log Message:
> make_pair(): When comparing the pointers, they must be cast to integer
> types (i.e. Py_uintptr_t, our spelling of C9X's uintptr_t).  ANSI
> specifies that pointer compares other than == and != to non-related
> structures are undefined.  This quiets an Insure portability warning.
> 
> 
> Index: object.c
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Objects/object.c,v
> retrieving revision 2.95
> retrieving revision 2.96
> diff -C2 -r2.95 -r2.96
> *** object.c	2000/08/16 12:24:51	2.95
> --- object.c	2000/08/18 05:01:19	2.96
> ***************
> *** 372,375 ****
> --- 372,377 ----
>   {
>   	PyObject *pair;
> + 	Py_uintptr_t iv = (Py_uintptr_t)v;
> + 	Py_uintptr_t iw = (Py_uintptr_t)w;
>   
>   	pair = PyTuple_New(2);
> ***************
> *** 377,381 ****
>   		return NULL;
>   	}
> ! 	if (v <= w) {
>   		PyTuple_SET_ITEM(pair, 0, PyLong_FromVoidPtr((void *)v));
>   		PyTuple_SET_ITEM(pair, 1, PyLong_FromVoidPtr((void *)w));
> --- 379,383 ----
>   		return NULL;
>   	}
> ! 	if (iv <= iw) {
>   		PyTuple_SET_ITEM(pair, 0, PyLong_FromVoidPtr((void *)v));
>   		PyTuple_SET_ITEM(pair, 1, PyLong_FromVoidPtr((void *)w));
> ***************
> *** 488,492 ****
>   	}
>   	if (vtp->tp_compare == NULL) {
> ! 		return (v < w) ? -1 : 1;
>   	}
>   	_PyCompareState_nesting++;
> --- 490,496 ----
>   	}
>   	if (vtp->tp_compare == NULL) {
> ! 		Py_uintptr_t iv = (Py_uintptr_t)v;
> ! 		Py_uintptr_t iw = (Py_uintptr_t)w;
> ! 		return (iv < iw) ? -1 : 1;
>   	}
>   	_PyCompareState_nesting++;
> 

Can't you just do the cast for the comparison instead of making new
variables?

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From bwarsaw at beopen.com  Fri Aug 18 19:41:50 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 13:41:50 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
References: <200008180501.WAA28237@slayer.i.sourceforge.net>
	<20000818102339.E11316@ActiveState.com>
Message-ID: <14749.30046.345520.779328@anthem.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

    TM> Can't you just do the cast for the comparison instead of
    TM> making new variables?

Does it matter?



From trentm at ActiveState.com  Fri Aug 18 19:47:52 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Fri, 18 Aug 2000 10:47:52 -0700
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
In-Reply-To: <14749.30046.345520.779328@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 01:41:50PM -0400
References: <200008180501.WAA28237@slayer.i.sourceforge.net> <20000818102339.E11316@ActiveState.com> <14749.30046.345520.779328@anthem.concentric.net>
Message-ID: <20000818104752.A15002@ActiveState.com>

On Fri, Aug 18, 2000 at 01:41:50PM -0400, Barry A. Warsaw wrote:
> 
> >>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:
> 
>     TM> Can't you just do the cast for the comparison instead of
>     TM> making new variables?
> 
> Does it matter?

No, I guess not. Just being a nitpicker first thing in the morning. Revving
up for real work. :)

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Fri Aug 18 19:52:20 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 13:52:20 -0400
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <20000818110037.C27419@kronos.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEJCHAAA.tim_one@email.msn.com>

[Andrew Kuchling]
> Four modules define insint() functions to insert an integer into a

  Five                         or macro

> dictionary in order to initialize constants in their module
> dictionaries:
>
> kronos Modules>grep -l insint *.c
> pcremodule.c
> shamodule.c
> socketmodule.c
> zlibmodule.c
> kronos Modules>

It's actually a macro in shamodule.  Missing is _winreg.c (in the PC
directory).  The perils of duplication manifest in subtle differences among
these guys (like _winreg's inserts a long while the others insert an int --
and _winreg is certainly more correct here because a Python int *is* a C
long; and they differ in treatment of errors, and it's not at all clear
that's intentional).

> ...
> This duplication bugs me.  Shall I submit a patch to add an API
> convenience function to do this, and change the modules to use it?
> Suggested prototype and name: PyDict_InsertInteger( dict *, string,
> long)

+1, provided the treatment of errors is clearly documented.





From akuchlin at mems-exchange.org  Fri Aug 18 19:58:33 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 13:58:33 -0400
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEJCHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Fri, Aug 18, 2000 at 01:52:20PM -0400
References: <20000818110037.C27419@kronos.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCIEJCHAAA.tim_one@email.msn.com>
Message-ID: <20000818135833.K27419@kronos.cnri.reston.va.us>

On Fri, Aug 18, 2000 at 01:52:20PM -0400, Tim Peters wrote:
>+1, provided the treatment of errors is clearly documented.

The treatment of errors in module init functions seems to be simply to
charge ahead and do the inserts, and then do 'if (PyErr_Occurred())
Py_FatalError()'.  The new function will probably return NULL if
there's an error, but I doubt anyone will check it; it's too ungainly
to write 
  if ( (PyDict_SetItemStringInt(d, "foo", FOO)) == NULL ||
       (PyDict_SetItemStringInt(d, "bar", BAR)) == NULL || 
       ... repeat for 187 more constants ...

--amk
       




From Vladimir.Marangozov at inrialpes.fr  Fri Aug 18 20:17:53 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 18 Aug 2000 20:17:53 +0200 (CEST)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <20000818135833.K27419@kronos.cnri.reston.va.us> from "Andrew Kuchling" at Aug 18, 2000 01:58:33 PM
Message-ID: <200008181817.UAA07799@python.inrialpes.fr>

Andrew Kuchling wrote:
> 
> On Fri, Aug 18, 2000 at 01:52:20PM -0400, Tim Peters wrote:
> >+1, provided the treatment of errors is clearly documented.
> 
> The treatment of errors in module init functions seems to be simply to
> charge ahead and do the inserts, and then do 'if (PyErr_Occurred())
> Py_FatalError()'.  The new function will probably return NULL if
> there's an error, but I doubt anyone will check it; it's too ungainly
> to write 
>   if ( (PyDict_SetItemStringInt(d, "foo", FOO)) == NULL ||
>        (PyDict_SetItemStringInt(d, "bar", BAR)) == NULL || 
>        ... repeat for 187 more constants ...

:-)

So name it PyModule_AddConstant(module, name, constant),
which fails with "can't add constant to module" err msg.
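A sketch of how that suggestion might look. This is not the real CPython API -- the module dict is mocked with a tiny fixed-size array, and all names here are hypothetical -- it only shows the pattern of one table-driven helper plus a single error check instead of one check per constant:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical model of PyModule_AddConstant(): a "module" is a tiny
 * fixed-size table here; in Python it would be the module's dict. */
#define MAX_CONSTS 8

typedef struct { const char *name; long value; } constentry;
typedef struct { constentry entries[MAX_CONSTS]; int count; } module;

static module make_module(void)
{
    module m;
    memset(&m, 0, sizeof m);
    return m;
}

static int module_add_constant(module *m, const char *name, long value)
{
    if (m->count >= MAX_CONSTS) {
        fprintf(stderr, "can't add constant '%s' to module\n", name);
        return -1;                        /* error: caller checks once */
    }
    m->entries[m->count].name = name;
    m->entries[m->count].value = value;
    m->count++;
    return 0;
}

/* Module init walks a table, so there is a single failure path
 * no matter how many constants ("repeat for 187 more") exist. */
static const constentry init_table[] = {
    {"FOO", 1}, {"BAR", 2}, {"BAZ", 3},
};

static int init_module(module *m)
{
    size_t i;
    for (i = 0; i < sizeof(init_table) / sizeof(init_table[0]); i++)
        if (module_add_constant(m, init_table[i].name,
                                init_table[i].value) < 0)
            return -1;                    /* one check covers them all */
    return 0;
}
```

The helper owns the error reporting, so the 187-constant init shrinks to one loop and one `if`.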

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tim_one at email.msn.com  Fri Aug 18 20:24:57 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 14:24:57 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky return statements
In-Reply-To: <20000818095624.D11316@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEJGHAAA.tim_one@email.msn.com>

[Trent Mick]
> I am inclined to agree with Thomas and Greg on this one. Why not check for
> sizeof(pthread_t) if pthread.h exists and test:
>
> #if SIZEOF_PTHREAD_T < SIZEOF_LONG
>     return (long)threadid;
> #endif

Change "<" to "<=" and I won't gripe.
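With that correction, the guard might look like the following sketch. In the real tree the SIZEOF_* macros come from configure; they are hard-wired here (hypothetically, for an LP64 box) only to keep the fragment self-contained:

```c
#include <pthread.h>

/* Stand-ins for configure-generated macros -- values are hypothetical,
 * chosen for a typical LP64 Unix where pthread_t is an unsigned long. */
#define SIZEOF_PTHREAD_T 8
#define SIZEOF_LONG 8

long get_thread_ident(pthread_t threadid)
{
#if SIZEOF_PTHREAD_T <= SIZEOF_LONG    /* "<=", per Tim's correction */
    /* pthread_t fits in a long, so a plain cast is lossless */
    return (long) threadid;
#else
    /* would need some id-registration scheme instead; not shown */
    return -1;
#endif
}
```

Note this only compiles where `pthread_t` is an arithmetic type (as on Linux and OSF/1); on platforms where it is a struct, the `#else` branch would have to carry the whole story.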





From fdrake at beopen.com  Fri Aug 18 20:40:48 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 14:40:48 -0400 (EDT)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <200008181817.UAA07799@python.inrialpes.fr>
References: <20000818135833.K27419@kronos.cnri.reston.va.us>
	<200008181817.UAA07799@python.inrialpes.fr>
Message-ID: <14749.33584.683341.684523@cj42289-a.reston1.va.home.com>

Vladimir Marangozov writes:
 > So name it PyModule_AddConstant(module, name, constant),
 > which fails with "can't add constant to module" err msg.

  Even better!  I expect there should be at least a couple of these;
one for ints, one for strings.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Fri Aug 18 20:37:19 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 14:37:19 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects object.c,2.95,2.96
In-Reply-To: <20000818102339.E11316@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJGHAAA.tim_one@email.msn.com>

[Trent Mick]
> > ...
> >   	if (vtp->tp_compare == NULL) {
> > ! 		Py_uintptr_t iv = (Py_uintptr_t)v;
> > ! 		Py_uintptr_t iw = (Py_uintptr_t)w;
> > ! 		return (iv < iw) ? -1 : 1;
> >   	}
> Can't you just do the cast for the comparison instead of making new
> variables?

Any compiler worth beans will optimize them out of existence.  In the
meantime, it makes each line (to my eyes) short, clear, and something I can
set a debugger breakpoint on in debug mode if I suspect the cast isn't
working as intended.
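The idiom under discussion can be shown in a standalone sketch (plain C, no Python headers; `uintptr_t` is the C99/C9X spelling of `Py_uintptr_t`, and the function name is illustrative, not the actual object.c code):

```c
#include <stdint.h>

/* Compare two unrelated pointers through integer temporaries.  ANSI C
 * leaves <, <=, >, >= on pointers into unrelated objects undefined, so
 * cast to an integer type first.  The named temporaries cost nothing
 * (any decent compiler folds them away) but give a debugger something
 * to break on if the cast misbehaves. */
static int pointer_compare(void *v, void *w)
{
    uintptr_t iv = (uintptr_t)v;
    uintptr_t iw = (uintptr_t)w;

    if (iv == iw)
        return 0;
    return (iv < iw) ? -1 : 1;
}
```

Writing the casts inline in the comparison, as Trent suggests, is equivalent; the two-variable form just trades a longer line for inspectable intermediates.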





From effbot at telia.com  Fri Aug 18 20:42:34 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 20:42:34 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>             <20000818161745.U376@xs4all.nl>  <20000818150639.6685C31047C@bireme.oratrix.nl>
Message-ID: <000001c00945$a8d37e40$f2a6b5d4@hagrid>

sjoerd wrote:

> The problem was that because of your (I think it was you :-) earlier
> change to have a Makefile in Grammar, I had an old graminit.c lying
> around in my build directory.  I don't build in the source directory
> and the changes for a Makefile in Grammar resulted in a file
> graminit.c in the wrong place.

is the Windows build system updated to generate new
graminit files if the Grammar is updated?

or is python development a unix-only thingie these days?

</F>




From tim_one at email.msn.com  Fri Aug 18 21:05:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 15:05:29 -0400
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <000001c00945$a8d37e40$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEJIHAAA.tim_one@email.msn.com>

[/F]
> is the Windows build system updated to generate new
> graminit files if the Grammar are updated?

No, not yet.

> or is python development a unix-only thingie these days?

It pretty much always has been!  Just ask Jack <wink>.  It's unreasonable to
expect Unix(tm) weenies to keep the other builds working (although vital
that they tell Python-Dev when they do something "new & improved"), and
counterproductive to imply that their inability to do so should deter them
from improving the build process on their platform.  In some ways, building
is more pleasant under Windows, and if turnabout is fair play the Unix
droids could insist we build them a honking powerful IDE <wink>.





From m.favas at per.dem.csiro.au  Fri Aug 18 21:08:36 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sat, 19 Aug 2000 03:08:36 +0800
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky 
 return statements
References: <20000816172425.A32338@ActiveState.com> <003001c00901$11fd8ae0$0900a8c0@SPIFF>
Message-ID: <399D89B4.476FB5EF@per.dem.csiro.au>

OK - 

return (long) threadid;

compiles without warnings, and all tests pass on OSF/1 (aka Tru64 Unix).
Removing the "volatile" is also fine for me, but may affect Vladimir.
I'm still a bit (ha!) confused by Tim's comments that the function is
bogus for OSF/1 because it throws away half the bits, and will therefore
result in id collisions - this will only happen on platforms where
sizeof(long) is less than sizeof(pointer), which is not OSF/1 (but is
Win64). Also, one of the suggested tests only cast the pointer to a long
when SIZEOF_PTHREAD_T < SIZEOF_LONG - that should surely be <= ...

In summary, whatever issue there was for OSF/1 six (or so) years ago
appears to be no longer relevant - but there will be the truncation
issue for Win64-like platforms.
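The truncation point can be demonstrated in standalone C (hypothetical helper names; `uintptr_t` stands in for `Py_uintptr_t`):

```c
#include <stdint.h>

/* A pointer-valued thread id squeezed into a long.  This is lossless
 * only where sizeof(long) >= sizeof(void *): true on LP64 systems like
 * OSF/1 and Linux/Alpha, false on LLP64 Win64 where long is 32 bits. */
static long ident_from_pointer(void *id)
{
    return (long)(uintptr_t)id;
}

/* Does casting to long and back recover the original pointer?  Where it
 * does not, distinct thread ids can collide after truncation. */
static int roundtrip_exact(void *id)
{
    return (void *)(uintptr_t)(unsigned long)ident_from_pointer(id) == id;
}
```

On ILP32 and LP64 platforms the round trip is exact; only a Win64-like model loses the high half of the pointer.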

Mark

Fredrik Lundh wrote:
> 
> trent mick wrote:
> >     return (long) *(long *) &threadid;
> 
> from what I can tell, pthread_t is a pointer under OSF/1.
> 
> I've been using OSF/1 since the early days, and as far as I can
> remember, you've never needed to use stupid hacks like that
> to convert a pointer to a long integer. an ordinary (long) cast
> should be sufficient.
> 
> > Could this be changed to
> >   return threadid;
> > safely?
> 
> safely, yes.  but since it isn't a long on all platforms, you might
> get warnings from the compiler (see Mark's mail).
> 
> :::
> 
> from what I can tell, it's compatible with a long on all sane plat-
> forms (Win64 doesn't support pthreads anyway ;-), so I guess the
> right thing here is to remove volatile and simply use:
> 
>     return (long) threadid;
> 
> (Mark: can you try this out on your box?  setting up a Python 2.0
> environment on our alphas would take more time than I can spare
> right now...)
> 
> </F>

-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913



From Vladimir.Marangozov at inrialpes.fr  Fri Aug 18 21:09:48 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Fri, 18 Aug 2000 21:09:48 +0200 (CEST)
Subject: [Python-Dev] Introducing memprof (was PyErr_NoMemory)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEHJHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 18, 2000 01:43:14 AM
Message-ID: <200008181909.VAA08003@python.inrialpes.fr>

[Tim, on PyErr_NoMemory]
>
> Looks good to me.  And if it breaks something, it will be darned hard to
> tell <wink>.

It's easily demonstrated with the memprof.c module I'd like to introduce
quickly here.

Note: I'll be out of town next week, so if someone wants to
play with this, tell me what to do quickly: upload a (postponed) patch
which goes together with obmalloc.c, put it in a web page, or remain quiet.

The object allocator is well tested; the memory profiler is not so
thoroughly tested... The interface needs more work IMHO, but it's
already quite useful and fun in its current state <wink>.


Demo:


~/python/dev>python -p
Python 2.0b1 (#9, Aug 18 2000, 20:11:29)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> 
>>> # Note the -p option -- it starts any available profilers through
... # a newly introduced Py_ProfileFlag. Otherwise you'll get funny results
... # if you start memprof in the middle of an execution
... 
>>> import memprof
>>> memprof.__doc__
'This module provides access to the Python memory profiler.'
>>> 
>>> dir(memprof)
['ALIGNMENT', 'ERROR_ABORT', 'ERROR_IGNORE', 'ERROR_RAISE', 'ERROR_REPORT', 'ERROR_STOP', 'MEM_CORE', 'MEM_OBCLASS', 'MEM_OBJECT', '__doc__', '__name__', 'geterrlevel', 'getpbo', 'getprofile', 'getthreshold', 'isprofiling', 'seterrlevel', 'setpbo', 'setproftype', 'setthreshold', 'start', 'stop']
>>> 
>>> memprof.isprofiling()
1
>>> # It's running -- cool. We're now ready to get the current memory profile
... 
>>> print memprof.getprofile.__doc__
getprofile([type]) -> object

Return a snapshot of the current memory profile of the interpreter.
An optional type argument may be provided to request the profile of
a specific memory layer. It must be one of the following constants:

        MEM_CORE    - layer 1: Python core memory
        MEM_OBJECT  - layer 2: Python object memory
        MEM_OBCLASS - layer 3: Python object-specific memory 

If a type argument is not specified, the default profile is returned.
The default profile type can be set with the setproftype() function.
>>> 
>>> mp = memprof.getprofile()
>>> mp
<global memory profile, layer 2, detailed in 33 block size classes>
>>> 
>>> # now see how much mem we're using, it's a 3-tuple
... # (requested mem, minimum allocated mem, estimated mem)
... 
>>> mp.memory
(135038, 142448, 164792)
>>> mp.peakmemory
(137221, 144640, 167032)
>>> 
>>> # indeed, peak values are important. Now let's see what this gives in
... # terms of memory blocks
... 
>>> mp.blocks
(2793, 2793)
>>> mp.peakblocks
(2799, 2799)
>>> 
>>> # Again this is a 2-tuple (requested blocks, allocated blocks)
... # Now let's see the stats of the calls to the allocator.
... 
>>> mp.malloc
(4937, 0, 0)
>>> mp.calloc
(0, 0, 0)
>>> mp.realloc
(43, 0, 0)
>>> mp.free
(2144, 0, 0)
>>> 
>>> # A 3-tuple (nb of calls, nb of errors, nb of warnings by memprof)
... #
... # Good. Now let's see the memory profile detailed by size classes
... # they're memory profile objects too, similar to the global profile:
>>>
>>> mp.sizeclass[0]
<size class memory profile, layer 2, block size range [1..8]>
>>> mp.sizeclass[1]
<size class memory profile, layer 2, block size range [9..16]>
>>> mp.sizeclass[2]
<size class memory profile, layer 2, block size range [17..24]>
>>> len(mp.sizeclass)
33
>>> mp.sizeclass[-1]
<size class memory profile, layer 2, block size range [257..-1]>
>>> 
>>> # The last one is for big blocks: 257 bytes and up.
... # Now let's see the detailed memory picture:
>>>
>>> for s in mp.sizeclass:                                                     
...     print "%.2d - " % s.sizeclass, "%8d %8d %8d" % s.memory
... 
00 -         0        0        0
01 -      3696     3776     5664
02 -       116      120      160
03 -     31670    34464    43080
04 -     30015    32480    38976
05 -     10736    11760    13720
06 -     10846    11200    12800
07 -      2664     2816     3168
08 -      1539     1584     1760
09 -      1000     1040     1144
10 -      2048     2112     2304
11 -      1206     1248     1352
12 -       598      624      672
13 -       109      112      120
14 -       575      600      640
15 -       751      768      816
16 -       407      408      432
17 -       144      144      152
18 -       298      304      320
19 -       466      480      504
20 -       656      672      704
21 -       349      352      368
22 -       542      552      576
23 -       188      192      200
24 -       392      400      416
25 -       404      416      432
26 -       640      648      672
27 -       441      448      464
28 -         0        0        0
29 -       236      240      248
30 -       491      496      512
31 -       501      512      528
32 -     31314    31480    31888
>>>
>>> for s in mp.sizeclass:
...     print "%.2d - " % s.sizeclass, "%8d %8d" % s.blocks
... 
00 -         0        0
01 -       236      236
02 -         5        5
03 -      1077     1077
04 -       812      812
05 -       245      245
06 -       200      200
07 -        44       44
08 -        22       22
09 -        13       13
10 -        24       24
11 -        13       13
12 -         6        6
13 -         1        1
14 -         5        5
15 -         6        6
16 -         3        3
17 -         1        1
18 -         2        2
19 -         3        3
20 -         4        4
21 -         2        2
22 -         3        3
23 -         1        1
24 -         2        2
25 -         2        2
26 -         3        3
27 -         2        2
28 -         0        0
29 -         1        1
30 -         2        2
31 -         2        2
32 -        51       51
>>>
>>> # Note that we just started the interpreter and analysed its initial
... # memory profile. You can repeat this game at any point in time,
... # look at the stats and enjoy a builtin memory profiler.
... #
... # Okay, now to the point on PyErr_NoMemory: but we need to restart
... # Python without "-p"
>>>
~/python/dev>python 
Python 2.0b1 (#9, Aug 18 2000, 20:11:29)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> 
>>> import memprof
>>> memprof.isprofiling()
0
>>> memprof.start()
memprof: freeing unknown block (0x40185e60)
memprof: freeing unknown block (0x40175098)
memprof: freeing unknown block (0x40179288)
>>>
>>> # See? We're freeing unknown blocks for memprof.
... # Okay, enough. See the docs for more:
... 
>>> print memprof.seterrlevel.__doc__
seterrlevel(flags) -> None

Set the error level of the profiler. The provided argument instructs the
profiler on how tolerant it should be against any detected simple errors
or memory corruption. The following non-exclusive values are recognized:

    ERROR_IGNORE - ignore silently any detected errors
    ERROR_REPORT - report all detected errors to stderr
    ERROR_STOP   - stop the profiler on the first detected error
    ERROR_RAISE  - raise a MemoryError exception for all detected errors
    ERROR_ABORT  - report the first error as fatal and abort immediately

The default error level is ERROR_REPORT.
>>> 
>>> # So here's your PyErr_NoMemory effect:
... 
>>> memprof.seterrlevel(memprof.ERROR_REPORT | memprof.ERROR_RAISE)
>>> 
>>> import test.regrtest
memprof: resizing unknown block (0x82111b0)
memprof: raised MemoryError.
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "./Lib/test/regrtest.py", line 39, in ?
    import random
  File "./Lib/random.py", line 23, in ?
    import whrandom
  File "./Lib/whrandom.py", line 40, in ?
    class whrandom:
MemoryError: memprof: resizing unknown block (0x82111b0)
>>> 
>>> # Okay, gotta run. There are no docs for the moment. Just the source
... # and function docs. (and to avoid another exception...)
>>>
>>> memprof.seterrlevel(memprof.ERROR_IGNORE)
>>>
>>> for i in dir(memprof):
...     x = memprof.__dict__[i]
...     if hasattr(x, "__doc__"):
...             print ">>>>>>>>>>>>>>>>>>>>>>>>>>>>> [%s]" % i
...             print x.__doc__
...             print '='*70
... 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [geterrlevel]
geterrlevel() -> errflags

Get the current error level of the profiler.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [getpbo]
getpbo() -> int

Return the fixed per block overhead (pbo) used for estimations.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [getprofile]
getprofile([type]) -> object

Return a snapshot of the current memory profile of the interpreter.
An optional type argument may be provided to request the profile of
a specific memory layer. It must be one of the following constants:

        MEM_CORE    - layer 1: Python core memory
        MEM_OBJECT  - layer 2: Python object memory
        MEM_OBCLASS - layer 3: Python object-specific memory 

If a type argument is not specified, the default profile is returned.
The default profile type can be set with the setproftype() function.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [getthreshold]
getthreshold() -> int

Return the size threshold (in bytes) between small and big blocks.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [isprofiling]
isprofiling() -> 1 if profiling is currently in progress, 0 otherwise.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [seterrlevel]
seterrlevel(flags) -> None

Set the error level of the profiler. The provided argument instructs the
profiler on how tolerant it should be against any detected simple errors
or memory corruption. The following non-exclusive values are recognized:

    ERROR_IGNORE - ignore silently any detected errors
    ERROR_REPORT - report all detected errors to stderr
    ERROR_STOP   - stop the profiler on the first detected error
    ERROR_RAISE  - raise a MemoryError exception for all detected errors
    ERROR_ABORT  - report the first error as fatal and abort immediately

The default error level is ERROR_REPORT.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [setpbo]
setpbo(int) -> None

Set the fixed per block overhead (pbo) used for estimations.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [setproftype]
setproftype(type) -> None

Set the default profile type returned by getprofile() without arguments.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [setthreshold]
setthreshold(int) -> None

Set the size threshold (in bytes) between small and big blocks.
The maximum is 256. The argument is rounded up to the ALIGNMENT.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [start]
start() -> None

Start the profiler. If it has been started, this function has no effect.
======================================================================
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [stop]
stop() -> None

Stop the profiler. If it has been stopped, this function has no effect.
======================================================================


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tim_one at email.msn.com  Fri Aug 18 21:11:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 15:11:11 -0400
Subject: [Python-Dev] RE: Introducing memprof (was PyErr_NoMemory)
In-Reply-To: <200008181909.VAA08003@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEJJHAAA.tim_one@email.msn.com>

[Tim, on PyErr_NoMemory]
> Looks good to me.  And if it breaks something, it will be darned hard to
> tell <wink>.

[Vladimir Marangozov]
> It's easily demonstrated with the memprof.c module I'd like to introduce
> quickly here.
>
> Note: I'll be out of town next week, so if someone wants to
> play with this, tell me what to do quickly: upload a (postponed) patch
> which goes together with obmalloc.c, put it in a web page, or remain
> quiet.
>
> The object allocator is well tested; the memory profiler is not so
> thoroughly tested... The interface needs more work IMHO, but it's
> already quite useful and fun in its current state <wink>.
> ...

My bandwidth is consumed by 2.0 issues, so I won't look at it.  On the
chance that Guido gets hit by a bus, though, and I have time to kill at his
funeral, it would be nice to have it available on SourceForge.  Uploading a
postponed patch sounds fine!





From tim_one at email.msn.com  Fri Aug 18 21:26:31 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 15:26:31 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky  return statements
In-Reply-To: <399D89B4.476FB5EF@per.dem.csiro.au>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEJLHAAA.tim_one@email.msn.com>

[Mark Favas]
> return (long) threadid;
>
> compiles without warnings, and all tests pass on OSF/1 (aka Tru64 Unix).
> Removing the "volatile" is also fine for me, but may affect Vladimir.
> I'm still a bit (ha!) confused by Tim's comments that the function is
> bogus for OSF/1 because it throws away half the bits, and will therefore
> result in id collisions - this will only happen on platforms where
> sizeof(long) is less than sizeof(pointer), which is not OSF/1

Pure guess on my part -- couldn't imagine why a compiler would warn *unless*
bits were being lost.  Are you running this on an Alpha?  The comment in the
code specifically names "Alpha OSF/1" as the culprit.  I don't know anything
about OSF/1; perhaps "Alpha" is implied.

> ...
> In summary, whatever issue there was for OSF/1 six (or so) years ago
> appears to be no longer relevant - but there will be the truncation
> issue for Win64-like platforms.

And there's Vladimir's "volatile" hack.





From effbot at telia.com  Fri Aug 18 21:37:36 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 18 Aug 2000 21:37:36 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
References: <LNBBLJKPBEHFEDALKOLCOEJIHAAA.tim_one@email.msn.com>
Message-ID: <00e501c0094b$c9ee2ac0$f2a6b5d4@hagrid>

tim peters wrote:


> [/F]
> > is the Windows build system updated to generate new
> > graminit files if the Grammar is updated?
> 
> No, not yet.
> 
> > or is python development a unix-only thingie these days?
> 
> It pretty much always has been!  Just ask Jack <wink>.  It's unreasonable to
> expect Unix(tm) weenies to keep the other builds working (although vital
> that they tell Python-Dev when they do something "new & improved"), and
> counterproductive to imply that their inability to do so should deter them
> from improving the build process on their platform. 

well, all I expect from them is that the repository should
be in a consistent state at all times.

(in other words, never assume that just because generated
files are rebuilt by the unix makefiles, they don't have to be
checked into the repository).

for a moment, sjoerd's problem report made me think that
someone had messed up here...  but I just checked, and
he hadn't ;-)

</F>




From thomas at xs4all.net  Fri Aug 18 21:43:34 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 21:43:34 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <000001c00945$a8d37e40$f2a6b5d4@hagrid>; from effbot@telia.com on Fri, Aug 18, 2000 at 08:42:34PM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid>
Message-ID: <20000818214333.X376@xs4all.nl>

On Fri, Aug 18, 2000 at 08:42:34PM +0200, Fredrik Lundh wrote:
> sjoerd wrote:

> > The problem was that because of your (I think it was you :-) earlier
> > change to have a Makefile in Grammar, I had an old graminit.c lying
> > around in my build directory.  I don't build in the source directory
> > and the changes for a Makefile in Grammar resulted in a file
> > graminit.c in the wrong place.

> is the Windows build system updated to generate new
> graminit files if the Grammar is updated?

No, and that's one more reason to reverse my patch ! :-) Note that I didn't
*add* the Makefile, I only added Grammar to the
directories-to-run-make-in-by-default. If the Grammar is changed, you
already need a way to run pgen (of which the source rests in Parser/) to
generate the new graminit.c/graminit.h files. I have no way of knowing
whether that is the case for the windows build files. The CVS tree should
always contain up to date graminit.c/.h files!

The reason it was added was the multitude of Grammar-changing patches on SF,
and the number of people that forgot to run 'make' in Grammar/ after
applying them. I mentioned adding Grammar/ to the directories to be made,
Guido said it was a good idea, and no one complained about it until after it was
added ;P I think we can drop the idea, though, at least for (alpha, beta,
final) releases.

> or is python development a unix-only thingie these days?

Well, *my* python development is a unix-only thingie, mostly because I don't
have a compiler for under Windows. If anyone wants to send me or point me to
the canonical Windows compiler & debugger and such, in a way that won't set
me back a couple of megabucks, I'd be happy to test & debug windows as well.
Gives me *two* uses for Windows: games and Python ;)

My-boss-doesn't-pay-me-to-work-on-Windows-ly y'rs,

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From m.favas at per.dem.csiro.au  Fri Aug 18 22:33:21 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sat, 19 Aug 2000 04:33:21 +0800
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky  
 return statements
References: <LNBBLJKPBEHFEDALKOLCEEJLHAAA.tim_one@email.msn.com>
Message-ID: <399D9D91.3E76ED8D@per.dem.csiro.au>

Tim Peters wrote:
> 
> [Mark Favas]
> > return (long) threadid;
> >
> > compiles without warnings, and all tests pass on OSF/1 (aka Tru64 Unix).
> > Removing the "volatile" is also fine for me, but may affect Vladimir.
> > I'm still a bit (ha!) confused by Tim's comments that the function is
> > bogus for OSF/1 because it throws away half the bits, and will therefore
> > result in id collisions - this will only happen on platforms where
> > sizeof(long) is less than sizeof(pointer), which is not OSF/1
> 
> Pure guess on my part -- couldn't imagine why a compiler would warn *unless*
> bits were being lost.  Are you running this on an Alpha?  The comment in the
> code specifically names "Alpha OSF/1" as the culprit.  I don't know anything
> about OSF/1; perhaps "Alpha" is implied.

Yep - I'm running on an Alpha. The name of the OS has undergone a couple
of, um, appellation transmogrifications since the first Alpha was
produced by DEC: OSF/1 -> Digital Unix -> Tru64 Unix, although uname has
always reported "OSF1". (I don't think that there's any other
implementation of OSF/1 left these days... not that uses the name,
anyway.)

> 
> > ...
> > In summary, whatever issue there was for OSF/1 six (or so) years ago
> > appears to be no longer relevant - but there will be the truncation
> > issue for Win64-like platforms.
> 
> And there's Vladimir's "volatile" hack.

Wonder if that also is still relevant (was it required because of the
long * long * cast?)...

-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913



From bwarsaw at beopen.com  Fri Aug 18 22:45:03 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 16:45:03 -0400 (EDT)
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>
	<20000818161745.U376@xs4all.nl>
	<20000818150639.6685C31047C@bireme.oratrix.nl>
	<000001c00945$a8d37e40$f2a6b5d4@hagrid>
	<20000818214333.X376@xs4all.nl>
Message-ID: <14749.41039.166847.942483@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> No, and that's one more reason to reverse my patch ! :-) Note
    TW> that I didn't *add* the Makefile, I only added Grammar to the
    TW> directories-to-run-make-in-by-default. If the Grammar is
    TW> changed, you already need a way to run pgen (of which the
    TW> source rests in Parser/) to generate the new
    TW> graminit.c/graminit.h files. I have no way of knowing whether
    TW> that is the case for the windows build files. The CVS tree
    TW> should always contain up to date graminit.c/.h files!

I don't think you need to reverse your patch because of this, although
I haven't been following this thread closely.  Just make sure that if
you commit a Grammar change, you must commit the newly generated
graminit.c and graminit.h files.

This is no different than if you change configure.in; you must commit
both that file and the generated configure file.

-Barry



From akuchlin at mems-exchange.org  Fri Aug 18 22:48:37 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 18 Aug 2000 16:48:37 -0400
Subject: [Python-Dev] Re: [Patches] [Patch #101055] Cookie.py
In-Reply-To: <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Fri, Aug 18, 2000 at 04:06:20PM -0400
References: <200008181951.MAA30358@bush.i.sourceforge.net> <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
Message-ID: <20000818164837.A8423@kronos.cnri.reston.va.us>

[Overquoting for the sake of python-dev readers]

On Fri, Aug 18, 2000 at 04:06:20PM -0400, Fred L. Drake, Jr. wrote:
>amk writes:
> > I have a copy of Tim O'Malley's Cookie.py,v file (in order to preserve the
> > version history).  I can either ask the SF admins to drop it into
> > the right place in the CVS repository, but will that screw up the
> > Python 1.6 tagging in some way?  (I would expect not, but who
> > knows?)
>
>  That would have no effect on any of the Python tagging.  It's
>probably worthwhile making sure there are no tags in the ,v file, but
>that can be done after it gets dropped in place.
>  Now, Greg Stein will tell us that dropping this into place is the
>wrong thing to do.  What it *will* screw up is people asking for the
>state of Python at a specific date before the file was actually added;
>they'll get this file even for when it wasn't in the Python CVS tree.
>I can live with that, but we should make a policy decision for the
>Python tree regarding this sort of thing.

Excellent point.  GvR's probably the only person whose ruling matters
on this point; I'll try to remember it and bring it up whenever he
gets back (next week, right?).

>  Don't -- it's not worth it.

I hate throwing away information that stands even a tiny chance of
being useful; good thing the price of disk storage keeps dropping, eh?
The version history might contain details that will be useful in
future debugging or code comprehension, so I dislike losing it
forever.

(My minimalist side is saying that the enhanced Web tools should be a
separately downloadable package.  But you probably guessed that
already...)

--amk



From thomas at xs4all.net  Fri Aug 18 22:56:07 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 22:56:07 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <14749.41039.166847.942483@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 18, 2000 at 04:45:03PM -0400
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000818214333.X376@xs4all.nl> <14749.41039.166847.942483@anthem.concentric.net>
Message-ID: <20000818225607.Z376@xs4all.nl>

On Fri, Aug 18, 2000 at 04:45:03PM -0400, Barry A. Warsaw wrote:

> This is no different than if you change configure.in; you must commit
> both that file and the generated configure file.

Yes, but more critically so, since it'll screw up more than a couple of
defines on a handful of systems :-) However, this particular change in the
make process doesn't address this at all. It would merely serve to mask this
problem, in the event of someone committing a change to Grammar but not to
graminit.*. The reasoning behind the change was "if you change
Grammar/Grammar, and then type 'make', graminit.* should be regenerated
automatically, before they are used in other files." I thought the change
was a small and reasonable one, but now I don't think so, anymore ;P On the
other hand, perhaps the latest changes (not mine) fixed it for real.

But I still think that if this particular makefile setup is used in
releases, 'pgen' should at least be made a tad less verbose.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug 18 23:04:01 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 18 Aug 2000 23:04:01 +0200
Subject: [Python-Dev] [Patch #101055] Cookie.py
In-Reply-To: <20000818164837.A8423@kronos.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Fri, Aug 18, 2000 at 04:48:37PM -0400
References: <200008181951.MAA30358@bush.i.sourceforge.net> <14749.38716.228254.649957@cj42289-a.reston1.va.home.com> <20000818164837.A8423@kronos.cnri.reston.va.us>
Message-ID: <20000818230401.A376@xs4all.nl>

On Fri, Aug 18, 2000 at 04:48:37PM -0400, Andrew Kuchling wrote:

[ About adding Cookie.py including CVS history ]

> I hate throwing away information that stands even a tiny chance of
> being useful; good thing the price of disk storage keeps dropping, eh?
> The version history might contain details that will be useful in
> future debugging or code comprehension, so I dislike losing it
> forever.

It would be moderately nice to have the versioning info, though I think Fred
has a point when he says that it might be confusing for people: it would
look like the file had been in the CVS repository the whole time, and it
would be very hard to see when the file had been added to Python. And what
about new versions ? Would this file be adopted by Python, would changes by
the original author be incorporated ? How about version history for those
changes ? The way it usually goes (as far as my experience goes) is that
such files are updated only periodically. Would those updates incorporate
the history of those changes as well ?

Unless Cookie.py is really split off, and we're going to maintain a separate
version, I don't think it's worth worrying about the version history as
such. Pointing to the 'main' version and its history should be enough.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From bwarsaw at beopen.com  Fri Aug 18 23:13:31 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:13:31 -0400 (EDT)
Subject: [Python-Dev] gettext in the standard library
Message-ID: <14749.42747.411862.940207@anthem.concentric.net>

Apologies for duplicates to those of you already on python-dev...

I've been working on merging all the various implementations of Python
interfaces to the GNU gettext libraries.  I've worked from code
contributed by Martin, James, and Peter.  I now have something that
seems to work fairly well so I thought I'd update you all.

After looking at all the various wizzy and experimental stuff in these
implementations, I opted for simplicity, mostly just so I could get my
head around what was needed.  My goal was to build a fast C wrapper
module around the C library, and to provide a pure Python
implementation of an identical API for platforms without GNU gettext.

I started with Martin's libintlmodule, renamed it _gettext and cleaned
up the C code a bit.  This provides gettext(), dgettext(),
dcgettext(), textdomain(), and bindtextdomain() functions.  The
gettext.py module imports these, and if it succeeds, it's done.

If that fails, then there's a bunch of code, mostly derived from
Peter's fintl.py module, that reads the binary .mo files and does the
lookups itself.  Note that Peter's module only supported the GNU
gettext binary format, and that's all mine does too.  It should be
easy to support other binary formats (Solaris?) by overriding one
method in one class, and contributions are welcome.
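For the curious, the GNU .mo layout Barry's pure-Python fallback has to deal with is simple enough to sketch. This is not his actual code, just a hypothetical minimal reader built from the published format: a magic number, a string count, and two tables of (length, offset) pairs for originals and translations:

```python
import struct

def parse_mo(path):
    """Hypothetical minimal GNU .mo reader; returns {msgid bytes: msgstr bytes}."""
    with open(path, 'rb') as f:
        data = f.read()
    # The magic number tells us the byte order of the rest of the file.
    magic = struct.unpack('<I', data[:4])[0]
    if magic == 0x950412de:
        fmt = '<I'          # little-endian file
    elif magic == 0xde120495:
        fmt = '>I'          # big-endian file
    else:
        raise ValueError('not a GNU .mo file')

    def u32(off):
        return struct.unpack(fmt, data[off:off + 4])[0]

    count = u32(8)          # number of strings in the catalog
    orig_off = u32(12)      # offset of the (length, offset) table for msgids
    trans_off = u32(16)     # offset of the (length, offset) table for msgstrs
    catalog = {}
    for i in range(count):
        olen, opos = u32(orig_off + 8 * i), u32(orig_off + 8 * i + 4)
        tlen, tpos = u32(trans_off + 8 * i), u32(trans_off + 8 * i + 4)
        catalog[data[opos:opos + olen]] = data[tpos:tpos + tlen]
    return catalog
```

Supporting another binary format would indeed come down to replacing this one parsing routine, as Barry suggests.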

James's stuff looked cool too, what I grokked of it :) but I think
those should be exported as higher level features.  I didn't include
the ability to write .mo files or the exported Catalog objects.  I
haven't used the I18N services enough to know whether these are
useful.

I added one convenience function, gettext.install().  If you call
this, it inserts the gettext.gettext() function into the builtins
namespace as `_'.  You'll often want to do this, based on the I18N
translatable strings marking conventions.  Note that importing gettext
does /not/ install by default!

And since (I think) you'll almost always want to call bindtextdomain()
and textdomain(), you can pass the domain and localedir in as
arguments to install.  Thus, the simple and quick usage pattern is:

    import gettext
    gettext.install('mydomain', '/my/locale/dir')

    print _('this is a localized message')
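Under the hood, install() amounts to little more than poking a function into the builtins namespace. A hedged sketch of the idea, written against the modern `builtins` module rather than the 2.0-era `__builtin__` (the `translate` parameter and identity fallback are illustrative, not Barry's actual signature):

```python
import builtins

def install(translate=None):
    """Hypothetical sketch of gettext.install(): publish `_` in the
    builtins namespace so every module can call it without importing it."""
    if translate is None:
        translate = lambda s: s  # identity fallback: no catalog loaded
    builtins.__dict__['_'] = translate

install(lambda s: '[%s]' % s)
print(_('hello'))  # prints [hello]; `_` is now visible everywhere
```

Since nothing is installed at import time, a program that never calls install() is unaffected, which matches the note above.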

I think it'll be easier to critique this stuff if I just check it in.
Before I do, I still need to write up a test module and hack together
docos.  In the meantime, here's the module docstring for gettext.py.
Talk amongst yourselves. :)

-Barry

"""Internationalization and localization support.

This module provides internationalization (I18N) and localization (L10N)
support for your Python programs by providing an interface to the GNU gettext
message catalog library.

I18N refers to the operation by which a program is made aware of multiple
languages.  L10N refers to the adaptation of your program, once
internationalized, to the local language and cultural habits.  In order to
provide multilingual messages for your Python programs, you need to take the
following steps:

    - prepare your program by specially marking translatable strings
    - run a suite of tools over your marked program files to generate raw
      messages catalogs
    - create language specific translations of the message catalogs
    - use this module so that message strings are properly translated

In order to prepare your program for I18N, you need to look at all the strings
in your program.  Any string that needs to be translated should be marked by
wrapping it in _('...') -- i.e. a call to the function `_'.  For example:

    filename = 'mylog.txt'
    message = _('writing a log message')
    fp = open(filename, 'w')
    fp.write(message)
    fp.close()

In this example, the string `writing a log message' is marked as a candidate
for translation, while the strings `mylog.txt' and `w' are not.

The GNU gettext package provides a tool called xgettext that scans C and C++
source code looking for these specially marked strings.  xgettext generates
what are called `.pot' files, essentially structured human readable files
which contain every marked string in the source code.  These .pot files are
copied and handed over to translators who write language-specific versions for
every supported language.

For I18N Python programs however, xgettext won't work; it doesn't understand
the myriad of string types supported by Python.  The standard Python
distribution provides a tool called pygettext that does (usually in the
Tools/i18n directory).  This is a command line script that supports a similar
interface to xgettext; see its documentation for details.  Once you've used
pygettext to create your .pot files, you can use the standard GNU gettext
tools to generate your machine-readable .mo files, which are what's used by
this module and the GNU gettext libraries.

In the simple case, to use this module then, you need only add the following
bit of code to the main driver file of your application:

    import gettext
    gettext.install()

This sets everything up so that your _('...') function calls Just Work.  In
other words, it installs `_' in the builtins namespace for convenience.  You
can skip this step and do it manually with the equivalent code:

    import gettext
    import __builtin__
    __builtin__.__dict__['_'] = gettext.gettext

Once you've done this, you probably want to call bindtextdomain() and
textdomain() to get the domain set up properly.  Again, for convenience, you
can pass the domain and localedir to install to set everything up in one fell
swoop:

    import gettext
    gettext.install('mydomain', '/my/locale/dir')

"""



From tim_one at email.msn.com  Fri Aug 18 23:13:29 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 17:13:29 -0400
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000818214333.X376@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEKCHAAA.tim_one@email.msn.com>

[Thomas Wouters]
> No, and that's one more reason to reverse my patch ! :-) Note
> that I didn't *add* the Makefile, I only added Grammar to the
> directories-to-run-make-in-by-default.
> ...
> The reason it was added was the multitude of Grammar-changing
> patches on SF, and the number of people that forgot to run 'make'
> in Grammar/ after applying them. I mentioned adding Grammar/ to
> the directories to be made, Guido said it was a good idea, and
> noone complained to it until after it was added ;P

And what exactly is the complaint?  It's nice to build things that are out
of date;  I haven't used Unix(tm) for some time, but I vaguely recall that
was "make"'s purpose in life <wink>.  Or is it that the grammar files are
getting rebuilt unnecessarily?

> ...
> My-boss-doesn't-pay-me-to-work-on-Windows-ly y'rs,

Your boss *pays* you?!  Wow.  No wonder you get so much done.





From bwarsaw at beopen.com  Fri Aug 18 23:16:07 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:16:07 -0400 (EDT)
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl>
	<20000818161745.U376@xs4all.nl>
	<20000818150639.6685C31047C@bireme.oratrix.nl>
	<000001c00945$a8d37e40$f2a6b5d4@hagrid>
	<20000818214333.X376@xs4all.nl>
	<14749.41039.166847.942483@anthem.concentric.net>
	<20000818225607.Z376@xs4all.nl>
Message-ID: <14749.42903.342401.245594@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> Yes, but more critically so, since it'll screw up more than a
    TW> couple of defines on a handful of systems :-)

Yes, but a "cvs update" should always clue you in that those files
need committing too.  Everyone always does a "cvs update" before
committing any files, right? :)
    
    TW> But I still think that if this particular makefile setup is
    TW> used in releases, 'pgen' should at least be made a tad less
    TW> verbose.

pgen also leaks like a sieve, but it's not worth patching. ;}

-Barry



From bwarsaw at beopen.com  Fri Aug 18 23:17:14 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:17:14 -0400 (EDT)
Subject: [Python-Dev] Re: [Patches] [Patch #101055] Cookie.py
References: <200008181951.MAA30358@bush.i.sourceforge.net>
	<14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
	<20000818164837.A8423@kronos.cnri.reston.va.us>
Message-ID: <14749.42970.845587.90980@anthem.concentric.net>

>>>>> "AK" == Andrew Kuchling <akuchlin at mems-exchange.org> writes:

    AK> I hate throwing away information that stands even a tiny
    AK> chance of being useful; good thing the price of disk storage
    AK> keeps dropping, eh?  The version history might contain details
    AK> that will be useful in future debugging or code comprehension,
    AK> so I dislike losing it forever.

I agree.  Let's try to keep the revision history for Cookie.py.

-Barry



From tim_one at email.msn.com  Fri Aug 18 23:18:56 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 17:18:56 -0400
Subject: [Python-Dev] [PEP 214] FW: sys.stndout syntax
Message-ID: <LNBBLJKPBEHFEDALKOLCIEKDHAAA.tim_one@email.msn.com>

For those of you who stopped reading comp.lang.python, note that this is
*typical* of the advice users get in the absence of syntactic "print" sugar.

it's-not-exactly-intuitive-that-a-"print"-stmt-isn't-meant-to-
    do-output<wink>-ly y'rs  - tim

-----Original Message-----
From: python-list-admin at python.org
[mailto:python-list-admin at python.org]On Behalf Of Jeff Kunce
Sent: Friday, August 18, 2000 3:41 PM
To: python-list at python.org
Subject: Re: sys.stndout syntax


> on win32, to get printed output to a file i use
> >>> sys.stndout = open('file.txt', 'w')
> how do i get it to go back to the command line

#Here is the specific solution for those who remember the secret password:
sys.stdout = open('file.txt', 'w')
print 'this is written to file.txt'
sys.stdout.close()
sys.stdout = sys.__stdout__
print 'this is written to console'

#Here is the general solution if you can't be bothered with remembering
#secret passwords:
original_stdout = sys.stdout
sys.stdout = open('file.txt', 'w')
print 'this is written to file.txt'
sys.stdout.close()
sys.stdout = original_stdout
print 'this is written to console'


  --Jeff


--
http://www.python.org/mailman/listinfo/python-list





From mal at lemburg.com  Fri Aug 18 23:21:23 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 18 Aug 2000 23:21:23 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
Message-ID: <399DA8D3.70E85C58@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> Apologies for duplicates to those of you already on python-dev...
> 
> I've been working on merging all the various implementations of Python
> interfaces to the GNU gettext libraries.  I've worked from code
> contributed by Martin, James, and Peter.  I now have something that
> seems to work fairly well so I thought I'd update you all.
> 
> After looking at all the various wizzy and experimental stuff in these
> implementations, I opted for simplicity, mostly just so I could get my
> head around what was needed.  My goal was to build a fast C wrapper
> module around the C library, and to provide a pure Python
> implementation of an identical API for platforms without GNU gettext.

Sounds cool.

I know that gettext is a standard, but from a technology POV
I would have implemented this as a codec which can then be plugged in
wherever l10n is needed, since strings have the new .encode()
method which could just as well be used to convert not only the
string into a different encoding, but also into a different language.
Anyway, just a thought...

What I'm missing in your doc-string is a reference as to how
well gettext works together with Unicode. After all, i18n is
among other things about international character sets.
Have you done any experiments in this area ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bwarsaw at beopen.com  Fri Aug 18 23:19:12 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:19:12 -0400 (EDT)
Subject: [Python-Dev] [Patch #101055] Cookie.py
References: <200008181951.MAA30358@bush.i.sourceforge.net>
	<14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
	<20000818164837.A8423@kronos.cnri.reston.va.us>
	<20000818230401.A376@xs4all.nl>
Message-ID: <14749.43088.855537.355621@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:
    TW> It would be moderately nice to have the versioning info,
    TW> though I think Fred has a point when he says that it might be
    TW> confusing for people: it would look like the file had been in
    TW> the CVS repository the whole time, and it would be very hard
    TW> to see where the file had been added to Python.

I don't think that's true, because the file won't have the tag
information in it.  That could be a problem in and of itself, but I
dunno.

-Barry



From tim_one at email.msn.com  Fri Aug 18 23:41:18 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 17:41:18 -0400
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <14749.42903.342401.245594@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEKEHAAA.tim_one@email.msn.com>

>     TW> But I still think that if this particular makefile setup is
>     TW> used in releases, 'pgen' should at least be made a tad less
>     TW> verbose.

[Barry]
> pgen also leaks like a sieve, but it's not worth patching. ;}

Huh!  And here I thought Thomas was suggesting we shorten its name to "pge".

or-even-"p"-if-we-wanted-it-a-lot-less-verbose-ly y'rs  - tim





From bwarsaw at beopen.com  Fri Aug 18 23:49:23 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 17:49:23 -0400 (EDT)
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
	<399DA8D3.70E85C58@lemburg.com>
Message-ID: <14749.44899.573649.483154@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> I know that gettext is a standard, but from a technology POV I
    M> would have implemented this as codec wich can then be plugged
    M> wherever l10n is needed, since strings have the new .encode()
    M> method which could just as well be used to convert not only the
    M> string into a different encoding, but also a different
    M> language.  Anyway, just a thought...

That might be cool to play with, but I haven't done anything with
Python's Unicode stuff (and painfully little with gettext too) so
right now I don't see how they'd fit together.  My gut reaction is
that gettext could be the lower level interface to
string.encode(language).

    M> What I'm missing in your doc-string is a reference as to how
    M> well gettext works together with Unicode. After all, i18n is
    M> among other things about international character sets.
    M> Have you done any experiments in this area ?

No, but I've thought about it, and I don't think the answer is good.
The GNU gettext functions take and return char*'s, which probably
isn't very compatible with Unicode.  _gettext therefore takes and
returns PyStringObjects.

We could do better with the pure-Python implementation, and that might
be a good reason to forgo any performance gains or platform-dependent
benefits you'd get with _gettext.  Of course the trick is using the
Unicode-unaware tools to build .mo files containing Unicode strings.
I don't track GNU gettext developement close enough to know whether
they are addressing Unicode issues or not.

-Barry



From effbot at telia.com  Sat Aug 19 00:06:35 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 19 Aug 2000 00:06:35 +0200
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky   return statements
References: <LNBBLJKPBEHFEDALKOLCEEJLHAAA.tim_one@email.msn.com> <399D9D91.3E76ED8D@per.dem.csiro.au>
Message-ID: <006801c00960$944da200$f2a6b5d4@hagrid>

tim wrote:
> > Pure guess on my part -- couldn't imagine why a compiler would warn *unless*
> > bits were being lost.

the compiler doesn't warn about bits being lost -- it complained
because the code was returning a pointer from a function declared
to return a long integer.

(explicitly casting the pthread_t to a long gets rid of the warning).

mark wrote:
> > > In summary, whatever issue there was for OSF/1 six (or so) years ago
> > > appears to be no longer relevant - but there will be the truncation
> > > issue for Win64-like platforms.
> > 
> > And there's Vladimir's "volatile" hack.
> 
> Wonder if that also is still relevant (was it required because of the
> long * long * cast?)...

probably.  messing up when someone abuses pointer casts is one thing,
but if the AIX compiler cannot cast a long to a long, it's broken beyond
repair ;-)

frankly, the code is just plain broken.  instead of adding even more dumb
hacks, just fix it.  here's how it should be done:

    return (long) pthread_self(); /* look! no variables! */

or change

 /* Jump through some hoops for Alpha OSF/1 */

to

 /* Jump through some hoops because Tim Peters wants us to ;-) */

</F>




From bwarsaw at beopen.com  Sat Aug 19 00:03:24 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 18 Aug 2000 18:03:24 -0400 (EDT)
Subject: [Python-Dev] [PEP 214] FW: sys.stndout syntax
References: <LNBBLJKPBEHFEDALKOLCIEKDHAAA.tim_one@email.msn.com>
Message-ID: <14749.45740.432586.615745@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> For those of you who stopped reading comp.lang.python, note
    TP> that this is *typical* of the advice users get in the absence
    TP> of syntactic "print" sugar.

Which is of course broken, if say, you print an object that has a
str() that raises an exception.
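The fragility Barry points out can be avoided with a try/finally, so stdout is restored even when str() blows up halfway through. A sketch in the modern print-function spelling (the helper name is made up):

```python
import sys

def print_to_file(path, obj):
    """Redirect stdout for one print, restoring it even if printing
    raises (e.g. when str(obj) itself throws an exception)."""
    original = sys.stdout
    sys.stdout = open(path, 'w')
    try:
        print(obj)
    finally:
        # Runs whether or not print() succeeded, so the console comes back.
        sys.stdout.close()
        sys.stdout = original
```

The c.l.py advice quoted above restores sys.stdout only on the success path, which is exactly the hole this closes.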



From tim_one at email.msn.com  Sat Aug 19 00:08:55 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 18:08:55 -0400
Subject: [Python-Dev] pthreads question: typedef ??? pthread_t and hacky   return statements
In-Reply-To: <006801c00960$944da200$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEKIHAAA.tim_one@email.msn.com>

[/F]
> the compiler doesn't warn about bits being lost -- it complained
> because the code was returning a pointer from a function declared
> to return a long integer.
>
> (explicitly casting the pthread_t to a long gets rid of the warning).

For the umpty-umpth time, the code with the simple cast to long is what was
there originally.  The convoluted casting was added later to stop "Alpha
OSF/1" compiler complaints.  Perhaps the compiler no longer complains,
though, or perhaps the one or two people who have tried it since don't have
a version of the compiler that cares about it.

> ...
> frankly, the code is just plain broken.  instead of adding even more dumb
> hacks, just fix it.  here's how it should be done:
>
>     return (long) pthread_self(); /* look! no variables! */

Fine by me, provided that works on all current platforms, and it's
understood that the function is inherently hosed anyway (the cast to long is
inherently unsafe, and we're still doing nothing to meet the promise in the
docs that this function returns a non-zero integer).

> or change
>
>  /* Jump through some hoops for Alpha OSF/1 */
>
> to
>
>  /* Jump through some hoops because Tim Peters wants us to ;-) */

Also fine by me, provided that works on all current platforms, and it's
understood that the function is inherently hosed anyway (the cast to long is
inherently unsafe, and we're still doing nothing to meet the promise in the
docs that this function returns a non-zero integer).





From tim_one at email.msn.com  Sat Aug 19 00:14:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 18 Aug 2000 18:14:25 -0400
Subject: [Python-Dev] [PEP 214] FW: sys.stndout syntax
In-Reply-To: <14749.45740.432586.615745@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEKJHAAA.tim_one@email.msn.com>

>     TP> For those of you who stopped reading comp.lang.python, note
>     TP> that this is *typical* of the advice users get in the absence
>     TP> of syntactic "print" sugar.

[Barry]
> Which is of course broken, if say, you print an object that has a
> str() that raises an exception.

Yes, and if you follow that thread on c.l.py, you'll find that it's also
typical for the suggestions to get more and more convoluted (for that and
other reasons).





From barry at scottb.demon.co.uk  Sat Aug 19 00:36:28 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Fri, 18 Aug 2000 23:36:28 +0100
Subject: [Python-Dev] Preventing recursion core dumps
In-Reply-To: <20000814094440.0BC7F303181@snelboot.oratrix.nl>
Message-ID: <000501c00964$c00e0de0$060210ac@private>


> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Jack Jansen
> Sent: 14 August 2000 10:45
> To: Guido van Rossum
> Cc: Vladimir Marangozov; Python core developers
> Subject: Re: [Python-Dev] Preventing recursion core dumps
> 
> 
> Isn't the solution to this problem to just implement PyOS_CheckStack() for 
> unix?

	And for Windows...

	I still want to control the recursion depth for other reasons than
	preventing crashes. Especially when I have embedded Python inside my
	app. (Currently I have to defend against a GPF under Windows when
	def x(): x() is called.)

		Barry




From barry at scottb.demon.co.uk  Sat Aug 19 00:50:39 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Fri, 18 Aug 2000 23:50:39 +0100
Subject: [Python-Dev] Terminal programs (was: Python-dev summary: Jul 1-15)
In-Reply-To: <20000718124144.M29590@lyra.org>
Message-ID: <000601c00966$bac6f890$060210ac@private>

I can second Tera Term Pro. It is one of the few VT100 emulators that gets the
emulation right. Many term programs get the emulation wrong, often
badly. If you do not have the docs for the VT series terminals, a developer will
not know the details of how the escape sequences should work, and apps will fail.

	BArry (Ex DEC VT expert)

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Greg Stein
> Sent: 18 July 2000 20:42
> To: python-dev at python.org
> Subject: [Python-Dev] Terminal programs (was: Python-dev summary: Jul
> 1-15)
> 
> 
> On Tue, Jul 18, 2000 at 10:13:21AM -0400, Andrew Kuchling wrote:
> > Thanks to everyone who made some suggestions.  The more minor
> > edits have been made, but I haven't added any of the threads I missed
> > because doing a long stretch of Emacs editing in this lame Windows terminal
> > program will drive me insane, so I just posted the summary to python-list.
> > 
> > <rant>How is it possible for Microsoft to not get a VT100-compatible
> > terminal program working correctly?  VT100s have been around since,
> > when, the 1970s?  Can anyone suggest a Windows terminal program that
> > *doesn't* suck dead bunnies through a straw?</rant>
> 
> yes.
> 
> I use "Tera Term Pro" with the SSH extensions. That gives me an excellent
> Telnet app, and it gives me SSH. I have never had a problem with it.
> 
> [ initially, there is a little tweakiness with creating the "known_hosts"
>   file, but you just hit "continue" and everything is fine after that. ]
> 
> Tera Term Pro can be downloaded from some .jp address. I think there is a
> 16-bit vs 32-bit program. I use the latter. The SSL stuff is located in Oz,
> me thinks.
> 
> I've got it on the laptop. Great stuff.
> 
> Cheers,
> -g
> 
> -- 
> Greg Stein, http://www.lyra.org/
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev
> 



From james at daa.com.au  Sat Aug 19 02:54:30 2000
From: james at daa.com.au (James Henstridge)
Date: Sat, 19 Aug 2000 08:54:30 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399DA8D3.70E85C58@lemburg.com>
Message-ID: <Pine.LNX.4.21.0008190846110.25020-100000@james.daa.com.au>

On Fri, 18 Aug 2000, M.-A. Lemburg wrote:

> What I'm missing in your doc-string is a reference as to how
> well gettext works together with Unicode. After all, i18n is
> among other things about international character sets.
> Have you done any experiments in this area ?

At the C level, gettext supports Unicode only to the extent that the
catalog is encoded in UTF-8.

As an example, since GTK (a GUI toolkit) is moving to Pango (a library that
allows multiple languages to be displayed at once), all the catalogs for GTK
programs will have to be re-encoded in UTF-8.

I don't know if it is worth adding explicit support to the python gettext
module though.
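
A minimal sketch of the point above, in today's Python for illustration:
at the C level gettext just hands back the catalog's raw bytes, so
"Unicode support" amounts to storing the catalog as UTF-8 and decoding
on the way out.

```python
# What a UTF-8 encoded catalog would hand back for a translated string:
translated_bytes = "caf\u00e9".encode("utf-8")

# The caller's side of "Unicode support": decode the bytes explicitly.
translated_text = translated_bytes.decode("utf-8")
print(translated_text)
```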

James.

-- 
Email: james at daa.com.au
WWW:   http://www.daa.com.au/~james/





From fdrake at beopen.com  Sat Aug 19 03:16:33 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 18 Aug 2000 21:16:33 -0400 (EDT)
Subject: [Python-Dev] [Patch #101055] Cookie.py
In-Reply-To: <14749.43088.855537.355621@anthem.concentric.net>
References: <200008181951.MAA30358@bush.i.sourceforge.net>
	<14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
	<20000818164837.A8423@kronos.cnri.reston.va.us>
	<20000818230401.A376@xs4all.nl>
	<14749.43088.855537.355621@anthem.concentric.net>
Message-ID: <14749.57329.966314.171906@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > I don't think that's true, because the file won't have the tag
 > information in it.  That could be a problem in and of itself, but I
 > dunno.

  The confusion isn't from the tags, but the dates; if the ,v was
created 2 years ago, asking for the python tree as of a year ago
(using -D <date>) will include the file, even though it wasn't part of
our repository then.  Asking for a specific tag (using -r <tag>) will
properly not include it unless there's a matching tag there.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From james at daa.com.au  Sat Aug 19 03:26:44 2000
From: james at daa.com.au (James Henstridge)
Date: Sat, 19 Aug 2000 09:26:44 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <14749.42747.411862.940207@anthem.concentric.net>
Message-ID: <Pine.LNX.4.21.0008190854480.25020-100000@james.daa.com.au>

On Fri, 18 Aug 2000, Barry A. Warsaw wrote:

> 
> Apologies for duplicates to those of you already on python-dev...
> 
> I've been working on merging all the various implementations of Python
> interfaces to the GNU gettext libraries.  I've worked from code
> contributed by Martin, James, and Peter.  I now have something that
> seems to work fairly well so I thought I'd update you all.
> 
> After looking at all the various wizzy and experimental stuff in these
> implementations, I opted for simplicity, mostly just so I could get my
> head around what was needed.  My goal was to build a fast C wrapper
> module around the C library, and to provide a pure Python
> implementation of an identical API for platforms without GNU gettext.

Sounds good.  Most of the experimental stuff in my module turned out to
not be very useful.  Having a simple gettext module plus your pyxgettext
script should be enough.

> 
> I started with Martin's libintlmodule, renamed it _gettext and cleaned
> up the C code a bit.  This provides gettext(), dgettext(),
> dcgettext(), textdomain(), and bindtextdomain() functions.  The
> gettext.py module imports these, and if it succeeds, it's done.
> 
> If that fails, then there's a bunch of code, mostly derived from
> Peter's fintl.py module, that reads the binary .mo files and does the
> look ups itself.  Note that Peter's module only supported the GNU
> gettext binary format, and that's all mine does too.  It should be
> easy to support other binary formats (Solaris?) by overriding one
> method in one class, and contributions are welcome.

I think support for the Solaris big-endian .mo format would probably be
a good idea.  It is not very difficult and doesn't really add to the
complexity.

> 
> James's stuff looked cool too, what I grokked of it :) but I think
> those should be exported as higher level features.  I didn't include
> the ability to write .mo files or the exported Catalog objects.  I
> haven't used the I18N services enough to know whether these are
> useful.

As I said above, most of that turned out not to be very useful.  Did you
include any of the language selection code in the last version of my
gettext module?  It gave behaviour very close to C gettext in this
respect.  It expands the locale name given by the user using the
locale.alias files found on the system, then decomposes that into
simpler forms.  For instance, if LANG=en_GB, then my gettext module would
search for catalogs under these names:
  ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']

This also allows things like expanding LANG=catalan to:
  ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
(provided the appropriate locale.alias files are found)
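
The decomposition step James describes can be sketched in a few lines.
This is an illustrative reconstruction, not his module's actual code, and
it deliberately omits the locale.alias lookup (which is what maps e.g.
"catalan" to "ca_ES.ISO-8859-1" before decomposition):

```python
def expand_locale(name):
    # Split a full locale name, language_TERRITORY.codeset, into parts,
    # then emit progressively simpler fallbacks, ending with "C".
    lang, _, codeset = name.partition(".")
    lang, _, territory = lang.partition("_")
    candidates = []
    if territory:
        if codeset:
            candidates.append("%s_%s.%s" % (lang, territory, codeset))
        candidates.append("%s_%s" % (lang, territory))
    if codeset:
        candidates.append("%s.%s" % (lang, codeset))
    candidates.append(lang)
    candidates.append("C")
    return candidates

print(expand_locale("en_GB.ISO8859-1"))
# -> ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
```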

If you missed the version I sent you, I can send it again.  It
stripped out a lot of the experimental code, giving a much simpler module.

> 
> I added one convenience function, gettext.install().  If you call
> this, it inserts the gettext.gettext() function into the builtins
> namespace as `_'.  You'll often want to do this, based on the I18N
> translatable strings marking conventions.  Note that importing gettext
> does /not/ install by default!

That sounds like a good idea that will make things a lot easier in the
common case.

> 
> And since (I think) you'll almost always want to call bindtextdomain()
> and textdomain(), you can pass the domain and localedir in as
> arguments to install.  Thus, the simple and quick usage pattern is:
> 
>     import gettext
>     gettext.install('mydomain', '/my/locale/dir')
> 
>     print _('this is a localized message')
> 
> I think it'll be easier to critique this stuff if I just check it in.
> Before I do, I still need to write up a test module and hack together
> docos.  In the meantime, here's the module docstring for gettext.py.
> Talk amongst yourselves. :)
> 
> -Barry

James.

-- 
Email: james at daa.com.au
WWW:   http://www.daa.com.au/~james/





From Vladimir.Marangozov at inrialpes.fr  Sat Aug 19 05:27:20 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 05:27:20 +0200 (CEST)
Subject: [Python-Dev] Adding insint() function
In-Reply-To: <14749.33584.683341.684523@cj42289-a.reston1.va.home.com> from "Fred L. Drake, Jr." at Aug 18, 2000 02:40:48 PM
Message-ID: <200008190327.FAA10001@python.inrialpes.fr>

Fred L. Drake, Jr. wrote:
> 
> 
> Vladimir Marangozov writes:
>  > So name it PyModule_AddConstant(module, name, constant),
>  > which fails with "can't add constant to module" err msg.
> 
>   Even better!  I expect there should be at least a couple of these;
> one for ints, one for strings.
> 

What about something like this (untested):

------------------------------------------------------------------------
int
PyModule_AddObject(PyObject *m, char *name, PyObject *o)
{
        /* Steals a reference to o, even on failure. */
        if (!PyModule_Check(m) || o == NULL) {
                Py_XDECREF(o);
                return -1;
        }
        if (PyDict_SetItemString(((PyModuleObject *)m)->md_dict, name, o)) {
                Py_DECREF(o);
                return -1;
        }
        Py_DECREF(o);
        return 0;
}

/* Note: these macros use the C identifier itself as the Python-level
   name (via #x or x). */
#define PyModule_AddConstant(m, x) \
        PyModule_AddObject(m, #x, PyInt_FromLong(x))

#define PyModule_AddString(m, x) \
        PyModule_AddObject(m, x, PyString_FromString(x))

------------------------------------------------------------------------
void 
initmymodule(void)
{
        int CONSTANT = 123456;
        char *STR__doc__  = "Vlad";

        PyObject *m = Py_InitModule4("mymodule"...);


 
        if (PyModule_AddString(m, STR__doc__) ||
            PyModule_AddConstant(m, CONSTANT) ||
            ... 
        {
            Py_FatalError("can't init mymodule");
        }
}           


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From cgw at fnal.gov  Sat Aug 19 05:55:21 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 18 Aug 2000 22:55:21 -0500 (CDT)
Subject: [Python-Dev] RE: compile.c: problem with duplicate argument bugfix
Message-ID: <14750.1321.978274.117748@buffalo.fnal.gov>

I'm catching up on the python-dev archives and see your message.

Note that I submitted a patch back in May to fix this same problem:

 http://www.python.org/pipermail/patches/2000-May/000638.html

There you will find a working patch, and a detailed discussion which
explains why your approach results in a core-dump.

I submitted this patch back before Python moved over to SourceForge;
there was a small amount of discussion about it, and then the word from
Guido was "I'm too busy to look at this now", and the patch got
dropped on the floor.




From tim_one at email.msn.com  Sat Aug 19 06:11:41 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 00:11:41 -0400
Subject: [Python-Dev] RE: [Patches] [Patch #101055] Cookie.py
In-Reply-To: <14749.38716.228254.649957@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELAHAAA.tim_one@email.msn.com>

Moving this over from patches to python-dev.

My 2 cents:  The primary job of a source control system is to maintain an
accurate and easily retrieved historical record of a project.  Tim
O'Malley's ,v file records the history of his project; Python's should
record the history of its own.  While a handful of people at CNRI have been able
to (or could, if they were of a common mind to) keep track of a handful of
exceptions in their heads, Python's CVS tree is available to the world now,
and approximately nobody looking at it will have any knowledge of this
discussion.  If they ask CVS for a date-based snapshot of the past, they're
using CVS for what it's *designed* for, and they should get back what they
asked for.

Have these kinds of tricks already been played in the CVS tree?  I'm mildly
concerned about that too, because whenever license or copyright issues are
in question, an accurate historical record is crucial ("Now, Mr. Kuchling,
isn't it true that you deliberately sabotaged the history of the Python
project in order to obscure your co-conspirators' theft of my client's
intellectual property?" <0.9 wink>).

let's-honor-the-past-by-keeping-it-honest-ly y'rs  - tim

> -----Original Message-----
> From: patches-admin at python.org [mailto:patches-admin at python.org]On
> Behalf Of Fred L. Drake, Jr.
> Sent: Friday, August 18, 2000 4:06 PM
> To: noreply at sourceforge.net
> Cc: akuchlin at mems-exchange.org; patches at python.org
> Subject: Re: [Patches] [Patch #101055] Cookie.py
>
>
>
> noreply at sourceforge.net writes:
>  > I have a copy of Tim O'Malley's ,v file (in order to preserve the
>  > version history).  I can either ask the SF admins to drop it into
>  > the right place in the CVS repository, but will that screw up the
>  > Python 1.6 tagging in some way?  (I would expect not, but who
>  > knows?)
>
>   That would have no effect on any of the Python tagging.  It's
> probably worthwhile making sure there are no tags in the ,v file, but
> that can be done after it gets dropped in place.
>   Now, Greg Stein will tell us that dropping this into place is the
> wrong thing to do.  What it *will* screw up is people asking for the
> state of Python at a specific date before the file was actually added;
> they'll get this file even for when it wasn't in the Python CVS tree.
> I can live with that, but we should make a policy decision for the
> Python tree regarding this sort of thing.
>
>  > The second option would be for me to retrace Cookie.py's
>  > development -- add revision 1.1, check in revision 1.2 with the
>  > right log message, check in revision 1.3, &c.  Obviously I'd prefer
>  > to not do this.
>
>   Don't -- it's not worth it.
>
>
>   -Fred
>
> --
> Fred L. Drake, Jr.  <fdrake at beopen.com>
> BeOpen PythonLabs Team Member
>
>
> _______________________________________________
> Patches mailing list
> Patches at python.org
> http://www.python.org/mailman/listinfo/patches





From cgw at fnal.gov  Sat Aug 19 06:31:06 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 18 Aug 2000 23:31:06 -0500 (CDT)
Subject: [Python-Dev] Eureka! (Re: test_fork fails --with-thread)
Message-ID: <14750.3466.34096.504552@buffalo.fnal.gov>


Last month there was a flurry of discussion, around 

http://www.python.org/pipermail/python-dev/2000-July/014208.html

about problems arising when combining threading and forking.  I've
been reading through the python-dev archives and as far as I can tell
this problem has not yet been resolved.

Well, I think I understand what's going on and I have a patch that
fixes the problem.

Contrary to some folklore, you *can* use fork() in threaded code; you
just have to be a bit careful about locks...

Rather than write up a long-winded explanation myself, allow me to
quote:

-----------------------------------------------------------------
from "man pthread_atfork":

       ... recall that fork(2) duplicates the whole memory space,
       including mutexes in their current locking state, but only the
       calling thread: other threads are not running in the child
       process. Thus, if a mutex is locked by a thread other than
       the thread calling fork, that  mutex  will  remain  locked
       forever in the child process, possibly blocking the execu-
       tion of the child process. 

and from http://www.lambdacs.com/newsgroup/FAQ.html#Q120

  Q120: Calling fork() from a thread 

  > Can I fork from within a thread ?

  Absolutely.

  > If that is not explicitly forbidden, then what happens to the
  > other threads in the child process ?

  There ARE no other threads in the child process. Just the one that
  forked. If your application/library has background threads that need
  to exist in a forked child, then you should set up an "atfork" child
  handler (by calling pthread_atfork) to recreate them. And if you use
  mutexes, and want your application/library to be "fork safe" at all,
  you also need to supply an atfork handler set to pre-lock all your
  mutexes in the parent, then release them in the parent and child
  handlers.  Otherwise, ANOTHER thread might have a mutex locked when
  one thread forks -- and because the owning thread doesn't exist in
  the child, the mutex could never be released. (And, worse, whatever
  data is protected by the mutex is in an unknown and inconsistent
  state.)

-------------------------------------------------------------------
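
The hazard quoted above is easy to demonstrate. Here is a small
POSIX-only sketch in modern Python (illustrative, not the 2000-era code
under discussion): a lock held by another thread at fork time is
duplicated into the child in its locked state, where no thread exists to
ever release it.

```python
import os
import threading
import time

lock = threading.Lock()

def holder():
    # Background thread grabs the lock and sits on it, simulating a
    # mutex that happens to be held at the moment another thread forks.
    with lock:
        time.sleep(1.0)

t = threading.Thread(target=holder)
t.start()
while not lock.locked():        # wait until the thread owns the lock
    time.sleep(0.01)

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child process: the holder thread does not exist here, yet the
    # duplicated lock is still locked.  A blocking acquire() would hang
    # forever, so probe it non-blockingly and report to the parent.
    os.write(w, b"1" if not lock.acquire(blocking=False) else b"0")
    os._exit(0)

os.waitpid(pid, 0)
child_saw_locked = os.read(r, 1) == b"1"
t.join()
print("lock inherited in locked state:", child_saw_locked)
```

This is exactly why the patch below re-creates the interpreter lock in
the child rather than trying to release the inherited copy.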

Below is a patch (I will also post this to SourceForge)

Notes on the patch:

1) I didn't make use of pthread_atfork, because I don't know how
   portable it is.  So, if somebody uses "fork" in a C extension there
   will still be trouble.

2) I'm deliberately not cleaning up the old lock before creating 
   the new one, because the lock destructors also do error-checking.
   It might be better to add a PyThread_reset_lock function to all the
   thread_*.h files, but I'm hesitant to do this because of the amount
   of testing required.


Patch:

Index: Modules/signalmodule.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Modules/signalmodule.c,v
retrieving revision 2.53
diff -c -r2.53 signalmodule.c
*** Modules/signalmodule.c	2000/08/03 02:34:44	2.53
--- Modules/signalmodule.c	2000/08/19 03:37:52
***************
*** 667,672 ****
--- 667,673 ----
  PyOS_AfterFork(void)
  {
  #ifdef WITH_THREAD
+ 	PyEval_ReInitThreads();
  	main_thread = PyThread_get_thread_ident();
  	main_pid = getpid();
  #endif
Index: Parser/intrcheck.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Parser/intrcheck.c,v
retrieving revision 2.39
diff -c -r2.39 intrcheck.c
*** Parser/intrcheck.c	2000/07/31 15:28:04	2.39
--- Parser/intrcheck.c	2000/08/19 03:37:54
***************
*** 206,209 ****
--- 206,212 ----
  void
  PyOS_AfterFork(void)
  {
+ #ifdef WITH_THREAD
+ 	PyEval_ReInitThreads();
+ #endif
  }
Index: Python/ceval.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/ceval.c,v
retrieving revision 2.191
diff -c -r2.191 ceval.c
*** Python/ceval.c	2000/08/18 19:53:25	2.191
--- Python/ceval.c	2000/08/19 03:38:06
***************
*** 142,147 ****
--- 142,165 ----
  		Py_FatalError("PyEval_ReleaseThread: wrong thread state");
  	PyThread_release_lock(interpreter_lock);
  }
+ 
+ /* This function is called from PyOS_AfterFork to ensure that newly
+    created child processes don't hold locks referring to threads which
+    are not running in the child process.  (This could also be done using
+    pthread_atfork mechanism, at least for the pthreads implementation) */
+ void
+ PyEval_ReInitThreads(void)
+ {
+ 	if (!interpreter_lock)
+ 		return;
+ 	/*XXX Can't use PyThread_free_lock here because it does too
+ 	  much error-checking.  Doing this cleanly would require
+ 	  adding a new function to each thread_*.h.  Instead, just
+ 	  create a new lock and waste a little bit of memory */
+ 	interpreter_lock = PyThread_allocate_lock();
+ 	PyThread_acquire_lock(interpreter_lock, 1);
+ 	main_thread = PyThread_get_thread_ident();
+ }
  #endif
  
  /* Functions save_thread and restore_thread are always defined so





From esr at thyrsus.com  Sat Aug 19 07:17:17 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 19 Aug 2000 01:17:17 -0400
Subject: [Python-Dev] Request for help w/ bsddb module
In-Reply-To: <20000817224632.A525@207-172-146-154.s154.tnt3.ann.va.dialup.rcn.com>; from amk@s154.tnt3.ann.va.dialup.rcn.com on Thu, Aug 17, 2000 at 10:46:32PM -0400
References: <20000817224632.A525@207-172-146-154.s154.tnt3.ann.va.dialup.rcn.com>
Message-ID: <20000819011717.L835@thyrsus.com>

A.M. Kuchling <amk at s154.tnt3.ann.va.dialup.rcn.com>:
> (Can this get done in time for Python 2.0?  Probably.  Can it get
> tested in time for 2.0?  Ummm....)

I have zero experience with writing C extensions, so I'm probably not
best deployed on this.  But I'm willing to help with testing.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

"As to the species of exercise, I advise the gun. While this gives
[only] moderate exercise to the body, it gives boldness, enterprise,
and independence to the mind.  Games played with the ball and others
of that nature, are too violent for the body and stamp no character on
the mind. Let your gun, therefore, be the constant companion to your
walks."
        -- Thomas Jefferson, writing to his teenaged nephew.



From tim_one at email.msn.com  Sat Aug 19 07:11:28 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 01:11:28 -0400
Subject: [Python-Dev] Who can make test_fork1 fail?
Message-ID: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>

Note that a patch has been posted to SourceForge that purports to solve
*some* thread vs fork problems:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101226&group_id=5470

Since nobody has made real progress on figuring out why test_fork1 fails on
some systems, would somebody who is able to make it fail please just try
this patch & see what happens?

understanding-is-better-than-a-fix-but-i'll-settle-for-the-latter-ly
    y'rs  - tim





From cgw at fnal.gov  Sat Aug 19 07:26:33 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Sat, 19 Aug 2000 00:26:33 -0500 (CDT)
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>
Message-ID: <14750.6793.342815.211141@buffalo.fnal.gov>

Tim Peters writes:

 > Since nobody has made real progress on figuring out why test_fork1 
 > fails on some systems,  would somebody who is able to make it fail
 > please just try this patch & see what happens?

Or try this program (based on Neil's example), which will fail almost
immediately unless you apply my patch:


import thread
import os, sys
import time

def doit(name):
    while 1:
        if os.fork()==0:
            print name, 'forked', os.getpid()
            os._exit(0)
        r = os.wait()

for x in range(50):
    name = 't%s'%x
    print 'starting', name
    thread.start_new_thread(doit, (name,))

time.sleep(300)




From tim_one at email.msn.com  Sat Aug 19 07:59:12 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 01:59:12 -0400
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <14750.6793.342815.211141@buffalo.fnal.gov>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELGHAAA.tim_one@email.msn.com>

[Tim]
> Since nobody has made real progress on figuring out why test_fork1
> fails on some systems,  would somebody who is able to make it fail
> please just try this patch & see what happens?

[Charles G Waldman]
> Or try this program (based on Neil's example), which will fail almost
> immediately unless you apply my patch:

Not "or", please, "both".  Without understanding the problem in detail, we
have no idea how many bugs are lurking here.  For example, Python allocates
at least two locks besides "the global lock", and "doing something" about
the latter alone may not help with all the failing test cases.  Note too
that the pthread_atfork docs were discussed earlier, and neither Guido nor I
were able to dream up a scenario that accounted for the details of most
failures people *saw*:  we both stumbled into another (and the same) failing
scenario, but it didn't match the stacktraces people posted (which showed
deadlocks/hangs in the *parent* thread; but at a fork, only the state of the
locks in the child "can" get screwed up).  The patch you posted should plug
the "deadlock in the child" scenario we did understand, but that scenario
didn't appear to be relevant in most cases.

The more info the better, let's just be careful to test *everything* that
failed before writing this off.





From ping at lfw.org  Sat Aug 19 08:43:18 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 18 Aug 2000 23:43:18 -0700 (PDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <20000818182246.V376@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10008182338190.416-100000@skuld.lfw.org>

My $0.02.

+1 on:    import <modname> as <localmodname>
          import <pkgname> as <localpkgname>

+1 on:    from <modname> import <symname> as <localsymname>
          from <pkgname> import <modname> as <localmodname>

+1 on:    from <pkgname>.<modname> import <symname> as <localsymname>
          from <pkgname>.<pkgname> import <modname> as <localmodname>


-1 on *either* meaning of:

          import <pkgname>.<modname> as <localname>

...as it's not clear what the correct meaning is.

If the intent of this last form is to import a sub-module of a
package into the local namespace with an aliased name, then you
can just say

          from <pkgname> import <modname> as <localname>

and the meaning is then quite clear.
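
A tiny sketch of that unambiguous spelling, using the stdlib os package
and its path submodule purely as stand-ins for <pkgname> and <modname>:

```python
# Bind a package's submodule to a local alias without touching the
# package name itself.
from os import path as localpath

import os
assert localpath is os.path     # the local name is just an alias
print(localpath.join("pkg", "mod"))
```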



-- ?!ng




From ping at lfw.org  Sat Aug 19 08:38:10 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 18 Aug 2000 23:38:10 -0700 (PDT)
Subject: [Python-Dev] Re: indexing, indices(), irange(), list.items()
In-Reply-To: <14749.18016.323403.295212@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008182336010.416-100000@skuld.lfw.org>

On Fri, 18 Aug 2000, Fred L. Drake, Jr. wrote:
>   I hadn't considered *not* using an "in" clause, but that is actually
> pretty neat.  I'd like to see all of these allowed; disallowing "for i
> indexing e in ...:" reduces the intended functionality substantially.

I like them all as well (and had previously assumed that the "indexing"
proposal included the "for i indexing sequence" case!).

While we're sounding off on the issue, i'm quite happy (+1) on both of:

          for el in seq:
          for i indexing seq:
          for i indexing el in seq:

and

          for el in seq:
          for i in indices(seq):
          for i, el in irange(seq):

with a slight preference for the former.


-- ?!ng




From loewis at informatik.hu-berlin.de  Sat Aug 19 09:25:20 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 19 Aug 2000 09:25:20 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399DA8D3.70E85C58@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com>
Message-ID: <200008190725.JAA26022@pandora.informatik.hu-berlin.de>

> What I'm missing in your doc-string is a reference as to how
> well gettext works together with Unicode. After all, i18n is
> among other things about international character sets.
> Have you done any experiments in this area ?

I have, to some degree. As others pointed out, gettext maps byte
arrays to byte arrays. However, in the GNU internationalization
project, it is convention to put an entry like

msgid ""
msgstr ""
"Project-Id-Version: GNU grep 2.4\n"
"POT-Creation-Date: 1999-11-13 11:33-0500\n"
"PO-Revision-Date: 1999-12-07 10:10+01:00\n"
"Last-Translator: Martin von L?wis <martin at mira.isdn.cs.tu-berlin.de>\n"
"Language-Team: German <de at li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=ISO-8859-1\n"
"Content-Transfer-Encoding: 8-bit\n"

into the catalog, which can be accessed as translation of the empty
string. It typically has a charset= element, which makes it possible to
determine what character set is used in the catalog. Of course, this is a
convention only, so it may not be present. If it is absent, and
conversion to Unicode is requested, it is probably a good idea to
assume UTF-8 (as James indicated, that will be the GNOME coded
character set for catalogs, for example).
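
A hedged sketch of reading that convention: pull the charset out of the
metadata entry (the "translation" of the empty msgid, as shown above),
falling back to UTF-8 when none is declared. The function name is
illustrative, not part of any real gettext API:

```python
# Header text as retrieved for the empty msgid (abbreviated from above).
header = (
    "Project-Id-Version: GNU grep 2.4\n"
    "MIME-Version: 1.0\n"
    "Content-Type: text/plain; charset=ISO-8859-1\n"
    "Content-Transfer-Encoding: 8-bit\n"
)

def catalog_charset(header, default="utf-8"):
    # Scan the RFC-822-style header lines for Content-Type's charset=
    # parameter; the default applies when the convention isn't followed.
    for line in header.splitlines():
        if line.lower().startswith("content-type:"):
            _, _, params = line.partition(";")
            for param in params.split(";"):
                key, _, value = param.strip().partition("=")
                if key.lower() == "charset":
                    return value.strip()
    return default

print(catalog_charset(header))
# -> ISO-8859-1
```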

In any case, I think it is a good idea to support retrieval of
translated strings as Unicode objects. I can think of two alternative
interfaces:

gettext.gettext(msgid, unicode=1)
#or
gettext.unigettext(msgid)

Of course, if applications install _, they'd know whether they want
unicode or byte strings, so _ would still take a single argument.

However, I don't think that this feature must be there at the first
checkin; I'd volunteer to work on a patch after Barry has installed
his code, and after I got some indication what the interface should
be.

Regards,
Martin



From tim_one at email.msn.com  Sat Aug 19 11:19:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 05:19:23 -0400
Subject: [Python-Dev] RE: Call for reviewer!
In-Reply-To: <B5BF7652.7B39%dgoodger@bigfoot.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOELJHAAA.tim_one@email.msn.com>

[David Goodger]
> I thought the "backwards compatibility" issue might be a sticking point ;>
> And I *can* see why.
>
> So, if I were to rework the patch to remove the incompatibility, would it
> fly or still be shot down?

I'm afraid "shot down" is the answer, but it's no reflection on the quality
of your work.  Guido simply doesn't want any enhancements of any kind to
getopt to be distributed in the standard library.  He made that very clear
in a conference call with the PythonLabs team, and as the interim 2.0
release manager du jour I pass that on in his absence.

This wasn't a surprise to me, as there's a very long history of rejected
getopt patches.  There are certainly users who *want* fancier getopt
capabilities!  The problem in making them happy is threefold:  (1) most
users don't (as the lack of positive response in this thread on Python-Dev
confirms); (2) users who do want them seem unable to agree on how they
should work (witness the bifurcation in your own patch set); and, (3) Guido
actively wants to keep the core getopt simple in the absence of both demand
for, and consensus on, more than it offers already.

This needn't mean your work is dead.  It will find users if you make it
available on the web, and even in the core Andrew Kuchling pointed out that
the Distutils folks are keen to have a fancier getopt for their own
purposes:

[Andrew]
> Note that there's Lib/distutils/fancy_getopt.py.  The docstring reads:
>
> Wrapper around the standard getopt module that provides the following
> additional features:
>  * short and long options are tied together
>  * options have help strings, so fancy_getopt could potentially
>    create a complete usage summary
>  * options set attributes of a passed-in object

So you might want to talk to Greg Ward about that too (Greg is the Distutils
Dood).

[back to David]
> ...
> BUT WAIT, THERE'S MORE! As part of the deal, you get a free
> test_getopt.py regression test module! Act now; vote +1! (Actually,
> you'll get that no matter what you vote. I'll remove the getoptdict-
> specific stuff and resubmit it if this patch is rejected.)

We don't have to ask Guido about *that*:  a test module for getopt would be
accepted with extreme (yet intangible <wink>) gratitude.  Thank you!





From mal at lemburg.com  Sat Aug 19 11:28:32 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 19 Aug 2000 11:28:32 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de>
Message-ID: <399E5340.B00811EF@lemburg.com>

Martin von Loewis wrote:
> 
> In any case, I think it is a good idea to support retrieval of
> translated strings as Unicode objects. I can think of two alternative
> interfaces:
> 
> gettext.gettext(msgid, unicode=1)
> #or
> gettext.unigettext(msgid)
> 
> Of course, if applications install _, they'd know whether they want
> unicode or byte strings, so _ would still take a single argument.

Hmm, if your catalogs are encoded in UTF-8 and use non-ASCII
chars then the traditional API would have to raise encoding
errors -- probably not a good idea since the errors would be
hard to deal with in large applications.

Perhaps the return value type of .gettext() should be given on
the .install() call: e.g. encoding='utf-8' would have .gettext()
return a string using UTF-8 while encoding='unicode' would have
it return Unicode objects.
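
A toy sketch of that idea, in today's Python: the encoding chosen when
the catalog is installed decides what gettext() hands back. The Catalog
class and its messages are purely illustrative, not the real module's
API:

```python
class Catalog:
    def __init__(self, messages, encoding="unicode"):
        self._messages = messages        # msgid -> translation, UTF-8 bytes
        self._encoding = encoding

    def gettext(self, msgid):
        raw = self._messages.get(msgid)
        if raw is None:
            return msgid                 # untranslated: echo the msgid
        if self._encoding == "unicode":
            return raw.decode("utf-8")   # hand back a Unicode object
        # Otherwise transcode the UTF-8 catalog entry to the requested
        # byte encoding, raising an error only at this boundary.
        return raw.decode("utf-8").encode(self._encoding)

messages = {"spam": "sp\u00e5m".encode("utf-8")}
uni = Catalog(messages, encoding="unicode")
lat = Catalog(messages, encoding="latin-1")
print(uni.gettext("spam"))
print(lat.gettext("spam"))
```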
 
[Which makes me think: perhaps I should add a new codec which
does pretty much the same as the unicode() call: convert the
input data to Unicode ?!]

> However, I don't think that this feature must be there at the first
> checkin; I'd volunteer to work on a patch after Barry has installed
> his code, and after I got some indication what the interface should
> be.

Right.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Sat Aug 19 11:37:28 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 19 Aug 2000 11:37:28 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
		<399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net>
Message-ID: <399E5558.C7B6029B@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     M> I know that gettext is a standard, but from a technology POV I
>     M> would have implemented this as a codec which can then be plugged
>     M> wherever l10n is needed, since strings have the new .encode()
>     M> method which could just as well be used to convert not only the
>     M> string into a different encoding, but also a different
>     M> language.  Anyway, just a thought...
> 
> That might be cool to play with, but I haven't done anything with
> Python's Unicode stuff (and painfully little with gettext too) so
> right now I don't see how they'd fit together.  My gut reaction is
> that gettext could be the lower level interface to
> string.encode(language).

Oh, codecs are not just about Unicode. Normal string objects
also have an .encode() method which can be used for these
purposes as well.
 
>     M> What I'm missing in your doc-string is a reference as to how
>     M> well gettext works together with Unicode. After all, i18n is
>     M> among other things about international character sets.
>     M> Have you done any experiments in this area ?
> 
> No, but I've thought about it, and I don't think the answer is good.
> The GNU gettext functions take and return char*'s, which probably
> isn't very compatible with Unicode.  _gettext therefore takes and
> returns PyStringObjects.

Martin mentioned the possibility of using UTF-8 for the
catalogs and then decoding them into Unicode. That should be
a reasonable way of getting .gettext() to talk Unicode :-)
 
> We could do better with the pure-Python implementation, and that might
> be a good reason to forgo any performance gains or platform-dependent
> benefits you'd get with _gettext.  Of course the trick is using the
> Unicode-unaware tools to build .mo files containing Unicode strings.
> I don't track GNU gettext developement close enough to know whether
> they are addressing Unicode issues or not.

Just dreaming a little here: I would prefer that we use some
form of XML to write the catalogs. XML comes with Unicode support
and tools for writing XML are available too. We'd only need
a way to transform XML into catalog files of some Python
specific platform independent format (should be possible to
create .mo files from XML too).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Sat Aug 19 11:44:19 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 19 Aug 2000 11:44:19 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <Pine.LNX.4.21.0008190854480.25020-100000@james.daa.com.au>
Message-ID: <399E56F3.53799860@lemburg.com>

James Henstridge wrote:
> 
> On Fri, 18 Aug 2000, Barry A. Warsaw wrote:
> 
> > I started with Martin's libintlmodule, renamed it _gettext and cleaned
> > up the C code a bit.  This provides gettext(), dgettext(),
> > dcgettext(), textdomain(), and bindtextdomain() functions.  The
> > gettext.py module imports these, and if it succeeds, it's done.
> >
> > If that fails, then there's a bunch of code, mostly derived from
> > Peter's fintl.py module, that reads the binary .mo files and does the
> > look ups itself.  Note that Peter's module only supported the GNU
> > gettext binary format, and that's all mine does too.  It should be
> > easy to support other binary formats (Solaris?) by overriding one
> > method in one class, and contributions are welcome.
> 
> I think support for Solaris big endian format .po files would probably be
> a good idea.  It is not very difficult and doesn't really add to the
> complexity.
> 
> >
> > James's stuff looked cool too, what I grokked of it :) but I think
> > those should be exported as higher level features.  I didn't include
> > the ability to write .mo files or the exported Catalog objects.  I
> > haven't used the I18N services enough to know whether these are
> > useful.
> 
> As I said above, most of that turned out not to be very useful.  Did you
> include any of the language selection code in the last version of my
> gettext module?  It gave behaviour very close to C gettext in this
> respect.  It expands the locale name given by the user using the
> locale.alias files found on the systems, then decomposes that into the
> simpler forms.  For instance, if LANG=en_GB, then my gettext module would
> search for catalogs by names:
>   ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
> 
> This also allows things like expanding LANG=catalan to:
>   ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
> (provided the appropriate locale.alias files are found)
> 
> If you missed that version I sent you, I can send it again.  It
> stripped out a lot of the experimental code, giving a much simpler module.

Uhm, can't you make some use of the new APIs in locale.py
for this ?

locale.py has a whole new set of encoding aware support for
LANG variables. It supports Unix and Windows (thanks to /F).
 
--
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mwh21 at cam.ac.uk  Sat Aug 19 11:52:00 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 19 Aug 2000 10:52:00 +0100
Subject: [Python-Dev] [Patch #101055] Cookie.py
In-Reply-To: "Fred L. Drake, Jr."'s message of "Fri, 18 Aug 2000 21:16:33 -0400 (EDT)"
References: <200008181951.MAA30358@bush.i.sourceforge.net> <14749.38716.228254.649957@cj42289-a.reston1.va.home.com> <20000818164837.A8423@kronos.cnri.reston.va.us> <20000818230401.A376@xs4all.nl> <14749.43088.855537.355621@anthem.concentric.net> <14749.57329.966314.171906@cj42289-a.reston1.va.home.com>
Message-ID: <m3itsxpohr.fsf@atrus.jesus.cam.ac.uk>

"Fred L. Drake, Jr." <fdrake at beopen.com> writes:

> Barry A. Warsaw writes:
>  > I don't think that's true, because the file won't have the tag
>  > information in it.  That could be a problem in and of itself, but I
>  > dunno.
> 
>   The confusion isn't from the tags, but the dates; if the ,v was
> created 2 years ago, asking for the python tree as of a year ago
> (using -D <date>) will include the file, even though it wasn't part of
> our repository then.  Asking for a specific tag (using -r <tag>) will
> properly not include it unless there's a matching tag there.

Is it feasible to hack the dates in the ,v file so that it looks like
all the revisions happened between say

2000.08.19.10.50.00

and

2000.08.19.10.51.00

?  This probably has problems too, but they will be more subtle...

Cheers,
M.

-- 
  That's why the smartest companies use Common Lisp, but lie about it
  so all their competitors think Lisp is slow and C++ is fast.  (This
  rumor has, however, gotten a little out of hand. :)
                                        -- Erik Naggum, comp.lang.lisp




From Vladimir.Marangozov at inrialpes.fr  Sat Aug 19 12:23:12 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 12:23:12 +0200 (CEST)
Subject: [Python-Dev] RE: Introducing memprof (was PyErr_NoMemory)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEJJHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 18, 2000 03:11:11 PM
Message-ID: <200008191023.MAA11071@python.inrialpes.fr>

Tim Peters wrote:
> 
> My bandwidth is consumed by 2.0 issues, so I won't look at it.  On the
> chance that Guido gets hit by a bus, though, and I have time to kill at his
> funeral, it would be nice to have it available on SourceForge.  Uploading a
> postponed patch sounds fine!

Done. Both patches are updated and relative to current CVS:

Optional object malloc:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101104&group_id=5470

Optional memory profiler:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101229&group_id=5470

Let me insist again that these are totally optional and off by default
(lately, a recurrent wish of mine regarding proposed features). 

Since they're optional, off by default, and constitute a solid base for
further work on mem + GC, and despite the tiny imperfections I see in
the profiler, I think I'm gonna push a bit, given that I'm pretty
confident in the code and that it barely affects anything.

So while I'm out of town, my mailbox would be happy to register any
opinions that the python-dev crowd might have (I'm thinking of Barry
and Neil Schemenauer in particular). Also, when BDFL is back from
Palo Alto, give him a chance to emit a statement (although I know
he's not a memory fan <wink>).

I'll *try* to find some time for docs and test cases, but I'd like to
get some preliminary feedback first (especially if someone cares to try
this on a 64-bit machine). That's it for now.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From fdrake at beopen.com  Sat Aug 19 14:44:00 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Sat, 19 Aug 2000 08:44:00 -0400 (EDT)
Subject: [Python-Dev] 'import as'
In-Reply-To: <Pine.LNX.4.10.10008182338190.416-100000@skuld.lfw.org>
References: <20000818182246.V376@xs4all.nl>
	<Pine.LNX.4.10.10008182338190.416-100000@skuld.lfw.org>
Message-ID: <14750.33040.285051.600113@cj42289-a.reston1.va.home.com>

Ka-Ping Yee writes:
 > If the intent of this last form is to import a sub-module of a
 > package into the local namespace with an aliased name, then you
 > can just say
 > 
 >           from <pkgname> import <modname> as <localname>

  I could live with this restriction, and this expression is
unambiguous (a good thing for Python!).


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From moshez at math.huji.ac.il  Sat Aug 19 15:54:21 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sat, 19 Aug 2000 16:54:21 +0300 (IDT)
Subject: [Python-Dev] Intent to document: Cookie.py
Message-ID: <Pine.GSO.4.10.10008191645250.7468-100000@sundial>

This is just a notice that I'm currently in the middle of documenting
Cookie. I should be finished sometime today. This is just to stop anyone
else from wasting his time -- if you got time to kill, you can write a
test suite <wink>

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From james at daa.com.au  Sat Aug 19 16:14:12 2000
From: james at daa.com.au (James Henstridge)
Date: Sat, 19 Aug 2000 22:14:12 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399E56F3.53799860@lemburg.com>
Message-ID: <Pine.LNX.4.21.0008192202520.25394-100000@james.daa.com.au>

On Sat, 19 Aug 2000, M.-A. Lemburg wrote:

> > As I said above, most of that turned out not to be very useful.  Did you
> > include any of the language selection code in the last version of my
> > gettext module?  It gave behaviour very close to C gettext in this
> > respect.  It expands the locale name given by the user using the
> > locale.alias files found on the systems, then decomposes that into the
> > simpler forms.  For instance, if LANG=en_GB, then my gettext module would
> > search for catalogs by names:
> >   ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
> > 
> > This also allows things like expanding LANG=catalan to:
> >   ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
> > (provided the appropriate locale.alias files are found)
> > 
> > If you missed that version I sent you, I can send it again.  It
> > stripped out a lot of the experimental code, giving a much simpler module.
> 
> Uhm, can't you make some use of the new APIs in locale.py
> for this ?
> 
> locale.py has a whole new set of encoding aware support for
> LANG variables. It supports Unix and Windows (thanks to /F).

Well, my gettext module can do a little more than that.  It also handles
the case of a number of locales listed in the LANG environment variable.
locale.py also doesn't look like it handles decomposition of a locale like
ll_CC.encoding@modifier into the other matching encodings in the correct
precedence order.

Maybe something to do this sort of decomposition would fit better in
locale.py, though.

This sort of thing is very useful for people who know more than one
language, and doesn't seem to be handled by plain setlocale().
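
The decomposition being discussed can be sketched in a few lines (a rough
illustration only; the real module also consults the systems' locale.alias
files and handles @modifiers, which is skipped here):

```python
def expand_locale(loc):
    # Break 'll_CC.encoding' into progressively less specific catalog
    # names, most specific first, always ending with the 'C' locale.
    lang, _, rest = loc.partition('_')
    cc, _, enc = rest.partition('.')
    names = []
    if cc and enc:
        names.append('%s_%s.%s' % (lang, cc, enc))
    if cc:
        names.append('%s_%s' % (lang, cc))
    if enc:
        names.append('%s.%s' % (lang, enc))
    names.extend([lang, 'C'])
    return names

print(expand_locale('en_GB.ISO8859-1'))
# ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
```

This reproduces the search order from the LANG=en_GB example above.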

>  
> --
> Marc-Andre Lemburg

James.

-- 
Email: james at daa.com.au
WWW:   http://www.daa.com.au/~james/





From fdrake at beopen.com  Sat Aug 19 16:14:27 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Sat, 19 Aug 2000 10:14:27 -0400 (EDT)
Subject: [Python-Dev] Intent to document: Cookie.py
In-Reply-To: <Pine.GSO.4.10.10008191645250.7468-100000@sundial>
References: <Pine.GSO.4.10.10008191645250.7468-100000@sundial>
Message-ID: <14750.38467.274688.274349@cj42289-a.reston1.va.home.com>

Moshe Zadka writes:
 > This is just a notice that I'm currently in the middle of documenting
 > Cookie. I should be finished sometime today. This is just to stop anyone
 > else from wasting his time -- if you got time to kill, you can write a
 > test suite <wink>

  Great, thanks!  Just check it in as libcookie.tex when you're ready,
and I'll check the markup for details.  Someone familiar with the
module can proof it for content.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From m.favas at per.dem.csiro.au  Sat Aug 19 16:24:18 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Sat, 19 Aug 2000 22:24:18 +0800
Subject: [Python-Dev] [Fwd: Who can make test_fork1 fail?]
Message-ID: <399E9892.35A1AC79@per.dem.csiro.au>

 
-------------- next part --------------
An embedded message was scrubbed...
From: Mark Favas <m.favas at per.dem.csiro.au>
Subject: Who can make test_fork1 fail?
Date: Sat, 19 Aug 2000 17:59:13 +0800
Size: 658
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000819/d63fb085/attachment-0001.eml>

From tim_one at email.msn.com  Sat Aug 19 19:34:28 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 13:34:28 -0400
Subject: [Python-Dev] New anal crusade
Message-ID: <LNBBLJKPBEHFEDALKOLCGEMCHAAA.tim_one@email.msn.com>

Has anyone tried compiling Python under gcc with

    -Wmissing-prototypes -Wstrict-prototypes

?  Someone on Python-Help just complained about warnings under that mode,
but it's unclear to me which version of Python they were talking about.





From Vladimir.Marangozov at inrialpes.fr  Sat Aug 19 20:01:52 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 20:01:52 +0200 (CEST)
Subject: [Python-Dev] New anal crusade
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEMCHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 19, 2000 01:34:28 PM
Message-ID: <200008191801.UAA17999@python.inrialpes.fr>

Tim Peters wrote:
> 
> Has anyone tried compiling Python under gcc with
> 
>     -Wmissing-prototypes -Wstrict-prototypes
> 
> ?  Someone on Python-Help just complained about warnings under that mode,
> but it's unclear to me which version of Python they were talking about.

Just tried it. Indeed, there are a couple of warnings. Wanna list?

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From tim_one at email.msn.com  Sat Aug 19 20:33:57 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 14:33:57 -0400
Subject: [Python-Dev] New anal crusade
In-Reply-To: <200008191801.UAA17999@python.inrialpes.fr>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEMGHAAA.tim_one@email.msn.com>

[Tim, on gcc -Wmissing-prototypes -Wstrict-prototypes]

[Vladimir]
> Just tried it. Indeed, there are a couple of warnings. Wanna list?

Not me personally, no.  The very subtle <wink> implied request in that was
that someone who *can* run gcc this way actually commit to doing so as a
matter of course, and fix warnings as they pop up.  But, in the absence of
joy, the occasional one-shot list is certainly better than nothing.





From Vladimir.Marangozov at inrialpes.fr  Sat Aug 19 20:58:18 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Sat, 19 Aug 2000 20:58:18 +0200 (CEST)
Subject: [Python-Dev] New anal crusade
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEMGHAAA.tim_one@email.msn.com> from "Tim Peters" at Aug 19, 2000 02:33:57 PM
Message-ID: <200008191858.UAA31550@python.inrialpes.fr>

Tim Peters wrote:
> 
> [Tim, on gcc -Wmissing-prototypes -Wstrict-prototypes]
> 
> [Vladimir]
> > Just tried it. Indeed, there are a couple of warnings. Wanna list?
> 
> Not me personally, no.  The very subtle <wink> implied request in that was
> that someone who *can* run gcc this way actually commit to doing so as a
> matter of course, and fix warnings as they pop up.  But, in the absence of
> joy, the occasional one-shot list is certainly better than nothing.

Sorry, I'm running after my plane (and I need to run fast <wink>) so please
find another volunteer. They're mostly ansification thingies, as expected.

Here's the list from the default ./configure, make, on Linux, so that
even someone without gcc can fix them <wink>:

----------------------------------------------------------------------------

pgenmain.c:43: warning: no previous prototype for `Py_Exit'
pgenmain.c:169: warning: no previous prototype for `PyOS_Readline'

myreadline.c:66: warning: no previous prototype for `PyOS_StdioReadline'

intrcheck.c:138: warning: no previous prototype for `PyErr_SetInterrupt'
intrcheck.c:191: warning: no previous prototype for `PyOS_FiniInterrupts'

fileobject.c:253: warning: function declaration isn't a prototype
fileobject.c:302: warning: function declaration isn't a prototype

floatobject.c:242: warning: no previous prototype for `PyFloat_AsStringEx'
floatobject.c:285: warning: no previous prototype for `PyFloat_AsString'

unicodeobject.c:548: warning: no previous prototype for `_PyUnicode_AsDefaultEncodedString'
unicodeobject.c:5142: warning: no previous prototype for `_PyUnicode_Init'
unicodeobject.c:5159: warning: no previous prototype for `_PyUnicode_Fini'

codecs.c:423: warning: no previous prototype for `_PyCodecRegistry_Init'
codecs.c:434: warning: no previous prototype for `_PyCodecRegistry_Fini'

frozenmain.c:34: warning: no previous prototype for `Py_FrozenMain'

getmtime.c:30: warning: no previous prototype for `PyOS_GetLastModificationTime'

import.c:2269: warning: no previous prototype for `initimp'

marshal.c:771: warning: no previous prototype for `PyMarshal_Init'

pyfpe.c:21: warning: no previous prototype for `PyFPE_dummy'

pythonrun.c: In function `run_pyc_file':
pythonrun.c:880: warning: function declaration isn't a prototype

dynload_shlib.c:49: warning: no previous prototype for `_PyImport_GetDynLoadFunc'

In file included from thread.c:125:
thread_pthread.h:205: warning: no previous prototype for `PyThread__exit_thread'

getopt.c:48: warning: no previous prototype for `getopt'

./threadmodule.c:389: warning: no previous prototype for `initthread'
./gcmodule.c:698: warning: no previous prototype for `initgc'
./regexmodule.c:661: warning: no previous prototype for `initregex'
./pcremodule.c:633: warning: no previous prototype for `initpcre'
./posixmodule.c:3698: warning: no previous prototype for `posix_strerror'
./posixmodule.c:5456: warning: no previous prototype for `initposix'
./signalmodule.c:322: warning: no previous prototype for `initsignal'
./_sre.c:2301: warning: no previous prototype for `init_sre'
./arraymodule.c:792: warning: function declaration isn't a prototype
./arraymodule.c:1511: warning: no previous prototype for `initarray'
./cmathmodule.c:412: warning: no previous prototype for `initcmath'
./mathmodule.c:254: warning: no previous prototype for `initmath'
./stropmodule.c:1205: warning: no previous prototype for `initstrop'
./structmodule.c:1225: warning: no previous prototype for `initstruct'
./timemodule.c:571: warning: no previous prototype for `inittime'
./operator.c:251: warning: no previous prototype for `initoperator'
./_codecsmodule.c:628: warning: no previous prototype for `init_codecs'
./unicodedata.c:277: warning: no previous prototype for `initunicodedata'
./ucnhash.c:107: warning: no previous prototype for `getValue'
./ucnhash.c:179: warning: no previous prototype for `initucnhash'
./_localemodule.c:408: warning: no previous prototype for `init_locale'
./fcntlmodule.c:322: warning: no previous prototype for `initfcntl'
./pwdmodule.c:129: warning: no previous prototype for `initpwd'
./grpmodule.c:136: warning: no previous prototype for `initgrp'
./errnomodule.c:74: warning: no previous prototype for `initerrno'
./mmapmodule.c:940: warning: no previous prototype for `initmmap'
./selectmodule.c:339: warning: no previous prototype for `initselect'
./socketmodule.c:2366: warning: no previous prototype for `init_socket'
./md5module.c:275: warning: no previous prototype for `initmd5'
./shamodule.c:550: warning: no previous prototype for `initsha'
./rotormodule.c:621: warning: no previous prototype for `initrotor'
./newmodule.c:205: warning: no previous prototype for `initnew'
./binascii.c:1014: warning: no previous prototype for `initbinascii'
./parsermodule.c:2637: warning: no previous prototype for `initparser'
./cStringIO.c:643: warning: no previous prototype for `initcStringIO'
./cPickle.c:358: warning: no previous prototype for `cPickle_PyMapping_HasKey'
./cPickle.c:2287: warning: no previous prototype for `Pickler_setattr'
./cPickle.c:4518: warning: no previous prototype for `initcPickle'

main.c:33: warning: function declaration isn't a prototype
main.c:79: warning: no previous prototype for `Py_Main'
main.c:292: warning: no previous prototype for `Py_GetArgcArgv'

getbuildinfo.c:34: warning: no previous prototype for `Py_GetBuildInfo'
./Modules/getbuildinfo.c:34: warning: no previous prototype for `Py_GetBuildInfo'


-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From guido at python.org  Fri Aug 18 21:13:14 2000
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 15:13:14 -0400
Subject: [Python-Dev] Re: os.path.commonprefix breakage
References: <LNBBLJKPBEHFEDALKOLCIEDNHAAA.tim_one@email.msn.com>
Message-ID: <011301c00a1f$927e7980$7aa41718@beopen.com>

I'm reading this thread off-line. I'm feeling responsible because I gave Skip
the green light. I admit that that was a mistake: I didn't recall the
purpose of commonprefix() correctly, and didn't refresh my memory by
reading the docs. I think Tim is right: as the docs say, the function was
*intended* to work on a character basis. This doesn't mean that it doesn't
belong in os.path! Note that os.path.dirname() will reliably return the common
directory, exactly because the trailing slash is kept.

I propose:

- restore the old behavior on all platforms
- add to the docs that to get the common directory you use dirname()
- add testcases that check that this works on all platforms

- don't add commonpathprefix(), because dirname() already does it
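
The dirname() trick works because commonprefix() compares character by
character while dirname() strips back to the last path separator (a quick
illustration; the paths are made up):

```python
import os.path

paths = ['/usr/lib/python', '/usr/local/lib/python']
prefix = os.path.commonprefix(paths)   # character-based comparison
print(prefix)                          # '/usr/l' -- not a real directory
print(os.path.dirname(prefix))         # '/usr'   -- the common directory
```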

Note that I've only read email up to Thursday morning. If this has been
superseded by more recent resolution, I'll reconsider; but if it's still up
in the air this should be it.

It doesn't look like the change made it into 1.6.

--Guido





From guido at python.org  Fri Aug 18 21:20:06 2000
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 15:20:06 -0400
Subject: [Python-Dev] PEP 214, extended print statement
References: <14747.22851.266303.28877@anthem.concentric.net><Pine.GSO.4.10.10008170915050.24783-100000@sundial><20000817083023.J376@xs4all.nl> <14747.63511.725610.771162@anthem.concentric.net>
Message-ID: <011401c00a1f$92db8da0$7aa41718@beopen.com>

I'm still reading my email off-line on the plane. I've now read PEP 214 and
think I'll reverse my opinion: it's okay. Barry, check it in! (And change
the SF PM status to 'Accepted'.) I think I'll start using it for error
messages: errors should go to stderr, but it's often inconvenient, so in
minor scripts instead of doing

  sys.stderr.write("Error: can't open %s\n" % filename)

I often write

  print "Error: can't open", filename

which is incorrect but more convenient. I can now start using

  print >>sys.stderr, "Error: can't open", filename

--Guido





From guido at python.org  Fri Aug 18 21:23:37 2000
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 15:23:37 -0400
Subject: [Python-Dev] PyErr_NoMemory
References: <200008171509.RAA20891@python.inrialpes.fr>
Message-ID: <011501c00a1f$939bd060$7aa41718@beopen.com>

> The current PyErr_NoMemory() function reads:
>
> PyObject *
> PyErr_NoMemory(void)
> {
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
>         else
>                 /* this will probably fail since there's no memory and hee,
>                    hee, we have to instantiate this class
>                 */
>                 PyErr_SetNone(PyExc_MemoryError);
>
>         return NULL;
> }
>
> thus overriding any previous exceptions unconditionally. This is a
> problem when the current exception already *is* PyExc_MemoryError,
> notably when we have a chain (cascade) of memory errors. It is a
> problem because the original memory error and eventually its error
> message is lost.
>
> I suggest to make this code look like:
>
> PyObject *
> PyErr_NoMemory(void)
> {
>         if (PyErr_ExceptionMatches(PyExc_MemoryError))
>                 /* already current */
>                 return NULL;
>
>         /* raise the pre-allocated instance if it still exists */
>         if (PyExc_MemoryErrorInst)
>                 PyErr_SetObject(PyExc_MemoryError, PyExc_MemoryErrorInst);
> ...
>
> If nobody sees a problem with this, I'm very tempted to check it in.
> Any objections?

+1. The cascading memory error seems a likely scenario indeed: something
returns a memory error, the error handling does some more stuff, and hits
more memory errors.

--Guido






From guido at python.org  Fri Aug 18 22:57:15 2000
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Aug 2000 16:57:15 -0400
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net>
Message-ID: <011601c00a1f$9923d460$7aa41718@beopen.com>

Paul Prescod wrote:

> I don't think of iterators as indexing in terms of numbers. Otherwise I
> could do this:
>
> >>> a={0:"zero",1:"one",2:"two",3:"three"}
> >>> for i in a:
> ...     print i
> ...
>
> So from a Python user's point of view, for-looping has nothing to do
> with integers. From a Python class/module creator's point of view it
> does have to do with integers. I wouldn't be either surprised nor
> disappointed if that changed one day.

Bingo!

I've long had an idea for generalizing 'for' loops using iterators. This is
more a Python 3000 thing, but I'll explain it here anyway because I think
it's relevant. Perhaps this should become a PEP?  (Maybe we should have a
series of PEPs with numbers in the 3000 range for Py3k ideas?)

The statement

  for <variable> in <object>: <block>

should translate into this kind of pseudo-code:

  # variant 1
  __temp = <object>.newiterator()
  while 1:
      try: <variable> = __temp.next()
      except ExhaustedIterator: break
      <block>

or perhaps (to avoid the relatively expensive exception handling):

  # variant 2
  __temp = <object>.newiterator()
  while 1:
      __flag, <variable> = __temp.next()
      if not __flag: break
      <block>

In variant 1, the next() method returns the next object or raises
ExhaustedIterator. In variant 2, the next() method returns a tuple (<flag>,
<variable>) where <flag> is 1 to indicate that <variable> is valid or 0 to
indicate that there are no more items available. I'm not crazy about the
exception, but I'm even less crazy about the more complex next() return
value (careful observers may have noticed that I'm rarely crazy about flag
variables :-).

Another argument for variant 1 is that variant 2 changes what <variable> is
after the loop is exhausted, compared to current practice: currently, it
keeps the last valid value assigned to it. Most likely, the next() method
returns None when the sequence is exhausted. It doesn't make a lot of sense
to require it to return the last item of the sequence -- there may not *be*
a last item, if the sequence is empty, and not all sequences make it
convenient to keep hanging on to the last item in the iterator, so it's best
to specify that next() returns (0, None) when the sequence is exhausted.
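
Variant 2's protocol can be spelled out directly (a toy sketch of the
proposed expansion, not actual interpreter code):

```python
class TupleIterator:
    # next() returns (flag, value); flag 0 means the sequence is done,
    # and the value slot is then None, as suggested above.
    def __init__(self, seq):
        self.seq = seq
        self.ind = 0
    def next(self):
        if self.ind >= len(self.seq):
            return (0, None)
        val = self.seq[self.ind]
        self.ind = self.ind + 1
        return (1, val)

# the variant-2 expansion of "for x in 'abc': result.append(x)"
result = []
it = TupleIterator('abc')
while 1:
    flag, x = it.next()
    if not flag:
        break
    result.append(x)
print(result)   # ['a', 'b', 'c']
```

Note that after the loop, x is None rather than the last item -- exactly
the behavioral change the paragraph above points out.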

(It would be tempting to suggest a variant 1a where instead of raising an
exception, next() returns None when the sequence is exhausted, but this
won't fly: you couldn't iterate over a list containing some items that are
None.)

Side note: I believe that the iterator approach could actually *speed up*
iteration over lists compared to today's code. This is because currently the
iteration index is a Python integer object that is stored on the stack.
This means an integer add with overflow check, allocation, and deallocation
on each iteration! But the iterator for lists (and other basic sequences)
could easily store the index as a C int! (As long as the sequence's length
is stored in an int, the index can't overflow.)

[Warning: thinking aloud ahead!]

Once we have the concept of iterators, we could support explicit use of them
as well. E.g. we could use a variant of the for statement to iterate over an
existing iterator:

  for <variable> over <iterator>: <block>

which would (assuming variant 1 above) translate to:

  while 1:
      try: <variable> = <iterator>.next()
      except ExhaustedIterator: break
      <block>

This could be used in situations where you have a loop iterating over the
first half of a sequence and a second loop that iterates over the remaining
items:

  it = something.newiterator()
  for x over it:
      if time_to_start_second_loop(): break
      do_something()
  for x over it:
      do_something_else()

Note that the second for loop doesn't reset the iterator -- it just picks up
where the first one left off! (Detail: the x that caused the break in the
first loop doesn't get dealt with in the second loop.)

I like the iterator concept because it allows us to do things lazily. There
are lots of other possibilities for iterators. E.g. mappings could have
several iterator variants to loop over keys, values, or both, in sorted or
hash-table order. Sequences could have an iterator for traversing them
backwards, and a few other ones for iterating over their index set (cf.
indices()) and over (index, value) tuples (cf. irange()). Files could be
their own iterator where the iterator is almost the same as readline()
except it raises ExhaustedIterator at EOF instead of returning "".  A tree
datastructure class could have an associated iterator class that maintains a
"finger" into the tree.

Hm, perhaps iterators could be their own iterator? Then if 'it' were an
iterator, it.newiterator() would return a reference to itself (not a copy).
Then we wouldn't even need the 'over' alternative syntax. Maybe the method
should be called iterator() then, not newiterator(), to avoid suggesting
anything about the newness of the returned iterator.

Other ideas:

- Iterators could have a backup() method that moves the index back (or
raises an exception if this feature is not supported, e.g. when reading data
from a pipe).

- Iterators over certain sequences might support operations on the
underlying sequence at the current position of the iterator, so that you
could iterate over a sequence and occasionally insert or delete an item (or
a slice).
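A sketch of the second idea; the class name, method names, and the exact insert/delete semantics are invented for illustration (nothing like this was pinned down in the discussion):

```python
class EditableIterator:
    """Hypothetical iterator that can edit the underlying sequence
    at its current position."""

    def __init__(self, seq):
        self.seq = seq
        self.ind = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.ind >= len(self.seq):
            raise StopIteration
        val = self.seq[self.ind]
        self.ind += 1
        return val

    def delete_current(self):
        # Remove the element most recently returned by __next__.
        self.ind -= 1
        del self.seq[self.ind]

    def insert(self, item):
        # Insert just after the element most recently returned,
        # without revisiting the new item on the next step.
        self.seq.insert(self.ind, item)
        self.ind += 1

data = [1, 2, 3, 4]
it = EditableIterator(data)
for x in it:
    if x % 2 == 0:
        it.delete_current()   # drop even items while iterating
assert data == [1, 3]
```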

Of course iterators also connect to generators. The basic list iterator
doesn't need coroutines or anything, it can be done as follows:

  class Iterator:
      def __init__(self, seq):
          self.seq = seq
          self.ind = 0
      def next(self):
          if self.ind >= len(self.seq): raise ExhaustedIterator
          val = self.seq[self.ind]
          self.ind += 1
          return val

so that <list>.iterator() could just return Iterator(<list>) -- at least
conceptually.
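Fleshing that class out into something runnable (ExhaustedIterator is assumed here to be a plain exception class; the message never defines it):

```python
class ExhaustedIterator(Exception):
    pass

class Iterator:
    def __init__(self, seq):
        self.seq = seq
        self.ind = 0

    def next(self):
        if self.ind >= len(self.seq):
            raise ExhaustedIterator
        val = self.seq[self.ind]
        self.ind += 1
        return val

# The proposed "for <variable> over <iterator>" loop, spelled out by hand:
result = []
it = Iterator([1, 2, 3])
while 1:
    try:
        x = it.next()
    except ExhaustedIterator:
        break
    result.append(x)

assert result == [1, 2, 3]
```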

But for other data structures the amount of state needed might be
cumbersome. E.g. a tree iterator needs to maintain a stack, and it's much
easier to code that using a recursive Icon-style generator than by using an
explicit stack. On the other hand, I remember reading an article a while ago
(in Dr. Dobbs?) by someone who argued (in a C++ context) that such recursive
solutions are very inefficient, and that an explicit stack (1) is really not
that hard to code, and (2) gives much more control over the memory and time
consumption of the code. On the third hand, some forms of iteration really
*are* expressed much more clearly using recursion. On the fourth hand, I
disagree with Matthias ("Dr. Scheme") Felleisen about recursion as the root
of all iteration. Anyway, I believe that iterators (as explained above) can
be useful even if we don't have generators (in the Icon sense, which I
believe means coroutine-style).
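For comparison, here is what a tree iterator with an explicit stack might look like; a sketch using today's __iter__/__next__/StopIteration protocol rather than the next()/ExhaustedIterator one discussed above:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

class TreeIterator:
    """In-order traversal with an explicit stack -- the iterator's
    'finger' into the tree -- instead of recursion."""

    def __init__(self, root):
        self.stack = []
        self._descend(root)

    def _descend(self, node):
        # Push node and all its leftmost descendants.
        while node is not None:
            self.stack.append(node)
            node = node.left

    def __iter__(self):
        return self

    def __next__(self):
        if not self.stack:
            raise StopIteration
        node = self.stack.pop()
        self._descend(node.right)
        return node.value

#       2
#      / \
#     1   3
root = Node(2, Node(1), Node(3))
assert list(TreeIterator(root)) == [1, 2, 3]
```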

--Guido





From amk at s222.tnt1.ann.va.dialup.rcn.com  Sat Aug 19 23:15:53 2000
From: amk at s222.tnt1.ann.va.dialup.rcn.com (A.M. Kuchling)
Date: Sat, 19 Aug 2000 17:15:53 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
Message-ID: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>

The handwritten BSDDB3 module has just started actually functioning.
It now runs the dbtest.py script without core dumps or reported
errors.  Code is at ftp://starship.python.net/pub/crew/amk/new/ ; grab
db.py and the most recent _bsddb.c.

I started from Greg Smith's 3.1.x port of Robin Dunn's module.  You'll
have to struggle a bit with integrating it into Greg's package and
compiling it (replacing db.py with my version, and modifying Setup to
compile _bsddb.c).  I haven't integrated it more, because I'm not sure
how we want to proceed with it.  Robin/Greg, do you want to continue
to maintain the package?  ...in which case I'll contribute the code to one
or both of you.  Or, I can take over maintaining the package, or we
can try to get the module into Python 2.0, but with the feature freeze
well advanced, I'm doubtful that it'll get in.

Still missing: Cursor objects still aren't implemented -- Martin, if
you haven't started yet, let me know and I'll charge ahead with them
tomorrow.  Docstrings.  More careful type-checking of function
objects.  Finally, general tidying, re-indenting, and a careful
reading to catch any stupid errors that I made.  

--amk




From esr at thyrsus.com  Sat Aug 19 23:37:27 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Sat, 19 Aug 2000 17:37:27 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>; from amk@s222.tnt1.ann.va.dialup.rcn.com on Sat, Aug 19, 2000 at 05:15:53PM -0400
References: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
Message-ID: <20000819173727.A4015@thyrsus.com>

A.M. Kuchling <amk at s222.tnt1.ann.va.dialup.rcn.com>:
> The handwritten BSDDB3 module has just started actually functioning.
> It now runs the dbtest.py script without core dumps or reported
> errors.  Code is at ftp://starship.python.net/pub/crew/amk/new/ ; grab
> db.py and the most recent _bsddb.c.

I see I wasn't on the explicit addressee list.  But if you can get any good
use out of another pair of hands, I'm willing.

> I started from Greg Smith's 3.1.x port of Robin Dunn's module.  You'll
> have to struggle a bit with integrating it into Greg's package and
> compiling it (replacing db.py with my version, and modifying Setup to
> compile _bsddb.c).  I haven't integrated it more, because I'm not sure
> how we want to proceed with it.  Robin/Greg, do you want to continue
> to maintain the package?  ...in which case I'll contribute the code to one
> or both of you.  Or, I can take over maintaining the package, or we
> can try to get the module into Python 2.0, but with the feature freeze
> well advanced, I'm doubtful that it'll get in.

I'm +1 for slipping this one in under the wire, if it matters.

I'm not just randomly pushing a feature here -- I think the multiple-reader/
one-writer atomicity guarantees this will give us will be extremely important
for CGI programmers, who often need a light-duty database facility with exactly
this kind of concurrency guarantee.
-- 
		<a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a>

The people of the various provinces are strictly forbidden to have in their
possession any swords, short swords, bows, spears, firearms, or other types
of arms. The possession of unnecessary implements makes difficult the
collection of taxes and dues and tends to foment uprisings.
        -- Toyotomi Hideyoshi, dictator of Japan, August 1588



From martin at loewis.home.cs.tu-berlin.de  Sat Aug 19 23:52:56 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 19 Aug 2000 23:52:56 +0200
Subject: [Python-Dev] Re: BSDDB 3 module now somewhat functional
In-Reply-To: 	<20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
	(amk@s222.tnt1.ann.va.dialup.rcn.com)
References: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
Message-ID: <200008192152.XAA00691@loewis.home.cs.tu-berlin.de>

> Still missing: Cursor objects still aren't implemented -- Martin, if
> you haven't started yet, let me know and I'll charge ahead with them
> tomorrow.

No, I haven't started yet, so go ahead.

Regards,
Martin



From trentm at ActiveState.com  Sun Aug 20 01:59:40 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sat, 19 Aug 2000 16:59:40 -0700
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sat, Aug 19, 2000 at 01:11:28AM -0400
References: <LNBBLJKPBEHFEDALKOLCIELEHAAA.tim_one@email.msn.com>
Message-ID: <20000819165940.A21864@ActiveState.com>

On Sat, Aug 19, 2000 at 01:11:28AM -0400, Tim Peters wrote:
> Note that a patch has been posted to SourceForge that purports to solve
> *some* thread vs fork problems:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101226&group_id=5470
> 
> Since nobody has made real progress on figuring out why test_fork1 fails on
> some systems, would somebody who is able to make it fail please just try
> this patch & see what happens?
> 

That patch *seems* to fix it for me. As before, I can get test_fork to fail
intermittently (i.e. it doesn't hang every time I run it) without the patch
and cannot get it to hang at all with the patch.

Would you like me to run and provide the instrumented output that I showed
last time this topic came up?


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Sun Aug 20 02:45:32 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 19 Aug 2000 20:45:32 -0400
Subject: [Python-Dev] Who can make test_fork1 fail?
In-Reply-To: <20000819165940.A21864@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEMPHAAA.tim_one@email.msn.com>

[Trent Mick, on
//sourceforge.net/patch/?func=detailpatch&patch_id=101226&group_id=5470
]
> That patch *seems* to fix it for me. As before, I can get test_fork
> to fail intermittently (i.e. it doesn't hang every time I run it) without
> the patch and cannot get it to hang at all with the patch.

Thanks a bunch, Trent!  (That's a Minnesotaism -- maybe that's far enough
North that it sounds natural to you, though <wink>.)

> Would you like me to run and provide the instrumented output that
> I showed last time this topic came up?

Na, it's enough to know that the problem appears to have gone away, and
since this was-- in some sense --the simplest of the test cases (just one
fork), it provides the starkest contrast we're going to get between the
behaviors people are seeing and my utter failure to account for them.  OTOH,
we knew the global lock *should be* a problem here (just not the problem we
actually saw!), and Charles is doing the right kind of thing to make that go
away.

I still encourage everyone to run all the tests that failed on all the SMP
systems they can get hold of, before and after the patch.  I'll talk with
Guido about it too (the patch is still a bit too hacky to put out in the
world with pride <wink>).





From dgoodger at bigfoot.com  Sun Aug 20 06:53:05 2000
From: dgoodger at bigfoot.com (David Goodger)
Date: Sun, 20 Aug 2000 00:53:05 -0400
Subject: [Python-Dev] Re: Call for reviewer!
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOELJHAAA.tim_one@email.msn.com>
Message-ID: <B5C4DC70.7D6C%dgoodger@bigfoot.com>

on 2000-08-19 05:19, Tim Peters (tim_one at email.msn.com) wrote:
> I'm afraid "shot down" is the answer...

That's too bad. Thanks for the 'gentle' explanation. This 'crusader' knows
when to give up on a lost cause. ;>

>> test_getopt.py
> 
> We don't have to ask Guido abut *that*:  a test module for getopt would be
> accepted with extreme (yet intangible <wink>) gratitude.  Thank you!

Glad to contribute. You'll find a regression test module for the current
getopt.py as revised patch #101110. I based it on some existing Lib/test/
modules, but haven't found the canonical example or instruction set. Is
there one?

FLASH: Tim's been busy. Just received the official rejections & acceptance
of test_getopt.py.

-- 
David Goodger    dgoodger at bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)




From moshez at math.huji.ac.il  Sun Aug 20 07:19:28 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 20 Aug 2000 08:19:28 +0300 (IDT)
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <20000819173727.A4015@thyrsus.com>
Message-ID: <Pine.GSO.4.10.10008200817510.13651-100000@sundial>

On Sat, 19 Aug 2000, Eric S. Raymond wrote:

> I'm +1 for slipping this one in under the wire, if it matters.
> 
> I'm not just randomly pushing a feature here -- I think the multiple-reader/
> one-writer atomicity guarantees this will give us will be extremely important
> for CGI programmers, who often need a light-duty database facility with exactly
> this kind of concurrency guarantee.

I think this is a job for PAL (aka PEP 206) -- PAL hasn't feature-frozen
yet, which makes it the perfect place to get stuff for 2.0.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From greg at electricrain.com  Sun Aug 20 08:04:51 2000
From: greg at electricrain.com (Gregory P. Smith)
Date: Sat, 19 Aug 2000 23:04:51 -0700
Subject: [Python-Dev] Re: BSDDB 3 module now somewhat functional
In-Reply-To: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>; from amk@s222.tnt1.ann.va.dialup.rcn.com on Sat, Aug 19, 2000 at 05:15:53PM -0400
References: <20000819171553.A11095@207-172-111-222.s222.tnt1.ann.va.dialup.rcn.com>
Message-ID: <20000819230451.A22669@yyz.electricrain.com>

On Sat, Aug 19, 2000 at 05:15:53PM -0400, A.M. Kuchling wrote:
> The handwritten BSDDB3 module has just started actually functioning.
> It now runs the dbtest.py script without core dumps or reported
> errors.  Code is at ftp://starship.python.net/pub/crew/amk/new/ ; grab
> db.py and the most recent _bsddb.c.
> 
> I started from Greg Smith's 3.1.x port of Robin Dunn's module.  You'll
> have to struggle a bit with integrating it into Greg's package and
> compiling it (replacing db.py with my version, and modifying Setup to
> compile _bsddb.c).  I haven't integrated it more, because I'm not sure
> how we want to proceed with it.  Robin/Greg, do you want to continue
> to maintain the package?  ...in which case I'll contribute the code to one
> or both of you.  Or, I can take over maintaining the package, or we
> can try to get the module into Python 2.0, but with the feature freeze
> well advanced, I'm doubtful that it'll get in.

I just did a quick scan over your code and liked what I saw.  I was
thinking it'd be cool if someone did this (write a non-SWIG version based
on mine) but knew I wouldn't have time right now.  Thanks!  Note that I
haven't tested your module or looked closely to see if anything looks odd.

I'm happy to keep maintaining the bsddb3 module until it makes its way
into a future Python version.  I don't have a lot of time for it, but send
me updates/fixes as you make them (I'm not on python-dev at the moment).
If your C version is working well, I'll make a new release sometime next
week after I test it a bit more in our application on a few platforms
(linux, linux alpha and win98).

> Still missing: Cursor objects still aren't implemented -- Martin, if
> you haven't started yet, let me know and I'll charge ahead with them
> tomorrow.  Docstrings.  More careful type-checking of function
> objects.  Finally, general tidying, re-indenting, and a careful
> reading to catch any stupid errors that I made.  

It looked like you were keeping the same interface (good!), so I
recommend simply stealing the docstrings from mine if you haven't already
and reviewing them to make sure they make sense.  I pretty much pasted in
trimmed-down forms of the docs that come with BerkeleyDB, along with some
of what Robin had from before.

Also, unless someone actually tests the Recno format databases, should
we even bother to include support for them?  I haven't tested them at all.
If nothing else, writing some Recno tests for dbtest.py would be a good
idea before including that support.

Greg

-- 
Gregory P. Smith   gnupg/pgp: http://suitenine.com/greg/keys/
                   C379 1F92 3703 52C9 87C4  BE58 6CDA DB87 105D 9163



From tim_one at email.msn.com  Sun Aug 20 08:11:52 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 20 Aug 2000 02:11:52 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <Pine.GSO.4.10.10008200817510.13651-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCEENHHAAA.tim_one@email.msn.com>

[esr]
> I'm +1 for slipping this one in under the wire, if it matters.

[Moshe Zadka]
> I think this is a job for PAL (aka PEP 206) -- PAL hasn't feature-frozen
> yet, which makes it the perfect place to get stuff for 2.0.

I may be an asshole, but I'm not an idiot:  note that the planned release
date (PEP 200) for 2.0b1 is a week from Monday.  And since there is only one
beta cycle planned, *nothing* goes in except bugfixes after 2.0b1 is
released.  Guido won't like that, but he's not the release manager, and when
I'm replaced by the real release manager on Tuesday, he'll agree with me on
this and Guido will get noogied to death if he opposes us <wink>.

So whatever tricks you want to try to play, play 'em fast.

not-that-i-believe-the-beta-release-date-will-be-met-anyway-
    but-i-won't-admit-that-until-after-it-slips-ly y'rs  - tim





From moshez at math.huji.ac.il  Sun Aug 20 08:17:12 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sun, 20 Aug 2000 09:17:12 +0300 (IDT)
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEENHHAAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008200915410.13651-100000@sundial>

On Sun, 20 Aug 2000, Tim Peters wrote:

> [esr]
> > I'm +1 for slipping this one in under the wire, if it matters.
> 
> [Moshe Zadka]
> > I think this is a job for PAL (aka PEP 206) -- PAL hasn't feature-frozen
> > yet, which makes it the perfect place to get stuff for 2.0.
> 
> I may be an asshole, but I'm not an idiot:  note that the planned release
> date (PEP 200) for 2.0b1 is a week from Monday.  And since there is only one
> beta cycle planned, *nothing* goes in except bugfixes after 2.0b1 is
> released. 

But that's irrelevant. The sumo interpreter will be a different release,
and will probably be based on 2.0 for core. So what if it's only available
a month after 2.0 is ready?

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tim_one at email.msn.com  Sun Aug 20 08:24:31 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 20 Aug 2000 02:24:31 -0400
Subject: [Python-Dev] BSDDB 3 module now somewhat functional
In-Reply-To: <Pine.GSO.4.10.10008200915410.13651-100000@sundial>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENIHAAA.tim_one@email.msn.com>

[Moshe]
> But that's irrelevant. The sumo interpreter will be a different release,
> and will probably be based on 2.0 for core. So what if it's only available
> only a month after 2.0 is ready?

Like I said, I may be an idiot, but I'm not an asshole -- have fun!





From sjoerd at oratrix.nl  Sun Aug 20 11:22:28 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Sun, 20 Aug 2000 11:22:28 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: Your message of Fri, 18 Aug 2000 20:42:34 +0200.
             <000001c00945$a8d37e40$f2a6b5d4@hagrid> 
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> 
            <000001c00945$a8d37e40$f2a6b5d4@hagrid> 
Message-ID: <20000820092229.F3A2D31047C@bireme.oratrix.nl>

Why don't we handle graminit.c/graminit.h the same way as we currently
handle configure/config.h.in?  The person making a change to
configure.in is responsible for running autoconf and checking in the
result.  Similarly, the person making a change to Grammar should
regenerate graminit.c/graminit.h and check in the result.  In fact,
that is exactly what happened in this particular case.  I'd say there
isn't really a reason to create graminit.c/graminit.h automatically
whenever you do a build of Python.  Even worse, when you have a
read-only copy of the source and you build in a different directory
(and that used to be supported), the current setup will break since it
tries to overwrite Python/graminit.c and Include/graminit.h.

I'd say, go back to the old situation, possibly with a simple Makefile
rule added so that you *can* build graminit, but one that is not used
automatically.

On Fri, Aug 18 2000 "Fredrik Lundh" wrote:

> sjoerd wrote:
> 
> > The problem was that because of your (I think it was you :-) earlier
> > change to have a Makefile in Grammar, I had an old graminit.c lying
> > around in my build directory.  I don't build in the source directory
> > and the changes for a Makefile in Grammar resulted in a file
> > graminit.c in the wrong place.
> 
> is the Windows build system updated to generate new
> graminit files if the Grammar is updated?
> 
> or is python development a unix-only thingie these days?
> 
> </F>
> 
> 

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From thomas at xs4all.net  Sun Aug 20 11:41:14 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 11:41:14 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000820092229.F3A2D31047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Sun, Aug 20, 2000 at 11:22:28AM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl>
Message-ID: <20000820114114.A4797@xs4all.nl>

On Sun, Aug 20, 2000 at 11:22:28AM +0200, Sjoerd Mullender wrote:

> I'd say, go back to the old situation, possibly with a simple Makefile
> rule added so that you *can* build graminit, but one that is not used
> automatically.

That *is* the old situation. The rule of building graminit as a matter of
course was added for convenience with the patches that change the grammar.
Now that most have been checked in, and we've seen what havoc and confusion
building graminit automatically can cause, I'm all for going back to that too ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From loewis at informatik.hu-berlin.de  Sun Aug 20 12:51:16 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sun, 20 Aug 2000 12:51:16 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399E5340.B00811EF@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de> <399E5340.B00811EF@lemburg.com>
Message-ID: <200008201051.MAA05259@pandora.informatik.hu-berlin.de>

> Hmm, if your catalogs are encoded in UTF-8 and use non-ASCII
> chars then the traditional API would have to raise encoding
> errors

I don't know what you mean by "traditional" here. The gettext.gettext
implementation in Barry's patch will return the UTF-8 encoded byte
string, instead of raising encoding errors - no code conversion takes
place.

> Perhaps the return value type of .gettext() should be given on
> the .install() call: e.g. encoding='utf-8' would have .gettext()
> return a string using UTF-8 while encoding='unicode' would have
> it return Unicode objects.

No. You should have the option of either receiving byte strings, or
Unicode strings. If you want byte strings, you should get the ones
appearing in the catalog.

Regards,
Martin



From loewis at informatik.hu-berlin.de  Sun Aug 20 12:59:28 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sun, 20 Aug 2000 12:59:28 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <399E5558.C7B6029B@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net>
		<399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net> <399E5558.C7B6029B@lemburg.com>
Message-ID: <200008201059.MAA05292@pandora.informatik.hu-berlin.de>

> Martin mentioned the possibility of using UTF-8 for the
> catalogs and then decoding them into Unicode. That should be
> a reasonable way of getting .gettext() to talk Unicode :-)

You misunderstood. Using UTF-8 in the catalogs is independent of
using Unicode. You can have the catalogs in UTF-8, and still access
the catalog as byte strings, and you can have the catalog in Latin-1,
and convert that to unicode strings upon retrieval.

> Just dreaming a little here: I would prefer that we use some
> form of XML to write the catalogs. 

Well, I hope that won't happen. We have excellent tools dealing with
the catalogs, and I see no value in replacing

#: src/grep.c:183 src/grep.c:200 src/grep.c:300 src/grep.c:408 src/kwset.c:184
#: src/kwset.c:190
msgid "memory exhausted"
msgstr "Virtueller Speicher erschöpft."

with

<entry>
  <sourcelist>
    <source file="src/grep.c" line="183"/>
    <source file="src/grep.c" line="200"/>
    <source file="src/grep.c" line="300"/>
    <source file="src/grep.c" line="408"/>
    <source file="src/kwset.c" line="184"/>
    <source file="src/kwset.c" line="190"/>
  </sourcelist>
  <msgid>memory exhausted</msgid>
  <msgstr>Virtueller Speicher erschöpft.</msgstr>
</entry>

> XML comes with Unicode support and tools for writing XML are
> available too.

Well, the catalog files also "come with unicode support", meaning that
you can write them in UTF-8 if you want; and tools could be easily
extended to process UCS-2 input if anybody desires.

OTOH, the tools for writing po files are much more advanced than any
XML editor I know.

> We'd only need a way to transform XML into catalog files of some
> Python specific platform independent format (should be possible to
> create .mo files from XML too).

Or we could convert the XML catalogs in Uniforum-style catalogs, and
then use the existing tools.

Regards,
Martin



From sjoerd at oratrix.nl  Sun Aug 20 13:26:05 2000
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Sun, 20 Aug 2000 13:26:05 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: Your message of Sun, 20 Aug 2000 11:41:14 +0200.
             <20000820114114.A4797@xs4all.nl> 
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl> 
            <20000820114114.A4797@xs4all.nl> 
Message-ID: <20000820112605.BF61431047C@bireme.oratrix.nl>

Here's another pretty serious bug.  Can you verify that this time it
isn't my configuration?

Try this:

from encodings import cp1006, cp1026

I get the error
ImportError: cannot import name cp1026
but if I try to import the two modules separately I get no error.

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From thomas at xs4all.net  Sun Aug 20 15:51:17 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 15:51:17 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
In-Reply-To: <20000820112605.BF61431047C@bireme.oratrix.nl>; from sjoerd@oratrix.nl on Sun, Aug 20, 2000 at 01:26:05PM +0200
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl> <20000820114114.A4797@xs4all.nl> <20000820112605.BF61431047C@bireme.oratrix.nl>
Message-ID: <20000820155117.C4797@xs4all.nl>

On Sun, Aug 20, 2000 at 01:26:05PM +0200, Sjoerd Mullender wrote:

> Here's another pretty serious bug.  Can you verify that this time it
> isn't my configurations?

It isn't your config, this is a genuine bug. I'll be checking in a quick fix
in a few minutes, and start thinking about a test case that would've caught
this.

> Try this:
> from encodings import cp1006, cp1026

> I get the error
> ImportError: cannot import name cp1026
> but if I try to import the two modules separately I get no error.

Yes. 'find_from_args' wasn't trying hard enough to find out what the
arguments to an import were. Previously, all it had to do was scan the
bytecodes immediately following an 'IMPORT_NAME' for IMPORT_FROM statements,
and record its names. However, now that IMPORT_FROM generates a STORE, it
stops looking after the first IMPORT_FROM. This worked fine for normal
object-retrieval imports, which don't use the list generated by
find_from_args, but not for dynamic loading tricks such as 'encodings' uses.

The fix I made was to unconditionally jump over 5 bytes after an
IMPORT_FROM, rather than 2 (2 for the oparg, 1 for the next instruction (a
STORE), and two more for the oparg of the STORE).

This does present a problem for the proposed change in semantics for the
'as' clause, though. If we allow all expressions that yield valid l-values
in import-as and from-import-as, we can't easily find out what the import
arguments are by examining the future bytecode stream. (It might be
possible, if we changed the POP_TOP to, say, END_IMPORT, that pops the
module from the stack and can be used to end the search for import
arguments.)

However, I find this hackery a bit appalling :) Why are we constructing a
list of import arguments at runtime, from compile-time information, if all
that information is available at compile time too ? And more easily so ?
What would break if I made IMPORT_NAME retrieve the from-arguments from a
list, which is built on the stack by com_import_stmt ? Or is there a more
convenient way of passing a variable list of strings to a bytecode ? It
won't really affect performance, since find_from_args is called for all
imports anyway.
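The compile-time information Thomas mentions is easy to see: in today's CPython the dis module exposes it directly. This is an illustration of the idea, not the 2.0-era fix itself:

```python
import dis

code = compile("from os import path, sep", "<example>", "exec")
# Each imported name shows up as the argument of an IMPORT_FROM opcode,
# immediately followed by the STORE instruction that binds it.
names = [ins.argval for ins in dis.get_instructions(code)
         if ins.opname == "IMPORT_FROM"]
assert names == ["path", "sep"]
```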

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From effbot at telia.com  Sun Aug 20 16:34:31 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sun, 20 Aug 2000 16:34:31 +0200
Subject: [Python-Dev] [ Patch #101238 ] PyOS_CheckStack for Windows
References: <20000815104723.A27306@ActiveState.com> <005401c006ec$a95a74a0$f2a6b5d4@hagrid>
Message-ID: <019801c00ab3$c59e8d20$f2a6b5d4@hagrid>

I've prepared a patch based on the PyOS_CheckStack code
I posted earlier:

http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101238&group_id=5470

among other things, this fixes the recursive __repr__/__str__
problems under Windows.  it also makes it easier to use Python
with non-standard stack sizes (e.g. when embedding).

some additional notes:

- the new function was added to pythonrun.c, mostly because
it was already declared in pythonrun.h...

- for the moment, it's only enabled if you're using MSVC.  if any-
one here knows if structured exceptions are widely supported by
Windows compilers, let me know.

- it would probably be a good idea to make it an __inline function
(and put the entire function in the header file instead), but I don't
recall if MSVC does the right thing in that case, and I haven't had
time to try it out just yet...

enjoy /F




From sjoerd.mullender at oratrix.com  Sun Aug 20 16:54:43 2000
From: sjoerd.mullender at oratrix.com (Sjoerd Mullender)
Date: Sun, 20 Aug 2000 16:54:43 +0200
Subject: [Python-Dev] serious bug in new import X as Y code
References: <20000818094239.A3A1931047C@bireme.oratrix.nl> <20000818161745.U376@xs4all.nl> <20000818150639.6685C31047C@bireme.oratrix.nl> <000001c00945$a8d37e40$f2a6b5d4@hagrid> <20000820092229.F3A2D31047C@bireme.oratrix.nl> <20000820114114.A4797@xs4all.nl> <20000820112605.BF61431047C@bireme.oratrix.nl> <20000820155117.C4797@xs4all.nl>
Message-ID: <399FF133.63B83A52@oratrix.com>

This seems to have done the trick.  Thanks.

Thomas Wouters wrote:
> 
> On Sun, Aug 20, 2000 at 01:26:05PM +0200, Sjoerd Mullender wrote:
> 
> > Here's another pretty serious bug.  Can you verify that this time it
> > isn't my configurations?
> 
> It isn't your config, this is a genuine bug. I'll be checking in a quick fix
> in a few minutes, and start thinking about a test case that would've caught
> this.
> 
> > Try this:
> > from encodings import cp1006, cp1026
> 
> > I get the error
> > ImportError: cannot import name cp1026
> > but if I try to import the two modules separately I get no error.
> 
> Yes. 'find_from_args' wasn't trying hard enough to find out what the
> arguments to an import were. Previously, all it had to do was scan the
> bytecodes immediately following an 'IMPORT_NAME' for IMPORT_FROM statements,
> and record its names. However, now that IMPORT_FROM generates a STORE, it
> stops looking after the first IMPORT_FROM. This worked fine for normal
> object-retrieval imports, which don't use the list generated by
> find_from_args, but not for dynamic loading tricks such as 'encodings' uses.
> 
> The fix I made was to unconditionally jump over 5 bytes, after an
> IMPORT_FROM, rather than 2 (2 for the oparg, 1 for the next instruction (a
> STORE) and two more for the oparg for the STORE)
> 
> This does present a problem for the proposed change in semantics for the
> 'as' clause, though. If we allow all expressions that yield valid l-values
> in import-as and from-import-as, we can't easily find out what the import
> arguments are by examining the future bytecode stream. (It might be
> possible, if we changed the POP_TOP to, say, END_IMPORT, that pops the
> module from the stack and can be used to end the search for import
> arguments.
> 
> However, I find this hackery a bit appalling :) Why are we constructing a
> list of import arguments at runtime, from compile-time information, if all
> that information is available at compile time too ? And more easily so ?
> What would break if I made IMPORT_NAME retrieve the from-arguments from a
> list, which is built on the stack by com_import_stmt ? Or is there a more
> convenient way of passing a variable list of strings to a bytecode ? It
> won't really affect performance, since find_from_args is called for all
> imports anyway.
> 
> --
> Thomas Wouters <thomas at xs4all.net>
> 
> Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev



From nascheme at enme.ucalgary.ca  Sun Aug 20 17:53:28 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Sun, 20 Aug 2000 09:53:28 -0600
Subject: [Python-Dev] Re: Eureka! (Re: test_fork fails --with-thread)
In-Reply-To: <14750.3466.34096.504552@buffalo.fnal.gov>; from Charles G Waldman on Fri, Aug 18, 2000 at 11:31:06PM -0500
References: <14750.3466.34096.504552@buffalo.fnal.gov>
Message-ID: <20000820095328.A25233@keymaster.enme.ucalgary.ca>

On Fri, Aug 18, 2000 at 11:31:06PM -0500, Charles G Waldman wrote:
> Well, I think I understand what's going on and I have a patch that
> fixes the problem.

Yes!  With this patch my nasty little tests run successfully on both
single and dual CPU Linux machines.  It's still a mystery how the child
can screw up the parent after the fork.  Oh well.

  Neil



From trentm at ActiveState.com  Sun Aug 20 19:15:52 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Sun, 20 Aug 2000 10:15:52 -0700
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <B5C4DC70.7D6C%dgoodger@bigfoot.com>; from dgoodger@bigfoot.com on Sun, Aug 20, 2000 at 12:53:05AM -0400
References: <LNBBLJKPBEHFEDALKOLCOELJHAAA.tim_one@email.msn.com> <B5C4DC70.7D6C%dgoodger@bigfoot.com>
Message-ID: <20000820101552.A24181@ActiveState.com>

On Sun, Aug 20, 2000 at 12:53:05AM -0400, David Goodger wrote:
> Glad to contribute. You'll find a regression test module for the current
> getopt.py as revised patch #101110. I based it on some existing Lib/test/
> modules, but haven't found the canonical example or instruction set. Is
> there one?

I don't really think there is; it's kind of folklore. There are some good
examples to follow in the existing test suite. Skip Montanaro wrote a README
for writing tests and using the test suite (.../dist/src/Lib/test/README).

Really, the testing framework is extremely simple, which is one of its
benefits. There is not a whole lot of depth left to grok once you have
written one test_XXX.py.


Trent

-- 
Trent Mick
TrentM at ActiveState.com



From tim_one at email.msn.com  Sun Aug 20 19:46:35 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 20 Aug 2000 13:46:35 -0400
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <20000820101552.A24181@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>

[David Goodger]
> Glad to contribute. You'll find a regression test module for the current
> getopt.py as revised patch #101110. I based it on some existing
> Lib/test/ modules, but haven't found the canonical example or instruction
> set. Is there one?

[Trent Mick]
> I don't really think there is. Kind of folk lore. There are some good
> examples to follow in the existing test suite. Skip Montanaro
> wrote a README for writing tests and using the test suite
> (.../dist/src/Lib/test/README).
>
> Really, the testing framework is extremely simple. Which is one of its
> benefits. There is not a whole lot of depth that one has not
> grokked just by writing one test_XXX.py.

What he said.  The README explains it well, and I think the only thing you
(David) missed in your patch was the need to generate the "expected output"
file via running regrtest once with -g on the new test case.

I'd add one thing:  people use "assert" *way* too much in the test suite.
It's usually much better to just print what you got, and rely on regrtest's
output-comparison to complain if what you get isn't what you expected.  The
primary reason for this is that asserts "vanish" when Python is run
using -O, so running regrtest in -O mode simply doesn't test *anything*
caught by an assert.

A compromise is to do both:

    print what_i_expected, what_i_got
    assert what_i_expected == what_i_got
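Spelled out as a tiny runnable sketch (the test name and values here are made up), the compromise looks like:

```python
# Sketch of the print-then-assert compromise for a regrtest-style test.
# The printed line is checked by regrtest's output comparison, so the
# test still catches failures under "python -O", where asserts vanish.
def test_upper():
    expected = "SPAM"
    got = "spam".upper()
    print(expected, got)    # compared against the expected-output file
    assert expected == got  # quick, descriptive failure when asserts are on

test_upper()
```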

In Python 3000, I expect we'll introduce a new binary infix operator

    !!!

so that

    print x !!! y

both prints x and y and asserts that they're equal <wink>.





From mal at lemburg.com  Sun Aug 20 19:57:32 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sun, 20 Aug 2000 19:57:32 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <Pine.LNX.4.21.0008192202520.25394-100000@james.daa.com.au>
Message-ID: <39A01C0C.E6BA6CCB@lemburg.com>

James Henstridge wrote:
> 
> On Sat, 19 Aug 2000, M.-A. Lemburg wrote:
> 
> > > As I said above, most of that turned out not to be very useful.  Did you
> > > include any of the language selection code in the last version of my
> > > gettext module?  It gave behaviour very close to C gettext in this
> > > respect.  It expands the locale name given by the user using the
> > > locale.alias files found on the systems, then decomposes that into the
> > > simpler forms.  For instance, if LANG=en_GB, then my gettext module would
> > > search for catalogs by names:
> > >   ['en_GB.ISO8859-1', 'en_GB', 'en.ISO8859-1', 'en', 'C']
> > >
> > > This also allows things like expanding LANG=catalan to:
> > >   ['ca_ES.ISO-8859-1', 'ca_ES', 'ca.ISO-8859-1', 'ca', 'C']
> > > (provided the appropriate locale.alias files are found)
> > >
> > > If you missed that that version I sent you I can send it again.  It
> > > stripped out a lot of the experimental code giving a much simpler module.
> >
> > Uhm, can't you make some use of the new APIs in locale.py
> > for this ?
> >
> > locale.py has a whole new set of encoding aware support for
> > LANG variables. It supports Unix and Windows (thanks to /F).
> 
> Well, it can do a little more than that.  It will also handle the case of
> a number of locales listed in the LANG environment variable.  It also
> doesn't look like it handles decomposition of a locale like
> ll_CC.encoding at modifier into other matching encodings in the correct
> precedence order.
> 
> Maybe something to do this sort of decomposition would fit better in
> locale.py though.
> 
> This sort of thing is very useful for people who know more than one
> language, and doesn't seem to be handled by plain setlocale()

I'm not sure I can follow you here: are you saying that your
support in gettext.py does more or less than what's present
in locale.py ?

If it's more, I think it would be a good idea to add those
parts to locale.py.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bckfnn at worldonline.dk  Sun Aug 20 20:34:53 2000
From: bckfnn at worldonline.dk (Finn Bock)
Date: Sun, 20 Aug 2000 18:34:53 GMT
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>
Message-ID: <39a024b1.5036672@smtp.worldonline.dk>

[Tim Peters]

>I'd add one thing:  people use "assert" *way* too much in the test suite.

I'll second that.

>It's usually much better to just print what you got, and rely on regrtest's
>output-comparison to complain if what you get isn't what you expected.  The
>primary reason for this is that asserts "vanish" when Python is run
>using -O, so running regrtest in -O mode simply doesn't test *anything*
>caught by an assert.

It can also stop a test script from being usable with JPython: a
difference that is acceptable (perhaps by necessity) will prevent the
remaining tests from executing.

regards,
finn



From amk1 at erols.com  Sun Aug 20 22:44:02 2000
From: amk1 at erols.com (A.M. Kuchling)
Date: Sun, 20 Aug 2000 16:44:02 -0400
Subject: [Python-Dev] ANN: BerkeleyDB 2.9.0 (experimental)
Message-ID: <200008202044.QAA01842@207-172-111-161.s161.tnt1.ann.va.dialup.rcn.com>

This is an experimental release of a rewritten version of the
BerkeleyDB module by Robin Dunn.  Starting from Greg Smith's version,
which supports the 3.1.x versions of Sleepycat's DB library, I've
translated the SWIG wrapper into hand-written C code.  

Warnings: this module is experimental, so don't put it to production
use.  I've only compiled the code with the current Python CVS tree;
there might be glitches with 1.5.2 which will need to be fixed.
Cursor objects are implemented, but completely untested; methods might
not work or might dump core.  (DB and DbEnv objects *are* tested, and
seem to work fine.)

Grab the code from this FTP directory: 
     ftp://starship.python.net/pub/crew/amk/new/

Please report problems to me.  Thanks!

--amk



From thomas at xs4all.net  Sun Aug 20 22:49:18 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 22:49:18 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: <200008202002.NAA13530@delerium.i.sourceforge.net>; from noreply@sourceforge.net on Sun, Aug 20, 2000 at 01:02:32PM -0700
References: <200008202002.NAA13530@delerium.i.sourceforge.net>
Message-ID: <20000820224918.D4797@xs4all.nl>

On Sun, Aug 20, 2000 at 01:02:32PM -0700, noreply at sourceforge.net wrote:

> Summary: Allow all assignment expressions after 'import something as'

> Date: 2000-Aug-19 21:29
> By: twouters
> 
> Comment:
> This absurdly simple patch (4 lines changed in 2 files) turns 'import-as'
> and 'from-import-as' into true assignment expressions: the name given
> after 'as' can be any expression that is a valid l-value:

> >>> from sys import version_info as (maj,min,pl,relnam,relno)          
> >>> maj,min,pl,relnam,relno
> (2, 0, 0, 'beta', 1)

[snip other examples]

> This looks so natural, I would almost treat this as a bugfix instead of a
> new feature ;)

> -------------------------------------------------------
> 
> Date: 2000-Aug-20 20:02
> By: nowonder
> 
> Comment:
> Looks fine. Works as I expect. Doesn't break old code. I hope Guido likes
> it (assigned to gvanrossum).

Actually, it *will* break old code. Try using some of those tricks on, say,
'encodings', like so (excessively convoluted to prove a point ;):

>>> x = {}
>>> from encodings import cp1006 as x[oct(1)], cp1026 as x[hex(20)]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ImportError: cannot import name cp1026

I've another patch waiting which I'll upload after some cleanup, which
circumvents this. The problem is that find_from_args is having a hard time
figuring out how 'import' is being called, exactly. So instead, I create a
list *before* calling import, straight from the information available at
compile-time. (It's only a list because it is currently a list, I would
prefer to make it a tuple instead but I donno if it would break stuff)

That patch is necessary to be able to support this new behaviour, but I
think it's worth checking in even if this patch is rejected -- it speeds up
pystone ! :-) Basically it moves the logic of finding out what the import
arguments are to com_import_stmt() (at compiletime), rather than the
'IMPORT_NAME' bytecode (at runtime).

The only downside is that it adds all 'from-import' arguments to the
co_consts list (as PyString objects) as well as where they already are, the
co_names list (as normal strings). I don't think that's a high price to pay,
myself, and mayhaps the actual storage use could be reduced by making the
one point to the data of the other. Not sure if it's worth it, though.
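As it happens, this is roughly where CPython eventually ended up: the from-list is built at compile time and pushed as a constant tuple before IMPORT_NAME consumes it. A quick look with the dis module on a modern Python shows the from-names appearing as a single compile-time constant:

```python
import dis

# Compile a from-import and inspect the generated code object: the
# argument names show up as one constant tuple built at compile time,
# which IMPORT_NAME consumes from the stack at runtime.
code = compile("from encodings import cp1006, cp1026", "<test>", "exec")
dis.dis(code)

# The from-list lives in co_consts, just as described above.
assert ("cp1006", "cp1026") in code.co_consts
```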

I've just uploaded the other patch, it can be found here:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101243&group_id=5470

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From dgoodger at bigfoot.com  Sun Aug 20 23:08:05 2000
From: dgoodger at bigfoot.com (David Goodger)
Date: Sun, 20 Aug 2000 17:08:05 -0400
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call
	for reviewer!)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com>
Message-ID: <B5C5C0F4.7E0D%dgoodger@bigfoot.com>

on 2000-08-20 13:46, Tim Peters (tim_one at email.msn.com) wrote:
> What he said.  The README explains it well...

Missed that. Will read it & update the test module.

> In Python 3000, I expect we'll introduce a new binary infix operator
> 
> !!!

Looking forward to more syntax in future releases of Python. I'm sure you'll
lead the way, Tim.
-- 
David Goodger    dgoodger at bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)




From thomas at xs4all.net  Sun Aug 20 23:17:30 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 20 Aug 2000 23:17:30 +0200
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for reviewer!)
In-Reply-To: <B5C5C0F4.7E0D%dgoodger@bigfoot.com>; from dgoodger@bigfoot.com on Sun, Aug 20, 2000 at 05:08:05PM -0400
References: <LNBBLJKPBEHFEDALKOLCKEOGHAAA.tim_one@email.msn.com> <B5C5C0F4.7E0D%dgoodger@bigfoot.com>
Message-ID: <20000820231730.F4797@xs4all.nl>

On Sun, Aug 20, 2000 at 05:08:05PM -0400, David Goodger wrote:
> on 2000-08-20 13:46, Tim Peters (tim_one at email.msn.com) wrote:

> > In Python 3000, I expect we'll introduce a new binary infix operator
> > 
> > !!!
> 
> Looking forward to more syntax in future releases of Python. I'm sure you'll
> lead the way, Tim.

I think you just witnessed some of Tim's legendary wit ;) I suspect most
Python programmers would shoot Guido, Tim, whoever wrote the patch that
added the new syntax, and then themselves, if that ever made it into Python
;)

Good-thing-I-can't-legally-carry-guns-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Mon Aug 21 01:22:10 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Sun, 20 Aug 2000 23:22:10 +0000
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment 
 expressions after 'import something as'
References: <200008202002.NAA13530@delerium.i.sourceforge.net> <20000820224918.D4797@xs4all.nl>
Message-ID: <39A06822.5360596D@nowonder.de>

Thomas Wouters wrote:
> 
> > Date: 2000-Aug-20 20:02
> > By: nowonder
> >
> > Comment:
> > Looks fine. Works as I expect. Doesn't break old code. I hope Guido likes
> > it (assigned to gvanrossum).
> 
> Actually, it *will* break old code. Try using some of those tricks on, say,
> 'encodings', like so (excessively convoluted to prove a point ;):

Actually, I meant that it won't break any existing code (because there
is no code using 'import x as y' yet).

Although I don't understand your example (because the word "encoding"
makes me want to stick my head into the sand), I am fine with your shift
of the list building to compile time. When I realized what IMPORT_NAME
does at runtime, I wondered if this was really necessary.

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From nowonder at nowonder.de  Mon Aug 21 01:54:09 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Sun, 20 Aug 2000 23:54:09 +0000
Subject: [Python-Dev] OT: How to send CVS update mails?
Message-ID: <39A06FA1.C5EB34D1@nowonder.de>

Sorry, but I cannot figure out how to make SourceForge send
updates whenever there is a CVS commit (checkins mailing
list).

I need this for another project, so if someone remembers
how to do this, please tell me.

off-topic-and-terribly-sorri-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From greg at cosc.canterbury.ac.nz  Mon Aug 21 03:50:49 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 21 Aug 2000 13:50:49 +1200 (NZST)
Subject: [Python-Dev] Re: os.path.commonprefix breakage
In-Reply-To: <14749.26431.198802.970572@cj42289-a.reston1.va.home.com>
Message-ID: <200008210150.NAA15911@s454.cosc.canterbury.ac.nz>

"Fred L. Drake, Jr." <fdrake at beopen.com>:

> Let's accept (some variant) or Skip's desired functionality as
> os.path.splitprefix(); The result can be (prefix, [list of suffixes]).

+1

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From guido at beopen.com  Mon Aug 21 06:37:46 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 20 Aug 2000 23:37:46 -0500
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: Your message of "Sun, 20 Aug 2000 22:49:18 +0200."
             <20000820224918.D4797@xs4all.nl> 
References: <200008202002.NAA13530@delerium.i.sourceforge.net>  
            <20000820224918.D4797@xs4all.nl> 
Message-ID: <200008210437.XAA22075@cj20424-a.reston1.va.home.com>

> > Summary: Allow all assignment expressions after 'import something as'

-1.  Hypergeneralization.

By the way, notice that

  import foo.bar

places 'foo' in the current namespace, after ensuring that 'foo.bar'
is defined.

What should

  import foo.bar as spam

assign to spam?  I hope foo.bar, not foo.

I note that the CVS version doesn't support this latter syntax at all;
it should be fixed, even though the same effect can be had with

  from foo import bar as spam
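For what it's worth, this is how later Pythons settled the question, shown here with a real package in place of the hypothetical foo.bar/spam: the 'as' form binds the submodule, not the top-level package.

```python
import os.path        # the plain dotted form binds the top-level name 'os'
import os.path as p   # the 'as' form binds the submodule itself

# 'p' is the submodule, analogous to 'spam' getting foo.bar rather than foo.
assert p is os.path
```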

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Mon Aug 21 07:08:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 01:08:25 -0400
Subject: [Python-Dev] Py_MakePendingCalls
Message-ID: <LNBBLJKPBEHFEDALKOLCCEPMHAAA.tim_one@email.msn.com>

Does anyone ever call Py_MakePendingCalls?  It's an undocumented entry point
in ceval.c.  I'd like to get rid of it.  Guido sez:

    The only place I know that uses it was an old Macintosh module I
    once wrote to play sounds asynchronously.  I created
    Py_MakePendingCalls() specifically for that purpose.  It may be
    best to get rid of it.

It's not best if anyone is using it despite its undocumented nature, but is
best otherwise.





From moshez at math.huji.ac.il  Mon Aug 21 07:56:31 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Mon, 21 Aug 2000 08:56:31 +0300 (IDT)
Subject: Canonical test_XXX.py - nope (was:Re: [Python-Dev] Re: Call for
 reviewer!)
In-Reply-To: <20000820231730.F4797@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008210855550.8603-100000@sundial>

On Sun, 20 Aug 2000, Thomas Wouters wrote:

> I think you just witnessed some of Tim's legendary wit ;) I suspect most
> Python programmers would shoot Guido, Tim, whoever wrote the patch that
> added the new syntax, and then themselves, if that ever made it into Python
> ;)
> 
> Good-thing-I-can't-legally-carry-guns-ly y'rs,

Oh, I'm sure ESR will let you use one of his for this purpose.

it's-a-worth-goal-ly y'rs, Z.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From nowonder at nowonder.de  Mon Aug 21 10:30:02 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Mon, 21 Aug 2000 08:30:02 +0000
Subject: [Python-Dev] Re: compile.c: problem with duplicate argument bugfix
References: <14750.1321.978274.117748@buffalo.fnal.gov>
Message-ID: <39A0E88A.2E2DB35E@nowonder.de>

Charles G Waldman wrote:
> 
> I'm catching up on the python-dev archives and see your message.
> 
> Note that I submitted a patch back in May to fix this same problem:
> 
>  http://www.python.org/pipermail/patches/2000-May/000638.html
> 
> There you will find a working patch, and a detailed discussion which
> explains why your approach results in a core-dump.

I had a look. This problem was fixed by removing the call to
PyErr_Clear() from (at that time) line 359 in Objects/object.c.

If you think your patch is a better solution/still needed, please
explain why. Thanks anyway.

> I submitted this patch back before Python moved over to SourceForge,
> there was a small amount of discussion about it and then the word from
> Guido was "I'm too busy to look at this now", and the patch got
> dropped on the floor.

a-patch-manager-can-be-a-good-thing---even-web-based-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From gstein at lyra.org  Mon Aug 21 09:57:06 2000
From: gstein at lyra.org (Greg Stein)
Date: Mon, 21 Aug 2000 00:57:06 -0700
Subject: [Python-Dev] OT: How to send CVS update mails?
In-Reply-To: <39A06FA1.C5EB34D1@nowonder.de>; from nowonder@nowonder.de on Sun, Aug 20, 2000 at 11:54:09PM +0000
References: <39A06FA1.C5EB34D1@nowonder.de>
Message-ID: <20000821005706.D11327@lyra.org>

Take a look at CVSROOT/loginfo and CVSROOT/syncmail in the Python repository.
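For reference, the mechanism is a plain loginfo trigger; a hypothetical entry (the paths and list address here are made up) might look like:

```
# CVSROOT/loginfo: run syncmail on every commit in any directory (ALL)
# and mail the log message plus file revisions to the checkins list.
ALL /cvsroot/yourproject/CVSROOT/syncmail %{sVv} yourproject-checkins@lists.sourceforge.net
```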

Cheers,
-g

On Sun, Aug 20, 2000 at 11:54:09PM +0000, Peter Schneider-Kamp wrote:
> Sorry, but I cannot figure out how to make SourceForge send
> updates whenever there is a CVS commit (checkins mailing
> list).
> 
> I need this for another project, so if someone remembers
> how to do this, please tell me.
> 
> off-topic-and-terribly-sorri-ly y'rs
> Peter
> -- 
> Peter Schneider-Kamp          ++47-7388-7331
> Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
> N-7050 Trondheim              http://schneider-kamp.de
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/



From gstein at lyra.org  Mon Aug 21 09:58:41 2000
From: gstein at lyra.org (Greg Stein)
Date: Mon, 21 Aug 2000 00:58:41 -0700
Subject: [Python-Dev] Py_MakePendingCalls
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEPMHAAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Aug 21, 2000 at 01:08:25AM -0400
References: <LNBBLJKPBEHFEDALKOLCCEPMHAAA.tim_one@email.msn.com>
Message-ID: <20000821005840.E11327@lyra.org>

Torch the sucker. It is a pain for free-threading.

(and no: I don't use it, nor do I know anything that uses it)

Cheers,
-g

On Mon, Aug 21, 2000 at 01:08:25AM -0400, Tim Peters wrote:
> Does anyone ever call Py_MakePendingCalls?  It's an undocumented entry point
> in ceval.c.  I'd like to get rid of it.  Guido sez:
> 
>     The only place I know that uses it was an old Macintosh module I
>     once wrote to play sounds asynchronously.  I created
>     Py_MakePendingCalls() specifically for that purpose.  It may be
>     best to get rid of it.
> 
> It's not best if anyone is using it despite its undocumented nature, but is
> best otherwise.
> 
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/



From martin at loewis.home.cs.tu-berlin.de  Mon Aug 21 09:57:54 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Mon, 21 Aug 2000 09:57:54 +0200
Subject: [Python-Dev] ANN: BerkeleyDB 2.9.0 (experimental)
Message-ID: <200008210757.JAA08643@loewis.home.cs.tu-berlin.de>

Hi Andrew,

I just downloaded your new module, and found a few problems with it:

- bsddb3.db.hashopen does not work, as Db() is called with no
  arguments; it expects at least one argument. The same holds for btopen
  and rnopen.

- The Db() function should accept None as an argument (or no argument),
  as invoking db_create with a NULL environment creates a "standalone
  database".

- Error recovery appears to be missing; I'm not sure whether this is
  the fault of the library or the fault of the module, though:

>>> from bsddb3 import db
>>> e=db.DbEnv()
>>> e.open("/tmp/aaa",db.DB_CREATE)
>>> d=db.Db(e)
>>> d.open("foo",db.DB_HASH,db.DB_CREATE)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
_bsddb.error: (22, 'Das Argument ist ung\374ltig')
>>> d.open("foo",db.DB_HASH,db.DB_CREATE)
zsh: segmentation fault  python

BTW, I still don't know what argument exactly was invalid ...

Regards,
Martin



From mal at lemburg.com  Mon Aug 21 11:42:04 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 11:42:04 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net>
			<399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net> <399E5558.C7B6029B@lemburg.com> <200008201059.MAA05292@pandora.informatik.hu-berlin.de>
Message-ID: <39A0F96C.EA0D0D4B@lemburg.com>

Martin von Loewis wrote:
> 
> > Just dreaming a little here: I would prefer that we use some
> > form of XML to write the catalogs.
> 
> Well, I hope that won't happen. We have excellent tools dealing with
> the catalogs, and I see no value in replacing
> 
> #: src/grep.c:183 src/grep.c:200 src/grep.c:300 src/grep.c:408 src/kwset.c:184
> #: src/kwset.c:190
> msgid "memory exhausted"
> msgstr "Virtueller Speicher erschöpft."
> 
> with
> 
> <entry>
>   <sourcelist>
>     <source file="src/grep.c" line="183"/>
>     <source file="src/grep.c" line="200"/>
>     <source file="src/grep.c" line="300"/>
>     <source file="src/grep.c" line="408"/>
>     <source file="src/kwset.c" line="184"/>
>     <source file="src/kwset.c" line="190"/>
>   </sourcelist>
>   <msgid>memory exhausted</msgid>
>   <msgstr>Virtueller Speicher erschöpft.</msgstr>
> </entry>

Well, it's the same argument as always: better have one format
which fits all than a new format for every application. XML
suits these tasks nicely and is becoming more and more accepted
these days.
 
> > XML comes with Unicode support and tools for writing XML are
> > available too.
> 
> Well, the catalog files also "come with unicode support", meaning that
> you can write them in UTF-8 if you want; and tools could be easily
> extended to process UCS-2 input if anybody desires.
> 
> OTOH, the tools for writing po files are much more advanced than any
> XML editor I know.
> 
> > We'd only need a way to transform XML into catalog files of some
> > Python specific platform independent format (should be possible to
> > create .mo files from XML too).
> 
> Or we could convert the XML catalogs in Uniforum-style catalogs, and
> then use the existing tools.

True.

Was just a thought...
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug 21 11:30:20 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 11:30:20 +0200
Subject: [Python-Dev] Re: gettext in the standard library
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de> <399E5340.B00811EF@lemburg.com> <200008201051.MAA05259@pandora.informatik.hu-berlin.de>
Message-ID: <39A0F6AC.B9C0FCC9@lemburg.com>

Martin von Loewis wrote:
> 
> > Hmm, if your catalogs are encoded in UTF-8 and use non-ASCII
> > chars then the traditional API would have to raise encoding
> > errors
> 
> I don't know what you mean by "traditional" here. The gettext.gettext
> implementation in Barry's patch will return the UTF-8 encoded byte
> string, instead of raising encoding errors - no code conversion takes
> place.

True.
 
> > Perhaps the return value type of .gettext() should be given on
> > the .install() call: e.g. encoding='utf-8' would have .gettext()
> > return a string using UTF-8 while encoding='unicode' would have
> > it return Unicode objects.
> 
> No. You should have the option of either receiving byte strings, or
> Unicode strings. If you want byte strings, you should get the ones
> appearing in the catalog.

So you're all for the two different API versions? After some
more thinking, I think I agree. The reason is that the lookup
itself will have to be Unicode-aware too:

gettext.unigettext(u"Löschen") would have to convert u"Löschen"
to UTF-8, then look this up and convert the returned value
back to Unicode.

gettext.gettext(u"Löschen") will fail with ASCII default encoding.
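A minimal sketch of that Unicode-aware lookup (the function name and toy catalog are made up; the catalog maps UTF-8 keys to UTF-8 values):

```python
def unigettext(message, catalog, encoding="utf-8"):
    # Encode the Unicode key to the catalog's encoding, look it up,
    # and decode the result back into a Unicode string.
    key = message.encode(encoding)
    return catalog.get(key, key).decode(encoding)

# A toy catalog with UTF-8 encoded keys and values (contents invented).
catalog = {u"Löschen".encode("utf-8"): u"delete".encode("utf-8")}

assert unigettext(u"Löschen", catalog) == u"delete"
assert unigettext(u"Abbrechen", catalog) == u"Abbrechen"  # missing key falls through
```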

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug 21 12:04:04 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 12:04:04 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com>
Message-ID: <39A0FE94.1AF5FABF@lemburg.com>

Guido van Rossum wrote:
> 
> Paul Prescod wrote:
> 
> > I don't think of iterators as indexing in terms of numbers. Otherwise I
> > could do this:
> >
> > >>> a={0:"zero",1:"one",2:"two",3:"three"}
> > >>> for i in a:
> > ...     print i
> > ...
> >
> > So from a Python user's point of view, for-looping has nothing to do
> > with integers. From a Python class/module creator's point of view it
> > does have to do with integers. I wouldn't be either surprised nor
> > disappointed if that changed one day.
> 
> Bingo!
> 
> I've long had an idea for generalizing 'for' loops using iterators. This is
> more a Python 3000 thing, but I'll explain it here anyway because I think
> it's relevant. Perhaps this should become a PEP?  (Maybe we should have a
> series of PEPs with numbers in the 3000 range for Py3k ideas?)
> 
> The statement
> 
>   for <variable> in <object>: <block>
> 
> should translate into this kind of pseudo-code:
> 
>   # variant 1
>   __temp = <object>.newiterator()
>   while 1:
>       try: <variable> = __temp.next()
>       except ExhaustedIterator: break
>       <block>
> 
> or perhaps (to avoid the relatively expensive exception handling):
> 
>   # variant 2
>   __temp = <object>.newiterator()
>   while 1:
>       __flag, <variable> = __temp.next()
>       if not __flag: break
>       <block>
> 
> In variant 1, the next() method returns the next object or raises
> ExhaustedIterator. In variant 2, the next() method returns a tuple (<flag>,
> <variable>) where <flag> is 1 to indicate that <value> is valid or 0 to
> indicate that there are no more items available. I'm not crazy about the
> exception, but I'm even less crazy about the more complex next() return
> value (careful observers may have noticed that I'm rarely crazy about flag
> variables :-).
> 
> Another argument for variant 1 is that variant 2 changes what <variable> is
> after the loop is exhausted, compared to current practice: currently, it
> keeps the last valid value assigned to it. Most likely, the next() method
> returns None when the sequence is exhausted. It doesn't make a lot of sense
> to require it to return the last item of the sequence -- there may not *be*
> a last item, if the sequence is empty, and not all sequences make it
> convenient to keep hanging on to the last item in the iterator, so it's best
> to specify that next() returns (0, None) when the sequence is exhausted.
> 
> (It would be tempting to suggeste a variant 1a where instead of raising an
> exception, next() returns None when the sequence is exhausted, but this
> won't fly: you couldn't iterate over a list containing some items that are
> None.)

How about a third variant:

#3:
__iter = <object>.iterator()
while __iter:
   <variable> = __iter.next()
   <block>

This adds a slot call, but removes the malloc overhead introduced
by returning a tuple for every iteration (which is likely to be
a performance problem).

Another possibility would be using an iterator attribute
to get at the variable:

#4:
__iter = <object>.iterator()
while 1:
   if not __iter.next():
        break
   <variable> = __iter.value
   <block>
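For concreteness, here is a minimal runnable sketch of the #4 style in
present-day Python (the class name ListIterator and the value attribute
are illustrative stand-ins, not the contents of the Iterator.py attached
below):

```python
class ListIterator:
    """Variant #4: next() returns a success flag; the fetched item
    is exposed separately through the .value attribute."""

    def __init__(self, seq):
        self.seq = seq
        self.index = 0
        self.value = None

    def next(self):
        # Return 1 (true) if an item was fetched, 0 (false) at the end.
        if self.index >= len(self.seq):
            return 0
        self.value = self.seq[self.index]
        self.index += 1
        return 1

# The proposed for-loop translation, spelled out by hand:
result = []
it = ListIterator("abc")
while 1:
    if not it.next():
        break
    result.append(it.value)
```

The flag return keeps exception handling and tuple allocation out of the
common path, at the cost of splitting "advance" and "fetch" across two
operations.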

> Side note: I believe that the iterator approach could actually *speed up*
> iteration over lists compared to today's code. This is because currently the
> iteration index is a Python integer object that is stored on the stack.
> This means an integer add with overflow check, allocation, and deallocation
> on each iteration! But the iterator for lists (and other basic sequences)
> could easily store the index as a C int! (As long as the sequence's length
> is stored in an int, the index can't overflow.)

You might want to check out the counterobject.c approach I used
to speed up the current for-loop in Python 1.5's ceval.c:
it's basically a mutable C integer which is used instead of
the current Python integer index.

The details can be found in my old patch:

  http://starship.python.net/crew/lemburg/mxPython-1.5.patch.gz

> [Warning: thinking aloud ahead!]
> 
> Once we have the concept of iterators, we could support explicit use of them
> as well. E.g. we could use a variant of the for statement to iterate over an
> existing iterator:
> 
>   for <variable> over <iterator>: <block>
> 
> which would (assuming variant 1 above) translate to:
> 
>   while 1:
>       try: <variable> = <iterator>.next()
>       except ExhaustedIterator: break
>       <block>
> 
> This could be used in situations where you have a loop iterating over the
> first half of a sequence and a second loop that iterates over the remaining
> items:
> 
>   it = something.newiterator()
>   for x over it:
>       if time_to_start_second_loop(): break
>       do_something()
>   for x over it:
>       do_something_else()
> 
> Note that the second for loop doesn't reset the iterator -- it just picks up
> where the first one left off! (Detail: the x that caused the break in the
> first loop doesn't get dealt with in the second loop.)
> 
> I like the iterator concept because it allows us to do things lazily. There
> are lots of other possibilities for iterators. E.g. mappings could have
> several iterator variants to loop over keys, values, or both, in sorted or
> hash-table order. Sequences could have an iterator for traversing them
> backwards, and a few other ones for iterating over their index set (cf.
> indices()) and over (index, value) tuples (cf. irange()). Files could be
> their own iterator where the iterator is almost the same as readline()
> except it raises ExhaustedIterator at EOF instead of returning "".  A tree
> datastructure class could have an associated iterator class that maintains a
> "finger" into the tree.
> 
> Hm, perhaps iterators could be their own iterator? Then if 'it' were an
> iterator, it.newiterator() would return a reference to itself (not a copy).
> Then we wouldn't even need the 'over' alternative syntax. Maybe the method
> should be called iterator() then, not newiterator(), to avoid suggesting
> anything about the newness of the returned iterator.
> 
> Other ideas:
> 
> - Iterators could have a backup() method that moves the index back (or
> raises an exception if this feature is not supported, e.g. when reading data
> from a pipe).
> 
> - Iterators over certain sequences might support operations on the
> underlying sequence at the current position of the iterator, so that you
> could iterate over a sequence and occasionally insert or delete an item (or
> a slice).

FYI, I've attached a module which I've been using a while for
iteration. The code is very simple and implements the #4 variant
described above.
 
> Of course iterators also connect to generators. The basic list iterator
> doesn't need coroutines or anything, it can be done as follows:
> 
>   class Iterator:
>       def __init__(self, seq):
>           self.seq = seq
>           self.ind = 0
>       def next(self):
>           if self.ind >= len(self.seq): raise ExhaustedIterator
>           val = self.seq[self.ind]
>           self.ind += 1
>           return val
> 
> so that <list>.iterator() could just return Iterator(<list>) -- at least
> conceptually.
> 
> But for other data structures the amount of state needed might be
> cumbersome. E.g. a tree iterator needs to maintain a stack, and it's much
> easier to code that using a recursive Icon-style generator than by using an
> explicit stack. On the other hand, I remember reading an article a while ago
> (in Dr. Dobbs?) by someone who argued (in a C++ context) that such recursive
> solutions are very inefficient, and that an explicit stack (1) is really not
> that hard to code, and (2) gives much more control over the memory and time
> consumption of the code. On the third hand, some forms of iteration really
> *are* expressed much more clearly using recursion. On the fourth hand, I
> disagree with Matthias ("Dr. Scheme") Felleisen about recursion as the root
> of all iteration. Anyway, I believe that iterators (as explained above) can
> be useful even if we don't have generators (in the Icon sense, which I
> believe means coroutine-style).
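As a concrete illustration of the explicit-stack alternative mentioned
above, here is a hedged sketch of an in-order tree iterator in Python;
Node and TreeIterator are invented names, and IndexError stands in for
the proposed ExhaustedIterator:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

class TreeIterator:
    """In-order tree iterator keeping an explicit stack instead of
    recursing; the stack is the 'finger' into the tree."""

    def __init__(self, root):
        self.stack = []
        self._descend(root)

    def _descend(self, node):
        # Push the chain of left children; the top of the stack is
        # always the next node to yield.
        while node is not None:
            self.stack.append(node)
            node = node.left

    def next(self):
        if not self.stack:
            raise IndexError("ExhaustedIterator")
        node = self.stack.pop()
        self._descend(node.right)
        return node.value

tree = Node(2, Node(1), Node(3))
it = TreeIterator(tree)
values = []
while 1:
    try:
        values.append(it.next())
    except IndexError:
        break
```

The stack never holds more than the depth of the tree, which is the
memory-control point the Dr. Dobbs-style argument makes against the
recursive formulation.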

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Iterator.py
Type: text/python
Size: 2858 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000821/8f8fec76/attachment-0001.bin>

From loewis at informatik.hu-berlin.de  Mon Aug 21 12:25:01 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Mon, 21 Aug 2000 12:25:01 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <39A0F6AC.B9C0FCC9@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net> <399DA8D3.70E85C58@lemburg.com> <200008190725.JAA26022@pandora.informatik.hu-berlin.de> <399E5340.B00811EF@lemburg.com> <200008201051.MAA05259@pandora.informatik.hu-berlin.de> <39A0F6AC.B9C0FCC9@lemburg.com>
Message-ID: <200008211025.MAA14212@pandora.informatik.hu-berlin.de>

> So you're all for the two different API version ? After some
> more thinking, I think I agree. The reason is that the lookup
> itself will have to be Unicode-aware too:
> 
> gettext.unigettext(u"L?schen") would have to convert u"L?schen"
> to UTF-8, then look this up and convert the returned value
> back to Unicode.

I did not even think of using Unicode as *keys* to the lookup. The GNU
translation project recommends that messages in the source code be in
English. This is good advice: translators producing, say, a Japanese
translation are likely to have more problems with German input than
with English.

So I'd say that message ids can safely be byte strings, especially as
I believe that the gettext tools treat them that way, as well. If
authors really have to put non-ASCII into message ids, they should use
\x escapes. I have never seen such a message, though (and I have
translated a number of message catalogs).

Regards,
Martin




From loewis at informatik.hu-berlin.de  Mon Aug 21 12:31:25 2000
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Mon, 21 Aug 2000 12:31:25 +0200 (MET DST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <39A0F96C.EA0D0D4B@lemburg.com> (mal@lemburg.com)
References: <14749.42747.411862.940207@anthem.concentric.net>
			<399DA8D3.70E85C58@lemburg.com> <14749.44899.573649.483154@anthem.concentric.net> <399E5558.C7B6029B@lemburg.com> <200008201059.MAA05292@pandora.informatik.hu-berlin.de> <39A0F96C.EA0D0D4B@lemburg.com>
Message-ID: <200008211031.MAA14260@pandora.informatik.hu-berlin.de>

> Well, it's the same argument as always: better have one format
> which fits all than a new format for every application. XML
> suits these tasks nicely and is becoming more and more accepted
> these days.

I believe this claim is misleading. First, XML is not *one* format; it
is rather a "meta-format": you still need the document type definition
(valid vs. well-formed). Furthermore, the XML argument is good if
there is no established data format for some application. If there is
already an accepted format, I see no value in converting that to XML.

Regards,
Martin



From fredrik at pythonware.com  Mon Aug 21 12:43:47 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 21 Aug 2000 12:43:47 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com>
Message-ID: <020401c00b5c$b07f1870$0900a8c0@SPIFF>

mal wrote:
> How about a third variant:
> 
> #3:
> __iter = <object>.iterator()
> while __iter:
>    <variable> = __iter.next()
>    <block>

how does that one terminate?

maybe you meant something like:

    __iter = <object>.iterator()
    while __iter:
        <variable> = __iter.next()
        if <variable> is <sentinel>:
            break
        <block>

(where <sentinel> could be __iter itself...)
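A runnable sketch of that sentinel pattern (SentinelIterator is an
illustrative name, not proposed API). Using the iterator as its own
sentinel avoids the objection raised above against returning None: a
freshly built iterator cannot already be an element of the sequence it
wraps:

```python
class SentinelIterator:
    """next() returns a distinguished sentinel object -- here the
    iterator itself -- instead of raising when exhausted."""

    def __init__(self, seq):
        self.seq = seq
        self.index = 0

    def next(self):
        if self.index >= len(self.seq):
            return self  # the iterator doubles as its own sentinel
        item = self.seq[self.index]
        self.index += 1
        return item

items = []
it = SentinelIterator([10, 20, 30])
while 1:
    x = it.next()
    if x is it:
        break
    items.append(x)
```

Python later standardized a related idiom as the two-argument form
iter(callable, sentinel).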

</F>




From thomas at xs4all.net  Mon Aug 21 12:59:44 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 12:59:44 +0200
Subject: [Python-Dev] iterators
In-Reply-To: <020401c00b5c$b07f1870$0900a8c0@SPIFF>; from fredrik@pythonware.com on Mon, Aug 21, 2000 at 12:43:47PM +0200
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <020401c00b5c$b07f1870$0900a8c0@SPIFF>
Message-ID: <20000821125944.K4797@xs4all.nl>

On Mon, Aug 21, 2000 at 12:43:47PM +0200, Fredrik Lundh wrote:
> mal wrote:
> > How about a third variant:
> > 
> > #3:
> > __iter = <object>.iterator()
> > while __iter:
> >    <variable> = __iter.next()
> >    <block>

> how does that one terminate?

__iter should evaluate to false once it's "empty". 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fredrik at pythonware.com  Mon Aug 21 13:08:05 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 21 Aug 2000 13:08:05 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <020401c00b5c$b07f1870$0900a8c0@SPIFF>
Message-ID: <024301c00b60$15168fe0$0900a8c0@SPIFF>

I wrote:
> mal wrote:
> > How about a third variant:
> > 
> > #3:
> > __iter = <object>.iterator()
> > while __iter:
> >    <variable> = __iter.next()
> >    <block>
> 
> how does that one terminate?

brain disabled.  sorry.

</F>




From thomas at xs4all.net  Mon Aug 21 14:03:06 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 14:03:06 +0200
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: <200008210437.XAA22075@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Sun, Aug 20, 2000 at 11:37:46PM -0500
References: <200008202002.NAA13530@delerium.i.sourceforge.net> <20000820224918.D4797@xs4all.nl> <200008210437.XAA22075@cj20424-a.reston1.va.home.com>
Message-ID: <20000821140306.L4797@xs4all.nl>

On Sun, Aug 20, 2000 at 11:37:46PM -0500, Guido van Rossum wrote:

> > > Summary: Allow all assignment expressions after 'import something as'
> -1.  Hypergeneralization.

I don't think it's hypergeneralization. In fact, people might expect it[*],
if we claim that the 'import-as' syntax is a shortcut for the current
practice of

   import somemod
   sspam = somemod.somesubmod.spam

(or similar constructs.) However, I realize you're under a lot of pressure
to Pronounce a number of things now that you're back, and we can always
change this later (if you change your mind.) I dare to predict, though, that
we'll see questions about why this isn't generalized, on c.l.py.

[*] I know 'people might expect it' and 'hypergeneralization' aren't
mutually exclusive, but you know what I mean.)

> By the way, notice that
>   import foo.bar
> places 'foo' in the current namespace, after ensuring that 'foo.bar'
> is defined.

Oh, I noticed ;) We had a small thread about that, this weekend. The subject
was something like ''import as'' or so.

> What should
>   import foo.bar as spam
> assign to spam?  I hope foo.bar, not foo.

The original patch assigned foo to spam, not foo.bar. Why ? Well, all the
patch did was use a different name for the STORE operation that follows an
IMPORT_NAME. To elaborate, 'import foo.bar' does this:

    IMPORT_NAME "foo.bar"
    <resulting object, 'foo', is pushed on the stack>
    STORE_NAME "foo"

and all the patch did was replace the "foo" in STORE_NAME with the name
given in the "as" clause.
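The behaviours being debated can be checked directly in a modern Python,
where Guido's preferred meaning for 'import foo.bar as spam' is what
shipped (os.path is used here only as a convenient real package):

```python
# 'import x.y' imports the submodule but binds only the top-level
# name in the current namespace:
import os.path
spam = os.path.join          # the manual workaround: reach through os

# What 'import foo.bar as spam' came to mean -- the submodule itself:
import os.path as osp
assert osp is os.path        # bound to os.path, not os

# The equivalent 'from' form discussed in the thread:
from os import path as spam2
assert spam2 is osp
```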

> I note that the CVS version doesn't support this latter syntax at all;
> it should be fixed, even though the same effect can be had with
>   from foo import bar as spam

Well, "general consensus" (where the general was a three-headed beast, see
the thread I mentioned) prompted me to make it illegal for now. At least
no one is going to rely on it just yet ;) Making it work as you suggest
requires a separate approach, though. I'll think about how to do it best.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Mon Aug 21 15:52:34 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 15:52:34 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <200008211335.GAA27170@slayer.i.sourceforge.net>; from bwarsaw@users.sourceforge.net on Mon, Aug 21, 2000 at 06:35:40AM -0700
References: <200008211335.GAA27170@slayer.i.sourceforge.net>
Message-ID: <20000821155234.M4797@xs4all.nl>

On Mon, Aug 21, 2000 at 06:35:40AM -0700, Barry Warsaw wrote:
> Update of /cvsroot/python/python/nondist/peps
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv27120

> Modified Files:
> 	pep-0000.txt 
> Log Message:
> PEP 202, change tim's email address to tpeters -- we really need a key
> for these.

>    I   200  pep-0200.txt  Python 2.0 Release Schedule           jhylton
>    SA  201  pep-0201.txt  Lockstep Iteration                    bwarsaw
> !  S   202  pep-0202.txt  List Comprehensions                   tim_one
>    S   203  pep-0203.txt  Augmented Assignments                 twouters
>    S   204  pep-0204.txt  Range Literals                        twouters


I thought the last column was the SourceForge username ? I don't have an
email address that reads 'twouters', except the SF one, anyway, and I
thought tim had 'tim_one' there. Or did he change it ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From james at daa.com.au  Mon Aug 21 16:02:21 2000
From: james at daa.com.au (James Henstridge)
Date: Mon, 21 Aug 2000 22:02:21 +0800 (WST)
Subject: [Python-Dev] Re: gettext in the standard library
In-Reply-To: <39A01C0C.E6BA6CCB@lemburg.com>
Message-ID: <Pine.LNX.4.21.0008210948070.15515-100000@james.daa.com.au>

On Sun, 20 Aug 2000, M.-A. Lemburg wrote:

> James Henstridge wrote:
> > Well, it can do a little more than that.  It will also handle the case of
> > a number of locales listed in the LANG environment variable.  It also
> > doesn't look like it handles decomposition of a locale like
> ll_CC.encoding@modifier into other matching encodings in the correct
> > precedence order.
> > 
> > Maybe something to do this sort of decomposition would fit better in
> > locale.py though.
> > 
> > This sort of thing is very useful for people who know more than one
> > language, and doesn't seem to be handled by plain setlocale()
> 
> I'm not sure I can follow you here: are you saying that your
> support in gettext.py does more or less than what's present
> in locale.py ?
> 
> If it's more, I think it would be a good idea to add those
> parts to locale.py.

It does a little more than the current locale.py.

I just checked the current locale module, and it gives a ValueError
exception when LANG is set to something like en_AU:fr_FR.  This sort of
thing should be handled by the python interface to gettext, as it is by
the C implementation (and I am sure that most programmers would not expect
such an error from the locale module).

The code in my gettext module handles that case.

James.

-- 
Email: james at daa.com.au
WWW:   http://www.daa.com.au/~james/





From MarkH at ActiveState.com  Mon Aug 21 16:48:09 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 22 Aug 2000 00:48:09 +1000
Subject: [Python-Dev] configure.in, C++ and Linux
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEMEDFAA.MarkH@ActiveState.com>

I'm pretty new to all of this, so please bear with me.

I create a Setup.in that references some .cpp or .cxx files.  When I create
the Makefile, the command line generated for building these C++ sources is
similar to:

  $(CCC) $(CCSHARE) ...

However, CCC is never set anywhere....

Looking at configure.in, there appears to be support for setting this CCC
variable.  However, it was commented out in revision 1.113 - a checkin by
Guido, December 1999, with the comment:
"""
Patch by Geoff Furnish to make compiling with C++ more gentle.
(The configure script is regenerated, not from his patch.)
"""

Digging a little deeper, I find that config/Makefile.pre.in and
config/makesetup both have references to CCC that account for the
references in my Makefile.  Unfortunately, my knowledge doesn't yet stretch
to knowing exactly where these files come from :-)

Surely all of this isn't correct.  Can anyone tell me what is going on, or
what I am doing wrong?

Thanks,

Mark.








From mal at lemburg.com  Mon Aug 21 16:59:36 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 16:59:36 +0200
Subject: [Python-Dev] Adding more LANG support to locale.py (Re: gettext in 
 the standard library)
References: <Pine.LNX.4.21.0008210948070.15515-100000@james.daa.com.au>
Message-ID: <39A143D8.5595B7C4@lemburg.com>

James Henstridge wrote:
> 
> On Sun, 20 Aug 2000, M.-A. Lemburg wrote:
> 
> > James Henstridge wrote:
> > > Well, it can do a little more than that.  It will also handle the case of
> > > a number of locales listed in the LANG environment variable.  It also
> > > doesn't look like it handles decomposition of a locale like
> > > ll_CC.encoding@modifier into other matching encodings in the correct
> > > precedence order.
> > >
> > > Maybe something to do this sort of decomposition would fit better in
> > > locale.py though.
> > >
> > > This sort of thing is very useful for people who know more than one
> > > language, and doesn't seem to be handled by plain setlocale()
> >
> > I'm not sure I can follow you here: are you saying that your
> > support in gettext.py does more or less than what's present
> > in locale.py ?
> >
> > If it's more, I think it would be a good idea to add those
> > parts to locale.py.
> 
> It does a little more than the current locale.py.
> 
> I just checked the current locale module, and it gives a ValueError
> exception when LANG is set to something like en_AU:fr_FR.  This sort of
> thing should be handled by the python interface to gettext, as it is by
> the C implementation (and I am sure that most programmers would not expect
> such an error from the locale module).

That usage of LANG is new to me... I wonder how well the
multiple-options setting fits the current API.
 
> The code in my gettext module handles that case.

Would you be willing to supply a patch to locale.py which
adds multiple LANG options to the interface ?

I guess we'd need a new API getdefaultlocales() [note the trailing
"s"] which will then return a list of locale tuples rather than
a single one. The standard getdefaultlocale() should then return
whatever is considered to be the standard locale when using the
multiple locale notation for LANG.
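A hedged sketch of what such support might look like (parse_lang and
expand are hypothetical helper names, not locale.py API, and the
fallback order only roughly follows what GNU gettext does):

```python
import re

def parse_lang(lang):
    """Split a GNU-style LANG value like 'en_AU:fr_FR' into a
    priority list of locale entries."""
    return [entry for entry in lang.split(":") if entry]

def expand(entry):
    """Decompose 'll_CC.encoding@modifier' into fallback locale
    names in decreasing order of precision."""
    m = re.match(r"([a-zA-Z]+)(_[a-zA-Z]+)?(\.[\w-]+)?(@[\w-]+)?$", entry)
    if not m:
        return [entry]
    lang, terr, enc, mod = [g or "" for g in m.groups()]
    candidates = []
    # Drop the modifier, encoding and territory one by one.
    for use_mod in (mod, ""):
        for use_enc in (enc, ""):
            for use_terr in (terr, ""):
                cand = lang + use_terr + use_enc + use_mod
                if cand not in candidates:
                    candidates.append(cand)
    return candidates
```

A getdefaultlocales() built on this could simply flatten
expand(entry) over parse_lang(os.environ["LANG"]).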

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bwarsaw at beopen.com  Mon Aug 21 17:05:13 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 21 Aug 2000 11:05:13 -0400 (EDT)
Subject: [Python-Dev] OT: How to send CVS update mails?
References: <39A06FA1.C5EB34D1@nowonder.de>
	<20000821005706.D11327@lyra.org>
Message-ID: <14753.17705.775721.360133@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> Take a look at CVSROOT/loginfo and CVSROOT/syncmail in the
    GS> Python repository.

Just to complete the picture, add CVSROOT/checkoutlist.

-Barry



From akuchlin at mems-exchange.org  Mon Aug 21 17:06:16 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Mon, 21 Aug 2000 11:06:16 -0400
Subject: [Python-Dev] configure.in, C++ and Linux
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEMEDFAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Tue, Aug 22, 2000 at 12:48:09AM +1000
References: <ECEPKNMJLHAPFFJHDOJBEEMEDFAA.MarkH@ActiveState.com>
Message-ID: <20000821110616.A547@kronos.cnri.reston.va.us>

On Tue, Aug 22, 2000 at 12:48:09AM +1000, Mark Hammond wrote:
>Digging a little deeper, I find that config/Makefile.pre.in and
>config/makesetup both have references to CCC that account for the
>references in my Makefile.  Unfortunately, my knowledge doesn't yet stretch
>to knowing exactly where these files come from :-)

The Makefile.pre.in is probably from Misc/Makefile.pre.in, which has a
reference to $(CCC); Modules/Makefile.pre.in is more up to date and
uses $(CXX).  Modules/makesetup also refers to $(CCC), and probably
needs to be changed to use $(CXX), matching Modules/Makefile.pre.in.

Given that we want to encourage the use of the Distutils,
Misc/Makefile.pre.in should be deleted to avoid having people use it.

--amk




From fdrake at beopen.com  Mon Aug 21 18:01:38 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 12:01:38 -0400 (EDT)
Subject: [Python-Dev] OT: How to send CVS update mails?
In-Reply-To: <39A06FA1.C5EB34D1@nowonder.de>
References: <39A06FA1.C5EB34D1@nowonder.de>
Message-ID: <14753.21090.492033.754101@cj42289-a.reston1.va.home.com>

Peter Schneider-Kamp writes:
 > Sorry, but I cannot figure out how to make SourceForge send
 > updates whenever there is a CVS commit (checkins mailing
 > list).

  I wrote up some instructions at:

http://sfdocs.sourceforge.net/sfdocs/display_topic.php?topicid=52


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fdrake at beopen.com  Mon Aug 21 18:49:00 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 12:49:00 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Python import.c,2.146,2.147
In-Reply-To: <200008211635.JAA09187@slayer.i.sourceforge.net>
References: <200008211635.JAA09187@slayer.i.sourceforge.net>
Message-ID: <14753.23932.816392.92125@cj42289-a.reston1.va.home.com>

Barry Warsaw writes:
 > Thomas reminds me to bump the MAGIC number for the extended print
 > opcode additions.

  You also need to document the new opcodes in Doc/lib/libdis.tex.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Mon Aug 21 19:21:32 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 13:21:32 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <20000821155234.M4797@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>

[Thomas Wouters]
> I thought the last column was the SourceForge username ?
> I don't have an email address that reads 'twouters', except the
> SF one, anyway, and I thought tim had 'tim_one' there. Or did he
> change it ?

I don't know what the last column means.  What would you like it to mean?
Perhaps a complete email address, or (what a concept!) the author's *name*,
would be best.

BTW, my current employer assigned "tpeters at beopen.com" to me.  I was just
"tim" for the first 15 years of my career, and then "tim_one" when you kids
started using computers faster than me and took "tim" everywhere before I
got to it.  Now even "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one"
now.  I have given up all hope of retaining an online identity.

the-effbot-is-next-ly y'rs  - tim





From cgw at fnal.gov  Mon Aug 21 19:47:04 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Mon, 21 Aug 2000 12:47:04 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps
 pep-0000.txt,1.24,1.25
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
References: <20000821155234.M4797@xs4all.nl>
	<LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <14753.27416.760084.528198@buffalo.fnal.gov>

Tim Peters writes:

 >  I have given up all hope of retaining an online identity.

And have you seen http://www.timpeters.com ?

(I don't know how you find the time to take those stunning color
photographs!)




From bwarsaw at beopen.com  Mon Aug 21 20:29:27 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 21 Aug 2000 14:29:27 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
References: <20000821155234.M4797@xs4all.nl>
	<LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <14753.29959.135594.439438@anthem.concentric.net>

I don't know why I haven't seen Thomas's reply yet, but in any event...

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> [Thomas Wouters]
    >> I thought the last column was the SourceForge username ?  I
    >> don't have an email address that reads 'twouters', except the
    >> SF one, anyway, and I thought tim had 'tim_one' there. Or did
    >> he change it ?

    TP> I don't know what the last column means.  What would you like
    TP> it to mean?  Perhaps a complete email address, or (what a
    TP> concept!) the author's *name*, would be best.

    TP> BTW, my current employer assigned "tpeters at beopen.com" to me.
    TP> I was just "tim" for the first 15 years of my career, and then
    TP> "tim_one" when you kids started using computers faster than me
    TP> and took "tim" everywhere before I got to it.  Now even
    TP> "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one" now.
    TP> I have given up all hope of retaining an online identity.

I'm not sure what it should mean either, except as a shorthand way to
identify the owner of the PEP.  Most important is that each line fit
in 80 columns!

Perhaps we can do away with the filename column, since that's easily
calculated?

I had originally thought the owner should be the mailbox on
SourceForge, but then I thought maybe it ought to be the mailbox given
in the Author: field of the PEP.  Perhaps the Real Name is best after
all, if we can reclaim some horizontal space.

unsure-ly, y'rs,
-Barry



From thomas at xs4all.net  Mon Aug 21 21:02:58 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 21:02:58 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <14753.29959.135594.439438@anthem.concentric.net>; from bwarsaw@beopen.com on Mon, Aug 21, 2000 at 02:29:27PM -0400
References: <20000821155234.M4797@xs4all.nl> <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com> <14753.29959.135594.439438@anthem.concentric.net>
Message-ID: <20000821210258.B4933@xs4all.nl>

On Mon, Aug 21, 2000 at 02:29:27PM -0400, Barry A. Warsaw wrote:

> I don't know why I haven't seen Thomas's reply yet, but in any event...

Strange, it went to python-dev not long after the checkin. Tim quoted about
the entire email, though, so you didn't miss much. The name-calling and
snide remarks weren't important anyway :)

> I'm not sure what it should mean either, except as a shorthand way to
> identify the owner of the PEP.  Most important is that each line fit
> in 80 columns!

> I had originally thought the owner should be the mailbox on
> SourceForge, but then I thought maybe it ought to be the mailbox given
> in the Author: field of the PEP.

Emails in the Author field are likely to be too long to fit in that list,
even if you remove the filename. I'd say go for the SF username, for three
reasons:

  1) it's a name developers know and love (or hate)
  2) more information on a user can be found through SourceForge
  3) that SF email address should work, too. It's where patch updates and
     stuff are sent, so most people are likely to have it forwarding to
     their PEP author address.

Also, it's the way of least resistance. All you need to change is that
'tpeters' into 'tim_one' :-)
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Mon Aug 21 21:11:37 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 21 Aug 2000 21:11:37 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0000.txt,1.24,1.25
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Mon, Aug 21, 2000 at 01:21:32PM -0400
References: <20000821155234.M4797@xs4all.nl> <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <20000821211137.C4933@xs4all.nl>

On Mon, Aug 21, 2000 at 01:21:32PM -0400, Tim Peters wrote:

> BTW, my current employer assigned "tpeters at beopen.com" to me.  I was just
> "tim" for the first 15 years of my career, and then "tim_one" when you kids
> started using computers faster than me and took "tim" everywhere before I
> got to it.  Now even "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one"
> now.  I have given up all hope of retaining an online identity.

For the first few years online, I was known as 'zonny'. Chosen because my
first online experience, The Digital City of Amsterdam (a local freenet),
was a free service, and I'd forgotten the password of 'thomas', 'sonny',
'twouters' and 'thomasw'. And back then you couldn't get the password
changed :-) So 'zonny' it was, even when I started working there and
could've changed it. And I was happy with it, because I could use 'zonny'
everywhere, noone had apparently ever thought of that name (no suprise
there, eh ? :)

And then two years after I started with that name, I ran into another
'zonny' in some American MUD or another. (I believe it was TinyTIM(*), for
those who know about such things.) And it was a girl, and she had been using
it for years as well! So to avoid confusion I started using 'thomas', and
have had the luck of not needing another name until Mailman moved to
SourceForge :-) But ever since then, I don't believe *any* name is not
already taken. You'll just have to live with the confusion.

*) This is really true. There was a MUD called TinyTIM (actually an offshoot
of TinyMUSH) and it had a shitload of bots, too. It was one of the most
amusing senseless MU*s out there, with a lot of Pythonic humour.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Mon Aug 21 22:30:41 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 21 Aug 2000 15:30:41 -0500
Subject: [Python-Dev] Re: [Patches] [Patch #101234] Allow all assignment expressions after 'import something as'
In-Reply-To: Your message of "Mon, 21 Aug 2000 14:03:06 +0200."
             <20000821140306.L4797@xs4all.nl> 
References: <200008202002.NAA13530@delerium.i.sourceforge.net> <20000820224918.D4797@xs4all.nl> <200008210437.XAA22075@cj20424-a.reston1.va.home.com>  
            <20000821140306.L4797@xs4all.nl> 
Message-ID: <200008212030.PAA26887@cj20424-a.reston1.va.home.com>

> > > > Summary: Allow all assignment expressions after 'import
> > > > something as'

[GvR]
> > -1.  Hypergeneralization.

[TW]
> I don't think it's hypergeneralization. In fact, people might expect it[*],
> if we claim that the 'import-as' syntax is a shortcut for the current
> practice of
> 
>    import somemod
>    sspam = somemod.somesubmod.spam
> 
> (or similar constructs.) However, I realize you're under a lot of pressure
> to Pronounce a number of things now that you're back, and we can always
> change this later (if you change your mind.) I dare to predict, though, that
> we'll see questions about why this isn't generalized, on c.l.py.

I kind of doubt it, because it doesn't look useful.

I do want "import foo.bar as spam" back, assigning foo.bar to spam.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From moshez at math.huji.ac.il  Mon Aug 21 21:42:50 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Mon, 21 Aug 2000 22:42:50 +0300 (IDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps
 pep-0000.txt,1.24,1.25
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBIHBAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008212241020.7563-100000@sundial>

On Mon, 21 Aug 2000, Tim Peters wrote:

> BTW, my current employer assigned "tpeters at beopen.com" to me.  I was just
> "tim" for the first 15 years of my career, and then "tim_one" when you kids
> started using computers faster than me and took "tim" everywhere before I
> got to it.  Now even "tim_one" is mostly taken!  On Yahoo, I'm "tim_one_one"
> now.  I have given up all hope of retaining an online identity.

Hah! That's all I have to say to you! 
Being the only moshez in the Free Software/Open Source community and 
the only zadka which cares about the internet has certainly made my life
easier (can you say zadka.com?) 

on-the-other-hand-people-use-moshe-as-a-generic-name-ly y'rs, Z.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From mal at lemburg.com  Mon Aug 21 22:22:10 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 21 Aug 2000 22:22:10 +0200
Subject: [Python-Dev] Doc-strings for class attributes ?!
Message-ID: <39A18F72.A0EADEA7@lemburg.com>

Lately I was busy extracting documentation from a large
Python application. 

Everything worked just fine building on existing doc-strings and
the nice Python reflection features, but I came across one 
thing for which I didn't find a suitable Python-style solution:
inline documentation for class attributes.

We already have doc-strings for modules, classes, functions
and methods, but there is no support for documenting class
attributes in a way that:

1. is local to the attribute definition itself
2. doesn't affect the attribute object in any way (e.g. by
   adding wrappers of some sort)
3. behaves well under class inheritance
4. is available online

After some thinking and experimenting with different ways
of achieving the above I came up with the following solution
which looks very Pythonesque to me:

class C:
        " class C doc-string "

        a = 1
        " attribute C.a doc-string "

        b = 2
        " attribute C.b doc-string "

The compiler would handle these cases as follows:

" class C doc-string " -> C.__doc__
" attribute C.a doc-string " -> C.__doc__a__
" attribute C.b doc-string " -> C.__doc__b__

All of the above is perfectly valid Python syntax. Support
should be easy to add to the byte code compiler. The
name mangling ensures that attribute doc-strings

a) participate in class inheritance and
b) are treated as special attributes (following the __xxx__
   convention)

Also, the look&feel of this convention is similar to that
of the other existing conventions: the doc string follows
the definition of the object.

What do you think about this idea ?
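For what it's worth, the proposed mangling can be emulated by hand today, which also shows how a documentation tool might read the strings back (the `attr_doc` helper below is invented for illustration):

```python
# Emulating the proposed __doc__<name>__ mangling by hand; a compiler
# implementing the proposal would generate these assignments itself.

class C:
    "class C doc-string"
    a = 1
    b = 2

# What the byte code compiler would produce under the proposal:
C.__doc__a__ = "attribute C.a doc-string"
C.__doc__b__ = "attribute C.b doc-string"

def attr_doc(klass, name):
    # getattr() walks the inheritance chain, so point 3 (behaves well
    # under class inheritance) comes for free.
    return getattr(klass, '__doc__%s__' % name, None)

class D(C):
    "class D doc-string"

print(attr_doc(C, 'a'))   # attribute C.a doc-string
print(attr_doc(D, 'b'))   # inherited: attribute C.b doc-string
```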

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Mon Aug 21 22:32:22 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 16:32:22 -0400 (EDT)
Subject: [Python-Dev] regression test question
Message-ID: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>

  I'm working on bringing the parser module up to date and introducing
a regression test for it.  (And if the grammar stops changing, it may
actually get finished!)
  I'm having a bit of a problem, though:  the test passes when run as
a script, but not when run via the regression test framework.  The
problem is *not* with the output file.  I'm getting an exception from
the module which is not expected, and is only raised when it runs
using the regression framework.
  Has anyone else encountered a similar problem?  I've checked to make
sure the right version of parsermodule.so and test_parser.py are being
picked up.
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gvwilson at nevex.com  Mon Aug 21 22:48:11 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Mon, 21 Aug 2000 16:48:11 -0400 (EDT)
Subject: [Python-Dev] Doc-strings for class attributes ?!
In-Reply-To: <39A18F72.A0EADEA7@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008211644050.24709-100000@akbar.nevex.com>

> Marc-Andre Lemburg wrote:
> We already have doc-strings for modules, classes, functions and
> methods, but there is no support for documenting class attributes in a
> way that:
> 
> 1. is local to the attribute definition itself
> 2. doesn't affect the attribute object
> 3. behaves well under class inheritance
> 4. is available online
> 
> [proposal]
> class C:
>         " class C doc-string "
> 
>         a = 1
>         " attribute C.a doc-string "
> 
>         b = 2
>         " attribute C.b doc-string "
>
> What do you think about this idea ?

Greg Wilson:
I think it would be useful, but as we've discussed elsewhere, I think that
if the doc string mechanism is going to be extended, I would like it to
allow multiple chunks of information to be attached to objects (functions,
modules, class variables, etc.), so that different developers and tools
can annotate programs without colliding.

Thanks,
Greg




From tim_one at email.msn.com  Tue Aug 22 00:01:54 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 18:01:54 -0400
Subject: [Python-Dev] Looking for an "import" expert
Message-ID: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com>

Fred Drake opened an irritating <wink> bug report (#112436).

Cut to the chase:

regrtest.py imports test_support.
test_support.verbose is 1 after that.
regrtest's main() reaches into test_support and
    stomps on test_support.verbose, usually setting to 0.

Now in my build directory, if I run

   python ..\lib\test\regrtest.py test_getopt

the test passes.  However, it *shouldn't* (and the crux of Fred's bug report
is that it does fail when he runs regrtest in an old & deprecated way).

What happens is that test_getopt.py has this near the top:

    from test.test_support import verbose

and this is causing *another* copy of the test_support module to get loaded,
and *its* verbose attr is 1.

So when we run test_getopt "normally" via regrtest, it incorrectly believes
that verbose is 1, and the "expected result" file (which I generated via
regrtest -g) is in fact verbose-mode output.

If I change the import at the top of test_getopt.py to

    from test import test_support
    from test_support import verbose

then test_getopt.py sees the 0 it's supposed to see.

The story is exactly the same, btw, if I run regrtest while *in* the test
directory (so this has nothing to do with that I usually run regrtest from
my build directory).

Now what *Fred* does is equivalent to getting into a Python shell and typing

>>> from test import regrtest
>>> regrtest.main()

and in *that* case (the original) test_getopt sees the 0 it's supposed to
see.

I confess I lost track of how fancy Python imports work a long time ago.  Can
anyone make sense of these symptoms?  Why is a 2nd version of test_support
getting loaded, and why only sometimes?






From fdrake at beopen.com  Tue Aug 22 00:10:53 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 21 Aug 2000 18:10:53 -0400 (EDT)
Subject: [Python-Dev] regression test question
In-Reply-To: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>
References: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>
Message-ID: <14753.43245.685276.857116@cj42289-a.reston1.va.home.com>

I wrote:
 >   I'm having a bit of a problem, though:  the test passes when run as
 > a script, but not when run via the regression test framework.  The
 > problem is *not* with the output file.  I'm getting an exception from
 > the module which is not expected, and is only raised when it runs
 > using the regression framework.

  Of course, I managed to track this down to a bug in my own code.  I
wasn't clearing an error that should have been cleared, and that was
affecting the result in strange ways.
  I'm not at all sure why it didn't affect the results in more cases,
but that may just mean I need more variation in my tests.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Tue Aug 22 00:13:32 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 22 Aug 2000 00:13:32 +0200
Subject: [Python-Dev] regression test question
In-Reply-To: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Mon, Aug 21, 2000 at 04:32:22PM -0400
References: <14753.37334.464696.70650@cj42289-a.reston1.va.home.com>
Message-ID: <20000822001331.D4933@xs4all.nl>

On Mon, Aug 21, 2000 at 04:32:22PM -0400, Fred L. Drake, Jr. wrote:

>   I'm working on bringing the parser module up to date and introducing
> a regression test for it.  (And if the grammar stops changing, it may
> actually get finished!)

Well, I have augmented assignment in the queue, but that's about it for
Grammar changing patches ;)

>   I'm having a bit of a problem, though:  the test passes when run as
> a script, but not when run via the regression test framework.  The
> problem is *not* with the output file.  I'm getting an exception from
> the module which is not expected, and is only raised when it runs
> using the regression framework.
>   Has anyone else encountered a similar problem?  I've checked to make
> sure the right version of parsermodule.so and test_parser.py are being
> picked up.

I've seen this kind of problem when writing the pty test suite: fork() can
do nasty things to the regression test suite. You have to make sure the
child process exits brutally, in all cases, and *does not output anything*,
etc. I'm not sure if that's your problem though. Another issue I had to
deal with was a signal/threads problem on BSDI: enabling threads
screwed up random tests *after* the signal or thread test (never could
figure out what triggered it ;)

(This kind of problem is generic: several test modules, like test_signal,
set 'global' attributes to test something, and don't always reset them. If
you type ^C at the right moment in the test process, test_signal doesn't
remove the SIGINT-handler, and subsequent ^C's don't do anything other than
saying 'HandlerBC called' and failing the test ;))

I'm guessing this is what your parser test is hitting: the regression tester
itself sets something differently from running it directly. Try importing
the test from a script rather than calling it directly ? Did you remember to
set PYTHONPATH and such, like 'make test' does ? Did you use '-tt' like
'make test' does ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Tue Aug 22 00:30:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 21 Aug 2000 18:30:11 -0400
Subject: [Python-Dev] Looking for an "import" expert
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEDAHBAA.tim_one@email.msn.com>

> ...
> What happens is that test_getopt.py has this near the top:
>
>     from test.test_support import verbose
>
> and this is causing *another* copy of the test_support module to
> get loaded, and *its* verbose attr is 1.

Maybe adding these lines after that import will help clarify it for you
(note that you can't print to stdout without screwing up what regrtest
expects):

import sys
print >> sys.stderr, sys.modules["test_support"], \
                     sys.modules["test.test_support"]

(hot *damn* is that more pleasant than pasting stuff together by hand!).

When I run it, I get

<module 'test_support' from '..\lib\test\test_support.pyc'>
<module 'test.test_support' from
    'C:\CODE\PYTHON\DIST\SRC\lib\test\test_support.pyc'>

so they clearly are distinct modules.





From guido at beopen.com  Tue Aug 22 02:00:41 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 21 Aug 2000 19:00:41 -0500
Subject: [Python-Dev] Looking for an "import" expert
In-Reply-To: Your message of "Mon, 21 Aug 2000 18:01:54 -0400."
             <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com> 
Message-ID: <200008220000.TAA27482@cj20424-a.reston1.va.home.com>

If the tests are run "the modern way" (python ../Lib/test/regrtest.py)
then the script directory is the test directory and it is on the path, so
"import test_support" sees and loads a toplevel module test_support.
Then "import test.test_support" sees a package test with a
test_support submodule which is assumed to be a different one, so it
is loaded again.

But if the tests are run via "import test.autotest" (or "import
test.regrtest; test.regrtest.main()") then "import test_support" knows
that the importing module is in the test package, so it first tries to
import the test_support submodule from that package, so
test.test_support and (plain) test_support are the same.

Conclusion: inside the test package, never refer explicitly to the
test package.  Always use "import test_support".  Never "import
test.test_support" or "from test.test_support import verbose" or "from
test import test_support".

This is one for the README!
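The dual import is easy to reproduce outside the test suite; here is a self-contained sketch (the package and module names are invented) that puts both a package directory and its parent on sys.path, which is what running regrtest.py as a script amounts to:

```python
# Reproducing the dual-import problem: the same file loaded twice under
# two different names.  We build a throwaway package in a temp dir.
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
pkgdir = os.path.join(tmp, 'pkg')
os.mkdir(pkgdir)
open(os.path.join(pkgdir, '__init__.py'), 'w').close()
with open(os.path.join(pkgdir, 'mod.py'), 'w') as f:
    f.write('verbose = 1\n')

sys.path.insert(0, tmp)     # makes 'pkg.mod' importable
sys.path.insert(0, pkgdir)  # makes bare 'mod' importable

import mod                   # loaded as top-level 'mod'
from pkg import mod as pmod  # the *same file*, loaded again as 'pkg.mod'

print(mod is pmod)           # False: two distinct module objects
```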

I've fixed this by checking in a small patch to test_getopt.py and the
corresponding output file (because of the bug, the output file was
produced under verbose mode).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From MarkH at ActiveState.com  Tue Aug 22 03:58:15 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 22 Aug 2000 11:58:15 +1000
Subject: [Python-Dev] configure.in, C++ and Linux
In-Reply-To: <20000821110616.A547@kronos.cnri.reston.va.us>
Message-ID: <ECEPKNMJLHAPFFJHDOJBGENODFAA.MarkH@ActiveState.com>

[Andrew]

> The Makefile.pre.in is probably from Misc/Makefile.pre.in, which has a
> reference to $(CCC); Modules/Makefile.pre.in is more up to date and
> uses $(CXX).  Modules/makesetup also refers to $(CCC), and probably
> needs to be changed to use $(CXX), matching Modules/Makefile.pre.in.

This is a bug in the install script then - it installs the CCC version into
/usr/local/lib/python2.0/config.

Also, the online extending-and-embedding instructions explicitly tell you
to use the Misc/ version
(http://www.python.org/doc/current/ext/building-on-unix.html)

> Given that we want to encourage the use of the Distutils,
> Misc/Makefile.pre.in should be deleted to avoid having people use it.

That may be a little drastic :-)

So, as far as I can tell, we have a problem in that using the most widely
available instructions, attempting to build a new C++ extension module on
Linux will fail.  Can someone confirm it is indeed a bug that I should
file?  (Or maybe a patch I should submit?)

Mark.




From guido at beopen.com  Tue Aug 22 05:32:18 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 21 Aug 2000 22:32:18 -0500
Subject: [Python-Dev] iterators
In-Reply-To: Your message of "Mon, 21 Aug 2000 12:04:04 +0200."
             <39A0FE94.1AF5FABF@lemburg.com> 
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com>  
            <39A0FE94.1AF5FABF@lemburg.com> 
Message-ID: <200008220332.WAA02661@cj20424-a.reston1.va.home.com>

[BDFL]
> > The statement
> > 
> >   for <variable> in <object>: <block>
> > 
> > should translate into this kind of pseudo-code:
> > 
> >   # variant 1
> >   __temp = <object>.newiterator()
> >   while 1:
> >       try: <variable> = __temp.next()
> >       except ExhaustedIterator: break
> >       <block>
> > 
> > or perhaps (to avoid the relatively expensive exception handling):
> > 
> >   # variant 2
> >   __temp = <object>.newiterator()
> >   while 1:
> >       __flag, <variable> = __temp.next()
> >       if not __flag: break
> >       <block>

[MAL]
> How about a third variant:
> 
> #3:
> __iter = <object>.iterator()
> while __iter:
>    <variable> = __iter.next()
>    <block>
> 
> This adds a slot call, but removes the malloc overhead introduced
> by returning a tuple for every iteration (which is likely to be
> a performance problem).

Are you sure the slot call doesn't cause some malloc overhead as well?

Anyway, the problem with this one is that it requires a dynamic
iterator (one that generates values on the fly, e.g. something reading
lines from a pipe) to hold on to the next value between "while __iter"
and "__iter.next()".

> Another possibility would be using an iterator attribute
> to get at the variable:
> 
> #4:
> __iter = <object>.iterator()
> while 1:
>    if not __iter.next():
>         break
>    <variable> = __iter.value
>    <block>

Uglier than any of the others.

> You might want to check out the counterobject.c approach I used
> to speed up the current for-loop in Python 1.5's ceval.c:
> it's basically a mutable C integer which is used instead of
> the current Python integer index.

> The details can be found in my old patch:
> 
>   http://starship.python.net/crew/lemburg/mxPython-1.5.patch.gz

Ah, yes, that's what I was thinking of.

> """ Generic object iterators.
[...]

Thanks -- yes, that's what I was thinking of.  Did you just whip this
up?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
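Variant 1 above can be written out concretely. ExhaustedIterator and the translation scheme are taken from the thread; the ListIterator class is invented as a stand-in for whatever `<object>.newiterator()` would return:

```python
# A concrete rendering of variant 1: exception-terminated iteration.

class ExhaustedIterator(Exception):
    pass

class ListIterator:
    def __init__(self, seq):
        self.seq = seq
        self.pos = 0
    def next(self):
        if self.pos >= len(self.seq):
            raise ExhaustedIterator
        value = self.seq[self.pos]
        self.pos += 1
        return value

# The proposed translation of "for x in obj: block", written by hand:
result = []
__temp = ListIterator([10, 20, 30])   # stands in for <object>.newiterator()
while 1:
    try:
        x = __temp.next()
    except ExhaustedIterator:
        break
    result.append(x)                  # stands in for <block>
print(result)   # [10, 20, 30]
```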



From mal at lemburg.com  Tue Aug 22 09:58:12 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 09:58:12 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com>  
	            <39A0FE94.1AF5FABF@lemburg.com> <200008220332.WAA02661@cj20424-a.reston1.va.home.com>
Message-ID: <39A23294.B2DB3C77@lemburg.com>

Guido van Rossum wrote:
> 
> [BDFL]
> > > The statement
> > >
> > >   for <variable> in <object>: <block>
> > >
> > > should translate into this kind of pseudo-code:
> > >
> > >   # variant 1
> > >   __temp = <object>.newiterator()
> > >   while 1:
> > >       try: <variable> = __temp.next()
> > >       except ExhaustedIterator: break
> > >       <block>
> > >
> > > or perhaps (to avoid the relatively expensive exception handling):
> > >
> > >   # variant 2
> > >   __temp = <object>.newiterator()
> > >   while 1:
> > >       __flag, <variable> = __temp.next()
> > >       if not __flag: break
> > >       <block>
> 
> [MAL]
> > How about a third variant:
> >
> > #3:
> > __iter = <object>.iterator()
> > while __iter:
> >    <variable> = __iter.next()
> >    <block>
> >
> > This adds a slot call, but removes the malloc overhead introduced
> > by returning a tuple for every iteration (which is likely to be
> > a performance problem).
> 
> Are you sure the slot call doesn't cause some malloc overhead as well?

Quite likely not, since the slot in question doesn't generate
Python objects (nb_nonzero).
 
> Anyway, the problem with this one is that it requires a dynamic
> iterator (one that generates values on the fly, e.g. something reading
> lines from a pipe) to hold on to the next value between "while __iter"
> and "__iter.next()".

Hmm, that depends on how you look at it: I was thinking in terms
of reading from a file -- feof() is true as soon as the end of
file is reached. The same could be done for iterators.

We might also consider a mixed approach:

#5:
__iter = <object>.iterator()
while __iter:
   try:
       <variable> = __iter.next()
   except ExhaustedIterator:
       break
   <block>

Some iterators may want to signal the end of iteration using
an exception, others via the truth test prior to calling .next(),
e.g. a list iterator can easily implement the truth test
variant, while an iterator with complex .next() processing
might want to use the exception variant.

Another possibility would be using exception class objects
as singleton indicators of "end of iteration":

#6:
__iter = <object>.iterator()
while 1:
   try:
       rc = __iter.next()
   except ExhaustedIterator:
       break
   else:
       if rc is ExhaustedIterator:
           break
   <variable> = rc
   <block>

> > Another possibility would be using an iterator attribute
> > to get at the variable:
> >
> > #4:
> > __iter = <object>.iterator()
> > while 1:
> >    if not __iter.next():
> >         break
> >    <variable> = __iter.value
> >    <block>
> 
> Uglier than any of the others.
> 
> > You might want to check out the counterobject.c approach I used
> > to speed up the current for-loop in Python 1.5's ceval.c:
> > it's basically a mutable C integer which is used instead of
> > the current Python integer index.
> 
> > The details can be found in my old patch:
> >
> >   http://starship.python.net/crew/lemburg/mxPython-1.5.patch.gz
> 
> Ah, yes, that's what I was thinking of.
> 
> > """ Generic object iterators.
> [...]
> 
> Thanks -- yes, that's what I was thinking of.  Did you just whip this
> up?

The file says: Feb 2000... I don't remember what I wrote it for;
it's in my lib/ dir, meaning it qualified as a general-purpose
utility :-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Tue Aug 22 10:01:40 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 10:01:40 +0200
Subject: [Python-Dev] Looking for an "import" expert
References: <LNBBLJKPBEHFEDALKOLCKECNHBAA.tim_one@email.msn.com> <200008220000.TAA27482@cj20424-a.reston1.va.home.com>
Message-ID: <39A23364.356E9EE4@lemburg.com>

Guido van Rossum wrote:
> 
> If the tests are run "the modern way" (python ../Lib/test/regrtest.py)
> then the test module is the script directory and it is on the path, so
> "import test_support" sees and loads a toplevel module test_support.
> Then "import test.test_support" sees a package test with a
> test_support submodule which is assumed to be a different one, so it
> is loaded again.
> 
> But if the tests are run via "import test.autotest" (or "import
> test.regrtest; test.regrtest.main()" the "import test_support" knows
> that the importing module is in the test package, so it first tries to
> import the test_support submodule from that package, so
> test.test_support and (plain) test_support are the same.
> 
> Conclusion: inside the test package, never refer explicitly to the
> test package.  Always use "import test_support".  Never "import
> test.test_support" or "from test.test_support import verbose" or "from
> test import test_support".

I'd rather suggest a different convention: *always* import
using the full path, i.e. "from test import test_support". 

This scales much better and also avoids a nasty problem with
Python pickles related to much the same problem Tim found here:
dual import of subpackage modules (note that pickle will always
do the full path import).
 
> This is one for the README!
> 
> I've fixed this by checking in a small patch to test_getopt.py and the
> corresponding output file (because of the bug, the output file was
> produced under verbose mode).
> 
> --Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jack at oratrix.nl  Tue Aug 22 11:34:20 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 22 Aug 2000 11:34:20 +0200
Subject: [Python-Dev] New anal crusade 
In-Reply-To: Message by "Tim Peters" <tim_one@email.msn.com> ,
	     Sat, 19 Aug 2000 13:34:28 -0400 , <LNBBLJKPBEHFEDALKOLCGEMCHAAA.tim_one@email.msn.com> 
Message-ID: <20000822093420.AE00B303181@snelboot.oratrix.nl>

> Has anyone tried compiling Python under gcc with
> 
>     -Wmissing-prototypes -Wstrict-prototypes

I have a similar set of options (actually it's difficult to turn them off if
you're checking for prototypes :-) which make the CodeWarrior compiler for
the Mac strict about prototypes: it complains about external functions
being declared without a prototype in scope. I was initially baffled by this:
why would it want a prototype if the function declaration is ansi-style
anyway? But it turns out it's a really neat warning: usually if you declare an
external without a prototype in scope it means that it isn't declared in a .h 
file, which means that either (a) it shouldn't have been an extern in the 
first place or (b) you've duplicated the prototype in an external declaration 
somewhere else which means the prototypes aren't necessarily identical.

For Python this option produces warnings for all the init routines, which is 
to be expected, but also for various other things (I seem to remember there's 
a couple in the GC code). If anyone is interested I can print them out and 
send them here.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From jack at oratrix.nl  Tue Aug 22 11:45:41 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 22 Aug 2000 11:45:41 +0200
Subject: [Python-Dev] iterators 
In-Reply-To: Message by "Guido van Rossum" <guido@python.org> ,
	     Fri, 18 Aug 2000 16:57:15 -0400 , <011601c00a1f$9923d460$7aa41718@beopen.com> 
Message-ID: <20000822094542.71467303181@snelboot.oratrix.nl>

>   it = something.newiterator()
>   for x over it:
>       if time_to_start_second_loop(): break
>       do_something()
>   for x over it:
>       do_something_else()

Another, similar, paradigm I find myself often using is something like
    tmplist = []
    for x in origlist:
        if x.has_some_property():
           tmplist.append(x)
        else:
           do_something()
    for x in tmplist:
        do_something_else()

I think I'd like it if iterators could do something like
    it = origlist.iterator()
    for x in it:
        if x.has_some_property():
           it.pushback()
        else:
           do_something()
    for x in it:
        do_something_else()
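A wrapper along these lines is easy to sketch. The pushback() method name comes from the message above; everything else is invented, and nothing like this existed in Python at the time:

```python
# Sketch of an iterator with pushback: items pushed back during one
# pass are replayed on the next pass over the same iterator object.

class PushbackIterator:
    def __init__(self, seq):
        self.seq = list(seq)
        self.pos = 0
        self.held = []      # items pushed back for the next pass
        self.last = None
    def __iter__(self):
        return self
    def __next__(self):
        if self.pos >= len(self.seq):
            # End of this pass: the pushed-back items become the
            # sequence for the next for-loop over this iterator.
            self.seq, self.held = self.held, []
            self.pos = 0
            raise StopIteration
        self.last = self.seq[self.pos]
        self.pos += 1
        return self.last
    def pushback(self):
        self.held.append(self.last)

it = PushbackIterator([1, 2, 3, 4, 5])
evens = []
for x in it:
    if x % 2:           # "has_some_property"
        it.pushback()
    else:
        evens.append(x)
odds = [x for x in it]  # second loop sees only the pushed-back items
print(evens, odds)      # [2, 4] [1, 3, 5]
```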

--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From guido at beopen.com  Tue Aug 22 15:03:28 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 22 Aug 2000 08:03:28 -0500
Subject: [Python-Dev] iterators
In-Reply-To: Your message of "Tue, 22 Aug 2000 09:58:12 +0200."
             <39A23294.B2DB3C77@lemburg.com> 
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <200008220332.WAA02661@cj20424-a.reston1.va.home.com>  
            <39A23294.B2DB3C77@lemburg.com> 
Message-ID: <200008221303.IAA09992@cj20424-a.reston1.va.home.com>

> > [MAL]
> > > How about a third variant:
> > >
> > > #3:
> > > __iter = <object>.iterator()
> > > while __iter:
> > >    <variable> = __iter.next()
> > >    <block>
> > >
> > > This adds a slot call, but removes the malloc overhead introduced
> > > by returning a tuple for every iteration (which is likely to be
> > > a performance problem).
> > 
> > Are you sure the slot call doesn't cause some malloc overhead as well?
> 
> Quite likely not, since the slot in question doesn't generate
> Python objects (nb_nonzero).

Agreed only for built-in objects like lists.  For class instances this
would be way more expensive, because of the two calls vs. one!

> > Ayway, the problem with this one is that it requires a dynamic
> > iterator (one that generates values on the fly, e.g. something reading
> > lines from a pipe) to hold on to the next value between "while __iter"
> > and "__iter.next()".
> 
> Hmm, that depends on how you look at it: I was thinking in terms
> of reading from a file -- feof() is true as soon as the end of
> file is reached. The same could be done for iterators.

But feof() needs to read an extra character into the buffer if the
buffer is empty -- so it needs buffering!  That's what I'm trying to
avoid.
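The buffering requirement can be made concrete with a one-line-lookahead sketch (the class and method names are invented):

```python
# Why the truth-test protocol forces buffering for on-the-fly sources:
# answering "is there more?" before .next() requires reading one item
# ahead and holding on to it.
import io

class LineIterator:
    def __init__(self, fileobj):
        self.fileobj = fileobj
        self._lookahead = fileobj.readline()   # the buffered item
    def __bool__(self):
        # "while __iter" consults this; it only works because we have
        # already read the next line.
        return self._lookahead != ''
    def next(self):
        line, self._lookahead = self._lookahead, self.fileobj.readline()
        return line

src = LineIterator(io.StringIO("one\ntwo\n"))
lines = []
while src:
    lines.append(src.next())
print(lines)    # ['one\n', 'two\n']
```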

> We might also consider a mixed approach:
> 
> #5:
> __iter = <object>.iterator()
> while __iter:
>    try:
>        <variable> = __iter.next()
>    except ExhaustedIterator:
>        break
>    <block>
> 
> Some iterators may want to signal the end of iteration using
> an exception, others via the truth test prior to calling .next(),
> e.g. a list iterator can easily implement the truth test
> variant, while an iterator with complex .next() processing
> might want to use the exception variant.

Belt and suspenders.  What does this buy you over "while 1"?

> Another possibility would be using exception class objects
> as singleton indicators of "end of iteration":
> 
> #6:
> __iter = <object>.iterator()
> while 1:
>    try:
>        rc = __iter.next()
>    except ExhaustedIterator:
>        break
>    else:
>        if rc is ExhaustedIterator:
>            break
>    <variable> = rc
>    <block>

Then I'd prefer to use a single protocol:

    #7:
    __iter = <object>.iterator()
    while 1:
       rc = __iter.next()
       if rc is ExhaustedIterator:
	   break
       <variable> = rc
       <block>

This means there's a special value that you can't store in lists
though, and that would bite some introspection code (e.g. code listing
all internal objects).
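Written out, protocol #7 and its drawback look like this (ListIterator is an invented helper; the sentinel is just a unique marker object):

```python
# Sketch of protocol #7: a sentinel object signals exhaustion instead
# of an exception.

ExhaustedIterator = object()

class ListIterator:
    def __init__(self, seq):
        self.seq = seq
        self.pos = 0
    def next(self):
        if self.pos >= len(self.seq):
            return ExhaustedIterator
        value = self.seq[self.pos]
        self.pos += 1
        return value

result = []
__iter = ListIterator(['a', 'b'])
while 1:
    rc = __iter.next()
    if rc is ExhaustedIterator:
        break
    result.append(rc)
print(result)   # ['a', 'b']

# The drawback noted above: a list *containing* the sentinel cannot be
# iterated faithfully -- the loop stops at it instead of yielding it.
trap = ListIterator([ExhaustedIterator, 'c'])
print(trap.next() is ExhaustedIterator)   # True: looks like "done"
```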

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From fredrik at pythonware.com  Tue Aug 22 14:27:11 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 22 Aug 2000 14:27:11 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is...
Message-ID: <002201c00c34$4cbb9b50$0900a8c0@SPIFF>

well, see for yourself:
http://www.pythonlabs.com/logos.html






From thomas.heller at ion-tof.com  Tue Aug 22 15:13:20 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Tue, 22 Aug 2000 15:13:20 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
References: <200008221210.FAA25857@slayer.i.sourceforge.net>
Message-ID: <001501c00c3a$becdaac0$4500a8c0@thomasnb>

> Update of /cvsroot/python/python/dist/src/PCbuild
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv25776
> 
> Modified Files:
> python20.wse 
> Log Message:
> Thomas Heller noticed that the wrong registry entry was written for
> the DLL.  Replace
>  %_SYSDEST_%\Python20.dll
> with
>  %_DLLDEST_%\Python20.dll.
> 
Unfortunately, there was a bug in my bug-report.

%DLLDEST% would have been correct.
Sorry: Currently I don't have time to test the fix.

Thomas Heller




From MarkH at ActiveState.com  Tue Aug 22 15:32:25 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 22 Aug 2000 23:32:25 +1000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
In-Reply-To: <001501c00c3a$becdaac0$4500a8c0@thomasnb>
Message-ID: <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com>

> > Modified Files:
> > python20.wse
> > Log Message:
> > Thomas Heller noticed that the wrong registry entry was written for
> > the DLL.  Replace
> >  %_SYSDEST_%\Python20.dll
> > with
> >  %_DLLDEST_%\Python20.dll.
> >
> Unfortunately, there was a bug in my bug-report.

Actually, there is no need to write that entry at all!  It should be
removed.  I thought it was, ages ago.

Mark.




From thomas at xs4all.net  Tue Aug 22 15:35:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 22 Aug 2000 15:35:13 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is...
In-Reply-To: <002201c00c34$4cbb9b50$0900a8c0@SPIFF>; from fredrik@pythonware.com on Tue, Aug 22, 2000 at 02:27:11PM +0200
References: <002201c00c34$4cbb9b50$0900a8c0@SPIFF>
Message-ID: <20000822153512.H4933@xs4all.nl>

On Tue, Aug 22, 2000 at 02:27:11PM +0200, Fredrik Lundh wrote:

> well, see for yourself:
> http://www.pythonlabs.com/logos.html

Oh, that reminds me, the FAQ needs adjusting ;) It still says:
"""
1.2. Why is it called Python?

Apart from being a computer scientist, I'm also a fan of "Monty Python's
Flying Circus" (a BBC comedy series from the seventies, in the -- unlikely
-- case you didn't know). It occurred to me one day that I needed a name
that was short, unique, and slightly mysterious. And I happened to be
reading some scripts from the series at the time... So then I decided to
call my language Python. But Python is not a joke. And don't you associate
it with dangerous reptiles either! (If you need an icon, use an image of the
16-ton weight from the TV series or of a can of SPAM :-)
"""
 
And while I'm at it, I hope I can say without offending anyone that I hope
the logo is open for criticism. Few (if any?) python species look that
green, making the logo look more like an adder. And I think the more
majestic python species are cooler on a logo, anyway ;) If whoever makes the
logos wants, I can go visit the small reptile-zoo around the corner from
where I live and snap some pictures of the Pythons they have there (they
have great Tiger-Pythons, including an albino one!)

I-still-like-the-shirt-though!-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas.heller at ion-tof.com  Tue Aug 22 15:52:16 2000
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Tue, 22 Aug 2000 15:52:16 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
References: <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com>
Message-ID: <002b01c00c40$2e7b32c0$4500a8c0@thomasnb>

> > > Modified Files:
> > > python20.wse
> > > Log Message:
> > > Thomas Heller noticed that the wrong registry entry was written for
> > > the DLL.  Replace
> > >  %_SYSDEST_%\Python20.dll
> > > with
> > >  %_DLLDEST_%\Python20.dll.
> > >
> > Unfortunately, there was a bug in my bug-report.
> 
> Actually, there is no need to write that entry at all!  It should be
> removed.  I thought it was, ages ago.
I would like to use this entry to find the Python interpreter
belonging to a given registry entry.

How would you do it if this entry is missing?
Guess the name python<major-version/minor-version>.dll???

Thomas Heller




From guido at beopen.com  Tue Aug 22 17:04:33 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 22 Aug 2000 10:04:33 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
In-Reply-To: Your message of "Tue, 22 Aug 2000 23:32:25 +1000."
             <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com> 
References: <ECEPKNMJLHAPFFJHDOJBOEONDFAA.MarkH@ActiveState.com> 
Message-ID: <200008221504.KAA01151@cj20424-a.reston1.va.home.com>

> From: "Mark Hammond" <MarkH at ActiveState.com>
> To: "Thomas Heller" <thomas.heller at ion-tof.com>, <python-dev at python.org>
> Date: Tue, 22 Aug 2000 23:32:25 +1000
> 
> > > Modified Files:
> > > python20.wse
> > > Log Message:
> > > Thomas Heller noticed that the wrong registry entry was written for
> > > the DLL.  Replace
> > >  %_SYSDEST_%\Python20.dll
> > > with
> > >  %_DLLDEST_%\Python20.dll.
> > >
> > Unfortunately, there was a bug in my bug-report.

Was that last line Thomas Heller speaking?  I didn't see that mail!
(Maybe because my machine crashed due to an unexpected power outage a
few minutes ago.)

> Actually, there is no need to write that entry at all!  It should be
> removed.  I thought it was, ages ago.

OK, will do, but for 2.0 only.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From MarkH at ActiveState.com  Tue Aug 22 16:04:08 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Wed, 23 Aug 2000 00:04:08 +1000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/PCbuild python20.wse,1.6,1.7
In-Reply-To: <002b01c00c40$2e7b32c0$4500a8c0@thomasnb>
Message-ID: <ECEPKNMJLHAPFFJHDOJBOEOPDFAA.MarkH@ActiveState.com>

[Me, about removing the .DLL entry from the registry]

> > Actually, there is no need to write that entry at all!  It should be
> > removed.  I thought it was, ages ago.

[Thomas]
> I would like to use this entry to find the python-interpreter
> belonging to a certain registry entry.
>
> How would you do it if this entry is missing?
> Guess the name python<major-version/minor-version>.dll???

I think I am responsible for this registry entry in the first place.
Pythonwin/COM etc. went down the path of locating and loading the Python
DLL from the registry, but it has since all been long removed.

The basic problem is that there is only _one_ acceptable Python DLL for a
given version, regardless of what that particular registry says!  If the
registry points to the "wrong" DLL, things start to go wrong pretty quick,
and in not-so-obvious ways!

I think it is better to LoadLibrary("Python%d.dll") (or GetModuleHandle()
if you know Python is initialized) - this is what the system itself will
soon be doing to start Python up anyway!

Mark.




From skip at mojam.com  Tue Aug 22 16:45:27 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 22 Aug 2000 09:45:27 -0500 (CDT)
Subject: [Python-Dev] commonprefix - the beast just won't die...
Message-ID: <14754.37383.913840.582313@beluga.mojam.com>

I reverted the changes to {posix,nt,dos}path.commonprefix this morning,
updated the tests (still to be checked in) and was starting to work on
documentation changes, when I realized that something Guido said about using
dirname to trim to the common directory prefix is probably not correct.
Here's an example.  The common prefix of ["/usr/local", "/usr/local/bin"] is
"/usr/local".  If you blindly apply dirname to that (which is what I think
Guido suggested as the way to make commonprefix do what I wanted), you wind
up with "/usr", which isn't going to be correct on most Unix flavors.
Instead, you need to check that the prefix doesn't exist or isn't a
directory before applying dirname.  (And of course, that only works on the
machine containing the paths in question.  You should be able to import
posixpath on a Mac and feed it Unix-style paths, which you won't be able to
check for existence.)
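
The pitfall is easy to reproduce with posixpath directly (a minimal
sketch; posixpath is pure string manipulation, so it runs anywhere):

```python
import posixpath

paths = ["/usr/local", "/usr/local/bin"]
prefix = posixpath.commonprefix(paths)
print(prefix)                     # -> /usr/local
# Blindly applying dirname throws away a real component:
print(posixpath.dirname(prefix))  # -> /usr
```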

Based on this problem, I'm against documenting using dirname to trim the
commonprefix output to a directory prefix.  I'm going to submit a patch with
the test case and minimal documentation changes and leave it at that for
now.

Skip



From mal at lemburg.com  Tue Aug 22 16:43:50 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 16:43:50 +0200
Subject: [Python-Dev] iterators
References: <LNBBLJKPBEHFEDALKOLCEEALHAAA.tim_one@email.msn.com> <399BE124.9920B0B6@prescod.net> <011601c00a1f$9923d460$7aa41718@beopen.com> <39A0FE94.1AF5FABF@lemburg.com> <200008220332.WAA02661@cj20424-a.reston1.va.home.com>  
	            <39A23294.B2DB3C77@lemburg.com> <200008221303.IAA09992@cj20424-a.reston1.va.home.com>
Message-ID: <39A291A6.3DC7A4E3@lemburg.com>

Guido van Rossum wrote:
> 
> > > [MAL]
> > > > How about a third variant:
> > > >
> > > > #3:
> > > > __iter = <object>.iterator()
> > > > while __iter:
> > > >    <variable> = __iter.next()
> > > >    <block>
> > > >
> > > > This adds a slot call, but removes the malloc overhead introduced
> > > > by returning a tuple for every iteration (which is likely to be
> > > > a performance problem).
> > >
> > > Are you sure the slot call doesn't cause some malloc overhead as well?
> >
> > Quite likely not, since the slot in question doesn't generate
> > Python objects (nb_nonzero).
> 
> Agreed only for built-in objects like lists.  For class instances this
> would be way more expensive, because of the two calls vs. one!

True.
 
> > > Ayway, the problem with this one is that it requires a dynamic
> > > iterator (one that generates values on the fly, e.g. something reading
> > > lines from a pipe) to hold on to the next value between "while __iter"
> > > and "__iter.next()".
> >
> > Hmm, that depends on how you look at it: I was thinking in terms
> > of reading from a file -- feof() is true as soon as the end of
> > file is reached. The same could be done for iterators.
> 
> But feof() needs to read an extra character into the buffer if the
> buffer is empty -- so it needs buffering!  That's what I'm trying to
> avoid.

Ok.
 
> > We might also consider a mixed approach:
> >
> > #5:
> > __iter = <object>.iterator()
> > while __iter:
> >    try:
> >        <variable> = __iter.next()
> >    except ExhaustedIterator:
> >        break
> >    <block>
> >
> > Some iterators may want to signal the end of iteration using
> > an exception, others via the truth test prior to calling .next(),
> > e.g. a list iterator can easily implement the truth test
> > variant, while an iterator with complex .next() processing
> > might want to use the exception variant.
> 
> Belt and suspenders.  What does this buy you over "while 1"?

It gives you two possible ways to signal "end of iteration".
But your argument about Python iterators (as opposed to
builtin ones) applies here as well, so I withdraw this one :-)
 
> > Another possibility would be using exception class objects
> > as singleton indicators of "end of iteration":
> >
> > #6:
> > __iter = <object>.iterator()
> > while 1:
> >    try:
> >        rc = __iter.next()
> >    except ExhaustedIterator:
> >        break
> >    else:
> >        if rc is ExhaustedIterator:
> >            break
> >    <variable> = rc
> >    <block>
> 
> Then I'd prefer to use a single protocol:
> 
>     #7:
>     __iter = <object>.iterator()
>     while 1:
>        rc = __iter.next()
>        if rc is ExhaustedIterator:
>            break
>        <variable> = rc
>        <block>
> 
> This means there's a special value that you can't store in lists
> though, and that would bite some introspection code (e.g. code listing
> all internal objects).

Which brings us back to the good old "end of iteration" == raise
an exception logic :-)

Would this really hurt all that much in terms of performance ?
I mean, today's for-loop code uses IndexError for much the same
thing...
 
    #8:
    __iter = <object>.iterator()
    while 1:
       try:
           <variable> = __iter.next()
       except ExhaustedIterator:
           break
       <block>

Since this will be written in C, we don't even have the costs
of setting up an exception block.

I would still suggest that the iterator provide the current
position and iteration value as attributes. This avoids some
caching of those values and also helps when debugging code
using introspection tools.

The positional attribute will probably have to be optional
since not all iterators can supply this information, but
the .value attribute is certainly within range (it would
cache the value returned by the last .next() or .prev()
call).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Tue Aug 22 17:16:38 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 22 Aug 2000 17:16:38 +0200
Subject: [Python-Dev] commonprefix - the beast just won't die...
In-Reply-To: <14754.37383.913840.582313@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 22, 2000 at 09:45:27AM -0500
References: <14754.37383.913840.582313@beluga.mojam.com>
Message-ID: <20000822171638.I4933@xs4all.nl>

On Tue, Aug 22, 2000 at 09:45:27AM -0500, Skip Montanaro wrote:

> I reverted the changes to {posix,nt,dos}path.commonprefix this morning,
> updated the tests (still to be checked in) and was starting to work on
> documentation changes, when I realized that something Guido said about using
> dirname to trim to the common directory prefix is probably not correct.
> Here's an example.  The common prefix of ["/usr/local", "/usr/local/bin"] is
> "/usr/local".  If you blindly apply dirname to that (which is what I think
> Guido suggested as the way to make commonprefix do what I wanted), you wind
> up with "/usr", which isn't going to be correct on most Unix flavors.
> Instead, you need to check that the prefix doesn't exist or isn't a
> directory before applying dirname.

And even that won't work, in a case like this:

/home/swenson/
/home/swen/

(common prefix would be /home/swen, which is a directory) or cases like
this:

/home/swenson/
/home/swenniker/

where another directory called
/home/swen

exists.
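
Concretely, since commonprefix compares character by character, both
cases collapse onto the same third path (a sketch using posixpath):

```python
import posixpath

# The character-wise prefix of both pairs is /home/swen, which may
# itself exist as a directory, so an isdir() check on the prefix
# cannot tell these cases apart.
print(posixpath.commonprefix(["/home/swenson/", "/home/swen/"]))       # -> /home/swen
print(posixpath.commonprefix(["/home/swenson/", "/home/swenniker/"]))  # -> /home/swen
```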

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Tue Aug 22 20:14:56 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 22 Aug 2000 20:14:56 +0200
Subject: [Python-Dev] Adding doc-strings to attributes [with patch]
Message-ID: <39A2C320.ADF5E80F@lemburg.com>

Here's a patch which roughly implements the proposed attribute
doc-string syntax and semantics:

class C:
        " class C doc-string "

        a = 1
        " attribute C.a doc-string "

        b = 2
        " attribute C.b doc-string "

The compiler would handle these cases as follows:

" class C doc-string " -> C.__doc__
" attribute C.a doc-string " -> C.__doc__a__
" attribute C.b doc-string " -> C.__doc__b__

The name mangling assures that attribute doc-strings

a) participate in class inheritance and
b) are treated as special attributes (following the __xxx__
   convention)

Also, the look&feel of this convention is similar to that
of the other existing conventions: the doc string follows
the definition of the object.
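
Under the proposal, the class above would compile roughly as if it had
been written like this (__doc__a__ is the proposed mangled name, not an
existing Python feature; I assign it by hand here only to show the
intended semantics):

```python
class C:
    " class C doc-string "
    a = 1
    __doc__a__ = " attribute C.a doc-string "
    b = 2
    __doc__b__ = " attribute C.b doc-string "

class D(C):
    pass

# The mangled names are ordinary class attributes, so they take part
# in inheritance like any other attribute:
print(D.__doc__a__)   # ->  attribute C.a doc-string
```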

The patch is a little rough in the sense that binding the
doc-string to the attribute name is done using a helper
variable that is not reset by non-expressions during the
compile. Shouldn't be too hard to fix though... at least
not for one of you compiler wizards ;-)

What do you think ?

[I will probably have to write a PEP for this...]

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/
-------------- next part --------------
--- CVS-Python/Python/compile.c	Tue Aug 22 10:31:06 2000
+++ Python+Unicode/Python/compile.c	Tue Aug 22 19:59:30 2000
@@ -293,10 +293,11 @@ struct compiling {
 	int c_last_addr, c_last_line, c_lnotab_next;
 #ifdef PRIVATE_NAME_MANGLING
 	char *c_private;	/* for private name mangling */
 #endif
 	int c_tmpname;		/* temporary local name counter */
+        char *c_last_name;       /* last assigned name */
 };
 
 
 /* Error message including line number */
 
@@ -435,12 +436,13 @@ com_init(struct compiling *c, char *file
 	c->c_stacklevel = 0;
 	c->c_maxstacklevel = 0;
 	c->c_firstlineno = 0;
 	c->c_last_addr = 0;
 	c->c_last_line = 0;
-	c-> c_lnotab_next = 0;
+	c->c_lnotab_next = 0;
 	c->c_tmpname = 0;
+	c->c_last_name = 0;
 	return 1;
 	
   fail:
 	com_free(c);
  	return 0;
@@ -1866,10 +1868,11 @@ com_assign_name(struct compiling *c, nod
 {
 	REQ(n, NAME);
 	com_addopname(c, assigning ? STORE_NAME : DELETE_NAME, n);
 	if (assigning)
 		com_pop(c, 1);
+	c->c_last_name = STR(n);
 }
 
 static void
 com_assign(struct compiling *c, node *n, int assigning)
 {
@@ -1974,18 +1977,40 @@ com_assign(struct compiling *c, node *n,
 		}
 	}
 }
 
 /* Forward */ static node *get_rawdocstring(node *);
+/* Forward */ static PyObject *get_docstring(node *);
 
 static void
 com_expr_stmt(struct compiling *c, node *n)
 {
 	REQ(n, expr_stmt); /* testlist ('=' testlist)* */
-	/* Forget it if we have just a doc string here */
-	if (!c->c_interactive && NCH(n) == 1 && get_rawdocstring(n) != NULL)
+	/* Handle attribute doc-strings here */
+	if (!c->c_interactive && NCH(n) == 1) {
+	    node *docnode = get_rawdocstring(n);
+	    if (docnode != NULL) {
+		if (c->c_last_name) {
+		    PyObject *doc = get_docstring(docnode);
+		    int i = com_addconst(c, doc);
+		    char docname[420];
+#if 0
+		    printf("found doc-string '%s' for '%s'\n", 
+			   PyString_AsString(doc),
+			   c->c_last_name);
+#endif
+		    sprintf(docname, "__doc__%.400s__", c->c_last_name);
+		    com_addoparg(c, LOAD_CONST, i);
+		    com_push(c, 1);
+		    com_addopnamestr(c, STORE_NAME, docname);
+		    com_pop(c, 1);
+		    c->c_last_name = NULL;
+		    Py_DECREF(doc);
+		}
 		return;
+	    }
+	}
 	com_node(c, CHILD(n, NCH(n)-1));
 	if (NCH(n) == 1) {
 		if (c->c_interactive)
 			com_addbyte(c, PRINT_EXPR);
 		else

From donb at init.com  Tue Aug 22 21:16:39 2000
From: donb at init.com (Donald Beaudry)
Date: Tue, 22 Aug 2000 15:16:39 -0400
Subject: [Python-Dev] Adding insint() function 
References: <20000818110037.C27419@kronos.cnri.reston.va.us>
Message-ID: <200008221916.PAA17130@zippy.init.com>

Andrew Kuchling <akuchlin at mems-exchange.org> wrote,
> This duplication bugs me.  Shall I submit a patch to add an API
> convenience function to do this, and change the modules to use it?
> Suggested prototype and name: PyDict_InsertInteger( dict *, string,
> long)

+0 but...

...why not:

   PyDict_SetValueString(PyObject* dict, char* key, char* fmt, ...)

and

   PyDict_SetValue(PyObject* dict, PyObject* key, char* fmt, ...)

where the fmt is Py_BuildValue() format string and the ... is, of
course, the argument list.

--
Donald Beaudry                                     Ab Initio Software Corp.
                                                   201 Spring Street
donb at init.com                                      Lexington, MA 02421
                      ...Will hack for sushi...



From ping at lfw.org  Tue Aug 22 22:02:31 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 22 Aug 2000 13:02:31 -0700 (PDT)
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import something as'
In-Reply-To: <200008212030.PAA26887@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008212144360.416-100000@skuld.lfw.org>

On Mon, 21 Aug 2000, Guido van Rossum wrote:
> > > > > Summary: Allow all assignment expressions after 'import
> > > > > something as'
[...]
> I kind of doubt it, because it doesn't look useful.

Looks potentially useful to me.  If nothing else, it's certainly
easier to explain than any other behaviour i could think of, since
assignment is already well-understood.

> I do want "import foo.bar as spam" back, assigning foo.bar to spam.

No no no no.  Or at least let's step back and look at the whole
situation first.

"import foo.bar as spam" makes me uncomfortable because:

    (a) It's not clear whether spam should get foo or foo.bar, as
        evidenced by the discussion between Gordon and Thomas.

    (b) There's a straightforward and unambiguous way to express
        this already: "from foo import bar as spam".

    (c) It's not clear whether this should work only for modules
        named bar, or any symbol named bar.


Before packages, the only two forms of the import statement were:

    import <module>
    from <module> import <symbol>

After packages, the permitted forms are now:

    import <module>
    import <package>
    import <pkgpath>.<module>
    import <pkgpath>.<package>
    from <module> import <symbol>
    from <package> import <module>
    from <pkgpath>.<module> import <symbol>
    from <pkgpath>.<package> import <module>

where a <pkgpath> is a dot-separated list of package names.

With "as" clauses, we could permit:

    import <module> as <localmodule>
    import <package> as <localpackage>
??  import <pkgpath>.<module> as <localmodule>
??  import <pkgpath>.<package> as <localpackage>
??  import <module>.<symbol> as <localsymbol>
??  import <pkgpath>.<module>.<symbol> as <localsymbol>
    from <module> import <symbol> as <localsymbol>
    from <package> import <symbol> as <localsymbol>
    from <pkgpath>.<module> import <symbol> as <localsymbol>
    from <pkgpath>.<package> import <module> as <localmodule>

It's not clear that we should allow "as" on the forms marked with
??, since the other six clearly identify the thing being renamed
and they do not.

Also note that all the other forms using "as" assign exactly one
thing: the name after the "as".  Would the forms marked with ??
assign just the name after the "as" (consistent with the other
"as" forms), or also the top-level package name as well (consistent
with the current behaviour of "import <pkgpath>.<module>")?

That is, would

    import foo.bar as spam

define just spam or both foo and spam?

All these questions make me uncertain...
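
For comparison, the unambiguous form in (b) can be spelled today with
real modules (a sketch using the os package as a stand-in for foo.bar):

```python
# "from foo import bar as spam" binds exactly one name, 'spam':
from os import path as spam

import os
assert spam is os.path

# Plain "import foo.bar" binds only the top-level name 'foo'; whether
# "import foo.bar as spam" should bind foo, foo.bar, or both is
# exactly the ambiguity under discussion.
import os.path
assert "path" not in dir()   # no local name 'path' was created
```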


-- ?!ng




From jack at oratrix.nl  Wed Aug 23 00:03:24 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 23 Aug 2000 00:03:24 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is... 
In-Reply-To: Message by Thomas Wouters <thomas@xs4all.net> ,
	     Tue, 22 Aug 2000 15:35:13 +0200 , <20000822153512.H4933@xs4all.nl> 
Message-ID: <20000822220329.19A91E266F@oratrix.oratrix.nl>

Ah, forget about the snake. It was an invention of
those-who-watch-blue-screens, and I guess Guido only jumped on the
bandwagon because those-who-gave-us-bluescreens offered him lots of
money or something.

On Real Machines Python still uses the One And Only True Python Icon,
and will continue to do so by popular demand (although
those-who-used-to-watch-bluescreens-but-saw-the-light occasionally
complain:-).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 



From tim_one at email.msn.com  Wed Aug 23 04:43:04 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 22 Aug 2000 22:43:04 -0400
Subject: Not commonprefix (was RE: [Python-Dev] commonprefix - the beast just won't die...)
In-Reply-To: <20000822171638.I4933@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEGDHBAA.tim_one@email.msn.com>

[Skip Montanaro]
> I reverted the changes to {posix,nt,dos}path.commonprefix this morning,
> updated the tests (still to be checked in) and was starting to work on
> documentation changes, when I realized that something Guido
> said about using dirname to trim to the common directory prefix is
> probably not correct.  Here's an example.  The common prefix of
>     ["/usr/local", "/usr/local/bin"]
> is
>     "/usr/local"
> If you blindly apply dirname to that (which is what I think Guido
> suggested as the way to make commonprefix do what I wanted), you wind
> up with
>     "/usr"
> which isn't going to be correct on most Unix flavors.  Instead, you
> need to check that the prefix doesn't exist or isn't a directory
> before applying dirname.

[Thomas Wouters]
> And even that won't work, in a case like this:
>
> /home/swenson/
> /home/swen/
>
> (common prefix would be /home/swen, which is a directory)

Note that Guido's suggestion does work for that, though.

> or cases like this:
>
> /home/swenson/
> /home/swenniker/
>
> where another directory called
> /home/swen
>
> exists.

Ditto.  This isn't coincidence:  Guido's suggestion works as-is *provided
that* each dirname in the original collection ends with a path separator.
Change Skip's example to

    ["/usr/local/", "/usr/local/bin/"]
                ^ stuck in slashes ^

and Guido's suggestion works fine too.  But these are purely
string-crunching functions, and "/usr/local" *screams* "directory" to people
thanks to its specific name.  Let's make the test case absurdly "simple":

    ["/x/y", "/x/y"]

What's the "common (directory) prefix" here?  Well, there's simply no way to
know at this level.  It's /x/y if y is a directory, or /x if y's just a
regular file.  Guido's suggestion returns /x in this case, or /x/y if you
add trailing slashes to both.  If you don't tell a string gimmick which
inputs are and aren't directories, you can't expect it to guess.
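
Tim's point, as a posixpath sketch (common_dir is a hypothetical helper
name, shown only to make the "trailing separator" proviso concrete):

```python
import posixpath

def common_dir(paths):
    # Guido's suggestion, workable *provided* every input ends in a
    # path separator: the character-wise prefix then never stops in
    # the middle of a component that dirname would mistake for a file.
    return posixpath.dirname(posixpath.commonprefix(paths))

print(common_dir(["/usr/local/", "/usr/local/bin/"]))  # -> /usr/local
print(common_dir(["/home/swenson/", "/home/swen/"]))   # -> /home
```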

I'll say again, if you want a new function, press for one!  Just leave
commonprefix alone.






From tim_one at email.msn.com  Wed Aug 23 06:32:32 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 00:32:32 -0400
Subject: [Python-Dev] 2.0 Release Plans
Message-ID: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>

PythonLabs had a high-decibel meeting today (well, Monday), culminating in
Universal Harmony.  Jeremy will be updating PEP 200 accordingly.  Just three
highlights:

+ The release schedule has Officially Slipped by one week:  2.0b1 will ship
a week from this coming Monday.  There are too many Open and Accepted
patches backed up to meet the original date.  Also problems cropping up,
like new std tests failing to pass every day (you're not supposed to do
that, you know!  consider yourself clucked at), and patches having to be
redone because of other patches getting checked in.  We want to take the
extra week to do this right, and give you more space to do *your* part
right.

+ While only one beta release is scheduled at this time, we reserve the
right to make a second beta release if significant problems crop up during
the first beta period.  Of course that would cause additional slippage of
2.0 final, if it becomes necessary.  Note that "no features after 2.0b1 is
out!" still stands, regardless of how many beta releases there are.

+ I changed the Patch Manager guidelines at

     http://python.sourceforge.net/sf-faq.html#a1

to better reflect the way we're actually using the beast.  In a nutshell,
Rejected has been changed to mean "this patch is dead, period"; and Open
patches that are awaiting resolution of complaints should remain Open.

All right.  Time for inspiration.  From my point of view, you've all had
waaaaaay too much sleep in August!  Pull non-stop all-nighters until 2.0b1
is out the door, or go work on some project for sissies -- like Perl 6.0 or
the twelve thousandth implementation of Scheme <wink>.

no-guts-no-glory-slow-and-steady-wins-the-race-ly y'rs  - tim





From esr at thyrsus.com  Wed Aug 23 07:16:01 2000
From: esr at thyrsus.com (Eric S. Raymond)
Date: Wed, 23 Aug 2000 01:16:01 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 23, 2000 at 12:32:32AM -0400
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
Message-ID: <20000823011601.D29063@thyrsus.com>

Tim Peters <tim_one at email.msn.com>:
> All right.  Time for inspiration.  From my point of view, you've all had
> waaaaaay too much sleep in August!  Pull non-stop all-nighters until 2.0b1
> is out the door, or go work on some project for sissies -- like Perl 6.0 or
> the twelve thousandth implementation of Scheme <wink>.
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Hey!  I *resemble* that remark!

I don't think I'm presently responsible for anything critical.  If I've
spaced something, somebody tell me now.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

"What, then, is law [government]? It is the collective organization of
the individual right to lawful defense."
	-- Frederic Bastiat, "The Law"



From tim_one at email.msn.com  Wed Aug 23 08:57:07 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 02:57:07 -0400
Subject: [Python-Dev] ...and the new name for our favourite little language is...
In-Reply-To: <20000822153512.H4933@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGLHBAA.tim_one@email.msn.com>

[F]
> well, see for yourself:
>     http://www.pythonlabs.com/logos.html

We should explain that.  I'll let Bob Weiner (BeOpen's CTO) do it instead,
though, because he explained it well to us:

<BOB>
From: weiner at beopen.com
Sent: Wednesday, August 23, 2000 1:23 AM

Just to clarify, the intent of these is for use by companies or individuals
who choose on their own to link back to the PythonLabs site and show their
support for BeOpen's work on Python.  Use of any such branding is wholly
voluntary, as you might expect.

To clarify even further, we recognize and work with many wonderful parties
who contribute to Python.  We expect to continue to put out source releases
called just `Python', and brand platform-specific releases which we produce
and quality-assure ourselves as `BeOpen Python' releases.  This is similar
to what other companies do in the Linux space and other open source arenas.
We know of another company branding their Python release; this helps
potential customers differentiate offerings in the largely undifferentiated
open source space.

We believe it is important and we meet with companies every week who tell us
they want one or more companies behind the development, productization and
support of Python (like Red Hat or SuSE behind Linux).  Connecting the
BeOpen name to Python is one way in which we can help them know that we
indeed do provide these services for Python.  The BeOpen name was chosen
very carefully to encourage people to take an open approach in their
technology deployments, so we think this is a good association for Python to
have and hope that many Python users will choose to help support these
efforts.

We're also very open to working with other Python-related firms to help
build broader use and acceptance of Python.  Mail
<pythonlabs-info at beopen.com> if you'd like to work on a partnership
together.

</BOB>

See?  It's not evil.  *All* American CTOs say "space" and "arena" too much,
so don't gripe about that either.  I can tell you that BeOpen isn't exactly
getting rich off their Python support so far, wrestling with CNRI is
exhausting in more ways than one, and Bob Weiner is a nice man.  Up to this
point, his support of PythonLabs has been purely philanthropic!  If you
appreciate that, you *might* even consider grabbing a link.

[Thomas Wouters]
> Oh, that reminds me, the FAQ needs adjusting ;) It still says:
> """
> 1.2. Why is it called Python?
>
> Apart from being a computer scientist, I'm also a fan of "Monty Python's
> Flying Circus" (a BBC comedy series from the seventies, in the -- unlikely
> -- case you didn't know). It occurred to me one day that I needed a name
> that was short, unique, and slightly mysterious. And I happened to be
> reading some scripts from the series at the time... So then I decided to
> call my language Python. But Python is not a joke. And don't you associate
> it with dangerous reptiles either! (If you need an icon, use an image
> of the 16-ton weight from the TV series or of a can of SPAM :-)
> """

Yes, that needs to be rewritten.  Here you go:

    Apart from being a computer scientist, I'm also a fan of
    "Monty BeOpen Python's Flying Circus" (a BBC comedy series from
    the seventies, in the -- unlikely -- case you didn't know). It
    occurred to me one day that I needed a name that was short, unique,
    and slightly mysterious. And I happened to be reading some scripts
    from the series at the time... So then I decided to call my language
    BeOpen Python. But BeOpen Python is not a joke. And don't you associate
    it with dangerous reptiles either! (If you need an icon, use an image
    of the decidedly *friendly* BeOpen reptiles at
    http://www.pythonlabs.com/logos.html).

> And while I'm at it, I hope I can say without offending anyone that I
> hope the logo is open for criticism.

You can hope all you like, and I doubt you're offending anyone, but the logo
is nevertheless not open for criticism:  the BDFL picked it Himself!  Quoth
Guido, "I think he's got a definite little smile going".  Besides, if you
don't like this logo, you're going to be sooooooooo disappointed when you
get a PythonLabs T-shirt.

> ...
> I-still-like-the-shirt-though!-ly y'rs,

Good!  In that case, I'm going to help you with your crusade after all:

Hi! I'm a .signature virus! copy me into your .signature file to
help me spread!





From mal at lemburg.com  Wed Aug 23 09:44:56 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 23 Aug 2000 09:44:56 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
Message-ID: <39A380F8.3D1C86F6@lemburg.com>

Tim Peters wrote:
> 
> PythonLabs had a high-decibel meeting today (well, Monday), culminating in
> Universal Harmony.  Jeremy will be updating PEP 200 accordingly.  Just three
> highlights:
> 
> + The release schedule has Officially Slipped by one week:  2.0b1 will ship
> a week from this coming Monday.  There are too many Open and Accepted
> patches backed up to meet the original date.  Also problems cropping up,
> like new std tests failing to pass every day (you're not supposed to do
> that, you know!  consider yourself clucked at), and patches having to be
> redone because of other patches getting checked in.  We want to take the
> extra week to do this right, and give you more space to do *your* part
> right.
> 
> + While only one beta release is scheduled at this time, we reserve the
> right to make a second beta release if significant problems crop up during
> the first beta period.  Of course that would cause additional slippage of
> 2.0 final, if it becomes necessary.  Note that "no features after 2.0b1 is
> out!" still stands, regardless of how many beta releases there are.

Does this mean I can still slip in that minor patch to allow
for attribute doc-strings in 2.0b1 provided I write up a short
PEP really fast ;-) ?

BTW, what is the new standard on releasing ideas to the dev public?
I know I'll have to write a PEP, but where should I put the
patch? In the SF patch manager or on a separate page on the
Internet?

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Wed Aug 23 09:36:04 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 09:36:04 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0204.txt,1.3,1.4
In-Reply-To: <200008230542.WAA02168@slayer.i.sourceforge.net>; from bwarsaw@users.sourceforge.net on Tue, Aug 22, 2000 at 10:42:00PM -0700
References: <200008230542.WAA02168@slayer.i.sourceforge.net>
Message-ID: <20000823093604.M4933@xs4all.nl>

On Tue, Aug 22, 2000 at 10:42:00PM -0700, Barry Warsaw wrote:
> Update of /cvsroot/python/python/nondist/peps
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv2158
> 
> Modified Files:
> 	pep-0204.txt 
> Log Message:
> Editorial review, including:
> 
>     - Rearrange and standardize headers
>     - Removed ^L's
>     - Spellchecked
>     - Indentation and formatting
>     - Added reference to PEP 202

Damn, I'm glad I didn't rewrite it on my laptop yesterday. This looks much
better, Barry, thanx ! Want to co-author it ? :) (I really need to get
myself some proper (X)Emacs education so I can do cool things like
two-spaces-after-finished-sentences too)

> Thomas, if the open issues have been decided, they can be `closed' in
> this PEP, and then it should probably be marked as Accepted.

Well, that would require me to force the open issues, because they haven't
been decided. They have hardly been discussed ;) I'm not sure how to
properly close them, however. For instance: I would say "not now" to ranges
of something other than PyInt objects, and the same to the idea of
generators. But the issues remain open for debate in future versions. Should
there be a 'closed issues' section, or should I just not mention them and
have people start a new PEP and gather the ideas anew when the time comes ?

(And a Decision (either a consensus one or a BDFL one) would be nice on
whether the two new PyList_ functions should be part of the API or not. The
rest of the issues I can handle.)

Don't forget, I'm a newbie in standards texts. Be gentle ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nowonder at nowonder.de  Wed Aug 23 12:17:33 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Wed, 23 Aug 2000 10:17:33 +0000
Subject: [Python-Dev] ...and the new name for our favourite little language 
 is...
References: <LNBBLJKPBEHFEDALKOLCGEGLHBAA.tim_one@email.msn.com>
Message-ID: <39A3A4BD.C30E4729@nowonder.de>

[Tim]
> get a PythonLabs T-shirt.

[Thomas]
> I-still-like-the-shirt-though!-ly y'rs,

Okay, folks. What's the matter? I don't see any T-shirt
references on http://pythonlabs.com. Where? How?

help-me-with-my-crusade-too-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From thomas at xs4all.net  Wed Aug 23 11:01:23 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 11:01:23 +0200
Subject: [Python-Dev] ...and the new name for our favourite little language is...
In-Reply-To: <39A3A4BD.C30E4729@nowonder.de>; from nowonder@nowonder.de on Wed, Aug 23, 2000 at 10:17:33AM +0000
References: <LNBBLJKPBEHFEDALKOLCGEGLHBAA.tim_one@email.msn.com> <39A3A4BD.C30E4729@nowonder.de>
Message-ID: <20000823110122.N4933@xs4all.nl>

On Wed, Aug 23, 2000 at 10:17:33AM +0000, Peter Schneider-Kamp wrote:
> [Tim]
> > get a PythonLabs T-shirt.
> 
> [Thomas]
> > I-still-like-the-shirt-though!-ly y'rs,

> Okay, folks. What's the matter? I don't see any T-shirt
> references on http://pythonlabs.com. Where? How?

We were referring to the PythonLabs T-shirt that was given out (in limited
numbers, I do believe, since my perl-hugging colleague only got me one, and
couldn't get one for himself & the two python-learning colleagues *) at
O'Reilly's Open Source Conference. It has the PythonLabs logo on front (the
green snake, on a simple black background, in a white frame) with
'PYTHONLABS' underneath the logo, and on the back it says 'PYTHONLABS.COM'
and 'There Is Only One Way To Do It.'. 

I'm sure they'll have some more at the next IPC ;)

(* As a result, I can't wear this T-shirt to work, just like my X-Files
T-shirt, for fear of being forced to leave without it ;)
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From moshez at math.huji.ac.il  Wed Aug 23 11:21:25 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 23 Aug 2000 12:21:25 +0300 (IDT)
Subject: [Python-Dev] ...and the new name for our favourite little language
 is...
In-Reply-To: <20000823110122.N4933@xs4all.nl>
Message-ID: <Pine.GSO.4.10.10008231219190.8650-100000@sundial>

On Wed, 23 Aug 2000, Thomas Wouters wrote:

> We were referring to the PythonLabs T-shirt that was given out (in limited
> numbers, I do believe, since my perl-hugging colleague only got me one, and
> couldn't get one for himself & the two python-learning colleagues *) at
> O'Reilly's Open Source Conference. It has the PythonLabs logo on front (the
> green snake, on a simple black background, in a white frame) with
> 'PYTHONLABS' underneath the logo, and on the back it says 'PYTHONLABS.COM'
> and 'There Is Only One Way To Do It.'. 
> 
> I'm sure they'll have some more at the next IPC ;)

Can't they sell them over the net (at copyleft or something)? I'd love
to buy one for me and my friends, and maybe one for everyone in the
first Python-IL meeting..

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From fdrake at beopen.com  Wed Aug 23 16:38:12 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 23 Aug 2000 10:38:12 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <39A380F8.3D1C86F6@lemburg.com>
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
	<39A380F8.3D1C86F6@lemburg.com>
Message-ID: <14755.57812.111681.750661@cj42289-a.reston1.va.home.com>

M.-A. Lemburg writes:
 > Does this mean I can still slip in that minor patch to allow
 > for attribute doc-strings in 2.0b1 provided I write up a short
 > PEP really fast ;-) ?

  Write a PEP if you like; I think I'd really like to look at this
before you change any code, and I've not had a chance to read your
messages about this yet.  This is *awfully* late to be making a
change that hasn't been substantially hashed out and reviewed, and I'm
under the impression that this is pretty new (the past week or so).

 > BTW, what is the new standard on releasing ideas to the dev public?
 > I know I'll have to write a PEP, but where should I put the
 > patch? In the SF patch manager or on a separate page on the
 > Internet?

  Patches should still go to the SF patch manager.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Wed Aug 23 18:22:04 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 11:22:04 -0500
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import something as'
In-Reply-To: Your message of "Tue, 22 Aug 2000 13:02:31 MST."
             <Pine.LNX.4.10.10008212144360.416-100000@skuld.lfw.org> 
References: <Pine.LNX.4.10.10008212144360.416-100000@skuld.lfw.org> 
Message-ID: <200008231622.LAA02275@cj20424-a.reston1.va.home.com>

> On Mon, 21 Aug 2000, Guido van Rossum wrote:
> > > > > > Summary: Allow all assignment expressions after 'import
> > > > > > something as'
> [...]
> > I kind of doubt it, because it doesn't look useful.

[Ping]
> Looks potentially useful to me.  If nothing else, it's certainly
> easier to explain than any other behaviour i could think of, since
> assignment is already well-understood.

KISS suggests not to add it.  We had a brief discussion about this at
our 2.0 planning meeting and nobody there thought it would be worth
it, and several of us felt it would be asking for trouble.

> > I do want "import foo.bar as spam" back, assigning foo.bar to spam.
> 
> No no no no.  Or at least let's step back and look at the whole
> situation first.
> 
> "import foo.bar as spam" makes me uncomfortable because:
> 
>     (a) It's not clear whether spam should get foo or foo.bar, as
>         evidenced by the discussion between Gordon and Thomas.

As far as I recall that conversation, it's just that Thomas (more or
less accidentally) implemented what was easiest from the
implementation's point of view without thinking about what it should
mean.  *Of course* it should mean what I said if it's allowed.  Even
Thomas agrees to that now.

>     (b) There's a straightforward and unambiguous way to express
>         this already: "from foo import bar as spam".

Without syntax coloring, that looks like word soup to me.

  import foo.bar as spam

uses fewer words to say the same thing more clearly.

>     (c) It's not clear whether this should work only for modules
>         named bar, or any symbol named bar.

Same as for import: bar must be a submodule (or subpackage) in package
foo.

> Before packages, the only two forms of the import statement were:
> 
>     import <module>
>     from <module> import <symbol>
> 
> After packages, the permitted forms are now:
> 
>     import <module>
>     import <package>
>     import <pkgpath>.<module>
>     import <pkgpath>.<package>
>     from <module> import <symbol>
>     from <package> import <module>
>     from <pkgpath>.<module> import <symbol>
>     from <pkgpath>.<package> import <module>

You're creating more cases than necessary to get a grip on this.  This
is enough, if you realize that a package is also a module and the
package path doesn't add any new cases:

  import <module>
  from <module> import <symbol>
  from <package> import <module>

> where a <pkgpath> is a dot-separated list of package names.
> 
> With "as" clauses, we could permit:
> 
>     import <module> as <localmodule>
>     import <package> as <localpackage>
> ??  import <pkgpath>.<module> as <localmodule>
> ??  import <pkgpath>.<package> as <localpackage>
> ??  import <module>.<symbol> as <localsymbol>
> ??  import <pkgpath>.<module>.<symbol> as <localsymbol>
>     from <module> import <symbol> as <localsymbol>
>     from <package> import <symbol> as <localsymbol>
>     from <pkgpath>.<module> import <symbol> as <localsymbol>
>     from <pkgpath>.<package> import <module> as <localmodule>

Let's simplify that to:

  import <module> as <localname>
  from <module> import <symbol> as <localname>
  from <package> import <module> as <localname>
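[As a sketch, in today's Python, of the binding behavior Guido describes
for the dotted form -- the equivalence shown is an assumption about the
intended semantics, not part of the proposal:]

```python
import importlib

# Proposed "import <pkgpath>.<module> as <localname>" semantics:
# "import os.path as p" should bind *only* the submodule to the local
# name, without also binding the top-level name "os".
p = importlib.import_module("os.path")

# The builtin __import__ returns the leaf module when given a
# non-empty fromlist, so the proposed statement would behave like:
assert p is __import__("os.path", fromlist=["path"])
assert hasattr(p, "join")      # it is the submodule os.path...
assert not hasattr(p, "getcwd")  # ...not the top-level package os
```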

> It's not clear that we should allow "as" on the forms marked with
> ??, since the other six clearly identify the thing being renamed
> and they do not.
> 
> Also note that all the other forms using "as" assign exactly one
> thing: the name after the "as".  Would the forms marked with ??
> assign just the name after the "as" (consistent with the other
> "as" forms), or also the top-level package name as well (consistent
> with the current behaviour of "import <pkgpath>.<module>")?
> 
> That is, would
> 
>     import foo.bar as spam
> 
> define just spam or both foo and spam?

Aargh!  Just spam, of course!

> All these questions make me uncertain...

Not me.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Wed Aug 23 17:38:31 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 23 Aug 2000 17:38:31 +0200
Subject: [Python-Dev] Attribute Docstring PEP (2.0 Release Plans)
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
		<39A380F8.3D1C86F6@lemburg.com> <14755.57812.111681.750661@cj42289-a.reston1.va.home.com>
Message-ID: <39A3EFF7.A4D874EC@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
> M.-A. Lemburg writes:
>  > Does this mean I can still slip in that minor patch to allow
>  > for attribute doc-strings in 2.0b1 provided I write up a short
>  > PEP really fast ;-) ?
> 
>   Write a PEP if you like; I think I'd really like to look at this
> before you change any code, and I've not had a chance to read your
> messages about this yet.  This is *awfully* late to be making a
> change that hasn't been substantially hashed out and reviewed, and I'm
> under the impression that this is pretty new (the past week or so).

FYI, I've attached the pre-PEP below (I also sent it to Barry
for review).

This PEP is indeed very new, but AFAIK it doesn't harm any existing
code and also doesn't add much code complexity to achieve what it's
doing (see the patch).

>  > BTW, what is the new standard on releasing ideas to the dev public?
>  > I know I'll have to write a PEP, but where should I put the
>  > patch? In the SF patch manager or on a separate page on the
>  > Internet?
> 
>   Patches should still go to the SF patch manager.

Here it is:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101264&group_id=5470

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/
-------------- next part --------------
PEP: 224
Title: Attribute Docstrings
Version: $Revision: 1.0 $
Owner: mal at lemburg.com (Marc-Andre Lemburg)
Python-Version: 2.0
Status: Draft
Created: 23-Aug-2000
Type: Standards Track


Introduction

    This PEP describes the "attribute docstring" proposal for Python
    2.0. This PEP tracks the status and ownership of this feature. It
    contains a description of the feature and outlines changes
    necessary to support the feature. The CVS revision history of this
    file contains the definitive historical record.


Rationale

    This PEP proposes a small addition to the way Python currently
    handles docstrings embedded in Python code. 

    Until now, Python only handles the case of docstrings which appear
    directly after a class definition, a function definition or as
    first string literal in a module. The string literals are added to
    the objects in question under the __doc__ attribute and are from
    then on available for introspection tools which can extract the
    contained information for help, debugging and documentation
    purposes.

    Docstrings appearing in locations other than the ones mentioned are
    simply ignored and don't result in any code generation.

    Here is an example:

    class C:
	    " class C doc-string "

	    a = 1
	    " attribute C.a doc-string (1)"

	    b = 2
	    " attribute C.b doc-string (2)"

    The docstrings (1) and (2) are currently being ignored by the
    Python byte code compiler, but could obviously be put to good use
    for documenting the named assignments that precede them.
    
    This PEP proposes to make use of these cases as well, by defining
    semantics for adding their content to the objects in which they
    appear, under newly generated attribute names.

    The original idea behind this approach which also inspired the
    above example was to enable inline documentation of class
    attributes, which can currently only be documented in the class'
    docstring or using comments which are not available for
    introspection.


Implementation

    Docstrings are handled by the byte code compiler as expressions.
    The current implementation special cases the few locations
    mentioned above to make use of these expressions, but otherwise
    ignores the strings completely.

    To enable use of these docstrings for documenting named
    assignments (which is the natural way of defining e.g. class
    attributes), the compiler will have to keep track of the last
    assigned name and then use this name to assign the content of the
    docstring to an attribute of the containing object by storing it
    as a constant, which is then added to the object's namespace at
    object construction time.

    In order to preserve features like inheritance and hiding of
    Python's special attributes (ones with leading and trailing double
    underscores), a special name mangling has to be applied which
    uniquely identifies the docstring as belonging to the name
    assignment and allows finding the docstring later on by inspecting
    the namespace.

    The following name mangling scheme achieves all of the above:

		      __doc__<attributename>__

    To keep track of the last assigned name, the byte code compiler
    stores this name in a variable of the compiling structure. This
    variable defaults to NULL. When it sees a docstring, it then
    checks the variable and uses the name as basis for the above name
    mangling to produce an implicit assignment of the docstring to the
    mangled name. It then resets the variable to NULL to avoid
    duplicate assignments.

    If the variable does not point to a name (i.e. is NULL), no
    assignments are made.  These will continue to be ignored like
    before.  All classical docstrings fall under this case, so no
    duplicate assignments are done.

    In the above example this would result in the following new class
    attributes to be created:

    C.__doc__a__ == " attribute C.a doc-string (1)"
    C.__doc__b__ == " attribute C.b doc-string (2)"

    A patch to the current CVS version of Python 2.0 which implements
    the above is available on SourceForge at [1].
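    A rough sketch of the proposed result, with the compiler-generated
    assignments written out by hand (the attribute_doc helper is
    hypothetical, shown only to illustrate how an introspection tool
    could look up the mangled names):

```python
class C:
    """class C doc-string"""

    a = 1
    # Under the proposal, the compiler would implicitly generate:
    __doc__a__ = " attribute C.a doc-string (1)"

    b = 2
    __doc__b__ = " attribute C.b doc-string (2)"

def attribute_doc(cls, name):
    """Return the docstring recorded for attribute `name`, or None."""
    return getattr(cls, "__doc__%s__" % name, None)

assert attribute_doc(C, "a") == " attribute C.a doc-string (1)"
assert attribute_doc(C, "missing") is None
```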


Caveats of the Implementation
    
    Since the implementation does not reset the compiling structure
    variable when processing a non-expression, e.g. a function definition,
    the last assigned name remains active until either the next assignment
    or the next occurrence of a docstring.

    This can lead to cases where the docstring and assignment may be
    separated by other expressions:

    class C:
	"C doc string"

	b = 2

	def x(self):
	    "C.x doc string"
	    y = 3
	    return 1

	"b's doc string"

    Since the definition of method "x" currently does not reset the
    used assignment name variable, it is still valid when the compiler
    reaches the docstring "b's doc string", and thus assigns that string
    to __doc__b__.

    
Copyright

    This document has been placed in the Public Domain.


References

    [1]
http://sourceforge.net/patch/?func=detailpatch&patch_id=101264&group_id=5470



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

From tim_one at email.msn.com  Wed Aug 23 17:40:46 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 11:40:46 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <39A380F8.3D1C86F6@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>

[MAL]
> Does this mean I can still slip in that minor patch to allow
> for attribute doc-strings in 2.0b1 provided I write up a short
> PEP really fast ;-) ?

2.0 went into feature freeze the Monday *before* this one!  So, no.  The "no
new features after 2.0b1" refers mostly to the patches currently in Open and
Accepted:  if *they're* not checked in before 2.0b1 goes out, they don't get
into 2.0 either.

Ideas that were accepted by Guido for 2.0 before last Monday aren't part of
the general "feature freeze".  Any new feature proposed *since* then has
been Postponed without second thought.  Guido accepted several ideas before
feature freeze that still haven't been checked in (in some cases, still not
coded!), and just dealing with them has already caused a slip in the
schedule.  We simply can't afford to entertain new ideas now (indeed,
that's why "feature freeze" exists:  focus).

For you in particular <wink>, how about dealing with Open patch 100899?
It's been assigned to you for 5 weeks, and if you're not going to review it
or kick /F in the butt, assign it to someone else.

> BTW, what is the new standard on releasing ideas to the dev public?
> I know I'll have to write a PEP, but where should I put the
> patch? In the SF patch manager or on a separate page on the
> Internet?

The PEP should be posted to both Python-Dev and comp.lang.python after its
first stab is done.  If you don't at least post a link to the patch in the
SF Patch Manager, the patch doesn't officially exist.  I personally prefer
one-stop shopping, and SF is the Python Developer's Mall; but there's no
rule about that yet (note that 100899's patch was apparently so big SF
wouldn't accept it, so /F *had* to post just a URL to the Patch Manager).





From bwarsaw at beopen.com  Wed Aug 23 18:01:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 23 Aug 2000 12:01:32 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCOEGFHBAA.tim_one@email.msn.com>
	<39A380F8.3D1C86F6@lemburg.com>
Message-ID: <14755.62812.185580.367242@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> Does this mean I can still slip in that minor patch to allow
    M> for attribute doc-strings in 2.0b1 provided I write up a short
    M> PEP really fast ;-) ?

Well, it's really the 2.0 release manager's job to disappoint you, so
I won't. :) But yes a PEP would probably be required.  However, after
our group meeting yesterday, I'm changing the requirements for PEP
number assignment.  You need to send me a rough draft, not just an
abstract (there are too many incomplete PEPs already).

    M> BTW, what is the new standard on releasing ideas to the dev public?
    M> I know I'll have to write a PEP, but where should I put the
    M> patch? In the SF patch manager or on a separate page on the
    M> Internet?

Better to put the patches on SF.

-Barry



From bwarsaw at beopen.com  Wed Aug 23 18:09:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Wed, 23 Aug 2000 12:09:32 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0204.txt,1.3,1.4
References: <200008230542.WAA02168@slayer.i.sourceforge.net>
	<20000823093604.M4933@xs4all.nl>
Message-ID: <14755.63292.825567.868362@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> Damn, I'm glad I didn't rewrite it on my laptop
    TW> yesterday. This looks much better, Barry, thanx ! Want to
    TW> co-author it ? :)

Naw, that's what an editor is for (actually, I thought an editor was
for completely covering your desktop like lox on a bagel).
    
    TW> (I really need to get myself some proper (X)Emacs education so
    TW> I can do cool things like two-spaces-after-finished-sentences
    TW> too)

Heh, that's just finger training, but I do it only because it works
well with XEmacs's paragraph filling.

    TW> Well, that would require me to force the open issues, because
    TW> they haven't been decided. They have hardly been discussed ;)
    TW> I'm not sure how to properly close them, however. For
    TW> instance: I would say "not now" to ranges of something other
    TW> than PyInt objects, and the same to the idea of
    TW> generators. But the issues remain open for debate in future
    TW> versions. Should there be a 'closed issues' section, or should
    TW> I just not mention them and have people start a new PEP and
    TW> gather the ideas anew when the time comes ?

    TW> (And a Decisions (either a consensus one or a BDFL one) would
    TW> be nice on whether the two new PyList_ functions should be
    TW> part of the API or not. The rest of the issues I can handle.)

The thing to do is to request BDFL pronouncement on those issues for
2.0, and write them up in a "BDFL Pronouncements" section at the end
of the PEP.  See PEP 201 for an example.  You should probably email
Guido directly and ask him to rule.  If he doesn't, then they'll get
vetoed by default once 2.0beta1 is out.

IMO, if some extension of range literals is proposed for a future
release of Python, then we'll issue a new PEP for those.

-Barry



From mal at lemburg.com  Wed Aug 23 17:56:17 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 23 Aug 2000 17:56:17 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
Message-ID: <39A3F421.415107E6@lemburg.com>

Tim Peters wrote:
> 
> [MAL]
> > Does this mean I can still slip in that minor patch to allow
> > for attribute doc-strings in 2.0b1 provided I write up a short
> > PEP really fast ;-) ?
> 
> 2.0 went into feature freeze the Monday *before* this one!  So, no.  The "no
> new features after 2.0b1" refers mostly to the patches currently in Open and
> Accepted:  if *they're* not checked in before 2.0b1 goes out, they don't get
> into 2.0 either.

Ah, ok. 

Pity I just started doing some heavy doc-string extraction
last week... oh, well.
 
> Ideas that were accepted by Guido for 2.0 before last Monday aren't part of
> the general "feature freeze".  Any new feature proposed *since* then has
> been Postponed without second thought.  Guido accepted several ideas before
> feature freeze that still haven't been checked in (in some cases, still not
> coded!), and just dealing with them has already caused a slip in the
> schedule.  We simply can't afford to entertain new ideas now (indeed,
> that's why "feature freeze" exists:  focus).
> 
> For you in particular <wink>, how about dealing with Open patch 100899?
> It's been assigned to you for 5 weeks, and if you're not going to review it
> or kick /F in the butt, assign it to someone else.

AFAIK, Fredrik hasn't continued work on that patch and some
important parts are still missing, e.g. the generator scripts
and a description of how the whole thing works.

It's not that important though, since the patch is a space
optimization of what is already in Python 2.0 (and has been
for quite a while now): the Unicode database.
 
Perhaps I should simply postpone the patch to 2.1?!

> > BTW, what is the new standard on releasing ideas to the dev public?
> > I know I'll have to write a PEP, but where should I put the
> > patch? In the SF patch manager or on a separate page on the
> > Internet?
> 
> The PEP should be posted to both Python-Dev and comp.lang.python after its
> first stab is done.  If you don't at least post a link to the patch in the
> SF Patch Manager, the patch doesn't officially exist.  I personally prefer
> one-stop shopping, and SF is the Python Developer's Mall; but there's no
> rule about that yet (note that 100899's patch was apparently so big SF
> wouldn't accept it, so /F *had* to post just a URL to the Patch Manager).

I've just posted the PEP here, CCed it to Barry and uploaded the
patch to SF. I'll post it to c.l.p tomorrow (don't know what that's
good for though, since I don't read c.l.p anymore).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Wed Aug 23 19:49:28 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 23 Aug 2000 13:49:28 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
References: <39A380F8.3D1C86F6@lemburg.com>
	<LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
Message-ID: <14756.3752.23014.786587@bitdiddle.concentric.net>

I wanted to confirm: Tim is channeling the release manager just
fine.  We are in feature freeze for 2.0.

Jeremy



From jeremy at beopen.com  Wed Aug 23 19:55:34 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 23 Aug 2000 13:55:34 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <39A3F421.415107E6@lemburg.com>
References: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
	<39A3F421.415107E6@lemburg.com>
Message-ID: <14756.4118.865603.363166@bitdiddle.concentric.net>

>>>>> "MAL" == M-A Lemburg <mal at lemburg.com> writes:
>>>>> "TP" == Tim Peters <tpeters at beopen.com> writes:

  TP> For you in particular <wink>, how about dealing with Open patch
  TP> 100899?  It's been assigned to you for 5 weeks, and if you're not
  TP> going to review it or kick /F in the butt, assign it to someone
  TP> else.

  MAL> AFAIK, Fredrik hasn't continued work on that patch and some
  MAL> important parts are still missing, e.g. the generator scripts
  MAL> and a description of how the whole thing works.

  MAL> It's not that important though, since the patch is a space
  MAL> optimization of what is already in Python 2.0 (and has been for
  MAL> quite a while now): the Unicode database.
 
  MAL> Perhaps I should simply postpone the patch to 2.1?!

Thanks for clarifying the issue with this patch.  

I would like to see some compression in the release, but agree that it
is not an essential optimization.  People have talked about it for a
couple of months, and we haven't found someone to work on it because
at various times pirx and /F said they were working on it.

If we don't hear from /F by tomorrow promising he will finish it before
the beta release, let's postpone it.

Jeremy



From tim_one at email.msn.com  Wed Aug 23 20:32:20 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 14:32:20 -0400
Subject: Patch 100899 [Unicode compression] (was RE: [Python-Dev] 2.0 Release Plans)
In-Reply-To: <14756.4118.865603.363166@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com>

[Jeremy Hylton]
> I would like to see some compression in the release, but agree that it
> is not an essential optimization.  People have talked about it for a
> couple of months, and we haven't found someone to work on it because
> at various times pirx and /F said they were working on it.
>
> If we don't hear from /F by tomorrow promising he will finish it before
> the beta release, let's postpone it.

There was an *awful* lot of whining about the size increase without this
optimization, and the current situation violates the "no compiler warnings!"
rule too (at least under MSVC 6).  That means it's going to fail to compile
at all on *some* feebler system.  We said we'd put it in, so I'm afraid I
think it falls on PythonLabs to finish it if /F can't.





From thomas at xs4all.net  Wed Aug 23 20:59:20 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 20:59:20 +0200
Subject: Patch 100899 [Unicode compression] (was RE: [Python-Dev] 2.0 Release Plans)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Wed, Aug 23, 2000 at 02:32:20PM -0400
References: <14756.4118.865603.363166@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com>
Message-ID: <20000823205920.A7566@xs4all.nl>

On Wed, Aug 23, 2000 at 02:32:20PM -0400, Tim Peters wrote:
> [Jeremy Hylton]
> > I would like to see some compression in the release, but agree that it
> > is not an essential optimization.  People have talked about it for a
> > couple of months, and we haven't found someone to work on it because
> > at various times pirx and /F said they were working on it.
> >
> > If we don't hear from /F by tomorrow promising he will finish it before
> > the beta release, let's postpone it.

> There was an *awful* lot of whining about the size increase without this
> optimization, and the current situation violates the "no compiler warnings!"
> rule too (at least under MSVC 6).

For the record, you can't compile unicodedatabase.c with g++ because of its
size: g++ complains that the switch is too large to compile. Under gcc it
compiles, but only by trying really, really hard, and I don't know how it
performs under other versions of gcc (in particular more heavily optimizing
ones -- it might run into other limits in those situations.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Wed Aug 23 21:00:33 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 15:00:33 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <003b01c00d1f$3ef70fe0$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIOHBAA.tim_one@email.msn.com>

[Fredrik Lundh]

[on patch 100899]
> mal has reviewed the patch, and is waiting for an update
> from me.

Thanks!  On that basis, I've reassigned the patch to you.

> PS. the best way to get me to do something is to add a
> task to the task manager.

Yikes!  I haven't looked at the thing since the day after I enabled it
<wink> -- thanks for the clue.

> I currently have three things on my slate:
>
>     17333 add os.popen2 support for Unix

Guido definitely wants this for 2.0, but there's no patch for it and no
entry in PEP 200.  Jeremy, please add it.

>     17334 add PyErr_Format to errors module
>     17335 add compressed unicode database

Those two are in Open patches, and both assigned to you.

> if I missed something, let me know.

In your email (to Guido and me) from Monday, 31-July-2000,

> so to summarize, Python 2.0 will support the following
> hex-escapes:
>
>    \xNN
>    \uNNNN
>    \UNNNNNNNN
>
> where the last two are only supported in Unicode and
> SRE strings.
>
> I'll provide patches later this week, once the next SRE
> release is wrapped up (later tonight, I hope).

This apparently fell through the cracks, and I finally remembered it last
Friday, and added them to PEP 200 recently.  Guido wants this in 2.0, and
accepted them long before feature-freeze.  I'm currently writing a PEP for
the \x change (because it has a surreal chance of breaking old code).  I
haven't written any code for it.  The new \U escape is too minor to need a
PEP (according to me).
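[For reference, a short sketch of the escape semantics under discussion, as
they behave in modern Python -- an assumption about what eventually shipped:
\x takes exactly two hex digits, \u four, and \U eight:]

```python
# Semantics of the three escapes being discussed, as they behave in
# modern Python (assumption: this matches what eventually shipped).
assert "\x41" == "A"        # \xNN: exactly two hex digits
assert "\u0041" == "A"      # \uNNNN: four hex digits (BMP characters)
snake = "\U0001F40D"        # \UNNNNNNNN: eight hex digits, beyond the BMP
print(len(snake))           # a single character
```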





From effbot at telia.com  Wed Aug 23 18:28:58 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 18:28:58 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCMEHMHBAA.tim_one@email.msn.com>
Message-ID: <003b01c00d1f$3ef70fe0$f2a6b5d4@hagrid>

tim wrote:
> For you in particular <wink>, how about dealing with Open patch 100899?
> It's been assigned to you for 5 weeks, and if you're not going to review it
> or kick /F in the butt, assign it to someone else.

mal has reviewed the patch, and is waiting for an update
from me.

</F>

PS. the best way to get me to do something is to add a
task to the task manager.  I currently have three things
on my slate:

    17333 add os.popen2 support for Unix 
    17334 add PyErr_Format to errors module 
    17335 add compressed unicode database 

if I missed something, let me know.




From thomas at xs4all.net  Wed Aug 23 21:29:47 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 21:29:47 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src README,1.89,1.90
In-Reply-To: <200008231901.MAA31275@slayer.i.sourceforge.net>; from gvanrossum@users.sourceforge.net on Wed, Aug 23, 2000 at 12:01:47PM -0700
References: <200008231901.MAA31275@slayer.i.sourceforge.net>
Message-ID: <20000823212946.B7566@xs4all.nl>

On Wed, Aug 23, 2000 at 12:01:47PM -0700, Guido van Rossum wrote:
> Update of /cvsroot/python/python/dist/src
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv31228
> 
> Modified Files:
> 	README 
> Log Message:
> Updated some URLs; removed mention of copyright (we'll have to add
> something in later after that discussion is over); remove explanation
> of 2.0 version number.

I submit that this file needs some more editing for 2.0. For instance, it
mentions that 'some modules' will not compile on old SunOS compilers because
they are written in ANSI C. It also has a section on threads which needs to
be rewritten to reflect that threads are *on* by default, and explain how to
turn them off. I also think it should put some more emphasis on editing
Modules/Setup, which is commonly forgotten by newbies. Either that or make
some more things 'standard', like '*shared*'.

(It mentions '... editing a file, typing make, ...' in the overview, but
doesn't actually mention which file to edit until much later, in a sideways
kind of way in the machine-specific section, and even later in a separate
section.)

It also has some teensy small bugs: it says "uncomment" when it should say
"comment out" in the Cray T3E section, and it's "glibc2" or "libc6", not
"glibc6", in the Linux section. (it's glibc version 2, but the interface
number is 6.) I would personally suggest removing that entire section, it's
a bit outdated. But the same might go for other sections!

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From effbot at telia.com  Wed Aug 23 21:50:21 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 21:50:21 +0200
Subject: [Python-Dev] 2.0 Release Plans
References: <LNBBLJKPBEHFEDALKOLCIEIOHBAA.tim_one@email.msn.com>
Message-ID: <006601c00d3b$610f5440$f2a6b5d4@hagrid>

tim wrote:
> > I currently have three things on my slate:
> >
> >     17333 add os.popen2 support for Unix
> 
> Guido definitely wants this for 2.0, but there's no patch for it and no
> entry in PEP 200.  Jeremy, please add it.

to reduce my load somewhat, maybe someone who does
Python 2.0 development on a Unix box could produce that
patch?

(all our unix boxes are at the office, but I cannot run CVS
over SSH from there -- and sorting that one out will take
more time than I have right now...)

:::

anyway, fixing this is pretty straightforward:

1) move the class (etc) from popen2.py to os.py

2) modify the "if hasattr" stuff; change

    # popen2.py
    if hasattr(os, "popen2"):
        def popen2(...):
            # compatibility code, using os.popen2
    else:
        def popen2(...):
            # unix implementation

to

    # popen2.py
    def popen2(...):
        # compatibility code

    # os.py
    def popen2(...):
        # unix implementation, with the order of
        # the return values changed to (child_stdin,
        # child_stdout, child_stderr)
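[In modern terms, the proposed os.popen2 convention -- child_stdin first,
then child_stdout -- can be sketched with subprocess; the function body is
illustrative only, not the actual patch:]

```python
import subprocess

def popen2(cmd):
    # Illustrative sketch of the proposed return ordering: the child's
    # stdin first, then its stdout (child_stderr omitted for brevity).
    p = subprocess.Popen(cmd, shell=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    return p.stdin, p.stdout

child_stdin, child_stdout = popen2("cat")
child_stdin.write(b"hello\n")
child_stdin.close()
print(child_stdout.read().decode())
```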

:::

> > so to summarize, Python 2.0 will support the following
> > hex-escapes:
> >
> >    \xNN
> >    \uNNNN
> >    \UNNNNNNNN
> >
> > where the last two are only supported in Unicode and
> > SRE strings.
> 
> This apparently fell through the cracks, and I finally remembered it last
> Friday, and added them to PEP 200 recently.  Guido wants this in 2.0, and
> accepted them long before feature-freeze.  I'm currently writing a PEP for
> the \x change (because it has a surreal chance of breaking old code).  I
> haven't written any code for it.  The new \U escape is too minor to need a
> PEP (according to me).

if someone else can do the popen2 stuff, I'll take care
of this one!

</F>




From effbot at telia.com  Wed Aug 23 23:47:01 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 23:47:01 +0200
Subject: [Python-Dev] anyone tried Python 2.0 with Tk 8.3.2?
Message-ID: <002001c00d4b$acf32ca0$f2a6b5d4@hagrid>

doesn't work too well for me -- Tkinter._test() tends to hang
when I press quit (not every time, though).  the only way to
shut down the process is to reboot.

any ideas?

(msvc 5, win95).

</F>




From tim_one at email.msn.com  Wed Aug 23 23:30:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 17:30:23 -0400
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <006601c00d3b$610f5440$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEJIHBAA.tim_one@email.msn.com>

[/F, on "add os.popen2 support for Unix"]
> to reduce my load somewhat, maybe someone who does
> Python 2.0 development on a Unix box could produce that
> patch?

Sounds like a more than reasonable idea to me; heck, AFAIK, until you
mentioned you thought it was on your plate, we didn't think it was on
*anyone's* plate.  It simply "came up" on its own at the PythonLabs mtg
yesterday (which I misidentified as "Monday" in an earlier post).

Can we get a volunteer here?  Here's /F's explanation:

> anyway, fixing this is pretty straightforward:
>
> 1) move the class (etc) from popen2.py to os.py
>
> 2) modify the "if hasattr" stuff; change
>
>     # popen2.py
>     if hasattr(os, "popen2"):
>         def popen2(...):
>             # compatibility code, using os.popen2
>     else:
>         def popen2(...):
>             # unix implementation
>
> to
>
>     # popen2.py
>     def popen2(...):
>         # compatibility code
>
>     # os.py
>     def popen2(...):
>         # unix implementation, with the order of
>         # the return values changed to (child_stdin,
>         # child_stdout, child_stderr)

[on \x, \u and \U]
> if someone else can do the popen2 stuff, I'll take care
> of this one!

It's a deal as far as I'm concerned.  Thanks!  I'll finish the \x PEP
anyway, though, as it's already in progress.

Jeremy, please update PEP 200 accordingly (after you volunteer to do the
os.popen2 etc bit for Unix(tm) <wink>).





From effbot at telia.com  Wed Aug 23 23:59:50 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 23 Aug 2000 23:59:50 +0200
Subject: [Python-Dev] Re: anyone tried Python 2.0 with Tk 8.3.2?
Message-ID: <000d01c00d4d$78a81c60$f2a6b5d4@hagrid>

I wrote:


> doesn't work too well for me -- Tkinter._test() tends to hang
> when I press quit (not every time, though).  the only way to
> shut down the process is to reboot.

hmm.  it looks like it's more likely to hang if the program
uses unicode strings.

    Tkinter._test() hangs about 2 times out of three

    same goes for a simple test program that passes a
    unicode string constant (containing Latin-1 chars)
    to a Label

    the same test program using a Latin-1 string (which,
    I suppose, is converted to Unicode inside Tk) hangs
    in about 1/3 of the runs.

    the same test program with a pure ASCII string
    never hangs...

confusing.

</F>




From thomas at xs4all.net  Wed Aug 23 23:53:45 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 23 Aug 2000 23:53:45 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
Message-ID: <20000823235345.C7566@xs4all.nl>

While re-writing the PyNumber_InPlace*() functions in augmented assignment
to something Guido and I agree on should be the Right Way, I found something
that *might* be a bug. But I'm not sure.

The PyNumber_*() methods for binary operations (found in abstract.c) have
the following construct:

        if (v->ob_type->tp_as_number != NULL) {
                PyObject *x = NULL;
                PyObject * (*f)(PyObject *, PyObject *);
                if (PyNumber_Coerce(&v, &w) != 0)
                        return NULL;
                if ((f = v->ob_type->tp_as_number->nb_xor) != NULL)
                        x = (*f)(v, w);
                Py_DECREF(v);
                Py_DECREF(w);
                if (f != NULL)
                        return x;
        }

(This is after a check if either argument is an instance object, so both are
C objects here.) Now, I'm not sure how coercion is supposed to work, but I
see one problem here: 'v' can be changed by PyNumber_Coerce(), and the new
object's tp_as_number pointer could be NULL. I bet it's pretty unlikely that
(numeric) coercion of a numeric object and an unspecified object turns up a
non-numeric object, but I don't see anything guaranteeing it won't, either.

Is this a non-issue, or should I bother with adding the extra check in the
current binary operations (and the new inplace ones) ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Wed Aug 23 23:58:30 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 23 Aug 2000 17:58:30 -0400 (EDT)
Subject: [Python-Dev] 2.0 Release Plans
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEJIHBAA.tim_one@email.msn.com>
References: <006601c00d3b$610f5440$f2a6b5d4@hagrid>
	<LNBBLJKPBEHFEDALKOLCAEJIHBAA.tim_one@email.msn.com>
Message-ID: <14756.18694.812840.428389@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Sounds like a more than reasonable idea to me; heck, AFAIK, until you
 > mentioned you thought it was on your plate, we didn't think it was on
 > *anyone's* plate.  It simply "came up" on its own at the PythonLabs mtg
 > yesterday (which I misidentified as "Monday" in an earlier post).
...
 > Jeremy, please update PEP 200 accordingly (after you volunteer to do the
 > os.popen2 etc bit for Unix(tm) <wink>).

  Note that Guido asked me to do this, and I've updated the SF Task
Manager with the appropriate information.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Thu Aug 24 01:08:13 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 18:08:13 -0500
Subject: [Python-Dev] The Python 1.6 License Explained
Message-ID: <200008232308.SAA02986@cj20424-a.reston1.va.home.com>

[Also posted to c.l.py]

With BeOpen's help, CNRI has prepared a FAQ about the new license
which should answer those questions.  The official URL for the Python
1.6 license FAQ is http://www.python.org/1.6/license_faq.html (soon on
a mirror site near you), but I'm also appending it here.

We expect that we will be able to issue the final 1.6 release very
soon.  We're also working hard on the first beta release of Python
2.0, slated for September 4; the final 2.0 release should be ready in
October.  See http://www.pythonlabs.com/tech/python2.html for
up-to-date 2.0 information.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)

Python 1.6 License FAQ

This FAQ addresses questions concerning the CNRI Open Source License
and its impact on past and future Python releases. The text below has
been approved for posting on the Python website and newsgroup by
CNRI's president, Dr. Robert E. Kahn.

    1.The old Python license from CWI worked well for almost 10
    years. Why a new license for Python 1.6?

      CNRI claims copyright in Python code and documentation from
      releases 1.3 through 1.6 inclusive.  However, for a number of
      technical reasons, CNRI never formally licensed this work for
      Internet download, although it did permit Guido to share the
      results with the Python community. As none of this work was
      published either, there were no CNRI copyright notices placed on
      these Python releases prior to 1.6. A CNRI copyright notice will
      appear on the official release of Python 1.6. The CNRI license
      was created to clarify for all users that CNRI's intent is to
      enable Python extensions to be developed in an extremely open
      form, in the best interests of the Python community.

    2.Why isn't the new CNRI license as short and simple as the CWI
    license? Are there any issues with it?

      A license is a legally binding document, and the CNRI Open
      Source License is, according to CNRI, as simple as they were
      able to make it at this time while still maintaining a balance
      between the need for access and other use of Python with CNRI's
      rights.

    3.Are you saying that the CWI license did not protect our rights?

      CNRI has held copyright and other rights to the code but never
      codified them into a CNRI-blessed license prior to 1.6. The CNRI
      Open Source License is a binding contract between CNRI and
      Python 1.6's users and, unlike the CWI statement, cannot be
      revoked except for a material breach of its terms.  So this
      provides a licensing certainty to Python users that never really
      existed before.

    4.What is CNRI's position on prior Python releases, e.g. Python
    1.5.2?

      Releases of Python prior to 1.6 were shared with the community
      without a formal license from CNRI.  The CWI Copyright Notice
      and Permissions statement (which was included with Python
      releases prior to 1.6b1), as well as the combined CWI-CNRI
      disclaimer, were required to be included as a condition for
      using the prior Python software. CNRI does not intend to require
      users of prior versions of Python to upgrade to 1.6 unless they
      voluntarily choose to do so.

    5.OK, on to the new license. Is it an Open Source license?

      Yes. The board of the Open Source Initiative certified the CNRI
      Open Source License as being fully Open Source compliant.

    6.Has it been approved by the Python Consortium?

      Yes, the Python Consortium members approved the new CNRI Open
      Source License at a meeting of the Python Consortium on Friday,
      July 21, 2000 in Monterey, California.

    7.Is it compatible with the GNU Public License (GPL)?

      Legal counsel for both CNRI and BeOpen.com believe that it is
      fully compatible with the GPL. However, the Free Software
      Foundation attorney and Richard Stallman believe there may be
      one incompatibility, i.e., the CNRI License specifies a legal
      venue to interpret its License while the GPL is silent on the
      issue of jurisdiction. Resolution of this issue is being
      pursued.

    8.So that means it has a GPL-like "copyleft" provision?

      No. "GPL-compatible" means that code licensed under the terms of
      the CNRI Open Source License may be combined with GPLed
      code. The CNRI license imposes fewer restrictions than does the
      GPL.  There is no "copyleft" provision in the CNRI Open Source
      License.

    9.So it supports proprietary ("closed source") use of Python too?

      Yes, provided you abide by the terms of the CNRI license and
      also include the CWI Copyright Notice and Permissions Statement.

   10.I have some questions about those! First, about the "click to
   accept" business. What if I have a derivative work that has no GUI?

      As the text says, "COPYING, INSTALLING OR OTHERWISE USING THE
      SOFTWARE" also constitutes agreement to the terms of the
      license, so there is no requirement to use the click to accept
      button if that is not appropriate. CNRI prefers to offer the
      software via the Internet by first presenting the License and
      having a prospective user click an Accept button. Others may
      offer it in different forms (e.g.  CD-ROM) and thus clicking the
      Accept button is one means but not the only one.

   11.Virginia is one of the few states to have adopted the Uniform
   Computer Information Transactions Act, and paragraph 7 requires
   that the license be interpreted under Virginia law.  Is the "click
   clause" a way to invoke UCITA?

      CNRI needs a body of law to define what its License means, and,
      since its headquarters are in Virginia, Virginia law is a
      logical choice. The adoption of UCITA in Virginia was not a
      motivating factor. If CNRI didn't require that its License be
      interpreted under Virginia law, then anyone could interpret the
      license under very different laws than the ones under which it
      is intended to be interpreted. In particular in a jurisdiction
      that does not recognize general disclaimers of liability (such
      as in CNRI license's paragraphs 4 and 5).

   12.Suppose I embed Python in an application such that the user
   neither knows nor cares about the existence of Python. Does the
   install process have to inform my app's users about the CNRI
   license anyway?

      No, the license does not specify this. For example, in addition
      to including the License text in the License file of a program
      (or in the installer as well), you could just include a
      reference to it in the Readme file.  There is also no need to
      include the full License text in the program (the License
      provides for an alternative reference using the specified handle
      citation). Usage of the software amounts to license acceptance.

   13.In paragraph 2, does "provided, however, that CNRI's License
   Agreement is retained in Python 1.6 beta 1, alone or in any
   derivative version prepared by Licensee" mean that I can make and
   retain a derivative version of the license instead?

      The above statement applies to derivative versions of Python 1.6
      beta 1. You cannot revise the CNRI License. You must retain the
      CNRI License (or their defined reference to it)
      verbatim. However, you can make derivative works and license
      them as a whole under a different but compatible license.

   14.Since I have to retain the CNRI license in my derivative work,
   doesn't that mean my work must be released under exactly the same
   terms as Python?

      No. Paragraph 1 explicitly names Python 1.6 beta 1 as the only
      software covered by the CNRI license.  Since it doesn't name
      your derivative work, your derivative work is not bound by the
      license (except to the extent that it binds you to meet the
      requirements with respect to your use of Python 1.6). You are,
      of course, free to add your own license distributing your
      derivative work under terms similar to the CNRI Open Source
      License, but you are not required to do so.

      In other words, you cannot change the terms under which CNRI
      licenses Python 1.6, and must retain the CNRI License Agreement
      to make that clear, but you can (via adding your own license)
      set your own terms for your derivative works. Note that there is
      no requirement to distribute the Python source code either, if
      this does not make sense for your application.

   15.Does that include, for example, releasing my derivative work
   under the GPL?

      Yes, but you must retain the CNRI License Agreement in your
      work, and it will continue to apply to the Python 1.6 beta 1
      portion of your work (as is made explicit in paragraph 1 of the
      CNRI License).

   16.With regard to paragraph 3, what does "make available to the
   public" mean? If I embed Python in an application and make it
   available for download on the Internet, does that fit the meaning
   of this clause?

      Making the application generally available for download on the
      Internet would be making it available to the public.

   17.In paragraph 3, what does "indicate in any such work the nature
   of the modifications made to Python 1.6 beta 1" mean? Do you mean I
   must publish a patch? A textual description? If a description, how
   detailed must it be? For example, is "Assorted speedups"
   sufficient? Or "Ported to new architecture"? What if I merely add a
   new Python module, or C extension module? Does that constitute "a
   modification" too? What if I just use the freeze tool to change the
   way the distribution is packaged? Or change the layout of files and
   directories from the way CNRI ships them? Or change some file names
   to match my operating system's restrictions?  What if I merely use
   the documentation, as a basis for a brand new implementation of
   Python?

      This license clause is in discussion right now. CNRI has stated
      that the intent is just to have people provide a very high level
      summary of changes, e.g. includes new features X, Y and Z. There
      is no requirement for a specific level of detail. Work is in
      progress to clarify the intent of this clause so as to be
      clearer as to what the standard is. CNRI has already indicated
      that whatever has been done in the past to indicate changes in
      Python releases would be sufficient.

   18.In paragraph 6, is automatic termination of the license upon
   material breach immediate?

      Yes. CNRI preferred to give the users a 60 day period to cure
      any deficiencies, but this was deemed incompatible with the GPL
      and CNRI reluctantly agreed to use the automatic termination
      language instead.

   19.Many licenses allow a 30 to 60 day period during which breaches
   can be corrected.

      Immediate termination is actually required for GPL
      compatibility, as the GPL terminates immediately upon a material
      breach. However, there is little you can do to breach the
      license based on usage of the code, since almost any usage is
      allowed by the license. You can breach it by not including the
      appropriate License information or by misusing CNRI's name and
      logo - to give two examples. As indicated above, CNRI actually
      preferred a 60 day cure period but GPL-compatibility required
      otherwise. In practice, the immediate termination clause is
      likely to have no substantive effect. Since breaches are simple
      to cure, most will have no substantive liability associated with
      them. CNRI can take legal steps to prevent egregious and
      persistent offenders from relicensing the code, but this is a
      step they will not take cavalierly.

   20.What if people already downloaded a million copies of my
   derivative work before CNRI informs me my license has been
   terminated? What am I supposed to do then? Contact every one of
   them and tell them to download a new copy? I won't even know who
   they are!

      This is really up to the party that chooses to enforce such
      licensing. With the cure period removed for compliance with the
      GPL, CNRI is under no obligation to inform you of a
      termination. If you repair any such breach then you are in
      conformance with the License. Enforcement of the CNRI License is
      up to CNRI. Again, there are very few ways to violate the
      license.

   21.Well, I'm not even sure what "material breach" means. What's an
   example?

      This is a well-defined legal term. Very few examples of breaches
      can be given, because the CNRI license imposes very few
      requirements on you. A clear example is if you violate the
      requirement in paragraph 2 to retain CNRI's License Agreement
      (or their defined reference to it) in derivative works.  So
      simply retain the agreement, and you'll have no problem with
      that. Also, if you don't misuse CNRI's name and logo you'll be
      fine.

   22.OK, I'll retain the License Agreement in my derivative works,
   Does that mean my users and I then enter into this license
   agreement too?

      Yes, with CNRI but not with each other. As explained in
      paragraph 1, the license is between CNRI and whoever is using
      Python 1.6 beta 1.

   23.So you mean that everyone who uses my derivative work is
   entering into a contract with CNRI?

      With respect to the Python 1.6 beta 1 portion of your work,
      yes. This is what assures their right to use the Python 1.6 beta
      1 portion of your work (which is licensed by CNRI, not by you),
      regardless of whatever other restrictions you may impose in
      your license.

   24.In paragraph 7, is the name "Python" a "CNRI trademark or trade
   name"?

      CNRI has certain trademark rights based on its use of the name
      Python. CNRI has begun discussing an orderly transition of the
      www.python.org site with Guido and the trademark matters will be
      addressed in that context.

   25.Will the license change for Python 2.0?

      BeOpen.com, who is leading future Python development, will make
      that determination at the appropriate time. Throughout the
      licensing process, BeOpen.com will be working to keep things as
      simple and as compatible with existing licenses as
      possible. BeOpen.com will add its copyright notice to Python but
      understands the complexities of licensing and so will work to
      avoid adding any further confusion on any of these issues. This
      is why BeOpen.com and CNRI are working together now to finalize
      a license.

   26.What about the copyrights? Will CNRI assign its copyright on
   Python to BeOpen.com or to Guido? If you say you want to clarify
   the legal status of the code, establishing a single copyright
   holder would go a long way toward achieving that!

      There is no need for a single copyright holder. Most composite
      works involve licensing of rights from parties that hold the
      rights to others that wish to make use of them. CNRI will retain
      copyright to its work on Python. CNRI has also worked to get wet
      signatures for major contributions to Python which assign rights
      to it, and email agreements to use minor contributions, so that
      it can license the bulk of the Python system for the public
      good. CNRI also worked with Guido van Rossum and CWI to clarify
      the legal status with respect to permissions for Python 1.2 and
      earlier versions.

August 23, 2000



From guido at beopen.com  Thu Aug 24 01:25:57 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 18:25:57 -0500
Subject: [Python-Dev] Re: anyone tried Python 2.0 with Tk 8.3.2?
In-Reply-To: Your message of "Wed, 23 Aug 2000 23:59:50 +0200."
             <000d01c00d4d$78a81c60$f2a6b5d4@hagrid> 
References: <000d01c00d4d$78a81c60$f2a6b5d4@hagrid> 
Message-ID: <200008232325.SAA03130@cj20424-a.reston1.va.home.com>

> > doesn't work too well for me -- Tkinter._test() tends to hang
> > when I press quit (not every time, though).  the only way to
> > shut down the process is to reboot.
> 
> hmm.  it looks like it's more likely to hang if the program
> uses unicode strings.
> 
>     Tkinter._test() hangs about 2 times out of three
> 
>     same goes for a simple test program that passes a
>     unicode string constant (containing Latin-1 chars)
>     to a Label
> 
>     the same test program using a Latin-1 string (which,
>     I suppose, is converted to Unicode inside Tk) hangs
>     in about 1/3 of the runs.
> 
>     the same test program with a pure ASCII string
>     never hangs...
> 
> confusing.

Try going back to Tk 8.2.

We had this problem with Tk 8.3.1 in Python 1.6a1; for a2, I went back
to 8.2.x (the latest).  Then for 1.6b1 I noticed that 8.3.2 was out
and after a light test it appeared to be fine, so I switched to
8.3.2.  But I've seen this too, and maybe 8.3 still isn't stable
enough.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug 24 01:28:03 2000
From: guido at beopen.com (Guido van Rossum)
Date: Wed, 23 Aug 2000 18:28:03 -0500
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: Your message of "Wed, 23 Aug 2000 23:53:45 +0200."
             <20000823235345.C7566@xs4all.nl> 
References: <20000823235345.C7566@xs4all.nl> 
Message-ID: <200008232328.SAA03141@cj20424-a.reston1.va.home.com>

> While re-writing the PyNumber_InPlace*() functions in augmented assignment
> to something Guido and I agree on should be the Right Way, I found something
> that *might* be a bug. But I'm not sure.
> 
> The PyNumber_*() methods for binary operations (found in abstract.c) have
> the following construct:
> 
>         if (v->ob_type->tp_as_number != NULL) {
>                 PyObject *x = NULL;
>                 PyObject * (*f)(PyObject *, PyObject *);
>                 if (PyNumber_Coerce(&v, &w) != 0)
>                         return NULL;
>                 if ((f = v->ob_type->tp_as_number->nb_xor) != NULL)
>                         x = (*f)(v, w);
>                 Py_DECREF(v);
>                 Py_DECREF(w);
>                 if (f != NULL)
>                         return x;
>         }
> 
> (This is after a check if either argument is an instance object, so both are
> C objects here.) Now, I'm not sure how coercion is supposed to work, but I
> see one problem here: 'v' can be changed by PyNumber_Coerce(), and the new
> object's tp_as_number pointer could be NULL. I bet it's pretty unlikely that
> (numeric) coercion of a numeric object and an unspecified object turns up a
> non-numeric object, but I don't see anything guaranteeing it won't, either.
> 
> Is this a non-issue, or should I bother with adding the extra check in the
> current binary operations (and the new inplace ones) ?

I think this currently can't happen because coercions never return
non-numeric objects, but it sounds like a good sanity check to add.

Please check this in as a separate patch (not as part of the huge
augmented assignment patch).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Thu Aug 24 01:09:41 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 24 Aug 2000 01:09:41 +0200
Subject: [Python-Dev] Re: anyone tried Python 2.0 with Tk 8.3.2?
Message-ID: <200008232309.BAA01070@loewis.home.cs.tu-berlin.de>

> hmm.  it looks like it's more likely to hang if the program
> uses unicode strings.

Are you sure it hangs? It may just take a lot of time to determine
which font is best to display the strings.

Of course, if it is not done after an hour or so, it probably hangs...
Alternatively, a debugger could tell what it is actually doing.

Regards,
Martin



From thomas at xs4all.net  Thu Aug 24 01:15:20 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 01:15:20 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: <200008232328.SAA03141@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Wed, Aug 23, 2000 at 06:28:03PM -0500
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com>
Message-ID: <20000824011519.D7566@xs4all.nl>

On Wed, Aug 23, 2000 at 06:28:03PM -0500, Guido van Rossum wrote:

> > Now, I'm not sure how coercion is supposed to work, but I see one
> > problem here: 'v' can be changed by PyNumber_Coerce(), and the new
> > object's tp_as_number pointer could be NULL. I bet it's pretty unlikely
> > that (numeric) coercion of a numeric object and an unspecified object
> > turns up a non-numeric object, but I don't see anything guaranteeing it
> > won't, either.

> I think this currently can't happen because coercions never return
> non-numeric objects, but it sounds like a good sanity check to add.

> Please check this in as a separate patch (not as part of the huge
> augmented assignment patch).

Alright, checking it in after 'make test' finishes. I'm also removing some
redundant PyInstance_Check() calls in PyNumber_Multiply: the first thing in
that function is a BINOP call, which expands to

        if (PyInstance_Check(v) || PyInstance_Check(w)) \
                return PyInstance_DoBinOp(v, w, opname, ropname, thisfunc)

So after the BINOP call, neither argument can be an instance, anyway.


Also, I'll take this opportunity to explain what I'm doing with the
PyNumber_InPlace* functions, for those that are interested. The comment I'm
placing in the code should be enough information:

/* The in-place operators are defined to fall back to the 'normal',
   non in-place operations, if the in-place methods are not in place, and to
   take class instances into account. This is how it is supposed to work:

   - If the left-hand-side object (the first argument) is an
     instance object, let PyInstance_DoInPlaceOp() handle it.  Pass the
     non in-place variant of the function as callback, because it will only
     be used if any kind of coercion has been done, and if an object has
     been coerced, it's a new object and shouldn't be modified in-place.

   - Otherwise, if the object has the appropriate struct members, and they
     are filled, call that function and return the result. No coercion is
     done on the arguments; the left-hand object is the one the operation is
     performed on, and it's up to the function to deal with the right-hand
     object.

   - Otherwise, if the second argument is an Instance, let
     PyInstance_DoBinOp() handle it, but not in-place. Again, pass the
     non in-place function as callback.

   - Otherwise, both arguments are C objects. Try to coerce them and call
     the ordinary (not in-place) function-pointer from the type struct.
     
   - Otherwise, we are out of options: raise a type error.

   */

If anyone sees room for unexpected behaviour under these rules, let me know
and you'll get an XS4ALL shirt! (Sorry, only ones I can offer ;)
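For illustration only, the fallback order described above amounts to roughly the
following in modern Python terms (a sketch, not the C implementation, and
ignoring the classic-class instance steps):

```python
def inplace_add(v, w):
    # Sketch of the described dispatch: try the in-place slot on the
    # left-hand object first; if it is absent (or declines), fall back
    # to the ordinary binary operation.
    iadd = getattr(type(v), "__iadd__", None)
    if iadd is not None:
        result = iadd(v, w)
        if result is not NotImplemented:
            return result
    # Ordinary, non in-place fallback; may create a new object.
    return v + w

a = [1, 2]
assert inplace_add(a, [3]) is a           # lists have __iadd__: mutated in place
assert inplace_add((1,), (2,)) == (1, 2)  # tuples fall back to +
```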

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From DavidA at ActiveState.com  Thu Aug 24 02:25:55 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Wed, 23 Aug 2000 17:25:55 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] [Announce] ActivePython 1.6 beta release (fwd)
Message-ID: <Pine.WNT.4.21.0008231725340.272-100000@loom>

It is my pleasure to announce the availability of the beta release of
ActivePython 1.6, build 100.

This binary distribution, based on Python 1.6b1, is available from
ActiveState's website at:

    http://www.ActiveState.com/Products/ActivePython/

ActiveState is committed to making Python easy to install and use on all
major platforms. ActivePython contains the convenience of swift
installation, coupled with commonly used modules, providing you with a
total package to meet your Python needs. Additionally, for Windows users,
ActivePython provides a suite of Windows tools, developed by Mark Hammond.

ActivePython is provided in convenient binary form for Windows, Linux and
Solaris under a variety of installation packages, available at:

    http://www.ActiveState.com/Products/ActivePython/Download.html

For support information, mailing list subscriptions and archives, a bug
reporting system, and fee-based technical support, please go to

    http://www.ActiveState.com/Products/ActivePython/

Please send us feedback regarding this release, either through the mailing
list or through direct email to ActivePython-feedback at ActiveState.com.

ActivePython is free, and redistribution of ActivePython within your
organization is allowed.  The ActivePython license is available at
http://www.activestate.com/Products/ActivePython/License_Agreement.html
and in the software packages.

We look forward to your comments and to making ActivePython suit your
Python needs in future releases.

Thank you,

-- David Ascher & the ActivePython team
   ActiveState Tool Corporation



From tim_one at email.msn.com  Thu Aug 24 05:39:43 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 23 Aug 2000 23:39:43 -0400
Subject: [Python-Dev] [PEP 223]  Change the Meaning of \x Escapes
Message-ID: <LNBBLJKPBEHFEDALKOLCOEKFHBAA.tim_one@email.msn.com>

An HTML version of the attached can be viewed at

    http://python.sourceforge.net/peps/pep-0223.html

This will be adopted for 2.0 unless there's an uproar.  Note that it *does*
have potential for breaking existing code -- although no real-life instance
of incompatibility has yet been reported.  This is explained in detail in
the PEP; check your code now.

although-if-i-were-you-i-wouldn't-bother<0.5-wink>-ly y'rs  - tim


PEP: 223
Title: Change the Meaning of \x Escapes
Version: $Revision: 1.4 $
Author: tpeters at beopen.com (Tim Peters)
Status: Active
Type: Standards Track
Python-Version: 2.0
Created: 20-Aug-2000
Post-History: 23-Aug-2000


Abstract

    Change \x escapes, in both 8-bit and Unicode strings, to consume
    exactly the two hex digits following.  The proposal views this as
    correcting an original design flaw, leading to clearer expression
    in all flavors of string, a cleaner Unicode story, better
    compatibility with Perl regular expressions, and with minimal risk
    to existing code.


Syntax

    The syntax of \x escapes, in all flavors of non-raw strings, becomes

        \xhh

    where h is a hex digit (0-9, a-f, A-F).  The exact syntax in 1.5.2 is
    not clearly specified in the Reference Manual; it says

        \xhh...

    implying "two or more" hex digits, but one-digit forms are also
    accepted by the 1.5.2 compiler, and a plain \x is "expanded" to
    itself (i.e., a backslash followed by the letter x).  It's unclear
    whether the Reference Manual intended either of the 1-digit or
    0-digit behaviors.


Semantics

    In an 8-bit non-raw string,
        \xij
    expands to the character
        chr(int(ij, 16))
    Note that this is the same as in 1.6 and before.

    In a Unicode string,
        \xij
    acts the same as
        \u00ij
    i.e. it expands to the obvious Latin-1 character from the initial
    segment of the Unicode space.

    An \x not followed by at least two hex digits is a compile-time error,
    specifically ValueError in 8-bit strings, and UnicodeError (a subclass
    of ValueError) in Unicode strings.  Note that if an \x is followed by
    more than two hex digits, only the first two are "consumed".  In 1.6
    and before all but the *last* two were silently ignored.
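    For illustration, the proposed rule can be sketched as a small
    Python scanner (a hypothetical helper; the real change lives in the
    compiler's handling of string literals, and other escape sequences
    are ignored here for brevity):

```python
def expand_x_escapes(s):
    # Expand \x escapes per the proposed rule: consume exactly the two
    # hex digits following, and raise ValueError otherwise.
    hexdigits = "0123456789abcdefABCDEF"
    out = []
    i = 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s) and s[i + 1] == "x":
            digits = s[i + 2:i + 4]
            if len(digits) != 2 or not all(c in hexdigits for c in digits):
                raise ValueError(r"invalid \x escape")
            out.append(chr(int(digits, 16)))
            i += 4
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

assert expand_x_escapes(r"\x65") == "e"
assert expand_x_escapes(r"\x123465") == chr(0x12) + "3465"
```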


Example

    In 1.5.2:

        >>> "\x123465"  # same as "\x65"
        'e'
        >>> "\x65"
        'e'
        >>> "\x1"
        '\001'
        >>> "\x\x"
        '\\x\\x'
        >>>

    In 2.0:

        >>> "\x123465" # \x12 -> \022, "3456" left alone
        '\0223456'
        >>> "\x65"
        'e'
        >>> "\x1"
        [ValueError is raised]
        >>> "\x\x"
        [ValueError is raised]
        >>>


History and Rationale

    \x escapes were introduced in C as a way to specify variable-width
    character encodings.  Exactly which encodings those were, and how many
    hex digits they required, was left up to each implementation.  The
    language simply stated that \x "consumed" *all* hex digits following,
    and left the meaning up to each implementation.  So, in effect, \x in C
    is a standard hook to supply platform-defined behavior.

    Because Python explicitly aims at platform independence, the \x escape
    in Python (up to and including 1.6) has been treated the same way
    across all platforms:  all *except* the last two hex digits were
    silently ignored.  So the only actual use for \x escapes in Python was
    to specify a single byte using hex notation.

    Larry Wall appears to have realized that this was the only real use for
    \x escapes in a platform-independent language, as the proposed rule for
    Python 2.0 is in fact what Perl has done from the start (although you
    need to run in Perl -w mode to get warned about \x escapes with fewer
    than 2 hex digits following -- it's clearly more Pythonic to insist on
    2 all the time).

    When Unicode strings were introduced to Python, \x was generalized so
    as to ignore all but the last *four* hex digits in Unicode strings.
    This caused a technical difficulty for the new regular expression
    engine:  SRE tries very hard to allow mixing 8-bit and Unicode
    patterns and strings in intuitive ways, and it no longer had any way
    to guess what, for example, r"\x123456" should mean as a pattern:  is
    it asking to match the 8-bit character \x56 or the Unicode character
    \u3456?

    There are hacky ways to guess, but it doesn't end there.  The ISO C99
    standard also introduces 8-digit \U12345678 escapes to cover the entire
    ISO 10646 character space, and it's also desired that Python 2 support
    that from the start.  But then what are \x escapes supposed to mean?
    Do they ignore all but the last *eight* hex digits then?  And if fewer
    than 8 follow in a Unicode string, all but the last 4?  And if fewer
    than 4, all but the last 2?

    This was getting messier by the minute, and the proposal cuts the
    Gordian knot by making \x simpler instead of more complicated.  Note
    that the 4-digit generalization to \xijkl in Unicode strings was also
    redundant, because it meant exactly the same thing as \uijkl in Unicode
    strings.  It's more Pythonic to have just one obvious way to specify a
    Unicode character via hex notation.


Development and Discussion

    The proposal was worked out among Guido van Rossum, Fredrik Lundh and
    Tim Peters in email.  It was subsequently explained and discussed on
    Python-Dev under subject "Go \x yourself", starting 2000-08-03.
    Response was overwhelmingly positive; no objections were raised.


Backward Compatibility

    Changing the meaning of \x escapes does carry risk of breaking existing
    code, although no instances of incompatibility have yet been discovered.
    The risk is believed to be minimal.

    Tim Peters verified that, except for pieces of the standard test suite
    deliberately provoking end cases, there are no instances of \xabcdef...
    with fewer or more than 2 hex digits following, in either the Python
    CVS development tree, or in assorted Python packages sitting on his
    machine.

    It's unlikely there are any with fewer than 2, because the Reference
    Manual implied they weren't legal (although this is debatable!).  If
    there are any with more than 2, Guido is ready to argue they were buggy
    anyway <0.9 wink>.

    Guido reported that the O'Reilly Python books *already* document that
    Python works the proposed way, likely due to their Perl editing
    heritage (as above, Perl worked (very close to) the proposed way from
    its start).

    Finn Bock reported that what JPython does with \x escapes is
    unpredictable today.  This proposal gives a clear meaning that can be
    consistently and easily implemented across all Python implementations.


Effects on Other Tools

    Believed to be none.  The candidates for breakage would mostly be
    parsing tools, but the author knows of none that worry about the
    internal structure of Python strings beyond the approximation "when
    there's a backslash, swallow the next character".  Tim Peters checked
    python-mode.el, the std tokenize.py and pyclbr.py, and the IDLE syntax
    coloring subsystem, and believes there's no need to change any of
    them.  Tools like tabnanny.py and checkappend.py inherit their immunity
    from tokenize.py.


Reference Implementation

    The code changes are so simple that a separate patch will not be
    produced.
    Fredrik Lundh is writing the code, is an expert in the area, and will
    simply check the changes in before 2.0b1 is released.


BDFL Pronouncements

    Yes, ValueError, not SyntaxError.  "Problems with literal
    interpretations traditionally raise 'runtime' exceptions rather than
    syntax errors."


Copyright

    This document has been placed in the public domain.





From guido at beopen.com  Thu Aug 24 07:34:15 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 00:34:15 -0500
Subject: [Python-Dev] [PEP 223] Change the Meaning of \x Escapes
In-Reply-To: Your message of "Wed, 23 Aug 2000 23:39:43 -0400."
             <LNBBLJKPBEHFEDALKOLCOEKFHBAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCOEKFHBAA.tim_one@email.msn.com> 
Message-ID: <200008240534.AAA00885@cj20424-a.reston1.va.home.com>

> An HTML version of the attached can be viewed at
> 
>     http://python.sourceforge.net/peps/pep-0223.html

Nice PEP!

> Effects on Other Tools
> 
>     Believed to be none.  [...]

I believe that Fredrik also needs to fix SRE's interpretation of \xhh.
Unless he's already done that.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Thu Aug 24 07:31:04 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 24 Aug 2000 01:31:04 -0400
Subject: [Python-Dev] [PEP 223] Change the Meaning of \x Escapes
In-Reply-To: <200008240534.AAA00885@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEKNHBAA.tim_one@email.msn.com>

[Guido]
> Nice PEP!

Thanks!  I thought the kids could stand a simple example of what you'd like
to read <wink>.

> I believe that Fredrik also needs to fix SRE's interpretation of \xhh.
> Unless he's already done that.

I'm sure he's acutely aware of that, since that's how this started!  And
he's implementing \x in strings too.  I knew you wouldn't read it to the end
<0.9 wink>.

put-the-refman-stuff-briefly-at-the-front-and-save-the-blather-for-
    the-end-ly y'rs  - tim





From ping at lfw.org  Thu Aug 24 11:14:12 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 24 Aug 2000 04:14:12 -0500 (CDT)
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import
 something as'
In-Reply-To: <200008231622.LAA02275@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008240353290.10936-100000@server1.lfw.org>

On Wed, 23 Aug 2000, Guido van Rossum wrote:
> [Ping]
> > Looks potentially useful to me.  If nothing else, it's certainly
> > easier to explain than any other behaviour i could think of, since
> > assignment is already well-understood.
> 
> KISS suggests not to add it.  We had a brief discussion about this at
> our 2.0 planning meeting and nobody there thought it would be worth
> it, and several of us felt it would be asking for trouble.

What i'm trying to say is that it's *easier* to explain "import as"
with Thomas' enhancement than without it.

The current explanation of "import <x> as <y>" is something like

    Find and import the module named <x> and assign it to <y>
    in the normal way you do assignment, except <y> has to be
    a pure name.

Thomas' suggestion lifts the restriction and makes the explanation
simpler than it would have been:

    Find and import the module named <x> and assign it to <y>
    in the normal way you do assignment.

"The normal way you do assignment" is shorthand for "decide
whether to assign to the local or global namespace depending on
whether <y> has been assigned to in the current scope, unless
<y> has been declared global with a 'global' statement" -- and
that applies in any case.  Luckily, it's a concept that has
been explained before and which Python programmers already
need to understand anyway.

The net effect is essentially a direct translation to

    <y> = __import__("<x>")

> > "import foo.bar as spam" makes me uncomfortable because:
> > 
> >     (a) It's not clear whether spam should get foo or foo.bar, as
> >         evidenced by the discussion between Gordon and Thomas.
> 
> As far as I recall that conversation, it's just that Thomas (more or
> less accidentally) implemented what was easiest from the
> implementation's point of view without thinking about what it should
> mean.  *Of course* it should mean what I said if it's allowed.  Even
> Thomas agrees to that now.

Careful:

    import foo.bar          "import the package named foo and its submodule bar,
                             then put *foo* into the current namespace"
    import foo.bar as spam  "import the package named foo and its submodule bar,
                             then put *bar* into the current namespace, as spam"

Only this case causes import to import a *different* object just because
you used "as".

    import foo              "import the module named foo, then put foo into
                             the current namespace"
    import foo as spam      "import the module named foo, then put foo into
                             the current namespace, as spam"

The above, and all the other forms of "import ... as", put the *same*
object into the current namespace as they would have done, without the
"as" clause.

> >     (b) There's a straightforward and unambiguous way to express
> >         this already: "from foo import bar as spam".
> 
> Without syntax coloring that looks word soup to me.
> 
>   import foo.bar as spam
> 
> uses fewer words to say the same clearer.

But then:

        from foo import bar as spam    # give me bar, but name it spam
        import foo.bar as spam         # give me bar, but name it spam

are two ways to say the same thing -- but only if bar is a module.
If bar happens to be some other kind of symbol, the first works but
the second doesn't!

Not so without "as spam":

        from foo import bar            # give me bar
        import foo.bar                 # give me foo

> > That is, would
> > 
> >     import foo.bar as spam
> > 
> > define just spam or both foo and spam?
> 
> Aargh!  Just spam, of course!

I apologize if this is annoying you.  I hope you see the inconsistency
that i'm trying to point out, though.  If you see it and decide that
it's okay to live with the inconsistency, that's okay with me.


-- ?!ng




From thomas at xs4all.net  Thu Aug 24 12:18:58 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 12:18:58 +0200
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import something as'
In-Reply-To: <Pine.LNX.4.10.10008240353290.10936-100000@server1.lfw.org>; from ping@lfw.org on Thu, Aug 24, 2000 at 04:14:12AM -0500
References: <200008231622.LAA02275@cj20424-a.reston1.va.home.com> <Pine.LNX.4.10.10008240353290.10936-100000@server1.lfw.org>
Message-ID: <20000824121858.E7566@xs4all.nl>

On Thu, Aug 24, 2000 at 04:14:12AM -0500, Ka-Ping Yee wrote:

> The current explanation of "import <x> as <y>" is something like

>     Find and import the module named <x> and assign it to <y>
>     in the normal way you do assignment, except <y> has to be
>     a pure name.

> Thomas' suggestion lifts the restriction and makes the explanation
> simpler than it would have been:

>     Find and import the module named <x> and assign it to <y>
>     in the normal way you do assignment.

> "The normal way you do assignment" is shorthand for "decide
> whether to assign to the local or global namespace depending on
> whether <y> has been assigned to in the current scope, unless
> <y> has been declared global with a 'global' statement" -- and
> that applies in any case.  Luckily, it's a concept that has
> been explained before and which Python programmers already
> need to understand anyway.

This is not true. The *current* situation already does the local/global
namespace trick, except that 'import ..' *is* a local assignment, so the
resulting name is always local (unless there is a "global" statement.)

My patch wouldn't change that one bit. It would only expand the allowable
expressions in the 'as' clause: is it a normal name-binding assignment (like
now), or a subscription-assignment, or a slice-assignment, or an
attribute-assignment. In other words, all types of assignment.

> The net effect is essentially a direct translation to

>     <y> = __import__("<x>")

Exactly :)

> Careful:

>     import foo.bar          "import the package named foo and its
>                              submodule bar, then put *foo* into the
>                              current namespace"

Wrong. What it does is: import the package named foo and its submodule bar,
and make it so you can access foo.bar via the name 'foo.bar'. That this has
to put 'foo' in the local namespace is a side issue :-) And when seen like
that,

>     import foo.bar as spam  "import the package named foo and its
>                              submodule bar, then put *bar* into the
>                              current namespace, as spam"

Becomes obvious as well.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Thu Aug 24 13:22:32 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 24 Aug 2000 13:22:32 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com> <20000824011519.D7566@xs4all.nl>
Message-ID: <39A50578.C08B9F14@lemburg.com>

Thomas Wouters wrote:
> 
> On Wed, Aug 23, 2000 at 06:28:03PM -0500, Guido van Rossum wrote:
> 
> > > Now, I'm not sure how coercion is supposed to work, but I see one
> > > problem here: 'v' can be changed by PyNumber_Coerce(), and the new
> > > object's tp_as_number pointer could be NULL. I bet it's pretty unlikely
> > > that (numeric) coercion of a numeric object and an unspecified object
> > > turns up a non-numeric object, but I don't see anything guaranteeing it
> > > won't, either.
> 
> > I think this currently can't happen because coercions never return
> > non-numeric objects, but it sounds like a good sanity check to add.
> 
> > Please check this in as a separate patch (not as part of the huge
> > augmented assignment patch).
> 
> Alright, checking it in after 'make test' finishes. I'm also removing some
> redundant PyInstance_Check() calls in PyNumber_Multiply: the first thing in
> that function is a BINOP call, which expands to
> 
>         if (PyInstance_Check(v) || PyInstance_Check(w)) \
>                 return PyInstance_DoBinOp(v, w, opname, ropname, thisfunc)
> 
> So after the BINOP call, neither argument can be an instance, anyway.
> 
> Also, I'll take this opportunity to explain what I'm doing with the
> PyNumber_InPlace* functions, for those that are interested. The comment I'm
> placing in the code should be enough information:
> 
> /* The in-place operators are defined to fall back to the 'normal',
>    non in-place operations, if the in-place methods are not in place, and to
>    take class instances into account. This is how it is supposed to work:
> 
>    - If the left-hand-side object (the first argument) is an
>      instance object, let PyInstance_DoInPlaceOp() handle it.  Pass the
>      non in-place variant of the function as callback, because it will only
>      be used if any kind of coercion has been done, and if an object has
>      been coerced, it's a new object and shouldn't be modified in-place.
> 
>    - Otherwise, if the object has the appropriate struct members, and they
>      are filled, call that function and return the result. No coercion is
>      done on the arguments; the left-hand object is the one the operation is
>      performed on, and it's up to the function to deal with the right-hand
>      object.
> 
>    - Otherwise, if the second argument is an Instance, let
>      PyInstance_DoBinOp() handle it, but not in-place. Again, pass the
>      non in-place function as callback.
> 
>    - Otherwise, both arguments are C objects. Try to coerce them and call
>      the ordinary (not in-place) function-pointer from the type struct.
> 
>    - Otherwise, we are out of options: raise a type error.
> 
>    */
> 
> If anyone sees room for unexpected behaviour under these rules, let me know
> and you'll get an XS4ALL shirt! (Sorry, only ones I can offer ;)

I just hope that with all these new operators you haven't
closed the door for switching to argument based handling of
coercion.

One of these days (probably for 2.1), I would like to write up the
proposal I made on my Python Pages about a new coercion mechanism
as PEP. The idea behind it is to only use centralized coercion
as fall-back solution in case the arguments can't handle the
operation with the given type combination.

To implement this, all builtin types will have to be changed
to support mixed type argument slot functions (this ability will
be signalled to the interpreter using a type flag).

More info on the proposal page at:

  http://starship.python.net/crew/lemburg/CoercionProposal.html

Is this still possible under the new code you've added ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Thu Aug 24 13:37:28 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 24 Aug 2000 13:37:28 +0200
Subject: Patch 100899 [Unicode compression] (was RE: [Python-Dev] 2.0 Release 
 Plans)
References: <14756.4118.865603.363166@bitdiddle.concentric.net> <LNBBLJKPBEHFEDALKOLCKEINHBAA.tim_one@email.msn.com> <20000823205920.A7566@xs4all.nl>
Message-ID: <39A508F8.44C921D4@lemburg.com>

Thomas Wouters wrote:
> 
> On Wed, Aug 23, 2000 at 02:32:20PM -0400, Tim Peters wrote:
> > [Jeremy Hylton]
> > > I would like to see some compression in the release, but agree that it
> > > is not an essential optimization.  People have talked about it for a
> > > couple of months, and we haven't found someone to work on it because
> > > at various times pirx and /F said they were working on it.
> > >
> > > If we don't hear from /F by tomorrow promising he will finish it before
> > > the beta release, let's postpone it.
> 
> > There was an *awful* lot of whining about the size increase without this
> > optimization, and the current situation violates the "no compiler warnings!"
> > rule too (at least under MSVC 6).
> 
> For the record, you can't compile unicodedatabase.c with g++ because of its
> size: g++ complains that the switch is too large to compile. Under gcc it
> compiles, but only by trying really really hard, and I don't know how it
> performs under other versions of gcc (in particular more heavily optimizing
> ones -- might run into other limits in those situations.)

Are you sure this is still true with the latest CVS tree version ?

I split the unicodedatabase.c static array into chunks of
4096 entries each -- that should really be manageable by all
compilers.

But perhaps you are talking about the switch in unicodectype.c 
(there are no large switches in unicodedatabase.c) ? In that
case, Jack Jansen has added a macro switch which breaks that
switch into multiple parts too (see the top of that file).

It should be no problem adding a few more platforms to the list
of platforms which have this switch defined per default (currently
Macs and MS Win64).

I see no problem taking the load off of Fredrik and postponing
the patch to 2.1.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Thu Aug 24 16:00:56 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 09:00:56 -0500
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: Your message of "Thu, 24 Aug 2000 13:22:32 +0200."
             <39A50578.C08B9F14@lemburg.com> 
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com> <20000824011519.D7566@xs4all.nl>  
            <39A50578.C08B9F14@lemburg.com> 
Message-ID: <200008241400.JAA01806@cj20424-a.reston1.va.home.com>

> I just hope that with all these new operators you haven't
> closed the door for switching to argument based handling of
> coercion.

Far from it!  Actually, the inplace operators won't do any coercions
when the left argument supports the inplace version, and otherwise
exactly the same rules apply as for the non-inplace version.  (I
believe this isn't in the patch yet, but it will be when Thomas checks
it in.)
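
In today's Python terms, the rule described here can be sketched roughly
as follows (the Vec class is my own illustration, not code from the
patch): += uses the in-place hook when the left operand defines one, and
otherwise falls back to the ordinary binary operator.

```python
class Vec:
    """Toy sequence: __iadd__ mutates in place, __add__ builds anew."""
    def __init__(self, data):
        self.data = list(data)
    def __iadd__(self, other):      # in-place version: mutate self
        self.data.extend(other)
        return self
    def __add__(self, other):       # non-in-place version: new object
        return Vec(self.data + list(other))

v = Vec([1, 2])
before = id(v)
v += [3]                 # left operand has __iadd__: same object mutated
assert id(v) == before and v.data == [1, 2, 3]

w = v + [4]              # plain + uses __add__: a fresh object
assert w is not v and w.data == [1, 2, 3, 4]
```

If Vec had no __iadd__, `v += [3]` would simply rebind v to the result
of `v + [3]`, following exactly the non-in-place rules.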

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Thu Aug 24 15:14:55 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 15:14:55 +0200
Subject: [Python-Dev] PyNumber_*() binary operations & coercion
In-Reply-To: <200008241400.JAA01806@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 24, 2000 at 09:00:56AM -0500
References: <20000823235345.C7566@xs4all.nl> <200008232328.SAA03141@cj20424-a.reston1.va.home.com> <20000824011519.D7566@xs4all.nl> <39A50578.C08B9F14@lemburg.com> <200008241400.JAA01806@cj20424-a.reston1.va.home.com>
Message-ID: <20000824151455.F7566@xs4all.nl>

On Thu, Aug 24, 2000 at 09:00:56AM -0500, Guido van Rossum wrote:
> > I just hope that with all these new operators you haven't
> > closed the door for switching to argument based handling of
> > coercion.

> Far from it!  Actually, the inplace operators won't do any coercions
> when the left argument supports the inplace version, and otherwise
> exactly the same rules apply as for the non-inplace version.  (I
> believe this isn't in the patch yet, but it will be when Thomas checks
> it in.)

Exactly. (Actually, I'm again re-working the patch: If I do it the way I
intended to, you'd sometimes get the 'non in-place' error messages, instead
of the in-place ones. But the result will be the same.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Thu Aug 24 17:52:35 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 24 Aug 2000 11:52:35 -0400 (EDT)
Subject: [Python-Dev] Need help with SF bug #112558
Message-ID: <14757.17603.237768.174359@cj42289-a.reston1.va.home.com>

  I'd like some help with fixing a bug in dictobject.c.  The bug is on
SourceForge as #112558, and my attempted fix is SourceForge patch
#101277.
  The original bug is that exceptions raised by an object's __cmp__()
during dictionary lookup are not cleared, and can be propagated during
a subsequent lookup attempt.  I've made more detailed comments at
SourceForge at the patch:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470

  Thanks for any suggestions!
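
  For readers trying to picture the bug, here is a rough sketch of its
shape in today's terms (modern dicts compare keys with __eq__ rather
than __cmp__, and current CPython does clear/propagate the exception
properly; the Flaky class is my own hypothetical illustration, not from
the bug report):

```python
class Flaky:
    """Hypothetical key whose comparison can be made to raise."""
    def __init__(self, h):
        self.h = h
        self.explode = False
    def __hash__(self):
        return self.h          # equal hashes force a comparison on lookup
    def __eq__(self, other):
        if self.explode:
            raise RuntimeError("comparison failed")
        return self is other

a, b = Flaky(1), Flaky(1)      # same hash, different objects
d = {a: "x"}

a.explode = True
try:
    d[b]                        # probing b compares it against stored key a
except RuntimeError:
    pass                        # the lookup raises cleanly...

a.explode = False
assert d[a] == "x"              # ...and a later lookup is unaffected
```

The bug under discussion was that the exception raised mid-lookup was
left pending instead of being cleared, so it could surface during an
unrelated later lookup.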


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From mal at lemburg.com  Thu Aug 24 18:53:35 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 24 Aug 2000 18:53:35 +0200
Subject: [Python-Dev] Need help with SF bug #112558
References: <14757.17603.237768.174359@cj42289-a.reston1.va.home.com>
Message-ID: <39A5530F.FF2DA5C4@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
>   I'd like some help with fixing a bug in dictobject.c.  The bug is on
> SourceForge as #112558, and my attempted fix is SourceForge patch
> #101277.
>   The original bug is that exceptions raised by an object's __cmp__()
> during dictionary lookup are not cleared, and can be propagated during
> a subsequent lookup attempt.  I've made more detailed comments at
> SourceForge at the patch:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470
> 
>   Thanks for any suggestions!

Here are some:

* Please be very careful when patching this area of the interpreter:
  it is *very* performance sensitive.

* I'd remove the cmp variable and do a PyErr_Occurred() directly
  in all cases where PyObject_Compare() returns != 0.

* Exceptions during dict lookups are rare. I'm not sure about
  failing lookups... Vladimir ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From trentm at ActiveState.com  Thu Aug 24 19:46:27 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 24 Aug 2000 10:46:27 -0700
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
Message-ID: <20000824104627.C15992@ActiveState.com>

Hey all,

I recently checked in the Monterey stuff (patch
http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101249&group_id=5470
) but the checkin did not show up on python-checkins and the comment and
status change to "Closed" did not show up on python-patches. My checkin was
about a full day ago.

Is this a potential SourceForge bug? The delay can't be *that* long.

Regards,
Trent

-- 
Trent Mick
TrentM at ActiveState.com



From fdrake at beopen.com  Thu Aug 24 20:39:01 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 24 Aug 2000 14:39:01 -0400 (EDT)
Subject: [Python-Dev] CVS patch fixer?
Message-ID: <14757.27589.366614.231055@cj42289-a.reston1.va.home.com>

  Someone (don't remember who) posted a Perl script to either this
list or the patches list, perhaps a month or so ago(?), which could
massage a CVS-generated patch to make it easier to apply.
  Can anyone provide a copy of this, or a link to it?
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Thu Aug 24 21:50:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 21:50:53 +0200
Subject: [Python-Dev] Augmented assignment
Message-ID: <20000824215053.G7566@xs4all.nl>

I've finished rewriting the PyNumber_InPlace*() calls in the augmented
assignment patch and am about to check the entire thing in. I'll be checking
it in in parts, with the grammar/compile/ceval things last, but you might
get some weird errors in the next hour or so, depending on my link to
sourceforge. (I'm doing some last minute checks before checking it in ;)

Part of it will be docs, but not terribly much yet. I'm still working on
those, though, and I have a bit over a week before I leave on vacation, so I
think I can finish them for the most part. I'm also checking in a test
case, and some modifications to the std library: support for += in UserList,
UserDict, UserString, and rfc822.Addresslist. Reviewers are more than
welcome, though I realize how large a patch it is. (Boy, do I realize that!)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Thu Aug 24 23:45:53 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 16:45:53 -0500
Subject: [Python-Dev] Augmented assignment
In-Reply-To: Your message of "Thu, 24 Aug 2000 21:50:53 +0200."
             <20000824215053.G7566@xs4all.nl> 
References: <20000824215053.G7566@xs4all.nl> 
Message-ID: <200008242145.QAA01306@cj20424-a.reston1.va.home.com>

Congratulations, Thomas!  Megathanks for carrying this proposal to a
happy ending.  I'm looking forward to using the new feature!

Nits: Lib/symbol.py and Lib/token.py need to be regenerated and
checked in; (see the comments at the top of the file).

Also, tokenizer.py probably needs to have the new tokens += etc. added
manually.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Thu Aug 24 23:09:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 24 Aug 2000 23:09:49 +0200
Subject: [Python-Dev] Augmented assignment
In-Reply-To: <200008242145.QAA01306@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 24, 2000 at 04:45:53PM -0500
References: <20000824215053.G7566@xs4all.nl> <200008242145.QAA01306@cj20424-a.reston1.va.home.com>
Message-ID: <20000824230949.O4798@xs4all.nl>

On Thu, Aug 24, 2000 at 04:45:53PM -0500, Guido van Rossum wrote:

> Nits: Lib/symbol.py and Lib/token.py need to be regenerated and
> checked in; (see the comments at the top of the file).

Checking them in now.

> Also, tokenizer.py probably needs to have the new tokens += etc. added
> manually.

Okay. I'm not entirely sure how to do this, but I *think* this does it:
replace

Operator = group('\+', '\-', '\*\*', '\*', '\^', '~', '/', '%', '&', '\|',
                 '<<', '>>', '==', '<=', '<>', '!=', '>=', '=', '<', '>')

with

Operator = group('\+=', '\-=', '\*=', '%=', '/=', '\*\*=', '&=', '\|=',
                 '\^=', '>>=', '<<=', '\+', '\-', '\*\*', '\*', '\^', '~',
                 '/', '%', '&', '\|', '<<', '>>', '==', '<=', '<>', '!=',
                 '>=', '=', '<', '>')

Placing the augmented-assignment operators at the end doesn't work, but this
seems to do the trick. However, I can't really test this module, just check
its output. It seems okay, but I would appreciate either an 'okay' or a
more extensive test before checking it in. No, I can't start IDLE right now,
I'm working over a 33k6 leased line and my home machine doesn't have an
an augmented Python yet :-)
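
A quick sanity check one could run on that ordering (the group helper
below re-creates the one in tokenize.py; this snippet is my sketch, not
part of the checkin). The point is that re's alternation takes the
leftmost matching alternative, so '+=' must appear before '+':

```python
import re

def group(*choices):
    # same shape as tokenize.py's helper: parenthesized alternation
    return '(' + '|'.join(choices) + ')'

# augmented operators listed before their single-character prefixes
Operator = group(r'\+=', r'\-=', r'\*=', '%=', '/=', r'\*\*=', '&=', r'\|=',
                 r'\^=', '>>=', '<<=', r'\+', r'\-', r'\*\*', r'\*', r'\^', '~',
                 '/', '%', '&', r'\|', '<<', '>>', '==', '<=', '<>', '!=',
                 '>=', '=', '<', '>')

assert re.match(Operator, '+=').group() == '+='   # not just '+'
assert re.match(Operator, '**=').group() == '**='
assert re.match(Operator, '+').group() == '+'     # plain ops still work
```

With the augmented operators at the end instead, '+=' would tokenize as
'+' followed by '=', which is why that ordering doesn't work.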

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jack at oratrix.nl  Thu Aug 24 23:35:38 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Thu, 24 Aug 2000 23:35:38 +0200
Subject: [Python-Dev] sre and regexp behave badly under low-memory conditions
Message-ID: <20000824213543.5D902D71F9@oratrix.oratrix.nl>

Neither regexp nor sre behaves well under low-memory conditions.

I noticed this because test_longexp basically ate all my memory (sigh, 
I think I'll finally have to give up my private memory allocator and
take the 15% performance hit, until I find the time to dig into
Vladimir's stuff) so the rest of the regressions tests ran under very
tight memory conditions.

test_re wasn't so bad, the only problem was that it crashed with a
"NULL return without an exception". test_regexp was worse, it crashed
my machine.

If someone feels the urge maybe they could run the testsuite on unix
with a sufficiently low memory-limit.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From jeremy at beopen.com  Fri Aug 25 00:17:56 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 24 Aug 2000 18:17:56 -0400 (EDT)
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
In-Reply-To: <20000824104627.C15992@ActiveState.com>
References: <20000824104627.C15992@ActiveState.com>
Message-ID: <14757.40724.86552.609923@bitdiddle.concentric.net>

>>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:

  TM> Hey all,

  TM> I recently checked in the Monterey stuff (patch
  TM> http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101249&group_id=5470
  TM> ) but the checkin did not show up on python-checkins and the
  TM> comment and status change to "Closed" did not show up on
  TM> python-patches. My checkin was about a full day ago.

  TM> Is this a potential SourceForge bug? The delay can't be *that*
  TM> long.

Weird.  I haven't even received the message quoted above.  There's
something very weird going on.

I have not seen a checkin message for a while, though I have made a
few checkins myself.  It looks like the problem I'm seeing here is
with between python.org and beopen.com, because the messages are in
the archive.

The problem you are seeing is different.  The most recent checkin
message from you is dated Aug. 16.  Could it be a problem with your
local mail?  The message would be sent from your account.  Perhaps
there is more info. in your system's mail log.

Jeremy




From skip at mojam.com  Fri Aug 25 00:05:20 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 24 Aug 2000 17:05:20 -0500 (CDT)
Subject: [Python-Dev] Check your "Accepted" patches
Message-ID: <14757.39968.498536.643301@beluga.mojam.com>

There are 8 patches with status "Accepted".  They are assigned to akuchling,
bwarsaw, jhylton, fdrake, ping and prescod.  I had not been paying attention
to that category and then saw this in the Open Items of PEP 0200:

    Get all patches out of Accepted.

I checked and found one of mine there.

Skip




From trentm at ActiveState.com  Fri Aug 25 00:32:55 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 24 Aug 2000 15:32:55 -0700
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
In-Reply-To: <14757.40724.86552.609923@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 24, 2000 at 06:17:56PM -0400
References: <20000824104627.C15992@ActiveState.com> <14757.40724.86552.609923@bitdiddle.concentric.net>
Message-ID: <20000824153255.B27016@ActiveState.com>

On Thu, Aug 24, 2000 at 06:17:56PM -0400, Jeremy Hylton wrote:
> >>>>> "TM" == Trent Mick <trentm at ActiveState.com> writes:
>   TM> I recently checked in the Monterey stuff (patch
>   TM> http://sourceforge.net/patch/index.php?func=detailpatch&patch_id=101249&group_id=5470
>   TM> ) but the checkin did not show up on python-checkins and the
>   TM> comment and status change to "Closed" did not show up on
>   TM> python-patches. My checkin was about a full day ago.
> 
> I have not seen a checkin message for a while, though I have made a
> few checkins myself.  It looks like the problem I'm seeing here is
> with between python.org and beopen.com, because the messages are in
> the archive.
> 
> The problem you are seeing is different.  The most recent checkin
> message from you is dated Aug. 16.  Could it be a problem with your
> local mail?  The message would be sent from your account.  Perhaps

The cvs checkin message is made from my local machine?! Really? I thought
that would be on the server side. Our email *is* a little backed up here but
I don't think *that* backed up.

In any case, that does not explain why patches at python.org did not send a mail
regarding my update of the patch on SourceForge. *Two* emails have gone
astray here.

I am really not so curious that I want to hunt it down. Just a heads up for
people.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From jeremy at beopen.com  Fri Aug 25 00:44:27 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 24 Aug 2000 18:44:27 -0400 (EDT)
Subject: [Python-Dev] two tests fail
Message-ID: <14757.42315.528801.142803@bitdiddle.concentric.net>

After the augmented assignment checkin (yay!), I see two failing
tests: test_augassign and test_parser.  Do you see the same problem?

Jeremy



From thomas at xs4all.net  Fri Aug 25 00:50:35 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 00:50:35 +0200
Subject: [Python-Dev] Re: two tests fail
In-Reply-To: <14757.42315.528801.142803@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 24, 2000 at 06:44:27PM -0400
References: <14757.42315.528801.142803@bitdiddle.concentric.net>
Message-ID: <20000825005035.P4798@xs4all.nl>

On Thu, Aug 24, 2000 at 06:44:27PM -0400, Jeremy Hylton wrote:
> After the augmented assignment checkin (yay!), I see two failing
> tests: test_augassign and test_parser.  Do you see the same problem?

Hm, neither is failing for me, in a tree that has no differences from the
CVS tree according to CVS itself. I'll see if I can reproduce it by
using a different tree, just to be sure.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jeremy at beopen.com  Fri Aug 25 00:56:15 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 24 Aug 2000 18:56:15 -0400 (EDT)
Subject: [Python-Dev] Re: two tests fail
In-Reply-To: <20000825005035.P4798@xs4all.nl>
References: <14757.42315.528801.142803@bitdiddle.concentric.net>
	<20000825005035.P4798@xs4all.nl>
Message-ID: <14757.43023.497909.568824@bitdiddle.concentric.net>

Oops.  My mistake.  I hadn't rebuilt the parser.

Jeremy



From thomas at xs4all.net  Fri Aug 25 00:53:18 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 00:53:18 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib UserString.py,1.5,1.6
In-Reply-To: <200008242147.OAA05606@slayer.i.sourceforge.net>; from nowonder@users.sourceforge.net on Thu, Aug 24, 2000 at 02:47:36PM -0700
References: <200008242147.OAA05606@slayer.i.sourceforge.net>
Message-ID: <20000825005318.H7566@xs4all.nl>

On Thu, Aug 24, 2000 at 02:47:36PM -0700, Peter Schneider-Kamp wrote:
> Update of /cvsroot/python/python/dist/src/Lib
> In directory slayer.i.sourceforge.net:/tmp/cvs-serv5582
> 
> Modified Files:
> 	UserString.py 
> Log Message:
> 
> simple typo that makes regression test test_userstring fail

WTF ? Hmm. I was pretty damned sure I'd fixed that one. I saw it two
times, fixed it in two trees at least, but apparently not the one I committed
:P I'll get some sleep, soon :P

> ***************
> *** 56,60 ****
>           elif isinstance(other, StringType) or isinstance(other, UnicodeType):
>               self.data += other
> !         else
>               self.data += str(other)
>           return self
> --- 56,60 ----
>           elif isinstance(other, StringType) or isinstance(other, UnicodeType):
>               self.data += other
> !         else:
>               self.data += str(other)
>           return self


-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug 25 01:03:49 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 01:03:49 +0200
Subject: [Python-Dev] Re: two tests fail
In-Reply-To: <14757.43023.497909.568824@bitdiddle.concentric.net>; from jeremy@beopen.com on Thu, Aug 24, 2000 at 06:56:15PM -0400
References: <14757.42315.528801.142803@bitdiddle.concentric.net> <20000825005035.P4798@xs4all.nl> <14757.43023.497909.568824@bitdiddle.concentric.net>
Message-ID: <20000825010349.Q4798@xs4all.nl>

On Thu, Aug 24, 2000 at 06:56:15PM -0400, Jeremy Hylton wrote:

> Oops.  My mistake.  I hadn't rebuilt the parser.

Well, you were on to something, of course. The parsermodule will have to be
modified to accept augmented assignment as well. (Or at least, so I assume.)
The test just doesn't test that part yet ;-) Fred, do you want me to do
that? I'm not sure on the parsermodule internals, but maybe if you can give
me some pointers I can work it out.

(The same goes for Tools/compiler/compiler, by the way, which I think also
needs to be taught list comprehensions.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From ping at lfw.org  Fri Aug 25 01:38:02 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 24 Aug 2000 19:38:02 -0400 (EDT)
Subject: [Python-Dev] Re: Allow all assignment expressions after 'import
 something as'
In-Reply-To: <20000824121858.E7566@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10008241935250.1061-100000@skuld.lfw.org>

On Thu, 24 Aug 2000, Thomas Wouters wrote:
> >     import foo.bar          "import the package named foo and its
> >                              submodule bar, then put *foo* into the
> >                              current namespace"
> 
> Wrong. What it does is: import the package named foo and its submodule bar,
> and make it so you can access foo.bar via the name 'foo.bar'. That this has
> to put 'foo' in the local namespace is a side issue

I understand now.  Sorry for my thickheadedness.  Yes, when i look
at it as "please give this to me as foo.bar", it makes much more sense.
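
A minimal sketch of the behaviour being discussed (the throwaway
package built here is illustrative only; the names foo and bar come
from the thread above):

```python
import os, sys, tempfile

# Build a disposable package 'foo' containing a submodule 'bar'.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'foo'))
for name in ('__init__.py', 'bar.py'):
    open(os.path.join(root, 'foo', name), 'w').close()
sys.path.insert(0, root)

import foo.bar                        # requests the submodule...
assert foo.__name__ == 'foo'          # ...but the name bound is 'foo'
assert foo.bar.__name__ == 'foo.bar'  # the submodule is reached as foo.bar
```

So the statement makes `foo.bar` accessible under exactly that dotted
name; binding `foo` locally is the mechanism, not the goal.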

Apologies, Guido.  That's two brain-farts in a day or so.  :(


-- ?!ng




From fdrake at beopen.com  Fri Aug 25 01:36:54 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 24 Aug 2000 19:36:54 -0400 (EDT)
Subject: [Python-Dev] two tests fail
In-Reply-To: <14757.42315.528801.142803@bitdiddle.concentric.net>
References: <14757.42315.528801.142803@bitdiddle.concentric.net>
Message-ID: <14757.45462.717663.782865@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > After the augmented assignment checkin (yay!), I see two failing
 > tests: test_augassign and test_parser.  Do you see the same problem?

  I'll be taking care of the parser module update tonight (late) or
tomorrow morning.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From akuchlin at cnri.reston.va.us  Fri Aug 25 03:32:47 2000
From: akuchlin at cnri.reston.va.us (Andrew Kuchling)
Date: Thu, 24 Aug 2000 21:32:47 -0400
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules pyexpat.c,2.12,2.13
In-Reply-To: <200008242157.OAA06909@slayer.i.sourceforge.net>; from fdrake@users.sourceforge.net on Thu, Aug 24, 2000 at 02:57:46PM -0700
References: <200008242157.OAA06909@slayer.i.sourceforge.net>
Message-ID: <20000824213247.A2318@newcnri.cnri.reston.va.us>

On Thu, Aug 24, 2000 at 02:57:46PM -0700, Fred L. Drake wrote:
>Remove the Py_FatalError() from initpyexpat(); the Guido has decreed
>that this is not appropriate.

So what is going to catch errors while initializing a module?  Or is
PyErr_Occurred() called after a module's init*() function?

--amk



From MarkH at ActiveState.com  Fri Aug 25 03:56:10 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 25 Aug 2000 11:56:10 +1000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules pyexpat.c,2.12,2.13
In-Reply-To: <20000824213247.A2318@newcnri.cnri.reston.va.us>
Message-ID: <ECEPKNMJLHAPFFJHDOJBCEHADGAA.MarkH@ActiveState.com>

Andrew writes:

> On Thu, Aug 24, 2000 at 02:57:46PM -0700, Fred L. Drake wrote:
> >Remove the Py_FatalError() from initpyexpat(); the Guido has decreed
> >that this is not appropriate.
>
> So what is going to catch errors while initializing a module?  Or is
> PyErr_Occurred() called after a module's init*() function?

Yes!  All errors are handled correctly (as of somewhere in the 1.5 family,
I believe).

Note that Py_FatalError() is _evil_ - it can make your program die without
a chance to see any error message or other diagnostic.  It should be
avoided if at all possible.

Mark.




From guido at beopen.com  Fri Aug 25 06:11:54 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 23:11:54 -0500
Subject: [Python-Dev] sre and regexp behave badly under low-memory conditions
In-Reply-To: Your message of "Thu, 24 Aug 2000 23:35:38 +0200."
             <20000824213543.5D902D71F9@oratrix.oratrix.nl> 
References: <20000824213543.5D902D71F9@oratrix.oratrix.nl> 
Message-ID: <200008250411.XAA08797@cj20424-a.reston1.va.home.com>

> test_re wasn't so bad, the only problem was that it crashed with a
> "NULL return without an exception". test_regexp was worse, it crashed
> my machine.

That's regex, right?  regexp was the *really* old regular expression
module we once had.

Anyway, I don't care about regex, it's old.

The sre code needs to be robustified, but it's not a high priority for
me.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Aug 25 06:19:39 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 24 Aug 2000 23:19:39 -0500
Subject: [Python-Dev] checkins not showing on python-checkins and python-patches
In-Reply-To: Your message of "Thu, 24 Aug 2000 15:32:55 MST."
             <20000824153255.B27016@ActiveState.com> 
References: <20000824104627.C15992@ActiveState.com> <14757.40724.86552.609923@bitdiddle.concentric.net>  
            <20000824153255.B27016@ActiveState.com> 
Message-ID: <200008250419.XAA08826@cj20424-a.reston1.va.home.com>

> In any case, that does not explain why patches at python.org did not a mail
> regarding my update of the patch on SourceForge. *Two* emails have gone
> astray here.

This is compensated though by the patch and bug managers, which often
sends me two or three copies of the email for each change to an
entry.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From guido at beopen.com  Fri Aug 25 07:58:15 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 00:58:15 -0500
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: Your message of "Thu, 24 Aug 2000 14:41:55 EST."
Message-ID: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>

Here's a patch that Tim & I believe should solve the thread+fork
problem properly.  I'll try to explain it briefly.

I'm not checking this in yet because I need more eyeballs, and because
I don't actually have a test to prove that I've fixed the problem.
However, our theory is very hopeful.

(1) BACKGROUND: A Python lock may be released by a different thread
than the one that acquired it, and it may be acquired by the same thread
multiple times.  A pthread mutex must always be unlocked by the same
thread that locked it, and can't be locked more than once.  So, a
Python lock can't be built out of a simple pthread mutex; instead, a
Python lock is built out of a "locked" flag and a <condition variable,
mutex> pair.  The mutex is locked for at most a few cycles, to protect
the flag.  This design is Tim's (while still at KSR).

(2) PROBLEM: If you fork while another thread holds a mutex, that
mutex will never be released, because only the forking thread survives
in the child.  The LinuxThreads manual recommends using
pthread_atfork() to acquire all locks in locking order before the
fork, and release them afterwards.  A problem with Tim's design here
is that even if the forking thread has Python's global interpreter
lock, another thread trying to acquire the lock may still hold the
mutex at the time of the fork, causing it to be held forever in the
child.  Charles has posted an effective hack that allocates a new
global interpreter lock in the child, but this doesn't solve the
problem for other locks.

(3) BRAINWAVE: If we use a single mutex shared by all locks, instead
of a mutex per lock, we can lock this mutex around the fork and thus
prevent any other thread from locking it.  This is okay because, while
a condition variable always needs a mutex to go with it, there's no
rule that the same mutex can't be shared by many condition variables.
The code below implements this.

(4) MORE WORK: (a) The PyThread API also defines semaphores, which may
have a similar problem.  But I'm not aware of any use of these (I'm
not quite sure why semaphore support was added), so I haven't patched
these.  (b) The thread_pth.h file define locks in the same way; there
may be others too.  I haven't touched these.

(5) TESTING: Charles Waldman posted this code to reproduce the
problem.  Unfortunately I haven't had much success with it; it seems
to hang even when I apply Charles' patch.

    import thread
    import os, sys
    import time

    def doit(name):
	while 1:
	    if os.fork()==0:
		print name, 'forked', os.getpid()
		os._exit(0)
	    r = os.wait()

    for x in range(50):
	name = 't%s'%x
	print 'starting', name
	thread.start_new_thread(doit, (name,))

    time.sleep(300)

Here's the patch:

*** Python/thread_pthread.h	2000/08/23 21:33:05	2.29
--- Python/thread_pthread.h	2000/08/25 04:29:43
***************
*** 84,101 ****
   * and a <condition, mutex> pair.  In general, if the bit can be acquired
   * instantly, it is, else the pair is used to block the thread until the
   * bit is cleared.     9 May 1994 tim at ksr.com
   */
  
  typedef struct {
  	char             locked; /* 0=unlocked, 1=locked */
  	/* a <cond, mutex> pair to handle an acquire of a locked lock */
  	pthread_cond_t   lock_released;
- 	pthread_mutex_t  mut;
  } pthread_lock;
  
  #define CHECK_STATUS(name)  if (status != 0) { perror(name); error = 1; }
  
  /*
   * Initialization.
   */
  
--- 84,125 ----
   * and a <condition, mutex> pair.  In general, if the bit can be acquired
   * instantly, it is, else the pair is used to block the thread until the
   * bit is cleared.     9 May 1994 tim at ksr.com
+  *
+  * MODIFICATION: use a single mutex shared by all locks.
+  * This should make it easier to cope with fork() while threads exist.
+  * 24 Aug 2000 {guido,tpeters}@beopen.com
   */
  
  typedef struct {
  	char             locked; /* 0=unlocked, 1=locked */
  	/* a <cond, mutex> pair to handle an acquire of a locked lock */
  	pthread_cond_t   lock_released;
  } pthread_lock;
  
+ static pthread_mutex_t locking_mutex = PTHREAD_MUTEX_INITIALIZER;
+ 
  #define CHECK_STATUS(name)  if (status != 0) { perror(name); error = 1; }
  
  /*
+  * Callbacks for pthread_atfork().
+  */
+ 
+ static void prefork_callback()
+ {
+ 	pthread_mutex_lock(&locking_mutex);
+ }
+ 
+ static void parent_callback()
+ {
+ 	pthread_mutex_unlock(&locking_mutex);
+ }
+ 
+ static void child_callback()
+ {
+ 	pthread_mutex_unlock(&locking_mutex);
+ }
+ 
+ /*
   * Initialization.
   */
  
***************
*** 113,118 ****
--- 137,144 ----
  	pthread_t thread1;
  	pthread_create(&thread1, NULL, (void *) _noop, &dummy);
  	pthread_join(thread1, NULL);
+ 	/* XXX Is the following supported here? */
+ 	pthread_atfork(&prefork_callback, &parent_callback, &child_callback);
  }
  
  #else /* !_HAVE_BSDI */
***************
*** 123,128 ****
--- 149,156 ----
  #if defined(_AIX) && defined(__GNUC__)
  	pthread_init();
  #endif
+ 	/* XXX Is the following supported everywhere? */
+ 	pthread_atfork(&prefork_callback, &parent_callback, &child_callback);
  }
  
  #endif /* !_HAVE_BSDI */
***************
*** 260,269 ****
  	if (lock) {
  		lock->locked = 0;
  
- 		status = pthread_mutex_init(&lock->mut,
- 					    pthread_mutexattr_default);
- 		CHECK_STATUS("pthread_mutex_init");
- 
  		status = pthread_cond_init(&lock->lock_released,
  					   pthread_condattr_default);
  		CHECK_STATUS("pthread_cond_init");
--- 288,293 ----
***************
*** 286,294 ****
  
  	dprintf(("PyThread_free_lock(%p) called\n", lock));
  
- 	status = pthread_mutex_destroy( &thelock->mut );
- 	CHECK_STATUS("pthread_mutex_destroy");
- 
  	status = pthread_cond_destroy( &thelock->lock_released );
  	CHECK_STATUS("pthread_cond_destroy");
  
--- 310,315 ----
***************
*** 304,314 ****
  
  	dprintf(("PyThread_acquire_lock(%p, %d) called\n", lock, waitflag));
  
! 	status = pthread_mutex_lock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_lock[1]");
  	success = thelock->locked == 0;
  	if (success) thelock->locked = 1;
! 	status = pthread_mutex_unlock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_unlock[1]");
  
  	if ( !success && waitflag ) {
--- 325,335 ----
  
  	dprintf(("PyThread_acquire_lock(%p, %d) called\n", lock, waitflag));
  
! 	status = pthread_mutex_lock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_lock[1]");
  	success = thelock->locked == 0;
  	if (success) thelock->locked = 1;
! 	status = pthread_mutex_unlock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_unlock[1]");
  
  	if ( !success && waitflag ) {
***************
*** 316,330 ****
  
  		/* mut must be locked by me -- part of the condition
  		 * protocol */
! 		status = pthread_mutex_lock( &thelock->mut );
  		CHECK_STATUS("pthread_mutex_lock[2]");
  		while ( thelock->locked ) {
  			status = pthread_cond_wait(&thelock->lock_released,
! 						   &thelock->mut);
  			CHECK_STATUS("pthread_cond_wait");
  		}
  		thelock->locked = 1;
! 		status = pthread_mutex_unlock( &thelock->mut );
  		CHECK_STATUS("pthread_mutex_unlock[2]");
  		success = 1;
  	}
--- 337,351 ----
  
  		/* mut must be locked by me -- part of the condition
  		 * protocol */
! 		status = pthread_mutex_lock( &locking_mutex );
  		CHECK_STATUS("pthread_mutex_lock[2]");
  		while ( thelock->locked ) {
  			status = pthread_cond_wait(&thelock->lock_released,
! 						   &locking_mutex);
  			CHECK_STATUS("pthread_cond_wait");
  		}
  		thelock->locked = 1;
! 		status = pthread_mutex_unlock( &locking_mutex );
  		CHECK_STATUS("pthread_mutex_unlock[2]");
  		success = 1;
  	}
***************
*** 341,352 ****
  
  	dprintf(("PyThread_release_lock(%p) called\n", lock));
  
! 	status = pthread_mutex_lock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_lock[3]");
  
  	thelock->locked = 0;
  
! 	status = pthread_mutex_unlock( &thelock->mut );
  	CHECK_STATUS("pthread_mutex_unlock[3]");
  
  	/* wake up someone (anyone, if any) waiting on the lock */
--- 362,373 ----
  
  	dprintf(("PyThread_release_lock(%p) called\n", lock));
  
! 	status = pthread_mutex_lock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_lock[3]");
  
  	thelock->locked = 0;
  
! 	status = pthread_mutex_unlock( &locking_mutex );
  	CHECK_STATUS("pthread_mutex_unlock[3]");
  
  	/* wake up someone (anyone, if any) waiting on the lock */

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From DavidA at ActiveState.com  Fri Aug 25 07:07:02 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Thu, 24 Aug 2000 22:07:02 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.WNT.4.21.0008242203060.1060-100000@cr469175-a>

On Fri, 25 Aug 2000, Guido van Rossum wrote:

> (4) MORE WORK: (a) The PyThread API also defines semaphores, which may
> have a similar problem.  But I'm not aware of any use of these (I'm
> not quite sure why semaphore support was added), so I haven't patched
> these. 

IIRC, we had a discussion a while back about semaphore support in the
PyThread API and agreed that they were not implemented on enough platforms
to be a useful part of the PyThread API.  I can't find it right now, alas.

--david




From MarkH at ActiveState.com  Fri Aug 25 07:16:56 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 25 Aug 2000 15:16:56 +1000
Subject: [Python-Dev] Strange compiler crash in debug builds.
Message-ID: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>

Something strange is happening in my Windows Debug builds (fresh CVS tree)

If you remove "urllib.pyc", and execute 'python_d -c "import urllib"',
Python dies after printing the message:

FATAL: node type 305, required 311

It also happens for a number of other files (compileall.py will show you
:-)

Further analysis shows this deep in the compiler, and triggered by this
macro in node.h:

---
/* Assert that the type of a node is what we expect */
#ifndef Py_DEBUG
#define REQ(n, type) { /*pass*/ ; }
#else
#define REQ(n, type) \
	{ if (TYPE(n) != (type)) { \
	    fprintf(stderr, "FATAL: node type %d, required %d\n", \
		    TYPE(n), type); \
	    abort(); \
	} }
#endif
---

Is this pointing to a deeper problem, or is the assertion incorrect?

Does the Linux community ever run with Py_DEBUG defined?  I couldn't even
find a simple way to turn it on to confirm it also exists on Linux...

Any ideas?

Mark.




From thomas at xs4all.net  Fri Aug 25 07:23:52 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 07:23:52 +0200
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Fri, Aug 25, 2000 at 12:58:15AM -0500
References: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>
Message-ID: <20000825072351.I7566@xs4all.nl>

On Fri, Aug 25, 2000 at 12:58:15AM -0500, Guido van Rossum wrote:

> + 	/* XXX Is the following supported here? */
> + 	pthread_atfork(&prefork_callback, &parent_callback, &child_callback);
>   }
>   
>   #else /* !_HAVE_BSDI */

To answer that question: yes. BSDI from 3.0 onward has pthread_atfork(),
though threads remain unusable until BSDI 4.1 (because of a bug in libc
where pause() stops listening to signals when compiling for threads.) I
haven't actually tested this patch yet, just gave it a once-over ;) I will
test it on all types of machines we have, though.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Fri Aug 25 07:24:12 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 01:24:12 -0400 (EDT)
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
Message-ID: <14758.764.705006.937500@cj42289-a.reston1.va.home.com>

Mark Hammond writes:
 > Is this pointing to a deeper problem, or is the assertion incorrect?

  I expect that there's an incorrect assertion that was fine until one
of the recent grammar changes; the augmented assignment patch is
highly suspect given that it's the most recent.  Look for problems
handling expr_stmt nodes.

 > Does the Linux community ever run with Py_DEBUG defined?  I couldn't even
 > find a simple way to turn it on to confirm it also exists on Linux...

  I don't think I've ever used it, either on Linux or any other Unix.
We should definitely have an easy way to turn it on!  Probably at
configure time would be good.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Fri Aug 25 07:29:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 07:29:53 +0200
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>; from MarkH@ActiveState.com on Fri, Aug 25, 2000 at 03:16:56PM +1000
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
Message-ID: <20000825072953.J7566@xs4all.nl>

On Fri, Aug 25, 2000 at 03:16:56PM +1000, Mark Hammond wrote:

> Something strange is happening in my Windows Debug builds (fresh CVS tree)

> If you remove "urllib.pyc", and execute 'python_d -c "import urllib"',
> Python dies after printing the message:
> 
> FATAL: node type 305, required 311
> 
> It also happens for a number of other files (compileall.py will show you
> :-)

> Further analysis shows this deep in the compiler, and triggered by this
> macro in node.h:

> #define REQ(n, type) \
> 	{ if (TYPE(n) != (type)) { \
> 	    fprintf(stderr, "FATAL: node type %d, required %d\n", \
> 		    TYPE(n), type); \
> 	    abort(); \
> 	} }

> Is this pointing to a deeper problem, or is the assertion incorrect?

At first sight, I would say "yes, the assertion is wrong". That doesn't mean
it shouldn't be fixed ! It's probably caused by augmented assignment or list
comprehensions, though I have used both with Py_DEBUG enabled a few times,
so I don't know for sure. I'm compiling with debug right now, to inspect
this, though.

Another thing that might cause it is an out-of-date graminit.h file
somewhere. The one in the CVS tree is up to date, but maybe you have a copy
stashed somewhere ?

> Does the Linux community ever run with Py_DEBUG defined?  I couldn't even
> find a simple way to turn it on to confirm it also exists on Linux...

There's undoubtedly a good way, but I usually just chicken out and add
'#define Py_DEBUG 1' at the bottom of config.h ;) That also makes sure I
don't keep it around too long, as config.h gets regenerated often enough :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug 25 07:44:41 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 07:44:41 +0200
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <20000825072953.J7566@xs4all.nl>; from thomas@xs4all.net on Fri, Aug 25, 2000 at 07:29:53AM +0200
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com> <20000825072953.J7566@xs4all.nl>
Message-ID: <20000825074440.K7566@xs4all.nl>

On Fri, Aug 25, 2000 at 07:29:53AM +0200, Thomas Wouters wrote:
> On Fri, Aug 25, 2000 at 03:16:56PM +1000, Mark Hammond wrote:

> > FATAL: node type 305, required 311

> > Is this pointing to a deeper problem, or is the assertion incorrect?
> 
> At first sight, I would say "yes, the assertion is wrong". That doesn't mean
> it shouldn't be fixed ! It's probably caused by augmented assignment or list
> comprehensions, 

Actually, it was a combination of removing UNPACK_LIST and adding
list comprehensions. I just checked in a fix for this. Can you confirm that
this fixes it for the windows build, too ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Fri Aug 25 07:44:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 01:44:11 -0400
Subject: [Python-Dev] Re: threading and forking and 2.0 (patch #101226)
In-Reply-To: <200008250558.AAA29516@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMENNHBAA.tim_one@email.msn.com>

[Guido]
> ...
> (1) BACKGROUND: A Python lock may be released by a different thread
> than who aqcuired it, and it may be acquired by the same thread
> multiple times.  A pthread mutex must always be unlocked by the same
> thread that locked it, and can't be locked more than once.

The business about "multiple times" may be misleading, as it makes Windows
geeks think of reentrant locks.  The Python lock is not reentrant.  Instead,
it's perfectly OK for a thread that has acquired a Python lock to *try* to
acquire it again (but is not OK for a thread that has locked a pthread mutex
to try to lock it again):  the acquire attempt simply blocks until *another*
thread releases the Python lock.  By "Python lock" here I mean at the Python
C API level, and as exposed by the thread module; the threading module
exposes fancier locks (including reentrant locks).
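Tim's distinction can be sketched at the Python level with the modern `threading` module (a sketch, not the low-level C API the post discusses; `threading.Lock` is the fancier-module spelling of the same non-reentrant lock):

```python
import threading

# A Python-level lock is not reentrant, but any thread may release it --
# unlike a pthread mutex, which must be unlocked by the locking thread.
lock = threading.Lock()
lock.acquire()

# A non-blocking re-acquire by the owning thread simply fails; a blocking
# one would wait (not error) until *another* thread releases the lock.
assert lock.acquire(False) is False

# Another thread releases a lock it never acquired -- perfectly legal here.
t = threading.Thread(target=lock.release)
t.start()
lock.acquire()          # blocks until the releasing thread has run
t.join()
lock.release()
```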

> So, a Python lock can't be built out of a simple pthread mutex; instead,
> a Python lock is built out of a "locked" flag and a <condition variable,
> mutex> pair.  The mutex is locked for at most a few cycles, to protect
> the flag.  This design is Tim's (while still at KSR).

At that time, a pthread mutex was generally implemented as a pure spin lock,
so it was important to hold a pthread mutex for as short a span as possible
(and, indeed, the code never holds a pthread mutex for longer than across 2
simple C stmts).

> ...
> (3) BRAINWAVE: If we use a single mutex shared by all locks, instead
> of a mutex per lock, we can lock this mutex around the fork and thus
> prevent any other thread from locking it.  This is okay because, while
> a condition variable always needs a mutex to go with it, there's no
> rule that the same mutex can't be shared by many condition variables.
> The code below implements this.

Before people panic <wink>, note that this is "an issue" only for those
thread_xxx.h implementations such that fork() is supported *and* the child
process nukes threads in the child, leaving its mutexes and the data they
protect in an insane state.  They're the ones creating problems, so they're
the ones that pay.
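The brainwave can be sketched in modern Python, which exposes an analogue of the C-level pthread_atfork() call via os.register_at_fork() (a hypothetical illustration of the scheme, not the actual patch; Unix-only because of fork()):

```python
import os
import threading

# One mutex shared by all locks.  Hold it across fork() so the child
# never inherits it locked by a thread that no longer exists there.
locking_mutex = threading.Lock()

os.register_at_fork(
    before=locking_mutex.acquire,          # quiesce all lock traffic
    after_in_parent=locking_mutex.release,
    after_in_child=locking_mutex.release,  # child starts with a sane mutex
)

pid = os.fork()
if pid == 0:
    # In the child the shared mutex is unlocked and usable, not wedged.
    with locking_mutex:
        pass
    os._exit(0)
os.waitpid(pid, 0)
# The parent can also keep using the mutex normally.
with locking_mutex:
    pass
```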

> (4) MORE WORK: (a) The PyThread API also defines semaphores, which may
> have a similar problem.  But I'm not aware of any use of these (I'm
> not quite sure why semaphore support was added), so I haven't patched
> these.

I'm almost certain we all agreed (spurred by David Ascher) to get rid of the
semaphore implementations a while back.

> (b) The thread_pth.h file defines locks in the same way; there
> may be others too.  I haven't touched these.

(c) While the scheme protects mutexes from going nuts in the child, that
doesn't necessarily imply that the data mutexes *protect* won't go nuts.
For example, this *may* not be enough to prevent insanity in import.c:  if
another thread is doing imports at the time a fork() occurs,
import_lock_level could be left at an arbitrarily high value in import.c.
But the thread doing the import has gone away in the child, so can't restore
import_lock_level to a sane value there.  I'm not convinced that matters in
this specific case, just saying we've got some tedious headwork to review
all the cases.

> (5) TESTING: Charles Waldman posted this code to reproduce the
> problem.  Unfortunately I haven't had much success with it; it seems
> to hang even when I apply Charles' patch.

What about when you apply *your* patch?

>     import thread
>     import os, sys
>     import time
>
>     def doit(name):
> 	while 1:
> 	    if os.fork()==0:
> 		print name, 'forked', os.getpid()
> 		os._exit(0)
> 	    r = os.wait()
>
>     for x in range(50):
> 	name = 't%s'%x
> 	print 'starting', name
> 	thread.start_new_thread(doit, (name,))
>
>     time.sleep(300)
>
> Here's the patch:

> ...
> + static pthread_mutex_t locking_mutex = PTHREAD_MUTEX_INITIALIZER;

Anyone know whether this gimmick is supported by all pthreads
implementations?

> ...
> + 	/* XXX Is the following supported here? */
> + 	pthread_atfork(&prefork_callback, &parent_callback,
> &child_callback);

I expect we need some autoconf stuff for that, right?

Thanks for writing this up!  Even more thanks for thinking of it <wink>.





From MarkH at ActiveState.com  Fri Aug 25 07:55:42 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Fri, 25 Aug 2000 15:55:42 +1000
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <20000825074440.K7566@xs4all.nl>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEHJDGAA.MarkH@ActiveState.com>

> Actually, it was a combination of removing UNPACK_LIST and adding
> list comprehensions. I just checked in a fix for this. Can you 
> confirm that
> this fixes it for the windows build, too ?

It does - thank you!

Mark.




From tim_one at email.msn.com  Fri Aug 25 10:08:23 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 04:08:23 -0400
Subject: [Python-Dev] RE: Passwords after CVS commands
In-Reply-To: <PGECLPOBGNBNKHNAGIJHAEAECEAA.andy@reportlab.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEOBHBAA.tim_one@email.msn.com>

The latest version of Andy Robinson's excellent instructions for setting up
a cmdline CVS using SSH under Windows are now available:

    http://python.sourceforge.net/winssh.txt

This is also linked to from the Python-at-SourceForge FAQ:

    http://python.sourceforge.net/sf-faq.html

where it replaces the former "let's try to pretend Windows is Unix(tm)"
mish-mash.  Riaan Booysen cracked the secret of how to get the Windows
ssh-keygen to actually generate keys (ha!  don't think I can't hear you
Unix(tm) weenies laughing <wink>), and that's the main change from the last
version of these instructions I posted here.  I added a lot of words to
Riaan's, admonishing you not to leave the passphrase empty, but so
unconvincingly I bet you won't heed my professional advice.

and-not-revealing-whether-i-did-ly y'rs  - tim





From thomas at xs4all.net  Fri Aug 25 13:16:20 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 13:16:20 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0203.txt,1.11,1.12
In-Reply-To: <200008251111.EAA13270@slayer.i.sourceforge.net>; from twouters@users.sourceforge.net on Fri, Aug 25, 2000 at 04:11:31AM -0700
References: <200008251111.EAA13270@slayer.i.sourceforge.net>
Message-ID: <20000825131620.B16377@xs4all.nl>

On Fri, Aug 25, 2000 at 04:11:31AM -0700, Thomas Wouters wrote:

> !     [XXX so I am accepting this, but I'm a bit worried about the
> !     argument coercion.  For x+=y, if x supports augmented assignment,
> !     y should only be cast to x's type, not the other way around!]

Oh, note that I chose not to do *any* coercion, if x supports the in-place
operation. I'm not sure how valuable coercion would be, here, at least not
in its current form. (Isn't coercion mostly used by integer types ? And
aren't they immutable ? If an in-place method wants to have its argument
coerced, it should do so itself, just like with direct method calls.)
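Thomas's point — that an in-place method can handle its own argument conversion, just like any direct method call — can be sketched with a toy mutable type (`Vec` is purely illustrative):

```python
class Vec:
    """Toy mutable type whose __iadd__ does its own argument handling."""
    def __init__(self, data):
        self.data = list(data)
    def __iadd__(self, other):
        # Convert the right-hand side ourselves, rather than relying on
        # interpreter-level coercion before the call.
        self.data.extend(other.data if isinstance(other, Vec) else other)
        return self

v = Vec([1, 2])
vid = id(v)
v += [3, 4]     # in-place: mutates v and rebinds the very same object
assert id(v) == vid
assert v.data == [1, 2, 3, 4]
```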

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Fri Aug 25 14:04:27 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 14:04:27 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
Message-ID: <39A660CB.7661E20E@lemburg.com>

I've asked this question before: when are we going to see
comp.lang.python.announce back online ?

I know that everyone is busy with getting the betas ready,
but looking at www.python.org I find that the "latest"
special announcement is dated 22-Mar-2000. People will get
the false idea that Python isn't moving anywhere... at least
not in the spirit of OSS' "release early and often".

Could someone please summarize what needs to be done to
post a message to comp.lang.python.announce without taking
the path via the official (currently defunct) moderator ?

I've had a look at the c.l.p.a postings and the only special
header they include is the "Approved: fleck at informatik.uni-bonn.de"
header.

If this is all it takes to post to a moderated newsgroup,
fixing Mailman to do the trick should be really simple.
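The single-header observation above can be sketched with the modern email package (all addresses below are placeholders, not the real moderator):

```python
from email.message import EmailMessage

# A post to a moderated newsgroup only needs an Approved: header added
# by (or on behalf of) the moderator; the rest is an ordinary message.
msg = EmailMessage()
msg["From"] = "announce@example.org"            # placeholder sender
msg["Newsgroups"] = "comp.lang.python.announce"
msg["Subject"] = "Python 2.0beta1 released"
msg["Approved"] = "moderator@example.org"       # placeholder moderator
msg.set_content("Release announcement body goes here.")

assert "Approved" in msg
```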

I'm willing to help here to get this done *before* the Python
2.0beta1 announcement.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Fri Aug 25 14:14:20 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 14:14:20 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: <39A660CB.7661E20E@lemburg.com>; from mal@lemburg.com on Fri, Aug 25, 2000 at 02:04:27PM +0200
References: <39A660CB.7661E20E@lemburg.com>
Message-ID: <20000825141420.C16377@xs4all.nl>

On Fri, Aug 25, 2000 at 02:04:27PM +0200, M.-A. Lemburg wrote:

> I've asked this question before: when are we going to see
> comp.lang.python.announce back online ?

Barry is working on this, by modifying Mailman to play moderator (via the
normal list-admin's post-approval mechanism.) As I'm sure he'll tell you
himself, when he wakes up ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From just at letterror.com  Fri Aug 25 15:25:02 2000
From: just at letterror.com (Just van Rossum)
Date: Fri, 25 Aug 2000 14:25:02 +0100
Subject: [Python-Dev] (214)
Message-ID: <l03102805b5cc22d9c375@[193.78.237.177]>

(Just to make sure you guys know; there's currently a thread in c.l.py
about the new 2.0 features. Not a *single* person stood up to defend PEP
214: no one seems to like it.)

Just





From mal at lemburg.com  Fri Aug 25 14:17:41 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 14:17:41 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com> <20000825141420.C16377@xs4all.nl>
Message-ID: <39A663E5.A1E85044@lemburg.com>

Thomas Wouters wrote:
> 
> On Fri, Aug 25, 2000 at 02:04:27PM +0200, M.-A. Lemburg wrote:
> 
> > I've asked this question before: when are we going to see
> > comp.lang.python.announce back online ?
> 
> Barry is working on this, by modifying Mailman to play moderator (via the
> normal list-admin's post-approval mechanism.) As I'm sure he'll tell you
> himself, when he wakes up ;)

This sounds like an awful lot of work... wouldn't a quick hack
as an intermediate solution suffice for the moment? (It needn't
even go into any public Mailman release -- just the Mailman
installation at python.org which handles the announcement
list.)

Ok, I'll wait for Barry to wake up ;-) ... <ringring>
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Fri Aug 25 15:30:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 08:30:40 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0203.txt,1.11,1.12
In-Reply-To: Your message of "Fri, 25 Aug 2000 13:16:20 +0200."
             <20000825131620.B16377@xs4all.nl> 
References: <200008251111.EAA13270@slayer.i.sourceforge.net>  
            <20000825131620.B16377@xs4all.nl> 
Message-ID: <200008251330.IAA19481@cj20424-a.reston1.va.home.com>

> Oh, note that I chose not to do *any* coercion, if x supports the in-place
> operation. I'm not sure how valuable coercion would be, here, at least not
> in its current form. (Isn't coercion mostly used by integer types ? And
> aren't they immutable ? If an in-place method wants to have its argument
> coerced, it should do so itself, just like with direct method calls.)

All agreed!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Fri Aug 25 15:34:44 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 08:34:44 -0500
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: Your message of "Fri, 25 Aug 2000 14:04:27 +0200."
             <39A660CB.7661E20E@lemburg.com> 
References: <39A660CB.7661E20E@lemburg.com> 
Message-ID: <200008251334.IAA19600@cj20424-a.reston1.va.home.com>

> I've asked this question before: when are we going to see
> comp.lang.python.announce back online ?
> 
> I know that everyone is busy with getting the betas ready,
> but looking at www.python.org I find that the "latest"
> special announcement is dated 22-Mar-2000. People will get
> the false idea that Python isn't moving anywhere... at least
> not in the spirit of OSS' "release early and often".
> 
> Could someone please summarize what needs to be done to
> post a message to comp.lang.python.announce without taking
> the path via the official (currently defunct) moderator ?
> 
> I've had a look at the c.l.p.a postings and the only special
> header they include is the "Approved: fleck at informatik.uni-bonn.de"
> header.
> 
> If this is all it takes to post to a moderated newsgroup,
> fixing Mailman to do the trick should be really simple.
> 
> I'm willing to help here to get this done *before* the Python
> 2.0beta1 announcement.

Coincidence!  Barry just wrote the necessary hacks that allow a
Mailman list to be used to moderate a newsgroup, and installed them in
python.org.  He's testing the setup today and I expect that we'll be
able to solicit for moderators tonight!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Fri Aug 25 14:47:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 14:47:06 +0200
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com> <200008251334.IAA19600@cj20424-a.reston1.va.home.com>
Message-ID: <39A66ACA.F638215A@lemburg.com>

Guido van Rossum wrote:
> 
> > I've asked this question before: when are we going to see
> > comp.lang.python.announce back online ?
> >
> > I know that everyone is busy with getting the betas ready,
> > but looking at www.python.org I find that the "latest"
> > special announcement is dated 22-Mar-2000. People will get
> > the false idea that Python isn't moving anywhere... at least
> > not in the spirit of OSS' "release early and often".
> >
> > Could someone please summarize what needs to be done to
> > post a message to comp.lang.python.announce without taking
> > the path via the official (currently defunct) moderator ?
> >
> > I've had a look at the c.l.p.a postings and the only special
> > header they include is the "Approved: fleck at informatik.uni-bonn.de"
> > header.
> >
> > If this is all it takes to post to a moderated newsgroup,
> > fixing Mailman to do the trick should be really simple.
> >
> > I'm willing to help here to get this done *before* the Python
> > 2.0beta1 announcement.
> 
> Coincidence!  Barry just wrote the necessary hacks that allow a
> Mailman list to be used to moderate a newsgroup, and installed them in
> python.org.  He's testing the setup today and I expect that we'll be
> able to solicit for moderators tonight!

Way cool :-) Thanks.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Fri Aug 25 15:17:17 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 09:17:17 -0400 (EDT)
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
References: <ECEPKNMJLHAPFFJHDOJBMEHHDGAA.MarkH@ActiveState.com>
Message-ID: <14758.29149.992343.502526@bitdiddle.concentric.net>

>>>>> "MH" == Mark Hammond <MarkH at ActiveState.com> writes:

  MH> Does the Linux community ever run with Py_DEBUG defined?  I
  MH> couldn't even find a simple way to turn it on to confirm it also
  MH> exists on Linux...

I build a separate version of Python using make OPT="-Wall -DPy_DEBUG"

On Linux, the sre test fails.  Do you see the same problem on Windows?

Jeremy



From tim_one at email.msn.com  Fri Aug 25 15:24:40 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 09:24:40 -0400
Subject: [Python-Dev] (214)
In-Reply-To: <l03102805b5cc22d9c375@[193.78.237.177]>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>

[Just van Rossum]
> (Just to make sure you guys know; there's currently a thread in c.l.py
> about the new 2.0 features. Not a *single* person stood up to defend
> PEP 214: no one seems to like it.)

But that's not true!  I defended it <wink>.  Alas (or "thank God!",
depending on how you look at it), I sent my "In praise of" post to the
mailing list and apparently the list->news gateway dropped it on the floor.

It most reminds me of the introduction of class.__private names.  Except I
don't think *anyone* was a fan of those besides your brother (I was neutral,
but we had a long & quite fun Devil's Advocate debate anyway), and the
opposition was far more strident than it's yet gotten on PEP 214.  I liked
__private names a lot after I used them, and, as I said in my unseen post,
having used the new print gimmick several times "for real" now I don't ever
want to go back.

The people most opposed seem to be those who worked hard to learn about
sys.__stdout__ and exactly why they need a try/finally block <0.9 wink>.
Some of the Python-Dev'ers have objected too, but much more quietly --
principled objections always get lost in the noise.

doubting-that-python's-future-hangs-in-the-balance-ly y'rs  - tim





From moshez at math.huji.ac.il  Fri Aug 25 15:48:26 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Fri, 25 Aug 2000 16:48:26 +0300 (IDT)
Subject: [Python-Dev] Tasks
Message-ID: <Pine.GSO.4.10.10008251642490.12206-100000@sundial>

This is a summary of problems I found with the task page:

Tasks which I was sure were complete
------------------------------------
17336 -- Add augmented assignments -- marked 80%. Thomas?
17346 -- Add poll() to selectmodule -- marked 50%. Andrew?

Duplicate tasks
---------------
17923 seems to be a duplicate of 17922

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From fdrake at beopen.com  Fri Aug 25 15:51:14 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 09:51:14 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: <200008251344.GAA16623@slayer.i.sourceforge.net>
References: <200008251344.GAA16623@slayer.i.sourceforge.net>
Message-ID: <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > + Accepted and in progress
...
 > +     * Support for opcode arguments > 2**16 - Charles Waldman
 > +       SF Patch 100893

  I checked this in 23 Aug.

 > +     * Range literals - Thomas Wouters
 > +       SF Patch 100902

  I thought this was done as well.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Fri Aug 25 15:53:34 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 15:53:34 +0200
Subject: [Python-Dev] Tasks
In-Reply-To: <Pine.GSO.4.10.10008251642490.12206-100000@sundial>; from moshez@math.huji.ac.il on Fri, Aug 25, 2000 at 04:48:26PM +0300
References: <Pine.GSO.4.10.10008251642490.12206-100000@sundial>
Message-ID: <20000825155334.D16377@xs4all.nl>

On Fri, Aug 25, 2000 at 04:48:26PM +0300, Moshe Zadka wrote:

> Tasks which I was sure were complete
> ------------------------------------
> 17336 -- Add augmented assignments -- marked 80%. Thomas?

It isn't complete. It's missing documentation. I'm done with meetings today
(*yay!*) so I'm in the process of updating all that, as well as working on
it :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Fri Aug 25 15:57:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 15:57:53 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Fri, Aug 25, 2000 at 09:51:14AM -0400
References: <200008251344.GAA16623@slayer.i.sourceforge.net> <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>
Message-ID: <20000825155752.E16377@xs4all.nl>

On Fri, Aug 25, 2000 at 09:51:14AM -0400, Fred L. Drake, Jr. wrote:
>  > +     * Range literals - Thomas Wouters
>  > +       SF Patch 100902

>   I thought this was done as well.

No, it just hasn't been touched in a while :) I need to finish up the PEP
(move the Open Issues to "BDFL Pronouncements", and include said
pronouncements) and sync the patch with the CVS tree. Oh, and it needs to be
accepted, too ;) Tim claims he's going to review it.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jeremy at beopen.com  Fri Aug 25 16:03:16 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 10:03:16 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: <14758.31186.670323.159875@cj42289-a.reston1.va.home.com>
References: <200008251344.GAA16623@slayer.i.sourceforge.net>
	<14758.31186.670323.159875@cj42289-a.reston1.va.home.com>
Message-ID: <14758.31908.552647.739111@bitdiddle.concentric.net>

>>>>> "FLD" == Fred L Drake, <fdrake at beopen.com> writes:

  FLD> Jeremy Hylton writes:
  >> + Accepted and in progress
  FLD> ...
  >> + * Support for opcode arguments > 2**16 - Charles Waldman
  >> + SF Patch 100893

  FLD>   I checked this in 23 Aug.

Ok.

  >> + * Range literals - Thomas Wouters
  >> + SF Patch 100902

  FLD>   I thought this was done as well.

There's still an open patch for it.

Jeremy



From mal at lemburg.com  Fri Aug 25 16:06:57 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 16:06:57 +0200
Subject: [Python-Dev] (214)
References: <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>
Message-ID: <39A67D81.FD56F2C7@lemburg.com>

Tim Peters wrote:
> 
> [Just van Rossum]
> > (Just to make sure you guys know; there's currently a thread in c.l.py
> > about the new 2.0 features. Not a *single* person stood up to defend
> > PEP 214: no one seems to like it.)
> 
> But that's not true!  I defended it <wink>. 

Count me in on that one too... it's just great for adding a few
quick debugging prints into the program.

The only thing I find non-Pythonesque is that an operator
is used. I would have opted for something like:

	print on <stream>, x, y, z

instead of

	print >> <stream>, x, y, z

But I really don't mind since I don't use "print" in production
code for anything other than debugging anyway :-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Fri Aug 25 16:26:15 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 10:26:15 -0400 (EDT)
Subject: [Python-Dev] compiling with SSL support on Windows
Message-ID: <14758.33287.507315.396536@bitdiddle.concentric.net>

https://sourceforge.net/bugs/?func=detailbug&bug_id=110683&group_id=5470

We have a bug report about compilation problems in the socketmodule on
Windows when using SSL support.  Is there any Windows user with
OpenSSL who can look into this problem?

Jeremy



From guido at beopen.com  Fri Aug 25 17:24:03 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 10:24:03 -0500
Subject: [Python-Dev] (214)
In-Reply-To: Your message of "Fri, 25 Aug 2000 09:24:40 -0400."
             <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> 
Message-ID: <200008251524.KAA19935@cj20424-a.reston1.va.home.com>

I've just posted a long response to the whole thread in c.l.py, and
added the essence (a long new section titled "More Justification by
the BDFL") of it to the PEP.  See
http://python.sourceforge.net/peps/pep-0214.html

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)
	



From guido at beopen.com  Fri Aug 25 17:32:57 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 10:32:57 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/peps pep-0200.txt,1.28,1.29
In-Reply-To: Your message of "Fri, 25 Aug 2000 09:51:14 -0400."
             <14758.31186.670323.159875@cj42289-a.reston1.va.home.com> 
References: <200008251344.GAA16623@slayer.i.sourceforge.net>  
            <14758.31186.670323.159875@cj42289-a.reston1.va.home.com> 
Message-ID: <200008251532.KAA20007@cj20424-a.reston1.va.home.com>

>  > +     * Range literals - Thomas Wouters
>  > +       SF Patch 100902
> 
>   I thought this was done as well.

No:

$ ./python
Python 2.0b1 (#79, Aug 25 2000, 08:31:47)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
>>> [1:10]
  File "<stdin>", line 1
    [1:10]
      ^
SyntaxError: invalid syntax
>>> 

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jack at oratrix.nl  Fri Aug 25 16:48:24 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Fri, 25 Aug 2000 16:48:24 +0200
Subject: [Python-Dev] sre and regex behave badly under low-memory conditions 
In-Reply-To: Message by Guido van Rossum <guido@beopen.com> ,
	     Thu, 24 Aug 2000 23:11:54 -0500 , <200008250411.XAA08797@cj20424-a.reston1.va.home.com> 
Message-ID: <20000825144829.CB29FD71F9@oratrix.oratrix.nl>

Recently, Guido van Rossum <guido at beopen.com> said:
> > test_re wasn't so bad, the only problem was that it crashed with a
> > "NULL return without an exception". test_regexp was worse, it crashed
> > my machine.
> 
> That's regex, right?  regexp was the *really* old regular expression
> module we once had.
> 
> Anyway, I don't care about regex, it's old.
> 
> The sre code needs to be robustified, but it's not a high priority for
> me.

Ok, fine with me.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 



From mal at lemburg.com  Fri Aug 25 17:05:38 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 17:05:38 +0200
Subject: [Python-Dev] [PEP 224] Attribute Docstrings
Message-ID: <39A68B42.4E3F8A3D@lemburg.com>

An HTML version of the attached can be viewed at

    http://python.sourceforge.net/peps/pep-0224.html

Even though the implementation won't go into Python 2.0, it
is worthwhile discussing this now, since adding these attribute
docstrings to existing code already works: Python simply ignores
them. What remains is figuring out a way to make use of them and
this is what the proposal is all about...

--

PEP: 224
Title: Attribute Docstrings
Version: $Revision: 1.2 $
Author: mal at lemburg.com (Marc-Andre Lemburg)
Status: Draft
Type: Standards Track
Python-Version: 2.1
Created: 23-Aug-2000
Post-History:


Introduction

    This PEP describes the "attribute docstring" proposal for Python
    2.0.  This PEP tracks the status and ownership of this feature.
    It contains a description of the feature and outlines changes
    necessary to support the feature.  The CVS revision history of
    this file contains the definitive historical record.


Rationale

    This PEP proposes a small addition to the way Python currently
    handles docstrings embedded in Python code.

    Python currently only handles the case of docstrings which appear
    directly after a class definition, a function definition or as
    first string literal in a module.  The string literals are added
    to the objects in question under the __doc__ attribute and are
    from then on available for introspection tools which can extract
    the contained information for help, debugging and documentation
    purposes.

    Docstrings appearing in locations other than the ones mentioned
    are simply ignored and don't result in any code generation.

    Here is an example:

        class C:
            "class C doc-string"

            a = 1
            "attribute C.a doc-string (1)"

            b = 2
            "attribute C.b doc-string (2)"

    The docstrings (1) and (2) are currently being ignored by the
    Python byte code compiler, but could obviously be put to good use
    for documenting the named assignments that precede them.
    
    This PEP proposes to put these remaining cases to use as well, by
    defining semantics for adding their content to the objects in which
    they appear, under newly generated attribute names.

    The original idea behind this approach, which also inspired the
    above example, was to enable inline documentation of class
    attributes, which can currently only be documented in the class's
    docstring or in comments, which are not available for
    introspection.


Implementation

    Docstrings are handled by the byte code compiler as expressions.
    The current implementation special cases the few locations
    mentioned above to make use of these expressions, but otherwise
    ignores the strings completely.

    To enable use of these docstrings for documenting named
    assignments (which is the natural way of defining e.g. class
    attributes), the compiler will have to keep track of the last
    assigned name and then use this name to assign the content of the
    docstring to an attribute of the containing object by means of
    storing it as a constant which is then added to the object's
    namespace at object construction time.

    In order to preserve features like inheritance and hiding of
    Python's special attributes (ones with leading and trailing double
    underscores), a special name mangling has to be applied which
    uniquely identifies the docstring as belonging to the name
    assignment and allows finding the docstring later on by inspecting
    the namespace.

    The following name mangling scheme achieves all of the above:

        __doc__<attributename>__

    To keep track of the last assigned name, the byte code compiler
    stores this name in a variable of the compiling structure.  This
    variable defaults to NULL.  When it sees a docstring, it then
    checks the variable and uses the name as basis for the above name
    mangling to produce an implicit assignment of the docstring to the
    mangled name.  It then resets the variable to NULL to avoid
    duplicate assignments.

    If the variable does not point to a name (i.e. is NULL), no
    assignment is made and the docstring continues to be ignored as
    before.  All classical docstrings fall under this case, so no
    duplicate assignments are done.

    In the above example this would result in the following new class
    attributes being created:

        C.__doc__a__ == "attribute C.a doc-string (1)"
        C.__doc__b__ == "attribute C.b doc-string (2)"

    A patch to the current CVS version of Python 2.0 which implements
    the above is available on SourceForge at [1].
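    Since the patch was never merged, the proposed behavior can be
    simulated by writing the mangled assignments out by hand.  The
    helper below (attr_doc is a hypothetical name, not part of the
    proposal) sketches how an introspection tool would retrieve an
    attribute docstring under this scheme:

```python
def attr_doc(cls, name):
    """Fetch the mangled attribute docstring __doc__<name>__, if any."""
    return getattr(cls, "__doc__%s__" % name, None)

# The compiler patch would generate the __doc__<name>__ assignments
# automatically; here they are written out explicitly to simulate it.
class C:
    "class C doc-string"

    a = 1
    __doc__a__ = "attribute C.a doc-string (1)"

    b = 2
    __doc__b__ = "attribute C.b doc-string (2)"

print(attr_doc(C, "a"))  # attribute C.a doc-string (1)
print(attr_doc(C, "b"))  # attribute C.b doc-string (2)
print(attr_doc(C, "x"))  # None -- no docstring recorded
```

    Note that the trailing double underscore keeps the name out of
    Python's private-name mangling, so the attribute is inherited and
    accessible like any other class attribute.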


Caveats of the Implementation
    
    Since the implementation does not reset the compiling structure
    variable when processing a non-expression, e.g. a function
    definition, the last assigned name remains active until either the
    next assignment or the next occurrence of a docstring.

    This can lead to cases where the docstring and assignment may be
    separated by other expressions:

        class C:
            "C doc string"

            b = 2

            def x(self):
                "C.x doc string"
                y = 3
                return 1

            "b's doc string"

    Since the definition of method "x" currently does not reset the
    used assignment name variable, it is still valid when the compiler
    reaches the docstring "b's doc string" and thus assigns the string
    to __doc__b__.

    A possible solution to this problem would be resetting the name
    variable for all non-expression nodes.

    
Copyright

    This document has been placed in the Public Domain.


References

    [1] http://sourceforge.net/patch/?func=detailpatch&patch_id=101264&group_id=5470

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/




From bwarsaw at beopen.com  Fri Aug 25 17:12:34 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 11:12:34 -0400 (EDT)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
Message-ID: <14758.36066.49304.190172@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> I've asked this question before: when are we going to see
    M> comp.lang.python.announce back online ?

    M> If this is all it takes to post to a moderated newsgroup,
    M> fixing Mailman to do the trick should be really simple.

    M> I'm willing to help here to get this done *before* the Python
    M> 2.0beta1 announcement.

MAL, you must be reading my mind!

I've actually been working on some unofficial patches to Mailman that
will let list admins moderate a moderated newsgroup.  The technical
details are described in a recent post to mailman-developers[1].

I'm testing it out right now.  I first installed this on starship, but
there's no nntp server that starship can post to, so I've since moved
the list to python.org.  However, I'm still having some problems with
the upstream feed, or at least I haven't seen approved messages
appearing on deja or my ISP's server.  I'm not exactly sure why; could
just be propagation delays.

Anyway, if anybody does see my test messages show up in the newsgroup
(not the gatewayed mailing list -- sorry David), please let me know.

-Barry

[1] http://www.python.org/pipermail/mailman-developers/2000-August/005388.html



From bwarsaw at beopen.com  Fri Aug 25 17:16:30 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 11:16:30 -0400 (EDT)
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
	<20000825141420.C16377@xs4all.nl>
	<39A663E5.A1E85044@lemburg.com>
Message-ID: <14758.36302.521877.833943@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> This sounds like an awful lot of work... wouldn't a quick hack
    M> as an intermediate solution suffice for the moment (it needn't
    M> even go into any public Mailman release -- just the Mailman
    M> installation at python.org which handles the announcement
    M> list).

Naw, it's actually the least amount of work, since all the mechanism
is already there.  You just need to add a flag and another hold
criteria.  It's unofficial because I'm in feature freeze.

    M> Ok, I'll wait for Barry to wake up ;-) ... <ringring>

Who says I'm awake?  Don't you know I'm a very effective sleep hacker?
I'm also an effective sleep gardener and sometimes the urge to snore
and plant takes over.  You should see my cucumbers!

the-only-time-in-the-last-year-i've-been-truly-awake-was-when-i
jammed-with-eric-at-ipc8-ly y'rs,
-Barry



From bwarsaw at beopen.com  Fri Aug 25 17:21:43 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 11:21:43 -0400 (EDT)
Subject: [Python-Dev] (214)
References: <l03102805b5cc22d9c375@[193.78.237.177]>
	<LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>
Message-ID: <14758.36615.589212.75065@anthem.concentric.net>

>>>>> "TP" == Tim Peters <tim_one at email.msn.com> writes:

    TP> But that's not true!  I defended it <wink>.  Alas (or "thank
    TP> God!", depending on how you look at it), I sent my "In praise
    TP> of" post to the mailing list and apparently the list->news
    TP> gateway dropped it on the floor.

Can other people confirm that list->news is broken?  If so, then that
would explain my c.l.py.a moderation problems.  I know that my
approved test message showed up on CNRI's internal news server because
at least one list member of the c.l.py.a gateway got it, but I haven't
seen it upstream of CNRI.  I'll contact their admins and let them know
the upstream feed could be broken.

-Barry



From tim_one at email.msn.com  Fri Aug 25 17:34:47 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 11:34:47 -0400
Subject: [Python-Dev] (214)
In-Reply-To: <14758.36615.589212.75065@anthem.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEPFHBAA.tim_one@email.msn.com>

[Barry]
> Can other people confirm that list->news is broken?

I don't believe that it is (e.g., several of my c.l.py list mailings today
have already shown up my ISP's news server).

The post in question was mailed

    Thu 8/24/00 3:15 AM (EDT)

Aahz (a fellow mailing-list devotee) noted on c.l.py that it had never shown
up on the newsgroup, and after poking around I couldn't find it anywhere
either.

> ...
> I'll contact their admins and let them know the upstream feed could
> be broken.

Well, you can *always* let them know that <wink>.





From thomas at xs4all.net  Fri Aug 25 17:36:50 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 17:36:50 +0200
Subject: [Python-Dev] (214)
In-Reply-To: <14758.36615.589212.75065@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 25, 2000 at 11:21:43AM -0400
References: <l03102805b5cc22d9c375@[193.78.237.177]> <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> <14758.36615.589212.75065@anthem.concentric.net>
Message-ID: <20000825173650.G16377@xs4all.nl>

On Fri, Aug 25, 2000 at 11:21:43AM -0400, Barry A. Warsaw wrote:

> Can other people confirm that list->news is broken? 

No, not really. I can confirm that not all messages make it to the
newsgroup: I can't find Tim's posting on PEP 214 anywhere on comp.lang.py.
(and our new super-newsserver definitely keeps the postings around long
enough, so I should be able to see it, and I did get it through
python-list!)

However, I *can* find some of my python-list submissions from earlier today,
so it hasn't completely gone to meet its maker, either.

I can also confirm that python-dev itself seems to be missing some messages.
I occasionally see messages quoted which I haven't seen myself, and I've
seen others complain that they haven't seen my messages, as quoted in other
mailings. Not more than a handful in the last week or two, though, and they
*could* be attributed to dementia.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Fri Aug 25 17:39:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 25 Aug 2000 17:39:06 +0200
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com> <14758.36066.49304.190172@anthem.concentric.net>
Message-ID: <39A6931A.5B396D26@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     M> I've asked this question before: when are we going to see
>     M> comp.lang.python.announce back online ?
> 
>     M> If this is all it takes to post to a moderated newsgroup,
>     M> fixing Mailman to do the trick should be really simple.
> 
>     M> I'm willing to help here to get this done *before* the Python
>     M> 2.0beta1 announcement.
> 
> MAL, you must be reading my mind!
> 
> I've actually been working on some unofficial patches to Mailman that
> will let list admins moderate a moderated newsgroup.  The technical
> details are described in a recent post to mailman-developers[1].

Cool... :-)
 
> I'm testing it out right now.  I first installed this on starship, but
> there's no nntp server that starship can post to, so I've since moved
> the list to python.org.  However, I'm still having some problems with
> the upstream feed, or at least I haven't seen approved messages
> appearing on deja or my ISP's server.  I'm not exactly sure why; could
> just be propagation delays.
> 
> Anyway, if anybody does see my test messages show up in the newsgroup
> (not the gatewayed mailing list -- sorry David), please let me know.

Nothing has appeared at my ISP yet. Looking at the mailing list
archives, the postings don't have the Approved: header (but
perhaps it's just the archive which doesn't include it).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From bwarsaw at beopen.com  Fri Aug 25 18:20:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 12:20:59 -0400 (EDT)
Subject: [Python-Dev] (214)
References: <l03102805b5cc22d9c375@[193.78.237.177]>
	<LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com>
	<14758.36615.589212.75065@anthem.concentric.net>
	<20000825173650.G16377@xs4all.nl>
Message-ID: <14758.40171.159233.521885@anthem.concentric.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    >> Can other people confirm that list->news is broken?

    TW> No, not really. I can confirm that not all messages make it to
    TW> the newsgroup: I can't find Tim's posting on PEP 214 anywhere
    TW> on comp.lang.py.  (and our new super-newsserver definitely
    TW> keeps the postings around long enough, so I should be able to
    TW> see it, and I did get it through python-list!)

    TW> However, I *can* find some of my python-list submissions from
    TW> earlier today, so it hasn't completely gone to meet its maker,
    TW> either.

    TW> I can also confirm that python-dev itself seems to be missing
    TW> some messages.  I occasionally see messages quoted which I
    TW> haven't seen myself, and I've seen others complain that they
    TW> haven't seen my messages, as quoted in other mailings. Not
    TW> more than a handful in the last week or two, though, and they
    TW> *could* be attributed to dementia.

I found Tim's message in the archives, so I'm curious whether those
missing python-dev messages are also in the archives?  If so, that's a
good indication that Mailman is working, so the problem is upstream
from there.  I'm also not seeing any errors in the log files that
would indicate a Mailman problem.

I have seen some weird behavior from Postfix on that machine:
occasionally messages to my python.org addr, which should be forwarded
to my beopen.com addr just don't get forwarded.  They get dropped in
my spool file.  I have no idea why, and the mail logs don't give a
clue.  I don't know if any of that is related, although I did just
upgrade Postfix to the latest revision.  And there are about 3k
messages sitting in Postfix's queue waiting to go out though.

Sigh.  I really don't want to spend the next week debugging this
stuff. ;/

-Barry



From bwarsaw at beopen.com  Fri Aug 25 18:22:05 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 12:22:05 -0400 (EDT)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
	<14758.36066.49304.190172@anthem.concentric.net>
	<39A6931A.5B396D26@lemburg.com>
Message-ID: <14758.40237.49311.811744@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    M> Nothing has appeared at my ISP yet. Looking at the mailing list
    M> archives, the postings don't have the Approved: header (but
    M> perhaps it's just the archive which doesn't include it).

Correct.  They're stripped out of the archives.  My re-homed nntpd
test worked all the way through, though, so one more test and we're
home free.

-Barry



From thomas at xs4all.net  Fri Aug 25 18:32:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 25 Aug 2000 18:32:24 +0200
Subject: [Python-Dev] (214)
In-Reply-To: <14758.40171.159233.521885@anthem.concentric.net>; from bwarsaw@beopen.com on Fri, Aug 25, 2000 at 12:20:59PM -0400
References: <l03102805b5cc22d9c375@[193.78.237.177]> <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> <14758.36615.589212.75065@anthem.concentric.net> <20000825173650.G16377@xs4all.nl> <14758.40171.159233.521885@anthem.concentric.net>
Message-ID: <20000825183224.N15110@xs4all.nl>

On Fri, Aug 25, 2000 at 12:20:59PM -0400, Barry A. Warsaw wrote:

> I found Tim's message in the archives, so I'm curious whether those
> missing python-dev messages are also in the archives?  If so, that's a
> good indication that Mailman is working, so the problem is upstream
> from there.  I'm also not seeing any errors in the log files that
> would indicate a Mailman problem.

Well, I saw one message from Guido, where he was replying to someone who was
replying to Mark. Guido claimed he hadn't seen that original message
(Mark's), though I am certain I did see it. The recollections on missing
messages on my part are much more vague, though, so it *still* could be
attributed to dementia (of people, MUA's or MTA's ;)

I'll keep a closer eye on it, though.

> I have seen some weird behavior from Postfix on that machine:
> occasionally messages to my python.org addr, which should be forwarded
> to my beopen.com addr just don't get forwarded.  They get dropped in
> my spool file.  I have no idea why, and the mail logs don't give a
> clue.  I don't know if any of that is related, although I did just
> upgrade Postfix to the latest revision.  And there are about 3k
> messages sitting in Postfix's queue waiting to go out though.

Sendmail, baby! <duck> We're currently running postfix on a single machine
(www.hal2001.org, which also does the Mailman for it) mostly because our
current Sendmail setup has one huge advantage: it works. And it works fine.
We just don't want to change the sendmail rules or fiddle with our
mailertable-setup, but it works ! :-) 

> Sigh.  I really don't want to spend the next week debugging this
> stuff. ;/

So don't. Do what any proper developer would do: proclaim there isn't enough
info (there isn't, unless you can find the thread I'm talking about, above.
I'll see if I can locate it for you, since I think I saved the entire thread
with 'must check this' in the back of my head) and don't fix it until it
happens again. I do not think this is Mailman related, though it might be
python.org-mailman related (as in, the postfix or the link on that machine,
or something.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Fri Aug 25 19:39:41 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 12:39:41 -0500
Subject: [Python-Dev] (214)
In-Reply-To: Your message of "Fri, 25 Aug 2000 12:20:59 -0400."
             <14758.40171.159233.521885@anthem.concentric.net> 
References: <l03102805b5cc22d9c375@[193.78.237.177]> <LNBBLJKPBEHFEDALKOLCAEOJHBAA.tim_one@email.msn.com> <14758.36615.589212.75065@anthem.concentric.net> <20000825173650.G16377@xs4all.nl>  
            <14758.40171.159233.521885@anthem.concentric.net> 
Message-ID: <200008251739.MAA20815@cj20424-a.reston1.va.home.com>

> Sigh.  I really don't want to spend the next week debugging this
> stuff. ;/

Please don't.  This happened to me before, and eventually everything
came through -- sometimes with days delay.  So it's just slowness.

There's a new machine waiting for us at VA Linux.  I'll ask Kahn again
to speed up the transition.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From DavidA at ActiveState.com  Fri Aug 25 18:50:47 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Fri, 25 Aug 2000 09:50:47 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: <14758.36302.521877.833943@anthem.concentric.net>
Message-ID: <Pine.WNT.4.21.0008250949150.816-100000@loom>

> the-only-time-in-the-last-year-i've-been-truly-awake-was-when-i
> jammed-with-eric-at-ipc8-ly y'rs,

And that was really good!  You should do it more often!

Let's make sure we organize a jam session in advance for IPC9 -- that way
we can get more folks to bring instruments, berries, sugar, bread, butter,
etc.

i-don't-jam-i-listen-ly y'rs,

--david





From bwarsaw at beopen.com  Fri Aug 25 18:56:12 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 12:56:12 -0400 (EDT)
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
References: <14758.36302.521877.833943@anthem.concentric.net>
	<Pine.WNT.4.21.0008250949150.816-100000@loom>
Message-ID: <14758.42284.829235.406950@anthem.concentric.net>

>>>>> "DA" == David Ascher <DavidA at ActiveState.com> writes:

    DA> And that was really good!  You should do it more often!

Thanks!

    DA> Let's make sure we organize a jam session in advance for IPC9
    DA> -- that way we can get more folks to bring instruments,
    DA> berries, sugar, bread, butter, etc.

    DA> i-don't-jam-i-listen-ly y'rs,

Okay, so who's gonna webcast IPC9? :)

-B



From bwarsaw at beopen.com  Fri Aug 25 19:05:22 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Fri, 25 Aug 2000 13:05:22 -0400 (EDT)
Subject: [Python-Dev] The resurrection of comp.lang.python.announce
Message-ID: <14758.42834.289193.548978@anthem.concentric.net>

Well, after nearly 6 months of inactivity, I'm very happy to say that
comp.lang.python.announce is being revived.  It will now be moderated
by a team of volunteers (see below) using a Mailman mailing list.
Details about comp.lang.python.announce, and its mailing list gateway
python-announce-list at python.org can be found at

   http://www.python.org/psa/MailingLists.html#clpa

Posting guidelines can be found at

   ftp://rtfm.mit.edu/pub/usenet/comp.lang.python.announce/python-newsgroup-faq

This message also serves as a call for moderators.  I am looking for 5
experienced Python folks who would like to team-moderate the
newsgroup.  It is a big plus if you've moderated newsgroups before.

If you are interested in volunteering, please email me directly.  Once
I've chosen the current crop of moderators, I'll give you instructions
on how to do it.  Don't worry if you don't get chosen this time
around; I'm sure we'll have some rotation in the moderators ranks as
time goes on.

Cheers,
-Barry



From guido at beopen.com  Fri Aug 25 20:12:28 2000
From: guido at beopen.com (Guido van Rossum)
Date: Fri, 25 Aug 2000 13:12:28 -0500
Subject: [Python-Dev] c.l.p.a -- what needs to be done ?
In-Reply-To: Your message of "Fri, 25 Aug 2000 09:50:47 MST."
             <Pine.WNT.4.21.0008250949150.816-100000@loom> 
References: <Pine.WNT.4.21.0008250949150.816-100000@loom> 
Message-ID: <200008251812.NAA21141@cj20424-a.reston1.va.home.com>

> And that was really good!  You should do it more often!

Agreed!

> Let's make sure we organize a jam session in advance for IPC9 -- that way
> we can get more folks to bring instruments, berries, sugar, bread, butter,
> etc.

This sounds much more fun (and more Pythonic) than a geeks-with-guns
event! :-)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Fri Aug 25 19:25:13 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 13:25:13 -0400 (EDT)
Subject: [Python-Dev] warning in initpyexpat
Message-ID: <14758.44025.333241.758233@bitdiddle.concentric.net>

gcc -Wall is complaining about possible use of errors_module without
initialization in the initpyexpat function.  Here's the offending code:

    sys_modules = PySys_GetObject("modules");
    {
        PyObject *errmod_name = PyString_FromString("pyexpat.errors");

        if (errmod_name != NULL) {
            errors_module = PyDict_GetItem(d, errmod_name);
            if (errors_module == NULL) {
                errors_module = PyModule_New("pyexpat.errors");
                if (errors_module != NULL) {
                    PyDict_SetItemString(d, "errors", errors_module);
                    PyDict_SetItem(sys_modules, errmod_name, errors_module);
                }
            }
            Py_DECREF(errmod_name);
            if (errors_module == NULL)
                /* Don't core dump later! */
                return;
        }
    }
    errors_dict = PyModule_GetDict(errors_module);

It is indeed the case that errors_module can be used without
initialization.  If PyString_FromString("pyexpat.errors") fails, you
ignore the error and will immediately call PyModule_GetDict with an
uninitialized variable.

You ought to check for the error condition and bail cleanly, rather
than ignoring it and failing somewhere else.

I also wonder why the code that does this check is in its own set of
curly braces; thus, the post to python-dev to discuss the style issue.
Why did you do this?  Is it approved Python style?  It looks cluttered
to me.

Jeremy




From fdrake at beopen.com  Fri Aug 25 19:36:53 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 13:36:53 -0400 (EDT)
Subject: [Python-Dev] Re: warning in initpyexpat
In-Reply-To: <14758.44025.333241.758233@bitdiddle.concentric.net>
References: <14758.44025.333241.758233@bitdiddle.concentric.net>
Message-ID: <14758.44725.345785.430141@cj42289-a.reston1.va.home.com>

Jeremy Hylton writes:
 > It is indeed the case that errors_module can be used without
 > initialization.  If PyString_FromString("pyexpat.errors") fails, you
 > ignore the error and will immediately call PyModule_GetDict with an
 > uninitialized variable.

  I'll fix that.

 > I also wonder why the code that does this check is in its own set of
 > curly braces; thus, the post to python-dev to discuss the style issue.
 > Why did you do this?  Is it approved Python style?  It looks cluttered
 > to me.

  I don't like it either.  ;)  I just wanted a temporary variable, but
I can declare that at the top of initpyexpat().  This will be
corrected as well.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From gward at mems-exchange.org  Fri Aug 25 20:16:24 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Fri, 25 Aug 2000 14:16:24 -0400
Subject: [Python-Dev] If you thought there were too many PEPs...
Message-ID: <20000825141623.G17277@ludwig.cnri.reston.va.us>

...yow: the Perl community is really going overboard in proposing
enhancements:

[from the Perl "daily" news]
>   [3] Perl 6 RFCs Top 150 Mark; New Perl 6 Lists Added [Links]
> 
>         The number of [4]Perl 6 RFCs hit 161 today. The 100th RFC was
>         [5]Embed full URI support into Perl by Nathan Wiger, allowing
>         URIs like "file:///local/etc/script.conf" to be passed to builtin
>         file functions and operators. The 150th was [6]Extend regex
>         syntax to provide for return of a hash of matched subpatterns by
>         Kevin Walker, and the latest, 161, is [7]OO Integration/Migration
>         Path by Matt Youell.
> 
>         New [8]Perl 6 mailing lists include perl6-language- sublists
>         objects, datetime, errors, data, and regex. perl6-bootstrap is
>         being closed, and perl6-meta is taking its place (the subscriber
>         list will not be transferred).
[...]
>    3. http://www.news.perl.org/perl-news.cgi?item=967225716%7C10542
>    4. http://dev.perl.org/rfc/
>    5. http://dev.perl.org/rfc/100.pod
>    6. http://dev.perl.org/rfc/150.pod
>    7. http://dev.perl.org/rfc/161.pod
>    8. http://dev.perl.org/lists.shtml

-- 
Greg Ward - software developer                gward at mems-exchange.org
MEMS Exchange / CNRI                           voice: +1-703-262-5376
Reston, Virginia, USA                            fax: +1-703-262-5367



From gvwilson at nevex.com  Fri Aug 25 20:30:53 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Fri, 25 Aug 2000 14:30:53 -0400 (EDT)
Subject: [Python-Dev] Re: If you thought there were too many PEPs...
In-Reply-To: <20000825141623.G17277@ludwig.cnri.reston.va.us>
Message-ID: <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>

> On Fri, 25 Aug 2000, Greg Ward wrote:
> >         The number of [4]Perl 6 RFCs hit 161 today...
> >         New [8]Perl 6 mailing lists include perl6-language- sublists
> >         objects, datetime, errors, data, and regex. perl6-bootstrap is
> >         being closed, and perl6-meta is taking its place (the subscriber
> >         list will not be transferred).

I've heard from several different sources that when Guy Steele Jr was
hired by Sun to help define the Java language standard, his first proposal
was that the length of the standard be fixed --- anyone who wanted to add
a new feature had to identify an existing feature that would be removed
from the language to make room.  Everyone said, "That's so cool --- but of
course we can't do it..."

Think how much simpler Java would be today if...

;-)

Greg




From effbot at telia.com  Fri Aug 25 21:11:16 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Fri, 25 Aug 2000 21:11:16 +0200
Subject: [Python-Dev] Re: If you thought there were too many PEPs...
References: <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>
Message-ID: <01b701c00ec8$3f47ebe0$f2a6b5d4@hagrid>

greg wrote:
> I've heard from several different sources that when Guy Steele Jr was
> hired by Sun to help define the Java language standard, his first proposal
> was that the length of the standard be fixed.

    "C. A. R. Hoare has suggested that as a rule of
    thumb a language is too complicated if it can't
    be described precisely and readably in fifty
    pages. The Modula-3 committee elevated this to a
    design principle: we gave ourselves a
    "complexity budget" of fifty pages, and chose
    the most useful features that we could
    accommodate within this budget. In the end, we
    were over budget by six lines plus the syntax
    equations. This policy is a bit arbitrary, but
    there are so many good ideas in programming
    language design that some kind of arbitrary
    budget seems necessary to keep a language from
    getting too complicated."

    from "Modula-3: Language definition"
    http://research.compaq.com/SRC/m3defn/html/complete.html

</F>




From akuchlin at mems-exchange.org  Fri Aug 25 21:05:10 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 25 Aug 2000 15:05:10 -0400
Subject: [Python-Dev] Re: If you thought there were too many PEPs...
In-Reply-To: <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>; from gvwilson@nevex.com on Fri, Aug 25, 2000 at 02:30:53PM -0400
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <Pine.LNX.4.10.10008251428160.27802-100000@akbar.nevex.com>
Message-ID: <20000825150510.A22028@kronos.cnri.reston.va.us>

On Fri, Aug 25, 2000 at 02:30:53PM -0400, Greg Wilson wrote:
>was that the length of the standard be fixed --- anyone who wanted to add
>a new feature had to identify an existing feature that would be removed
>from the language to make room.  Everyone said, "That's so cool --- but of

Something similar was done with Modula-3, as GvR is probably well
aware; one of the goals was to keep the language spec less than 50
pages.  In the end I think it winds up being a bit larger, but it was
good discipline anyway.

--amk



From jeremy at beopen.com  Fri Aug 25 22:44:44 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Fri, 25 Aug 2000 16:44:44 -0400 (EDT)
Subject: [Python-Dev] Python 1.6 bug fix strategy
Message-ID: <14758.55996.11900.114220@bitdiddle.concentric.net>

We have gotten several bug reports recently based on 1.6b1.  What
plans, if any, are there to fix these bugs before the 1.6 final
release?  We clearly need to fix them for 2.0b1, but I don't know
about 1.6 final.

Among the bugs are 111403 and 11860, which cause core dumps.  The
former is an obvious bug and has a fairly clear fix.

Jeremy

PS Will 1.6 final be released before 2.0b1?





From tim_one at email.msn.com  Sat Aug 26 01:16:00 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 19:16:00 -0400
Subject: [Python-Dev] Python 1.6 bug fix strategy
In-Reply-To: <14758.55996.11900.114220@bitdiddle.concentric.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEBBHCAA.tim_one@email.msn.com>

[Jeremy Hylton]
> We have gotten several bug reports recently based on 1.6b1.  What
> plans, if any, are there to fix these bugs before the 1.6 final
> release?

My understanding is that 1.6final is done, except for plugging in a license;
i.e., too late even for bugfixes.  If true, "Fixed in 2.0" will soon be a
popular response to all sorts of things -- unless CNRI intends to do its own
work on 1.6.





From MarkH at ActiveState.com  Sat Aug 26 01:57:48 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Sat, 26 Aug 2000 09:57:48 +1000
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <14758.29149.992343.502526@bitdiddle.concentric.net>
Message-ID: <ECEPKNMJLHAPFFJHDOJBEEKADGAA.MarkH@ActiveState.com>

[Jeremy]

> On Linux, the sre test fails.  Do you see the same problem on Windows?

Not with either debug or release builds.

Mark.




From skip at mojam.com  Sat Aug 26 02:08:52 2000
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 25 Aug 2000 19:08:52 -0500 (CDT)
Subject: [Python-Dev] Strange compiler crash in debug builds.
In-Reply-To: <ECEPKNMJLHAPFFJHDOJBEEKADGAA.MarkH@ActiveState.com>
References: <14758.29149.992343.502526@bitdiddle.concentric.net>
	<ECEPKNMJLHAPFFJHDOJBEEKADGAA.MarkH@ActiveState.com>
Message-ID: <14759.2708.62485.72631@beluga.mojam.com>

    Mark> [Jeremy]
    >> On Linux, the sre test fails.  Do you see the same problem on Windows?

    Mark> Not with either debug or release builds.

Nor I on Mandrake Linux.

Skip




From cgw at fnal.gov  Sat Aug 26 02:34:23 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 25 Aug 2000 19:34:23 -0500 (CDT)
Subject: [Python-Dev] Compilation failure, current CVS
Message-ID: <14759.4239.276417.473973@buffalo.fnal.gov>

Just a heads-up - I suspect this is a trivial problem, but I don't
have time to investigate right now ("real life").

Linux buffalo.fnal.gov 2.2.16 #31 SMP
gcc version 2.95.2 19991024 (release)

After cvs update and make distclean, I get this error:

make[1]: Entering directory `/usr/local/src/Python-CVS/python/dist/src/Python'
gcc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c errors.c -o errors.o
errors.c:368: arguments given to macro `PyErr_BadInternalCall'
make[1]: *** [errors.o] Error 1




From cgw at fnal.gov  Sat Aug 26 03:23:08 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Fri, 25 Aug 2000 20:23:08 -0500 (CDT)
Subject: [Python-Dev] CVS weirdness (was:  Compilation failure, current CVS)
In-Reply-To: <14759.4239.276417.473973@buffalo.fnal.gov>
References: <14759.4239.276417.473973@buffalo.fnal.gov>
Message-ID: <14759.7164.55022.134730@buffalo.fnal.gov>

I blurted out:

 > After cvs update and make distclean, I get this error:
 > 
 > make[1]: Entering directory `/usr/local/src/Python-CVS/python/dist/src/Python'
 > gcc -g -O2 -I./../Include -I.. -DHAVE_CONFIG_H   -c errors.c -o errors.o
 > errors.c:368: arguments given to macro `PyErr_BadInternalCall'
 > make[1]: *** [errors.o] Error 1

There is (no surprise) no problem with Python; but there *is* some
problem with me or my setup or some tool I use or the CVS server.  cvs
update -dAP fixed my problems.  This is the second time I've gotten
these sticky CVS date tags which I never meant to set.

Sorry-for-the-false-alarm-ly yr's,
			     -C




From tim_one at email.msn.com  Sat Aug 26 04:12:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Fri, 25 Aug 2000 22:12:11 -0400
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
Message-ID: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>

Somebody recently added DL_IMPORT macros to two module init functions that
already used their names in DL_EXPORT macros (pyexpat.c and parsermodule.c).
On Windows, that yields the result I (naively?) expected:  compiler warnings
about inconsistent linkage declarations.

This is your basic Undocumented X-Platform Macro Hell, and I suppose the
Windows build should be #define'ing USE_DL_EXPORT for these subprojects
anyway (?), but if I don't hear a good reason for *why* both macros are used
on the same name in the same file, I'll be irresistibly tempted to just
delete the new DL_IMPORT lines.  That is, why would we *ever* use DL_IMPORT
on the name of a module init function?  They only exist to be exported.

baffled-in-reston-ly y'rs  - tim





From fdrake at beopen.com  Sat Aug 26 04:49:30 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Fri, 25 Aug 2000 22:49:30 -0400 (EDT)
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>
References: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>
Message-ID: <14759.12346.778540.252012@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Somebody recently added DL_IMPORT macros to two module init functions that
 > already used their names in DL_EXPORT macros (pyexpat.c and parsermodule.c).

  That was me.

 > On Windows, that yields the result I (naively?) expected:  compiler warnings
 > about inconsistent linkage declarations.

  Ouch.

 > This is your basic Undocumented X-Platform Macro Hell, and I suppose the
 > Windows build should be #define'ing USE_DL_EXPORT for these subprojects
 > anyway (?), but if I don't hear a good reason for *why* both macros are used
 > on the same name in the same file, I'll be irresistibly tempted to just
 > delete the new DL_IMPORT lines.  That is, why would we *ever* use DL_IMPORT
 > on the name of a module init function?  They only exist to be exported.

  Here's how I arrived at it, but apparently this doesn't make sense,
because Windows has too many linkage options.  ;)
  Compiling with gcc using the -Wmissing-prototypes option causes a
warning to be printed if there isn't a prototype at all:

cj42289-a(.../linux-beowolf/Modules); gcc -fpic  -g -ansi -Wall -Wmissing-prototypes  -O2 -I../../Include -I.. -DHAVE_CONFIG_H -c ../../Modules/parsermodule.c
../../Modules/parsermodule.c:2852: warning: no previous prototype for `initparser'

  I used the DL_IMPORT since that's how all the prototypes in the
Python headers are set up.  I can either change these to "normal"
prototypes (no DL_xxPORT macros), DL_EXPORT prototypes, or remove the
prototypes completely, and we'll just have to ignore the warning.
  If you can write a few sentences explaining each of these macros and
when they should be used, I'll make sure they land in the
documentation.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From MarkH at ActiveState.com  Sat Aug 26 06:06:40 2000
From: MarkH at ActiveState.com (Mark Hammond)
Date: Sat, 26 Aug 2000 14:06:40 +1000
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEBIHCAA.tim_one@email.msn.com>
Message-ID: <ECEPKNMJLHAPFFJHDOJBOEKHDGAA.MarkH@ActiveState.com>

> This is your basic Undocumented X-Platform Macro Hell, and I suppose the
> Windows build should be #define'ing USE_DL_EXPORT for these subprojects
> anyway (?), but if I don't hear a good reason for *why* both
> macros are used

This is a mess that should be cleaned up.

I take some blame for DL_IMPORT :-(  Originally (and still, as far as I can
tell), DL_IMPORT really means "Python symbol visible outside the core" -
ie, any symbol a dynamic module or embedded application may ever need
(documented, or not :-)

The "import" part of DL_IMPORT is supposed to be from the _clients_ POV.
These apps/extensions are importing these definitions.

This is clearly a poor choice of names, IMO, as the macro USE_DL_EXPORT
changes the meaning from import to export, which is clearly confusing.


DL_EXPORT, on the other hand, seems to have grown while I wasn't looking :-)
As far as I can tell:
* It is used in ways where the implication is clearly "export this symbol
always".
* It is used for extension modules, whether they are builtin or not (eg,
"array" etc. use it).
* It behaves differently under Windows than under BeOS, at least.  BeOS
unconditionally defines it as an exported symbol.  Windows only defines it
when building the core.  Extension modules attempting to use this macro to
export them do not work - eg, "winsound.c" uses DL_EXPORT, but is still
forced to add "export:initwinsound" to the linker to get the symbol public.

The ironic thing is, that in Windows at least, DL_EXPORT is working the
exact opposite of how we want it - when it is used for functions built into
the core (eg, builtin modules), these symbols do _not_ need to be
exported, but where it is used on extension modules, it fails to make them
public.

So, as you guessed, we are left with 2 macros whose very names are
completely misleading :-(

I think that we should make the following change (carefully, of course :-)

* DL_IMPORT -> PYTHON_API
* DL_EXPORT -> PYTHON_MODULE_INIT.

Obviously, the names are up for grabs, but we should change the macros to
what they really _mean_, and getting the correct behaviour shouldn't be a
problem.  I don't see any real cross-platform issues, as long as the macro
reflects what it actually means!

Shall I check in the large number of files affected now?

Over-the-release-manager's-dead-body<wink> ly,

Mark.




From fdrake at beopen.com  Sat Aug 26 07:40:01 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Sat, 26 Aug 2000 01:40:01 -0400 (EDT)
Subject: [Python-Dev] New dictionaries patch on SF
Message-ID: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>

  I've been playing with dictionaries lately trying to stamp out a
bug:

http://sourceforge.net/bugs/?func=detailbug&bug_id=112558&group_id=5470

  It looks like any fix that really works risks a fair bit of
performance, and that's not good.  My best-effort fix so far is on
SourceForge:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470

but doesn't quite work, according to Guido (I've not yet received
instructions from him about how to reproduce the observed failure).
  None the less, performance is an issue for dictionaries, so I came
up with the idea to use a specialized version for string keys.  When I
saw how few of the dictionaries created by the regression test ever
had anything else, I tried to simply make all dictionaries the
specialized variety (they can degrade themselves as needed).  What I
found was that just over 2% of the dictionaries created by running the
regression test ever held any non-string keys; this may be very
different for "real" programs, but I'm curious about how different.
  I've also done *no* performance testing on my patch for this yet,
and don't expect it to be a big boost without something like the bug
fix I mentioned above, but I could be wrong.  If anyone would like to
play with the idea, I've posted my current patch at:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101309&group_id=5470

  Enjoy!  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From fleck at triton.informatik.uni-bonn.de  Sat Aug 26 10:14:11 2000
From: fleck at triton.informatik.uni-bonn.de (Markus Fleck)
Date: Sat, 26 Aug 2000 10:14:11 +0200 (MET DST)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
In-Reply-To: <39A660CB.7661E20E@lemburg.com> from "M.-A. Lemburg" at Aug 25, 2000 02:04:27 PM
Message-ID: <200008260814.KAA06267@hera.informatik.uni-bonn.de>

M.-A. Lemburg:
> Could someone please summarize what needs to be done to
> post a message to comp.lang.python.announce without taking
> the path via the official (currently defunct) moderator ?

I'm not really defunct, I'm just not posting any announcements
because I'm not receiving them any more. ;-)))

> I've had a look at the c.l.p.a postings and the only special
> header they include is the "Approved: fleck at informatik.uni-bonn.de"
> header.

Basically, that's all it takes to post to a "moderated" newsgroup.
(Talking about a case of "security by obscurity" here... :-/)
Actually, the string following the "Approved: " may even be random...
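In other words, the only gatekeeping is one header. With the email package of a modern Python (which postdates this thread), such a post could be built as simply as this; the address values are hypothetical:

```python
from email.message import EmailMessage

# A Usenet article destined for a moderated group.  News servers only
# check that an Approved: header is *present* -- the value itself is
# never verified against anything.
msg = EmailMessage()
msg["From"] = "someone@example.org"               # hypothetical sender
msg["Newsgroups"] = "comp.lang.python.announce"
msg["Subject"] = "Example announcement"
msg["Approved"] = "any-string-at-all"             # this is the whole trick
msg.set_content("Body of the announcement.")
```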

In principle, I do have the time again to do daily moderation of incoming
postings for c.l.py.a. Unfortunately, I currently lack the infrastructure
(i.e. the moderation program), which went down together with the old
starship. I was basically waiting for a version of Mailman that could be
used to post to moderated newsgroups. (I should probably have been more
vocal about that, or even should have started hacking Mailman myself... I
*did* start to write something that would grab new announcements daily from
Parnassus and post them to c.l.py.a, and I may even come to finish this in
September, but that doesn't substitute for a "real" moderation tool for
user-supplied postings. Also, it would probably be a lot easier for
Parnassus postings to be built directly from the Parnassus database, instead
from its [generated] HTML pages - the Parnassus author intended to supply
such functionality, but I haven't heard from him yet, either.)

So what's needed now? Primarily, a Mailman installation that can post to
moderated newsgroups (and maybe also do the mail2list gatewaying for
c.l.py.a), and a mail alias that forwards mail for
python-announce at python.org to that Mailman address. Some "daily digest"
generator for Parnassus announcements would be nice to have, too, but
that can only come once the other two things work.

Anyway, thanks for bringing this up again - it puts c.l.py.a at the
top of my to-do list again (where it should be, of course ;-).

Yours,
Markus.



From tim_one at email.msn.com  Sat Aug 26 10:14:48 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 04:14:48 -0400
Subject: [Python-Dev] Looking for a DL_xxPORT macro export
In-Reply-To: <14759.12346.778540.252012@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOECBHCAA.tim_one@email.msn.com>

[Tim, gripes about someone putting module init function names in
 both DL_IMPORT and DL_EXPORT macros]

[Fred Drake]
> That was me.

My IRC chat buddy Fred?  Well, can't get mad at *you*!

>> On Windows, that yields the result I (naively?) expected:
>> compiler warnings about inconsistent linkage declarations.

> Ouch.

Despite that-- as MarkH said later --these macros are as damnably confusing
as original sin, that one says "IMPORT" and the other "EXPORT" *may* have
been cause to guess they might not play well together when applied to a
single name.

> ...
>   Compiling with gcc using the -Wmissing-prototypes option causes a
> warning to be printed if there isn't a prototype at all:

Understood, and your goal is laudable.  I have a question, though:  *all*
module init functions use DL_EXPORT today, and just a few days ago *none* of
them used DL_IMPORT inside the file too.  So how come gcc only warned about
two modules?  Or does it actually warn about all of them, and you snuck this
change into pyexpat and parsermodule while primarily doing other things to
them?

> I can either change these to "normal" prototypes (no DL_xxPORT macros),
> DL_EXPORT prototypes,

I already checked that one in.

> or remove the prototypes completely, and we'll just have to ignore
> the warning.

No way.  "No warnings" is non-negotiable with me -- but since I no longer
get any warnings, I can pretend not to know that you get them under gcc
<wink>.

>   If you can write a few sentences explaining each of these macros and
> when they should be used, I'll make sure they land in the
> documentation.  ;)

I can't -- that's why I posted for help.  The design is currently
incomprehensible; e.g., from the PC config.h:

#ifdef USE_DL_IMPORT
#define DL_IMPORT(RTYPE) __declspec(dllimport) RTYPE
#endif
#ifdef USE_DL_EXPORT
#define DL_IMPORT(RTYPE) __declspec(dllexport) RTYPE
#define DL_EXPORT(RTYPE) __declspec(dllexport) RTYPE
#endif

So if you say "use import", the import macro does set up an import, but the
export macro is left undefined (turns out it's later set to an identity
expansion in Python.h, in that case).  But if you say "use export", both
import(!) and export macros are set up to do an export.  It's apparently
illegal to say "use both", but that has to be deduced from the compiler
error that *would* result from redefining the import macro in an
incompatible way.  And if you say neither, the trail snakes back to an
earlier blob of code, where "use import" is magically defined whenever "use
export" *isn't* -- but only if MS_NO_COREDLL is *not* defined.  And the test
of MS_NO_COREDLL is immediately preceded by the comment

    ... MS_NO_COREDLL (do not test this macro)

That covered one of the (I think) four sections in the now 750-line PC
config file that defines these things.  By the time I look at another config
file, my brain is gone.

MarkH is right:  we have to figure what these things are actually trying to
*accomplish*, then gut the code and spell whatever that is in a clear way.
Or, failing that, at least a documented way <wink>.





From tim_one at email.msn.com  Sat Aug 26 10:25:11 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 04:25:11 -0400
Subject: [Python-Dev] Fixing test_poll.py for me just broke it for you
Message-ID: <LNBBLJKPBEHFEDALKOLCKECCHCAA.tim_one@email.msn.com>

Here's the checkin comment.  See test/README for an expanded explanation if
the following isn't clear:


Another new test using "from test.test_support import ...", causing
subtle breakage on Windows (the test is skipped here, but the TestSkipped
exception wasn't recognized as such, because of duplicate copies of
test_support got loaded; so the test looks like a failure under Windows
instead of a skip).
Repaired the import, but

        THIS TEST *WILL* FAIL ON OTHER SYSTEMS NOW!

Again due to the duplicate copies of test_support, the checked-in
"expected output" file actually contains verbose-mode output.  I can't
generate the *correct* non-verbose output on my system.  So, somebody
please do that.
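The failure mode is easy to reproduce: load one source file under two different module names and its classes exist twice, so an exception raised from one copy slips past an except clause naming the other. A self-contained sketch (file and module names are invented):

```python
import importlib.util
import os
import tempfile

# Write a tiny stand-in for test_support defining an exception class.
src = "class TestSkipped(Exception):\n    pass\n"
path = os.path.join(tempfile.mkdtemp(), "fake_support.py")
with open(path, "w") as f:
    f.write(src)

def load_as(name):
    # Load the same file under the given module name.
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

copy_a = load_as("fake_support")        # plain import
copy_b = load_as("test.fake_support")   # package-qualified import

# Two loads, two distinct classes: isinstance checks across the
# copies fail, which is why the skip looked like a failure.
assert copy_a.TestSkipped is not copy_b.TestSkipped
assert not isinstance(copy_b.TestSkipped(), copy_a.TestSkipped)
```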





From mal at lemburg.com  Sat Aug 26 10:31:05 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 26 Aug 2000 10:31:05 +0200
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <200008260814.KAA06267@hera.informatik.uni-bonn.de>
Message-ID: <39A78048.DA793307@lemburg.com>

Markus Fleck wrote:
> 
> M.-A. Lemburg:
> > I've had a look at the c.l.p.a postings and the only special
> > header they include is the "Approved: fleck at informatik.uni-bonn.de"
> > header.
> 
> Basically, that's all it takes to post to a "moderated" newsgroup.
> (Talking about a case of "security by obscurity" here... :-/)
> Actually, the string following the "Approved: " may even be random...

Wow, so much for spam protection.
 
> In principle, I do have the time again to do daily moderation of incoming
> postings for c.l.py.a. Unfortunately, I currently lack the infrastructure
> (i.e. the moderation program), which went down together with the old
> starship. I was basically waiting for a version of Mailman that could be
> used to post to moderated newsgroups. (I should probably have been more
> vocal about that, or even should have started hacking Mailman myself... I
> *did* start to write something that would grab new announcements daily from
> Parnassus and post them to c.l.py.a, and I may even come to finish this in
> September, but that doesn't substitute for a "real" moderation tool for
> user-supplied postings. Also, it would probably be a lot easier for
> Parnassus postings to be built directly from the Parnassus database, instead
> from its [generated] HTML pages - the Parnassus author intended to supply
> such functionality, but I didn't hear from him yet, either.)
> 
> So what's needed now? Primarily, a Mailman installation that can post to
> moderated newsgroups (and maybe also do the mail2list gatewaying for
> c.l.py.a), and a mail alias that forwards mail for
> python-announce at python.org to that Mailman address. Some "daily digest"
> generator for Parnassus announcements would be nice to have, too, but
> that can only come once the other two things work.
> 
> Anyway, thanks for bringing this up again - it puts c.l.py.a at the
> top of my to-do list again (where it should be, of course ;-).

Barry has just installed a Mailman patch that allows gatewaying
to a moderated newsgroup.

He's also looking for volunteers to do the moderation. I guess
you should apply by sending Barry a private mail (see the
announcement on c.l.p.a ;-).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Sat Aug 26 11:56:20 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Sat, 26 Aug 2000 11:56:20 +0200
Subject: [Python-Dev] New dictionaries patch on SF
References: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>
Message-ID: <39A79444.D701EF84@lemburg.com>

"Fred L. Drake, Jr." wrote:
> 
>   I've been playing with dictionaries lately trying to stamp out a
> bug:
> 
> http://sourceforge.net/bugs/?func=detailbug&bug_id=112558&group_id=5470
> 
>   It looks like any fix that really works risks a fair bit of
> performance, and that's not good.  My best-effort fix so far is on
> SourceForge:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101277&group_id=5470
> 
> but doesn't quite work, according to Guido (I've not yet received
> instructions from him about how to reproduce the observed failure).

The solution to all this is not easy, since dictionaries can
effectively also be used *after* interpreter finalization (no
thread state). The current PyErr_* APIs all rely on having the
thread state available, so the dictionary implementation would
have to add an extra check for the thread state.

All this will considerably slow down the interpreter and then
only to solve a rare problem... perhaps we should reenable
passing back exceptions via PyDict_GetItem() instead ?!
This will slow down the interpreter too, but it'll at least
not cause the troubles with hacking the dictionary implementation
to handle exceptions during compares.
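The troublesome case is a key whose comparison raises in the middle of a lookup. The symptom can be shown from pure Python; in a current interpreter the exception simply propagates out of the subscript, whereas the 1.6-era code could misbehave:

```python
class ExplodingKey:
    # A key whose equality test raises, as a user-defined __cmp__ (or
    # __eq__) can do when two keys land in the same hash bucket.
    def __init__(self, h):
        self._h = h
    def __hash__(self):
        return self._h
    def __eq__(self, other):
        raise RuntimeError("comparison failed")

d = {ExplodingKey(42): "first"}     # insertion alone needs no compare
probe = ExplodingKey(42)            # equal hash forces a compare on lookup

try:
    d[probe]
    outcome = "no error"
except RuntimeError:
    outcome = "error during compare"
```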

>   None the less, performance is an issue for dictionaries, so I came
> up with the idea to use a specialized version for string keys.  When I
> saw how few of the dictionaries created by the regression test ever
> had anything else, I tried to simply make all dictionaries the
> specialized variety (they can degrade themselves as needed).  What I
> found was that just over 2% of the dictionaries created by running the
> regression test ever held any non-string keys; this may be very
> different for "real" programs, but I'm curious about how different.
>   I've also done *no* performance testing on my patch for this yet,
> and don't expect it to be a big boost without something like the bug
> fix I mentioned above, but I could be wrong.  If anyone would like to
> play with the idea, I've posted my current patch at:
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101309&group_id=5470

I very much like the idea of having a customizable lookup
method for builtin dicts.

This would allow using more specific lookup function for
different tasks (it would even be possible switching the
lookup functions at run-time via a new dict method), e.g.
one could think of optimizing string lookups using a
predefined set of slots or by ensuring that the stored
keys map 1-1, using an additional hash value modifier
which is automatically tuned to maintain this property. This
would probably greatly speed up lookups for both successful and
failing searches.

We could also add special lookup functions for keys
which are known not to raise exceptions during compares
(which is probably what motivated your patch, right ?)
and then fall back to a complicated and slow variant
for the general case.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From moshez at math.huji.ac.il  Sat Aug 26 12:01:40 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Sat, 26 Aug 2000 13:01:40 +0300 (IDT)
Subject: [Python-Dev] Fixing test_poll.py for me just broke it for you
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKECCHCAA.tim_one@email.msn.com>
Message-ID: <Pine.GSO.4.10.10008261301090.20214-100000@sundial>

On Sat, 26 Aug 2000, Tim Peters wrote:

> Again due to the duplicate copies of test_support, the checked-in
> "expected output" file actually contains verbose-mode output.  I can't
> generate the *correct* non-verbose output on my system.  So, somebody
> please do that.

Done.

--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Sat Aug 26 12:27:48 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 26 Aug 2000 12:27:48 +0200
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
In-Reply-To: <39A78048.DA793307@lemburg.com>; from mal@lemburg.com on Sat, Aug 26, 2000 at 10:31:05AM +0200
References: <200008260814.KAA06267@hera.informatik.uni-bonn.de> <39A78048.DA793307@lemburg.com>
Message-ID: <20000826122748.M16377@xs4all.nl>

On Sat, Aug 26, 2000 at 10:31:05AM +0200, M.-A. Lemburg wrote:
> Markus Fleck wrote:
> > > I've had a look at the c.l.p.a postings and the only special
> > > header they include is the "Approved: fleck at informatik.uni-bonn.de"
> > > header.

> > Basically, that's all it takes to post to a "moderated" newsgroup.
> > (Talking about a case of "security by obscurity" here... :-/)
> > Actually, the string following the "Approved: " may even be random...

Yes, it can be completely random. We're talking about USENET here, it wasn't
designed for complicated procedures :-)

> Wow, so much for spam protection.

Well, we have a couple of 'moderated' lists locally, and I haven't, in 5
years, seen anyone fake an Approved: header. Of course, the penalty of doing
so would be severe, but we haven't even had to warn anyone, either, so how
could they know that ? :)

I also think most news-administrators are quite uhm, strict, in that kind of
thing. If any of our clients were found faking Approved: headers, they'd get
a not-very-friendly warning. If they do it a second time, they lose their
account. The news administrators I talked with at SANE2000 (sysadmin
conference) definitely shared the same attitude. This isn't email, with
arbitrary headers and open relays and such, this is usenet, where you have
to have a fair bit of clue to keep your newsserver up and running :)

And up to now, spammers have been either too dumb or too smart to figure out
how to post to moderated newsgroups... I hope that if anyone ever does, the
punishment will be severe enough to scare away the rest ;P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Sat Aug 26 13:48:59 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sat, 26 Aug 2000 06:48:59 -0500
Subject: [Python-Dev] Python 1.6 bug fix strategy
In-Reply-To: Your message of "Fri, 25 Aug 2000 19:16:00 -0400."
             <LNBBLJKPBEHFEDALKOLCOEBBHCAA.tim_one@email.msn.com> 
References: <LNBBLJKPBEHFEDALKOLCOEBBHCAA.tim_one@email.msn.com> 
Message-ID: <200008261148.GAA07398@cj20424-a.reston1.va.home.com>

> [Jeremy Hylton]
> > We have gotten several bug reports recently based on 1.6b1.  What
> > plans, if any, are there to fix these bugs before the 1.6 final
> > release?
> 
> My understanding is that 1.6final is done, except for plugging in a license;
> i.e., too late even for bugfixes.  If true, "Fixed in 2.0" will soon be a
> popular response to all sorts of things -- unless CNRI intends to do its own
> work on 1.6.

Applying the fix for writelines is easy, and I'll take care of it.

The other patch that jeremy mentioned
(http://sourceforge.net/bugs/?group_id=5470&func=detailbug&bug_id=111403)
has no fix that I know of, is not easily reproduced, and was only
spotted in embedded code, so it might be the submitter's fault.
Without a reproducible test case it's unlikely to get fixed, so I'll
let that one go.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Sat Aug 26 17:11:12 2000
From: skip at mojam.com (Skip Montanaro)
Date: Sat, 26 Aug 2000 10:11:12 -0500 (CDT)
Subject: [Python-Dev] Is Python moving too fast? (was Re: Is python commercializationazing? ...)
In-Reply-To: <8o8101020mk@news1.newsguy.com>
References: <Pine.GSO.4.10.10008251845380.13902-100000@sundial>
	<8o66m9$cmn$1@slb3.atl.mindspring.net>
	<slrn8qdfq2.2ko.thor@localhost.localdomain>
	<39A6B447.3AFC880E@seebelow.org>
	<8o8101020mk@news1.newsguy.com>
Message-ID: <14759.56848.238001.346327@beluga.mojam.com>

    Alex> When I told people that the 1.5.2 release I was using, the latest
    Alex> one, had been 100% stable for over a year, I saw lights of wistful
    Alex> desire lighting in their eyes (at least as soon as they understood
    Alex> that here, for once, 'stable' did NOT mean 'dead':-)....  Oh well,
    Alex> it was nice while it lasted; now, the perception of Python will
    Alex> switch back from "magically stable and sound beyond ordinary
    Alex> mortals' parameters" to "quite ready to change core language for
    Alex> the sake of a marginal and debatable minor gain", i.e., "just
    Alex> another neat thing off the net".

I began using Python in early 1994, probably around version 1.0.1.  In the
intervening 6+ years, Python has had what I consider to be five significant
releases: 1.1 (10/11/94), 1.2 (4/10/95), 1.3 (10/8/95), 1.4 (10/25/96) and
1.5 (12/31/97).  (1.5.1 was released 4/13/98 and 1.5.2 was released
4/13/99).  So, while it's been a bit over a year since 1.5.2 was released,
Python really hasn't changed much in over 2.5 years. Guido and his core team
have been very good at maintaining backward compatibility while improving
language features and performance and keeping the language accessible to new
users.

We are now in the midst of several significant changes to the Python
development environment.  From my perspective as a friendly outsider, here's
what I see:

    1.  For the first time in its 10+ year history, the language actually
        has a team of programmers led by Guido whose full-time job is to
        work on the language.  To the best of my knowledge, Guido's work at
        CNRI and CWI focused on other stuff, to which Python was applied as
        one of the tools.  The same observation can be made about the rest
        of the core PythonLabs team: Tim, Barry, Fred & Jeremy.  All had
        other duties at their previous positions.  Python was an important
        tool in what they did, but it wasn't what they got measured by in
        yearly performance reviews.

    2.  For the first time in its history, a secondary development team has
        surfaced in a highly visible and productive way, thanks to the
        migration to the SourceForge CVS repository.  Many of those people
        have been adding new ideas and code to the language all along, but
        the channel between their ideas and the core distribution was a very
        narrow one.  In the past, only the people at CNRI (and before that,
        CWI) could make direct changes to the source code repository.  In
        fact, I believe Guido used to be the sole filter of every new
        contribution to the tree.  Everything had to pass his eyeballs at
        some point.  That was a natural rate limiter on the pace of change,
        but I believe it probably also filtered out some very good ideas.

	While the SourceForge tools aren't perfect, their patch manager and
	bug tracking system, coupled with the externally accessible CVS
	repository, make it much easier for people to submit changes and for
	developers to manage those changes.  At the moment, browsing the
	patch manager with all options set to "any" shows 22 patches,
	submitted by 11 different people, which have been assigned to 9
	different people (there is a lot of overlap between the gang of 9 and
	the gang of 11).  That amount of parallelism in the development just
	wasn't possible before.

    3.  Python is now housed in a company formed to foster open source
        software development.  I won't pretend I understand all the
        implications of that move beyond the obvious reasons stated in item
        one, but there is bound to be some desire by BeOpen to put their
        stamp on the language.  I believe that there are key changes to the
        language that would not have made it into 2.0 had the license
        wrangling between CNRI and BeOpen not dragged out as long as it did.
        Those of us involved as active developers took advantage of that
        lull.  (I say "we", because I was a part of that.  I pushed Greg
        Ewing's original list comprehensions prototype along when the
        opportunity arose.)

    4.  Python's user and programmer base has grown dramatically in the past
        several years.  While it's not possible to actually measure the size
        of the user community, you can get an idea of its growth by looking
        at the increase in list traffic.  Taking a peek at the posting
        numbers at

            http://www.egroups.com/group/python-list

        is instructive.  In January of 1994 there were 76 posts to the list.
        In January of 2000 that number grew to 2678.  (That's with much less
        relative participation today by the core developers than in 1994.)

        In January of 1994 I believe the python-list at cwi.nl (with a possible
        Usenet gateway) was the only available discussion forum about
        Python.  Egroups lists 45 Python-related lists today (I took their
        word for it - they may stretch things a bit).  There are at least
        three (maybe four) distinct dialects of the language as well, not to
        mention the significant growth in supported platforms in the past
        six years.

All this adds up to a system that is due for some significant change.  Those
of us currently involved are still getting used to the new system, so
perhaps things are moving a bit faster than if we were completely familiar
with this environment.  Many of the things that are new in 2.0 have been
proposed on the list off and on for a long time.  Unicode support, list
comprehensions, augmented assignment and extensions to the print statement
come to mind.  They are not new ideas tossed in with a beer chaser (like
"<blink>").  From the traffic on python-dev about Unicode support, I believe
it was the most challenging thing to add to the language.  By comparison,
the other three items I mentioned above were relatively simple concepts to
grasp and implement.

All these ideas were proposed to the community in the past, but have only
recently gained their own voice (so to speak) with the restructuring of the
development environment and growth in the base of active developers.

This broadening of the channel between the development community and the CVS
repository will obviously take some getting used to.  Once 2.0 is out, I
don't expect this (relatively) furious pace to continue.

-- 
Skip Montanaro (skip at mojam.com)
http://www.mojam.com/
http://www.musi-cal.com/

[Completely unrelated aside: I've never voiced an opinion - pro or con -
about the new print syntax, either on python-list or python-dev.  This will
be my only observation.

I have used the following print statement format for several years when I
wanted to insert some temporary debugging statements that I knew I would
later remove or comment out:

    print ">>", this, that, and, the, other, stuff

because it would make it easier to locate them with a text editor.  (Right
shift, while a very useful construct, is hardly common in my programming.)
Now, I'm happy to say, I will no longer have to quote the ">>" and it will
be easier to get the output to go to sys.stderr...]
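[Editorially speaking, the redirection Skip is looking forward to is spelled `print >> sys.stderr, ...` in 2.0; a sketch of the same thing in the later function form, with a StringIO standing in for sys.stderr so the output can be inspected:

```python
# Sketch: the 2.0 statement  print >> sys.stderr, ">>", this, that
# redirects output to a file-like object.  In the function form, the
# same redirection is the file= argument; a StringIO stands in for
# sys.stderr here so the result can be checked.
import io

this, that = "spam", "eggs"
buf = io.StringIO()
print(">>", this, that, file=buf)
buf.getvalue()   # '>> spam eggs\n'
```

Note the ">>" marker Skip greps for is now just another argument, no quoting games required.]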



From effbot at telia.com  Sat Aug 26 17:31:54 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 26 Aug 2000 17:31:54 +0200
Subject: [Python-Dev] Bug #112265: Tkinter seems to treat everything as Latin 1
Message-ID: <001801c00f72$c72d5860$f2a6b5d4@hagrid>

summary: Tkinter passes 8-bit strings to Tk without any
preprocessing.  Tk itself expects UTF-8, but passes bogus
UTF-8 data right through...  or in other words, Tkinter
treats any 8-bit string that doesn't contain valid UTF-8
as an ISO Latin 1 string...
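the fallback can be sketched like this (a rough model of what Tk ends
up doing, not Tk's actual code):

```python
# Sketch of the fallback described above: try UTF-8 first, and if the
# bytes are not valid UTF-8, silently reinterpret them as Latin-1
# (every byte value is a valid Latin-1 character, so this never fails).
def tk_style_decode(data):
    try:
        return data.decode("utf-8")
    except UnicodeDecodeError:
        return data.decode("latin-1")

tk_style_decode(b"\xe9")   # lone 0xE9 is not UTF-8, so: Latin-1 u'\xe9'
```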

:::

maybe Tkinter should raise a UnicodeError instead (just
like string comparisons etc).  example:

    w = Label(text="<cp1250 string>")
    UnicodeError: ASCII decoding error: ordinal not in range(128)

this will break existing code, but I think that's better than
confusing the hell out of anyone working on a non-Latin-1
platform...

+0 from myself -- there's no way we can get a +1 solution
(source encoding) into 2.0 without delaying the release...

:::

for some more background, see the bug report below, and
my followup.

</F>

---

Summary: Impossible to get Win32 default font
encoding in widgets

Details: I did not manage to obtain correct font
encoding in widgets on Win32 (NT Workstation,
Polish version, default encoding cp1250). All cp1250
Polish characters were displayed incorrectly. I think,
all characters that do not belong to Latin-1 will be
displayed incorrectly. Regarding Python1.6b1, I
checked the Tcl/Tk installation (8.3.2). The pure
Tcl/Tk programs DO display characters in cp1250
correctly.

As far as I know, the Tcl interpreter works with
UTF-8 encoded strings. Does Python1.6b1 really
know about it?

---

Follow-Ups:

Date: 2000-Aug-26 08:04
By: effbot

Comment:
this is really a "how do I", rather than a bug
report ;-)

:::

In 1.6 and beyond, Python's default 8-bit
encoding is plain ASCII.  this encoding is only
used when you're using 8-bit strings in "unicode
contexts" -- for example, if you compare an
8-bit string to a unicode string, or pass it to
a subsystem designed to use unicode strings.

If you pass an 8-bit string containing
characters outside the ASCII range to a function
expecting a unicode string, the result is
undefined (it usually results in an exception,
but some subsystems may have other ideas).

Finally, Tkinter now supports Unicode.  In fact,
it assumes that all strings passed to it are
Unicode.  When using 8-bit strings, it's only
safe to use plain ASCII.

Tkinter currently doesn't raise exceptions for
8-bit strings with non-ASCII characters, but it
probably should.  Otherwise, Tk will attempt to
parse the string as a UTF-8 string, and if that
fails, it assumes ISO-8859-1.

:::

Anyway, to write portable code using characters
outside the ASCII character set, you should use
unicode strings.

in your case, you can use:

   s = unicode("<a cp1250 string>", "cp1250")

to get the platform's default encoding, you can do:

   import locale
   language, encoding = locale.getdefaultlocale()

where encoding should be "cp1250" on your box.

:::

The reason this works under Tcl/Tk is that Tcl
assumes that your source code uses the
platform's default encoding, and converts things
to Unicode (not necessarily UTF-8) for you under
the hood.  Python 2.1 will hopefully support
*explicit* source encodings, but 1.6/2.0
doesn't.

-------------------------------------------------------

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=112265&group_id=5470




From effbot at telia.com  Sat Aug 26 17:43:38 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sat, 26 Aug 2000 17:43:38 +0200
Subject: [Python-Dev] Bug #112265: Tkinter seems to treat everything as Latin 1
References: <001801c00f72$c72d5860$f2a6b5d4@hagrid>
Message-ID: <002401c00f74$6896a520$f2a6b5d4@hagrid>

>     UnicodeError: ASCII decoding error: ordinal not in range(128)

btw, what the heck is an "ordinal"?

(let's see: it's probably not "a book of rites for the ordination of
deacons, priests, and bishops".  how about an "ordinal number"?
that is, "a number designating the place (as first, second, or third)
occupied by an item in an ordered sequence".  hmm.  does this
mean that I cannot use strings longer than 128 characters?  but
this string was only 12 characters long.  wait, there's another
definition here: "a number assigned to an ordered set that de-
signates both the order of its elements and its cardinal number".
hmm.  what's a "cardinal"?  "a high ecclesiastical official of the
Roman Catholic Church who ranks next below the pope and is
appointed by him to assist him as a member of the college of
cardinals"?  ... oh, here it is: "a number (as 1, 5, 15) that is
used in simple counting and that indicates how many elements
there are in an assemblage".  "assemblage"?)

:::

wouldn't "character" be easier to grok for mere mortals?

...and isn't "range(128)" overly cute?

:::

how about:

UnicodeError: ASCII decoding error: character not in range 0-127

</F>




From tim_one at email.msn.com  Sat Aug 26 22:45:27 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 16:45:27 -0400
Subject: [Python-Dev] test_gettext fails on Windows
Message-ID: <LNBBLJKPBEHFEDALKOLCOEDCHCAA.tim_one@email.msn.com>

Don't know whether this is unique to Win98.

test test_gettext failed -- Writing: 'mullusk', expected: 'bacon\012T'

Here's -v output:

test_gettext
installing gettext
calling bindtextdomain with localedir .
.
None
gettext
gettext
mullusk
.py 1.1
 Throatwobble
nudge nudge
mullusk
.py 1.1
 Throatwobble
nudge nudge
mullusk
.py 1.1
 Throatwobble
nudge nudge
mullusk
.py 1.1
 Throatwobble
nudge nudge
This module provides internationalization and localization
support for your Python programs by providing an interface to the GNU
gettext message catalog library.
nudge nudge
1
nudge nudge

Has almost nothing in common with the expected output!





From tim_one at email.msn.com  Sat Aug 26 22:59:42 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 16:59:42 -0400
Subject: [Python-Dev] test_gettext fails on Windows
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEDCHCAA.tim_one@email.msn.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEDFHCAA.tim_one@email.msn.com>

> ...
> Has almost nothing in common with the expected output!

OK, I understand this now:  the setup function opens a binary file for
writing but neglected to *say* it was binary in the "open".  Huge no-no for
portability.  About to check in the fix.
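The pitfall is easy to demonstrate (a sketch, not the actual
test_gettext fix):

```python
# Sketch of the portability pitfall: on Windows, text mode translates
# '\n' to '\r\n' on write, corrupting binary data.  Opening with the
# 'b' flag makes the bytes round-trip unchanged on every platform.
import os
import tempfile

payload = b"\x00\n\x01"
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:       # 'wb', not 'w': no newline mangling
    f.write(payload)
with open(path, "rb") as f:
    data = f.read()
os.remove(path)
```

Without the 'b' flag the same round-trip yields b"\x00\r\n\x01" on
Windows, which is exactly the kind of mismatch the expected-output
file tripped over.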






From thomas at xs4all.net  Sat Aug 26 23:12:31 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sat, 26 Aug 2000 23:12:31 +0200
Subject: [Python-Dev] cPickle
Message-ID: <20000826231231.P16377@xs4all.nl>

I just noticed that test_cpickle makes Python crash (with a segmentation
fault) when there is no copy_reg. The funny bit is this:

centurion:~ > ./python Lib/test/regrtest.py test_cpickle
test_cpickle
test test_cpickle skipped --  No module named copy_reg
1 test skipped: test_cpickle

centurion:~ > ./python Lib/test/regrtest.py test_cookie test_cpickle
test_cookie
test test_cookie skipped --  No module named copy_reg
test_cpickle
Segmentation fault (core dumped)

I suspect there is a bug in the import code, in the case of failed imports. 

Holmes-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Sat Aug 26 23:14:37 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sat, 26 Aug 2000 17:14:37 -0400
Subject: [Python-Dev] Bug #112265: Tkinter seems to treat everything as Latin 1
In-Reply-To: <002401c00f74$6896a520$f2a6b5d4@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEDFHCAA.tim_one@email.msn.com>

>>     UnicodeError: ASCII decoding error: ordinal not in range(128)

> btw, what the heck is an "ordinal"?

It's a technical term <wink>.  But it's used consistently in Python, e.g.,
that's where the name of the builtin ord function comes from!

>>> print ord.__doc__
ord(c) -> integer

Return the integer ordinal of a one character string.
>>>

> ...
> how about an "ordinal number"?  that is, "a number designating the
> place (as first, second, or third) occupied by an item in an
> ordered sequence".

Exactly.  Each character has an arbitrary but fixed position in an arbitrary
but ordered sequence of all characters.  This isn't hard.

> wouldn't "character" be easier to grok for mere mortals?

Doubt it -- they're already confused about the need to distinguish between a
character and its encoding, and the *character* is most certainly not "in"
or "out" of any range of integers.

> ...and isn't "range(128)" overly cute?

Yes.

> UnicodeError: ASCII decoding error: character not in range 0-127

As above, it makes no sense.  How about compromising on

> UnicodeError: ASCII decoding error: ord(character) > 127

?





From tim_one at email.msn.com  Sun Aug 27 11:57:42 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 27 Aug 2000 05:57:42 -0400
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: <20000825141623.G17277@ludwig.cnri.reston.va.us>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com>

[Greg Ward]
> ...yow: the Perl community is really going overboard in proposing
> enhancements:
> ...
>    4. http://dev.perl.org/rfc/

Following that URL is highly recommended!  There's a real burst of
creativity blooming there, and everyone weary of repeated Python debates
should find it refreshing to discover exactly the same arguments going on
over there (lazy lists, curried functions, less syntax, more syntax, less
explicit, more explicit, go away this isn't stinking LISP, ya but maybe it
oughta be, yadda yadda yadda).  Except the *terms* of the debate are
inverted in so many ways!  For example, this is my favorite Killer Appeal to
Principle so far:

    Perl is really hard for a machine to parse.  *Deliberately*.  If
    you think it shouldn't be, you're missing something.

Certainly a good antidote to Python inbreeding <wink>.

Compared to our PEPs, the Perl RFCs are more a collection of wishlists --
implementation details are often sketchy, or even ignored.  But they're in a
brainstorming mode, so I believe that's both expected & encouraged now.

I was surprised by how often Python gets mentioned, and sometimes by how
confusedly.  For example, in the Perl Coroutines RFC:

    Unlike coroutines as defined by Knuth, and implemented in laguages
    such as Simula or Python, perl does not have an explicit "resume"
    call for invoking coroutines.

Mistake -- or Guido's time machine <wink>?

Those who hate Python PEP 214 should check out Perl RFC 39, which proposes
to introduce

    ">" LIST "<"

as a synonym for

    "print" LIST

My favorite example:

    perl -e '><><' # cat(1)

while, of course

    ><;

prints the current value of $_.

I happen to like Perl enough that I enjoy this stuff.  You may wish to take
a different lesson from it <wink>.

whichever-it's-a-mistake-to-ignore-people-having-fun-ly y'rs  - tim





From tim_one at email.msn.com  Sun Aug 27 13:13:35 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 27 Aug 2000 07:13:35 -0400
Subject: [Python-Dev] Is Python moving too fast? (was Re: Is python commercializationazing? ...)
In-Reply-To: <14759.56848.238001.346327@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEFHCAA.tim_one@email.msn.com>

[Skip Montanaro]
> I began using Python in early 1994, probably around version 1.0.1.

And it's always good to hear a newcomer's perspective <wink>.  Seriously, it
was a wonderful sane sketch of what's been happening lately.  Some glosses:

> ...
> From my perspective as a friendly outsider, ...

Nobody fall for that ingratiating ploy:  Skip's a Python Developer at
SourceForge too.  And glad to have him there!

> ...
>     3.  Python is now housed in a company formed to foster open source
>         software development.  I won't pretend I understand all the
>         implications of that move ... but there is bound to be some
>         desire by BeOpen to put their stamp on the language.

There is more desire on BeOpen's part-- at least at first --to just pay our
salaries.  Many people have asked for language or library changes or
enhancements in the past based on demanding real-world needs, but until very
recently the only possible response was "huh -- send in a patch, and maybe
Guido will check it in".  Despite the high approachability of Python's
implementation, often that's just too much of a task for the people seeking
solutions.  But if they want it enough to pay for it, or aren't even sure
exactly what they need, they can hire us to do it now (shameless plug:
mailto:pythonlabs-info at beopen.com).  I doubt there's any team better
qualified, and while I've been a paid prostitute my whole career, you can
still trust Guido to keep us honest <wink>.  For example, that's how
Python's Unicode features got developed (although at CNRI).

> I believe that there are key changes to the language that would not
> have made it into 2.0 had the license wrangling between CNRI and
> BeOpen not dragged out as long as it did.

Absolutely.  You may <snort> have missed some of the endless posts on this
topic:  we were *going* to release 2.0b1 on July 1st.  I was at Guido's
house late the night before, everything was cooking, and we were mere hours
away from uploading the 2.0b1 tarball for release.  Then CNRI pulled the
plug in an email, and we've been trying to get it back into the outlet ever
since.  When it became clear that things weren't going to settle at once,
and that we needed to produce a 1.6 release too with *only* the stuff
developed under CNRI's tenure, that left us twiddling our thumbs.  There
were a pile of cool (but, as you said later, old!) ideas Guido wanted to get
in anyway, so he opened the door.  Had things turned out as we *hoped*, they
would have gone into 2.1 instead, and that's all there was to that.

> ...
> All this adds up to a system that is due for some significant change.

Sure does.  But it's working great so far, so don't jinx it by questioning
*anything* <wink>.

> ...
> Once 2.0 is out, I don't expect this (relatively) furious pace to
> continue.

I suspect it will continue-- maybe even accelerate --but *shift*.  We're
fast running out of *any* feasible (before P3K) "core language" idea that
Guido has ever had a liking for, so I expect the core language changes to
slow waaaaay down again.  The libraries may be a different story, though.
For example, there are lots of GUIs out there, and Tk isn't everyone's
favorite yet remains especially favored in the std distribution; Python is
being used in new areas where it's currently harder to use than it should be
(e.g., deeply embedded systems); some of the web-related modules could
certainly stand a major boost in consistency, functionality and ease-of-use;
and fill in the blank _________.  There are infrastructure issues too, like
what to do on top of Distutils to make it at least as useful as CPAN.  Etc
etc etc ... there's a *ton* of stuff to be done beyond fiddling with the
language per se.  I won't be happy until there's a Python in every toaster
<wink>.

although-*perhaps*-light-bulbs-don't-really-need-it-ly y'rs  - tim





From thomas at xs4all.net  Sun Aug 27 13:42:28 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 27 Aug 2000 13:42:28 +0200
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Sun, Aug 27, 2000 at 05:57:42AM -0400
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com>
Message-ID: <20000827134228.A500@xs4all.nl>

On Sun, Aug 27, 2000 at 05:57:42AM -0400, Tim Peters wrote:
> [Greg Ward]
> > ...yow: the Perl community is really going overboard in proposing
> > enhancements:
> > ...
> >    4. http://dev.perl.org/rfc/

> Following that URL is highly recommended!

Indeed. Thanx for pointing it out again (and Greg, too), I've had a barrel
of laughs (and good impressions, both) already :)

> I was surprised by how often Python gets mentioned, and somtimes by how
> confusedly.

Well, 'python' is mentioned explicitly 12 times, in 7 different RFCs.
There'll be some implicit ones, of course, but it's not as much as I would
have expected, based on how many times I hear my perl-hugging colleague
comment on how cool a particular Python feature is ;)

> For example, in the Perl Coroutines RFC:
> 
>     Unlike coroutines as defined by Knuth, and implemented in laguages
>     such as Simula or Python, perl does not have an explicit "resume"
>     call for invoking coroutines.
> 
> Mistake -- or Guido's time machine <wink>?

Neither. Someone else's time machine, as the URL given in the references
section shows: they're not talking about coroutines in the core, but as
'addon'. And not necessarily as stackless, either, there are a couple of
implementations.

(Other than that I don't like the Perl coroutine proposal: I think
single process coroutines make a lot more sense, though I can see why they
are arguing for such an 'i/o-based' model.)

My personal favorite, up to now, is RFC 28: Perl should stay Perl. Anyone
upset by the new print statement should definitely read it ;) The other RFCs
going "don't change *that*" are good too, showing that not everyone is
losing themselves in wishes ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From effbot at telia.com  Sun Aug 27 17:20:08 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Sun, 27 Aug 2000 17:20:08 +0200
Subject: [Python-Dev] If you thought there were too many PEPs...
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com> <20000827134228.A500@xs4all.nl>
Message-ID: <000901c0103a$4a48b380$f2a6b5d4@hagrid>

thomas wrote:
> My personal favorite, up to now, is RFC 28: Perl should stay Perl.

number 29 is also a good one: don't ever add an alias
for "unlink" (written by someone who has never ever
read the POSIX or ANSI C standards ;-)

:::

btw, Python's remove/unlink implementation is slightly
broken -- they both map to unlink, but that's not the
right way to do it:

from SUSv2:

    int remove(const char *path);

    If path does not name a directory, remove(path)
    is equivalent to unlink(path). 

    If path names a directory, remove(path) is
    equivalent to rmdir(path). 

should I fix this?
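the current behaviour is easy to verify (a sketch; os.remove goes
through unlink(2) here, so it refuses directories):

```python
# Sketch: Python's os.remove maps to unlink(2), so unlike SUSv2's C
# remove(), it cannot delete a directory -- only rmdir can do that.
import os
import tempfile

d = tempfile.mkdtemp()
try:
    os.remove(d)                  # unlink(2) fails on a directory
    removed_by_remove = True
except OSError:
    removed_by_remove = False
    os.rmdir(d)                   # the rmdir(2) branch SUSv2 describes

removed_by_remove   # False: os.remove doesn't follow SUSv2 remove()
```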

</F>




From guido at beopen.com  Sun Aug 27 20:28:46 2000
From: guido at beopen.com (Guido van Rossum)
Date: Sun, 27 Aug 2000 13:28:46 -0500
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: Your message of "Sun, 27 Aug 2000 17:20:08 +0200."
             <000901c0103a$4a48b380$f2a6b5d4@hagrid> 
References: <20000825141623.G17277@ludwig.cnri.reston.va.us> <LNBBLJKPBEHFEDALKOLCMEECHCAA.tim_one@email.msn.com> <20000827134228.A500@xs4all.nl>  
            <000901c0103a$4a48b380$f2a6b5d4@hagrid> 
Message-ID: <200008271828.NAA14847@cj20424-a.reston1.va.home.com>

> btw, Python's remove/unlink implementation is slightly
> broken -- they both map to unlink, but that's not the
> right way to do it:
> 
> from SUSv2:
> 
>     int remove(const char *path);
> 
>     If path does not name a directory, remove(path)
>     is equivalent to unlink(path). 
> 
>     If path names a directory, remove(path) is
>     equivalent to rmdir(path). 
> 
> should I fix this?

That's a new one -- didn't exist when I learned Unix.

I guess we can fix this in 2.1.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From dgoodger at bigfoot.com  Sun Aug 27 21:27:22 2000
From: dgoodger at bigfoot.com (David Goodger)
Date: Sun, 27 Aug 2000 15:27:22 -0400
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <39A68B42.4E3F8A3D@lemburg.com>
References: <39A68B42.4E3F8A3D@lemburg.com>
Message-ID: <B5CEE3D9.81F2%dgoodger@bigfoot.com>

Some comments:

1. I think the idea of attribute docstrings is a great one. It would assist
in the auto-documenting of code immeasurably.

2. I second Frank Niessink (frankn=nuws at cs.vu.nl), who wrote:

> wouldn't the naming
> scheme <attributename>.__doc__ be a better one?
> 
> So if:
> 
> class C:
>   a = 1
>   """Description of a."""
> 
> then:
> 
> C.a.__doc__ == "Description of a."

'C.a.__doc__' is far more natural and Pythonic than 'C.__doc__a__'. The
latter would also require ugly tricks to access.

3. However, what would happen to C.a.__doc__ (or C.__doc__a__ for that
matter) when attribute 'a' is reassigned? For example:

    class C:
        a = 1  # class attribute, default value for instance attribute
        """Description of a."""

        def __init__(self, arg=None):
            if arg is not None:
                self.a = arg  # instance attribute
            self.b = []
            """Description of b."""

    instance = C(2)

What would instance.a.__doc__ (instance.__doc__a__) be? Would the __doc__ be
wiped out by the reassignment, or magically remain unless overridden?

4. How about instance attributes that are never class attributes? Like
'instance.b' in the example above?

5. Since docstrings "belong" to the attribute preceding them, wouldn't it
be more Pythonic to write:

    class C:
        a = 1
            """Description of a."""

? (In case of mail viewer problems, each line above is indented relative to
the one before.) This emphasizes the relationship between the docstring and
the attribute. Of course, such an approach may entail a more complicated
modification to the Python source, but it would also be more complete IMHO.

6. Instead of mangling names, how about an alternative approach? Each class,
instance, module, and function gets a single special name (call it
'__docs__' for now), a dictionary of attribute-name to docstring mappings.
__docs__ would be the docstring equivalent to __dict__. These dictionary
entries would not be affected by reassignment unless a new docstring is
specified. So, in the example from (3) above, we would have:

    >>> instance.__docs__
    {'b': 'Description of b.'}
    >>> C.__docs__
    {'a': 'Description of a.'}

Just as there is a built-in function 'dir' to apply inheritance rules to
instance.__dict__, there would have to be a function 'docs' to apply
inheritance to instance.__docs__:

    >>> docs(instance)
    {'a': 'Description of a.', 'b': 'Description of b.'}

There are repercussions here. A module containing the example from (3) above
would have a __docs__ dictionary containing mappings for docstrings for each
top-level class and function defined, in addition to docstrings for each
global variable.
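Sketched in code, an inheritance-aware 'docs' function along these lines
might look as follows. (A hypothetical sketch only: no Python generates a
'__docs__' mapping, and the sketch leans on the modern '__mro__' attribute,
which postdates this discussion.)

```python
def docs(obj):
    """Merge the proposed __docs__ mappings along the class hierarchy,
    least-derived first, so derived-class docstrings override inherited
    ones -- mirroring how 'dir' applies inheritance to __dict__."""
    klass = obj if isinstance(obj, type) else type(obj)
    result = {}
    for base in reversed(klass.__mro__):
        # Look only in each class's own __dict__ to visit every mapping once.
        result.update(base.__dict__.get('__docs__', {}))
    # Instance-level docstrings (e.g. for attributes set in __init__)
    # would override class-level ones.
    result.update(getattr(obj, '__dict__', {}).get('__docs__', {}))
    return result
```

With C and its subclass carrying '__docs__' mappings as in the example
above, docs(instance) would yield the combined dictionary.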


In conclusion, although this proposal has great promise, it still needs
work. If it is to be done at all, better to do it right.

This could be the first true test of the PEP system: getting input from the
Python user community as well as the core PythonLabs and Python-Dev groups.
Other PEPs have been either after-the-fact or, in the case of those features
approved for inclusion in Python 2.0, too rushed for a significant
discussion.

-- 
David Goodger    dgoodger at bigfoot.com    Open-source projects:
 - The Go Tools Project: http://gotools.sourceforge.net
 (more to come!)




From thomas at xs4all.net  Mon Aug 28 01:16:24 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 01:16:24 +0200
Subject: [Python-Dev] Python keywords
Message-ID: <20000828011624.E500@xs4all.nl>


Mark, (and the rest of python-dev)

There was a thread here a few weeks ago (or so, I seem to have misplaced
that particular thread :P) about using Python keywords as identifiers in
some cases. You needed that ability for .NET-Python, where the specs say any
identifier should be possible as methods and attributes, and there were some
comments on the list on how to do that (by Guido, for one.)

Well, the attached patch sort-of does that. I tried making it a bit nicer,
but that involved editing all places that currently use the NAME-type node,
and most of those don't advertise that they're doing that :-S The attached
patch is in no way nice, but it does work:

>>> class X:
...     def print(self, x):
...             print "printing", x
... 
>>> x = X()
>>> x.print(1)
printing 1
>>> x.print
<method X.print of X instance at 0x8207fc4>
>>> x.assert = 1
>>>

However, it also allows this at the top level, currently:
>>> def print(x):
...     print "printing", x
... 

which results in some unexpected behaviour:
>>> print(1)
1
>>> globals()['print'](1)
printing 1

But when combining it with modules, it does work as expected, of course:

# printer.py:
def print(x, y):
        print "printing", x, "and", y
#

>>> import printer
>>> printer.print
<function print at 0x824120c>
>>> printer.print(1, 2)
printing 1 and 2

Another plus-side of this particular method is that it's simple and
straightforward, if a bit maintenance-intensive :-) But the big question is:
is this enough for what you need ? Or do you need the ability to use
keywords in *all* identifiers, including variable names and such ? Because
that is quite a bit harder ;-P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
-------------- next part --------------
Index: Grammar/Grammar
===================================================================
RCS file: /cvsroot/python/python/dist/src/Grammar/Grammar,v
retrieving revision 1.41
diff -c -r1.41 Grammar
*** Grammar/Grammar	2000/08/24 20:11:30	1.41
--- Grammar/Grammar	2000/08/27 23:15:53
***************
*** 19,24 ****
--- 19,28 ----
  #diagram:output\textwidth 20.04cm\oddsidemargin  0.0cm\evensidemargin 0.0cm
  #diagram:rules
  
+ # for reference: everything allowed in a 'def' or trailer expression.
+ # (I might have missed one or two ;)
+ # ( NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | 'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | 'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | 'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally')
+ 
  # Start symbols for the grammar:
  #	single_input is a single interactive statement;
  #	file_input is a module or sequence of commands read from an input file;
***************
*** 28,34 ****
  file_input: (NEWLINE | stmt)* ENDMARKER
  eval_input: testlist NEWLINE* ENDMARKER
  
! funcdef: 'def' NAME parameters ':' suite
  parameters: '(' [varargslist] ')'
  varargslist: (fpdef ['=' test] ',')* ('*' NAME [',' '**' NAME] | '**' NAME) | fpdef ['=' test] (',' fpdef ['=' test])* [',']
  fpdef: NAME | '(' fplist ')'
--- 32,38 ----
  file_input: (NEWLINE | stmt)* ENDMARKER
  eval_input: testlist NEWLINE* ENDMARKER
  
! funcdef: 'def' ( NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | 'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | 'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | 'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally') parameters ':' suite
  parameters: '(' [varargslist] ')'
  varargslist: (fpdef ['=' test] ',')* ('*' NAME [',' '**' NAME] | '**' NAME) | fpdef ['=' test] (',' fpdef ['=' test])* [',']
  fpdef: NAME | '(' fplist ')'
***************
*** 87,93 ****
  atom: '(' [testlist] ')' | '[' [listmaker] ']' | '{' [dictmaker] '}' | '`' testlist '`' | NAME | NUMBER | STRING+
  listmaker: test ( list_for | (',' test)* [','] )
  lambdef: 'lambda' [varargslist] ':' test
! trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
  subscriptlist: subscript (',' subscript)* [',']
  subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
  sliceop: ':' [test]
--- 91,97 ----
  atom: '(' [testlist] ')' | '[' [listmaker] ']' | '{' [dictmaker] '}' | '`' testlist '`' | NAME | NUMBER | STRING+
  listmaker: test ( list_for | (',' test)* [','] )
  lambdef: 'lambda' [varargslist] ':' test
! trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' ( NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | 'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | 'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | 'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally')
  subscriptlist: subscript (',' subscript)* [',']
  subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
  sliceop: ':' [test]

From greg at cosc.canterbury.ac.nz  Mon Aug 28 05:16:35 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 28 Aug 2000 15:16:35 +1200 (NZST)
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <B5CEE3D9.81F2%dgoodger@bigfoot.com>
Message-ID: <200008280316.PAA16831@s454.cosc.canterbury.ac.nz>

David Goodger <dgoodger at bigfoot.com>:

> 6. Instead of mangling names, how about an alternative approach? Each class,
> instance, module, and function gets a single special name (call it
> '__docs__' for now), a dictionary of attribute-name to docstring
> mappings.

Good idea!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From moshez at math.huji.ac.il  Mon Aug 28 08:30:23 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Mon, 28 Aug 2000 09:30:23 +0300 (IDT)
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: <000901c0103a$4a48b380$f2a6b5d4@hagrid>
Message-ID: <Pine.GSO.4.10.10008280930000.5796-100000@sundial>

On Sun, 27 Aug 2000, Fredrik Lundh wrote:

> btw, Python's remove/unlink implementation is slightly
> broken -- they both map to unlink, but that's not the
> right way to do it:
> 
> from SUSv2:
> 
>     int remove(const char *path);
> 
>     If path does not name a directory, remove(path)
>     is equivalent to unlink(path). 
> 
>     If path names a directory, remove(path) is equi-
>     valent to rmdir(path). 
> 
> should I fix this?

1. Yes.
2. After the feature freeze.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From tanzer at swing.co.at  Mon Aug 28 08:32:17 2000
From: tanzer at swing.co.at (Christian Tanzer)
Date: Mon, 28 Aug 2000 08:32:17 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: Your message of "Fri, 25 Aug 2000 17:05:38 +0200."
             <39A68B42.4E3F8A3D@lemburg.com> 
Message-ID: <m13TISv-000wcEC@swing.co.at>

"M.-A. Lemburg" <mal at lemburg.com> wrote:

>     This PEP proposes a small addition to the way Python currently
>     handles docstrings embedded in Python code.
(snip)
>     Here is an example:
> 
>         class C:
>             "class C doc-string"
> 
>             a = 1
>             "attribute C.a doc-string (1)"
> 
>             b = 2
>             "attribute C.b doc-string (2)"
> 
>     The docstrings (1) and (2) are currently being ignored by the
>     Python byte code compiler, but could obviously be put to good use
>     for documenting the named assignments that precede them.
>     
>     This PEP proposes to also make use of these cases by proposing
>     semantics for adding their content to the objects in which they
>     appear under new generated attribute names.

Great proposal. This would make docstrings even more useful.

>     In order to preserve features like inheritance and hiding of
>     Python's special attributes (ones with leading and trailing double
>     underscores), a special name mangling has to be applied which
>     uniquely identifies the docstring as belonging to the name
>     assignment and allows finding the docstring later on by inspecting
>     the namespace.
> 
>     The following name mangling scheme achieves all of the above:
> 
>         __doc__<attributename>__

IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
__docs__ dictionary is a better solution:

- It provides all docstrings for the attributes of an object in a
  single place.

  * Handy in interactive mode.
  * This simplifies the generation of documentation considerably.

- It is easier to explain in the documentation

>     To keep track of the last assigned name, the byte code compiler
>     stores this name in a variable of the compiling structure.  This
>     variable defaults to NULL.  When it sees a docstring, it then
>     checks the variable and uses the name as basis for the above name
>     mangling to produce an implicit assignment of the docstring to the
>     mangled name.  It then resets the variable to NULL to avoid
>     duplicate assignments.

Normally, Python concatenates adjacent strings. It doesn't do this
with docstrings. I think Python's behavior would be more consistent
if docstrings were concatenated like any other strings.

>     Since the implementation does not reset the compiling structure
>     variable when processing a non-expression, e.g. a function
>     definition, the last assigned name remains active until either the
>     next assignment or the next occurrence of a docstring.
> 
>     This can lead to cases where the docstring and assignment may be
>     separated by other expressions:
> 
>         class C:
>             "C doc string"
> 
>             b = 2
> 
>             def x(self):
>                 "C.x doc string"
>                 y = 3
>                 return 1
> 
>             "b's doc string"
> 
>     Since the definition of method "x" currently does not reset the
>     used assignment name variable, it is still valid when the compiler
>     reaches the docstring "b's doc string" and thus assigns the string
>     to __doc__b__.

This is rather surprising behavior. Does this mean that a string in
the middle of a function definition would be interpreted as the
docstring of the function?

For instance,

    def spam():
        a = 3
        "Is this spam's docstring? (not in 1.5.2)"
        return 1

Anyway, the behavior of Python should be the same for all kinds of
docstrings. 

>     A possible solution to this problem would be resetting the name
>     variable for all non-expression nodes.

IMHO, David Goodger's proposal of indenting the docstring relative to the
attribute it refers to is a better solution.

If that requires too many changes to the parser, the name variable
should be reset for all statement nodes.

Hoping-to-use-attribute-docstrings-soon ly,
Christian

-- 
Christian Tanzer                                         tanzer at swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92




From mal at lemburg.com  Mon Aug 28 10:28:16 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 10:28:16 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <39A68B42.4E3F8A3D@lemburg.com> <B5CEE3D9.81F2%dgoodger@bigfoot.com>
Message-ID: <39AA22A0.D533598A@lemburg.com>

[Note: Please CC: all messages on this thread to me directly as I
 am the PEP maintainer. If you don't, then I might not read your
 comments.]

David Goodger wrote:
> 
> Some comments:
> 
> 1. I think the idea of attribute docstrings is a great one. It would assist
> in the auto-documenting of code immeasurably.

Agreed ;-)
 
> 2. I second Frank Niessink (frankn=nuws at cs.vu.nl), who wrote:
> 
> > wouldn't the naming
> > scheme <attributename>.__doc__ be a better one?
> >
> > So if:
> >
> > class C:
> >   a = 1
> >   """Description of a."""
> >
> > then:
> >
> > C.a.__doc__ == "Description of a."
> 
> 'C.a.__doc__' is far more natural and Pythonic than 'C.__doc__a__'. The
> latter would also require ugly tricks to access.

This doesn't work, since Python objects cannot have arbitrary
attributes. Also, I wouldn't want to modify attribute objects indirectly
from the outside as the above implies.

I don't really see the argument of __doc__a__ being hard to
access: these attributes are meant for tools to use, not
humans ;-), and these tools can easily construct the right
lookup names by scanning the dir(obj) and then testing for
the various __doc__xxx__ strings.
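The tool-side scan described here might be sketched like this. (A
hypothetical sketch: no released Python generates the proposed
'__doc__<name>__' attributes, so the test attribute has to be set by hand.)

```python
def attribute_docstrings(obj):
    """Collect the proposed __doc__<name>__ mangled entries by scanning
    dir(obj), as a documentation tool would."""
    found = {}
    for name in dir(obj):
        # Match the mangling scheme __doc__<attributename>__,
        # but skip the plain __doc__ of the object itself.
        if (name.startswith('__doc__') and name.endswith('__')
                and name != '__doc__'):
            found[name[len('__doc__'):-2]] = getattr(obj, name)
    return found
```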
 
> 3. However, what would happen to C.a.__doc__ (or C.__doc__a__ for that
> matter) when attribute 'a' is reassigned? For example:
> 
>     class C:
>         a = 1  # class attribute, default value for instance attribute
>         """Description of a."""
> 
>         def __init__(self, arg=None):
>             if arg is not None:
>                 self.a = arg  # instance attribute
>             self.b = []
>             """Description of b."""
> 
>     instance = C(2)
> 
> What would instance.a.__doc__ (instance.__doc__a__) be? Would the __doc__ be
> wiped out by the reassignment, or magically remain unless overridden?

See above. This won't work.
 
> 4. How about instance attributes that are never class attributes? Like
> 'instance.b' in the example above?

I don't get the point... doc strings should always be considered
constant and thus be defined in the class/module definition.
 
> 5. Since docstrings "belong" to the attribute preceeding them, wouldn't it
> be more Pythonic to write:
> 
>     class C:
>         a = 1
>             """Description of a."""
> 
> ? (In case of mail viewer problems, each line above is indented relative to
> the one before.) This emphasizes the relationship between the docstring and
> the attribute. Of course, such an approach may entail a more complicated
> modification to the Python source, but also more complete IMHO.

Note that Python indents blocks, and these are always preceded
by a line ending in a colon. The above idea would break this.

> 6. Instead of mangling names, how about an alternative approach? Each class,
> instance, module, and function gets a single special name (call it
> '__docs__' for now), a dictionary of attribute-name to docstring mappings.
> __docs__ would be the docstring equivalent to __dict__. These dictionary
> entries would not be affected by reassignment unless a new docstring is
> specified. So, in the example from (3) above, we would have:
> 
>     >>> instance.__docs__
>     {'b': 'Description of b.'}
>     >>> C.__docs__
>     {'a': 'Description of a.'}
> 
> Just as there is a built-in function 'dir' to apply Inheritance rules to
> instance.__dict__, there would have to be a function 'docs' to apply
> inheritance to instance.__docs__:
> 
>     >>> docs(instance)
>     {'a': 'Description of a.', 'b': 'Description of b.'}
> 
> There are repercussions here. A module containing the example from (3) above
> would have a __docs__ dictionary containing mappings for docstrings for each
> top-level class and function defined, in addition to docstrings for each
> global variable.

This would not work well together with class inheritance.
 
> In conclusion, although this proposal has great promise, it still needs
> work. If it's is to be done at all, better to do it right.
> 
> This could be the first true test of the PEP system; getting input from the
> Python user community as well as the core PythonLabs and Python-Dev groups.
> Other PEPs have been either after-the-fact or, in the case of those features
> approved for inclusion in Python 2.0, too rushed for a significant
> discussion.

We'll see whether this "global" approach is a good one ;-)
In any case, I think it'll give more awareness of the PEP
system.

Thanks for the comments,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug 28 10:55:15 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 10:55:15 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13TISv-000wcEC@swing.co.at>
Message-ID: <39AA28F3.1968E27@lemburg.com>

Christian Tanzer wrote:
> 
> "M.-A. Lemburg" <mal at lemburg.com> wrote:
> 
> >     This PEP proposes a small addition to the way Python currently
> >     handles docstrings embedded in Python code.
> (snip)
> >     Here is an example:
> >
> >         class C:
> >             "class C doc-string"
> >
> >             a = 1
> >             "attribute C.a doc-string (1)"
> >
> >             b = 2
> >             "attribute C.b doc-string (2)"
> >
> >     The docstrings (1) and (2) are currently being ignored by the
> >     Python byte code compiler, but could obviously be put to good use
> >     for documenting the named assignments that precede them.
> >
> >     This PEP proposes to also make use of these cases by proposing
> >     semantics for adding their content to the objects in which they
> >     appear under new generated attribute names.
> 
> Great proposal. This would make docstrings even more useful.

Right :-)
 
> >     In order to preserve features like inheritance and hiding of
> >     Python's special attributes (ones with leading and trailing double
> >     underscores), a special name mangling has to be applied which
> >     uniquely identifies the docstring as belonging to the name
> >     assignment and allows finding the docstring later on by inspecting
> >     the namespace.
> >
> >     The following name mangling scheme achieves all of the above:
> >
> >         __doc__<attributename>__
> 
> IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
> __docs__ dictionary is a better solution:
> 
> - It provides all docstrings for the attributes of an object in a
>   single place.
> 
>   * Handy in interactive mode.
>   * This simplifies the generation of documentation considerably.
> 
> - It is easier to explain in the documentation

The downside is that it doesn't work well together with
class inheritance: docstrings of the above form can
be overridden or inherited just like any other class
attribute.
 
> >     To keep track of the last assigned name, the byte code compiler
> >     stores this name in a variable of the compiling structure.  This
> >     variable defaults to NULL.  When it sees a docstring, it then
> >     checks the variable and uses the name as basis for the above name
> >     mangling to produce an implicit assignment of the docstring to the
> >     mangled name.  It then resets the variable to NULL to avoid
> >     duplicate assignments.
> 
> Normally, Python concatenates adjacent strings. It doesn't do this
> with docstrings. I think Python's behavior would be more consistent
> if docstrings were concatenated like any other strings.

Huh ? It does...

>>> class C:
...     "first line"\
...     "second line"
... 
>>> C.__doc__
'first linesecond line'

And the same works for the attribute doc strings too.

> >     Since the implementation does not reset the compiling structure
> >     variable when processing a non-expression, e.g. a function
> >     definition, the last assigned name remains active until either the
> >     next assignment or the next occurrence of a docstring.
> >
> >     This can lead to cases where the docstring and assignment may be
> >     separated by other expressions:
> >
> >         class C:
> >             "C doc string"
> >
> >             b = 2
> >
> >             def x(self):
> >                 "C.x doc string"
> >                 y = 3
> >                 return 1
> >
> >             "b's doc string"
> >
> >     Since the definition of method "x" currently does not reset the
> >     used assignment name variable, it is still valid when the compiler
> >     reaches the docstring "b's doc string" and thus assigns the string
> >     to __doc__b__.
> 
> This is rather surprising behavior. Does this mean that a string in
> the middle of a function definition would be interpreted as the
> docstring of the function?

No, since at the beginning of the function the name variable
is set to NULL.
 
> For instance,
> 
>     def spam():
>         a = 3
>         "Is this spam's docstring? (not in 1.5.2)"
>         return 1
> 
> Anyway, the behavior of Python should be the same for all kinds of
> docstrings.
> 
> >     A possible solution to this problem would be resetting the name
> >     variable for all non-expression nodes.
> 
> IMHO, David Goodger's proposal of indenting the docstring relative to the
> attribute it refers to is a better solution.
> 
> If that requires too many changes to the parser, the name variable
> should be reset for all statement nodes.

See my other mail: indenting is only allowed for blocks of
code and these are usually started with a colon -- doesn't
work here.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug 28 10:58:34 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 10:58:34 +0200
Subject: [Python-Dev] Python keywords
References: <20000828011624.E500@xs4all.nl>
Message-ID: <39AA29BA.73EA9FB3@lemburg.com>

Thomas Wouters wrote:
> 
> Mark, (and the rest of python-dev)
> 
> There was a thread here a few weeks ago (or so, I seem to have misplaced
> that particular thread :P) about using Python keywords as identifiers in
> some cases. You needed that ability for .NET-Python, where the specs say any
> identifier should be possible as methods and attributes, and there were some
> comments on the list on how to do that (by Guido, for one.)

Are you sure you want to confuse Python source code readers by
making keywords usable as identifiers ?

What would happen to Python's simple-to-parse grammar -- would
syntax highlighting still be as simple as it is now ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Mon Aug 28 12:54:13 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 05:54:13 -0500
Subject: [Python-Dev] Python keywords
In-Reply-To: Your message of "Mon, 28 Aug 2000 01:16:24 +0200."
             <20000828011624.E500@xs4all.nl> 
References: <20000828011624.E500@xs4all.nl> 
Message-ID: <200008281054.FAA22728@cj20424-a.reston1.va.home.com>

[Thomas Wouters]
> There was a thread here a few weeks ago (or so, I seem to have misplaced
> that particular thread :P) about using Python keywords as identifiers in
> some cases. You needed that ability for .NET-Python, where the specs say any
> identifier should be possible as methods and attributes, and there were some
> comments on the list on how to do that (by Guido, for one.)
> 
> Well, the attached patch sort-of does that. I tried making it a bit nicer,
> but that involved editing all places that currently use the NAME-type node,
> and most of those don't advertise that they're doing that :-S The attached
> patch is in no way nice, but it does work:
> 
> >>> class X:
> ...     def print(self, x):
> ...             print "printing", x
> ... 
> >>> x = X()
> >>> x.print(1)
> printing 1
> >>> x.print
> <method X.print of X instance at 0x8207fc4>
> >>> x.assert = 1
> >>>
> 
> However, it also allows this at the top level, currently:
> >>> def print(x):
> ...     print "printing", x
> ... 

Initially I thought this would be fine, but on second thought I'm not
so sure.  To a newbie who doesn't know all the keywords, this would be
confusing:

  >>> def try(): # my first function
  ...     print "hello"
  ...
  >>> try()
    File "<stdin>", line 1
      try()
	 ^
  SyntaxError: invalid syntax
  >>>

I don't know how best to fix this -- using different syntax for 'def'
inside a class than outside would require a complete rewrite of the
grammar, which is not a good idea.  Perhaps a 2nd pass compile-time
check would be sufficient.

> which results in some unexpected behaviour:
> >>> print(1)
> 1
> >>> globals()['print'](1)
> printing 1
> 
> But when combining it with modules, it does work as expected, of course:
> 
> # printer.py:
> def print(x, y):
>         print "printing", x, "and", y
> #
> 
> >>> import printer
> >>> printer.print
> <function print at 0x824120c>
> >>> printer.print(1, 2)
> printing 1 and 2
> 
> Another plus-side of this particular method is that it's simple and
> straightforward, if a bit maintenance-intensive :-) But the big question is:
> is this enough for what you need ? Or do you need the ability to use
> keywords in *all* identifiers, including variable names and such ? Because
> that is quite a bit harder ;-P

I believe that one other thing is needed: keyword parameters (only in
calls, not in definitions).  Also, I think you missed a few reserved
words, e.g. 'and', 'or'.  See Lib/keyword.py!
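The completeness check can be done mechanically against the stdlib
'keyword' module. A sketch, comparing the alternatives transcribed from
the patch with the interpreter's own list (note that a modern interpreter
reports a different list -- 'print' and 'exec' are no longer keywords, and
new ones have since been added):

```python
import keyword

# Reserved words enumerated in the patch's grammar alternatives.
grammar_names = {'for', 'if', 'while', 'else', 'elif', 'def', 'class',
                 'print', 'del', 'raise', 'exec', 'in', 'is', 'from',
                 'pass', 'import', 'global', 'assert', 'return', 'break',
                 'continue', 'try', 'except', 'not', 'lambda', 'finally'}

# Anything the interpreter reserves but the patch does not cover.
missing = sorted(set(keyword.kwlist) - grammar_names)
print(missing)
```

Run against the Python of this thread, 'and' and 'or' are exactly what
turns up missing.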

A comment on the patch: wouldn't it be *much* better to change the
grammar to introduce a new nonterminal, e.g. unres_name, as follows:

unres_name: NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | \
  'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | \
  'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | \
  'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally'

and use this elsewhere in the rules:

funcdef: 'def' unres_name parameters ':' suite
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' unres_name

Then you'd have to fix compile.c of course, but only in two places (I
think?).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Mon Aug 28 13:16:18 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 06:16:18 -0500
Subject: [Python-Dev] If you thought there were too many PEPs...
In-Reply-To: Your message of "Mon, 28 Aug 2000 09:30:23 +0300."
             <Pine.GSO.4.10.10008280930000.5796-100000@sundial> 
References: <Pine.GSO.4.10.10008280930000.5796-100000@sundial> 
Message-ID: <200008281116.GAA22841@cj20424-a.reston1.va.home.com>

> > from SUSv2:
> > 
> >     int remove(const char *path);
> > 
> >     If path does not name a directory, remove(path)
> >     is equivalent to unlink(path). 
> > 
> >     If path names a directory, remove(path) is equi-
> >     valent to rmdir(path). 
> > 
> > should I fix this?
> 
> 1. Yes.
> 2. After the feature freeze.

Agreed.  Note that the correct fix is to use remove() if it exists and
emulate it if it doesn't.

On Windows, I believe remove() exists but probably not with the above
semantics, so it should be emulated.
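The emulation branch might be sketched like this (an illustration of the
SUSv2 semantics only, not the actual fix that went into Python):

```python
import os

def remove(path):
    """Emulate SUSv2 remove(): unlink for files, rmdir for directories."""
    if os.path.isdir(path):
        os.rmdir(path)   # directory: remove(path) == rmdir(path)
    else:
        os.unlink(path)  # anything else: remove(path) == unlink(path)
```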

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Mon Aug 28 14:33:59 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 14:33:59 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
Message-ID: <39AA5C37.2F1846B3@lemburg.com>

I've been tossing some ideas around w/r to adding pragma style
declarations to Python and would like to hear what you think
about these:

1. Embed pragma declarations in comments:

	#pragma: name = value

   Problem: comments are removed by the tokenizer, yet the compiler
   will have to make use of them, so some logic would be needed
   to carry them along.

2. Reusing a Python keyword to build a new form of statement:

	def name = value

   Problem: not sure whether the compiler and grammar could handle
   this.

   The nice thing about this kind of declaration is that it would
   generate a node which the compiler could actively use. Furthermore,
   scoping would come for free. This one is my favourite.

3. Add a new keyword:

	decl name = value

   Problem: possible code breakage.

This is only a question regarding the syntax of these meta-
information declarations. The semantics remain to be solved
in a different discussion.
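Option 1 could at least be prototyped outside the compiler with the stdlib
tokenize module, which does see comments. A hypothetical sketch (the
'#pragma:' syntax itself is only a proposal here):

```python
import io
import tokenize

def find_pragmas(source):
    """Harvest '#pragma: name = value' comments from source text,
    since the regular compiler drops comments before it could use them."""
    pragmas = {}
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT and tok.string.startswith('#pragma:'):
            name, _, value = tok.string[len('#pragma:'):].partition('=')
            pragmas[name.strip()] = value.strip()
    return pragmas
```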

Comments ?

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Mon Aug 28 14:38:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 14:38:13 +0200
Subject: [Python-Dev] Python keywords
In-Reply-To: <200008281054.FAA22728@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Mon, Aug 28, 2000 at 05:54:13AM -0500
References: <20000828011624.E500@xs4all.nl> <200008281054.FAA22728@cj20424-a.reston1.va.home.com>
Message-ID: <20000828143813.F500@xs4all.nl>

On Mon, Aug 28, 2000 at 05:54:13AM -0500, Guido van Rossum wrote:

> > However, it also allows this at the top level, currently:
> > >>> def print(x):
> > ...     print "printing", x
> > ... 

> Initially I thought this would be fine, but on second thought I'm not
> so sure.  To a newbie who doesn't know all the keywords, this would be
> confusing:
> 
>   >>> def try(): # my first function
>   ...     print "hello"
>   ...
>   >>> try()
>     File "<stdin>", line 1
>       try()
> 	 ^
>   SyntaxError: invalid syntax
>   >>>
> 
> I don't know how best to fix this -- using different syntax for 'def'
> inside a class than outside would require a complete rewrite of the
> grammar, which is not a good idea.  Perhaps a 2nd pass compile-time
> check would be sufficient.

Hmm. I'm not really sure. I think it's nice to be able to use
'object.print', and it would be, well, inconsistent, not to allow
'module.print' (or module.exec, for that matter), but I realize how
confusing it can be.

Perhaps generate a warning ? :-P

> I believe that one other thing is needed: keyword parameters (only in
> calls, not in definitions).  Also, I think you missed a few reserved
> words, e.g. 'and', 'or'.  See Lib/keyword.py!

Ahh, yes. I knew there had to be a list of keywords, but I was too tired to
go hunt for it last night ;) 
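(For the archives: the list Guido points at, Lib/keyword.py, is importable as the stdlib keyword module, so checking a candidate name against the reserved words is a one-liner. The helper name below is just illustrative.)

```python
# The reserved-word list lives in Lib/keyword.py and is importable
# as the standard "keyword" module.
import keyword

def is_reserved(name):
    """Return True if `name` collides with a Python keyword."""
    return keyword.iskeyword(name)
```

So a tool wanting to warn about shadowed keywords (rather than hard-failing in the grammar) could lean on this instead of hand-maintaining the list.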

> A comment on the patch: wouldn't it be *much* better to change the
> grammar to introduce a new nonterminal, e.g. unres_name, as follows:

> unres_name: NAME | 'for' | 'if' | 'while' | 'else' | 'elif' | 'def' | \
>   'class' | 'print' | 'del' | 'raise' | 'exec' | 'in' | 'is' | 'from' | \
>   'pass' | 'import' | 'global' | 'assert' | 'return' | 'break' | \
>   'continue' | 'try' | 'except' | 'not' | 'lambda' | 'finally'

> and use this elsewhere in the rules:

> funcdef: 'def' unres_name parameters ':' suite
> trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' unres_name

> Then you'd have to fix compile.c of course, but only in two places (I
> think?).

I tried this before, a week or two ago, but it was too much of a pain. The
nodes get tossed around no end, and tracking down where they are STR()'d and
TYPE()'d is, well, annoying ;P I tried to hack around it by making STR() and
CHILD() do some magic, but it didn't quite work. I kind of gave up and
decided it had to be done in the metagrammar, which drove me insane last
night ;-) and then decided to 'prototype' it first.

Then again, maybe I missed something. I might try it again. It would
definitely be the better solution ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From DavidA at ActiveState.com  Thu Aug 24 02:25:55 2000
From: DavidA at ActiveState.com (David Ascher)
Date: Wed, 23 Aug 2000 17:25:55 -0700 (Pacific Daylight Time)
Subject: [Python-Dev] [Announce] ActivePython 1.6 beta release (fwd)
Message-ID: <Pine.WNT.4.21.0008231725340.272-100000@loom>

It is my pleasure to announce the availability of the beta release of
ActivePython 1.6, build 100.

This binary distribution, based on Python 1.6b1, is available from
ActiveState's website at:

    http://www.ActiveState.com/Products/ActivePython/

ActiveState is committed to making Python easy to install and use on all
major platforms. ActivePython contains the convenience of swift
installation, coupled with commonly used modules, providing you with a
total package to meet your Python needs. Additionally, for Windows users,
ActivePython provides a suite of Windows tools, developed by Mark Hammond.

ActivePython is provided in convenient binary form for Windows, Linux and
Solaris under a variety of installation packages, available at:

    http://www.ActiveState.com/Products/ActivePython/Download.html

For support information, mailing list subscriptions and archives, a bug
reporting system, and fee-based technical support, please go to

    http://www.ActiveState.com/Products/ActivePython/

Please send us feedback regarding this release, either through the mailing
list or through direct email to ActivePython-feedback at ActiveState.com.

ActivePython is free, and redistribution of ActivePython within your
organization is allowed.  The ActivePython license is available at
http://www.activestate.com/Products/ActivePython/License_Agreement.html
and in the software packages.

We look forward to your comments and to making ActivePython suit your
Python needs in future releases.

Thank you,

-- David Ascher & the ActivePython team
   ActiveState Tool Corporation



From nhodgson at bigpond.net.au  Mon Aug 28 16:22:50 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Tue, 29 Aug 2000 00:22:50 +1000
Subject: [Python-Dev] Python identifiers - was: Python keywords
References: <20000828011624.E500@xs4all.nl>
Message-ID: <019601c010fb$731007c0$8119fea9@neil>

   As well as .NET requiring a mechanism for accessing externally defined
identifiers which clash with Python keywords, it would be good to allow
access to identifiers containing non-ASCII characters. This is allowed in
.NET. C# copies the Java convention of allowing \u escapes in identifiers as
well as character/string literals.

   Has there been any thought to allowing this in Python? The benefit of
this convention over encoding the file in UTF-8 or an 8 bit character set is
that it is ASCII safe and can be manipulated correctly by common tools. My
interest in this is in the possibility of extending Scintilla and PythonWin
to directly understand this sequence, showing the correct glyph rather than
the \u sequence.

   Neil




From bwarsaw at beopen.com  Mon Aug 28 15:44:45 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 28 Aug 2000 09:44:45 -0400 (EDT)
Subject: [Python-Dev] Re: c.l.p.a -- what needs to be done ?
References: <39A660CB.7661E20E@lemburg.com>
	<200008260814.KAA06267@hera.informatik.uni-bonn.de>
Message-ID: <14762.27853.159285.488297@anthem.concentric.net>

>>>>> "MF" == Markus Fleck <fleck at triton.informatik.uni-bonn.de> writes:

    MF> In principle, I do have the time again to do daily moderation
    MF> of incoming postings for c.l.py.a. Unfortunately, I currently
    MF> lack the infrastructure (i.e. the moderation program), which
    MF> went down together with the old starship. I was basically
    MF> waiting for a version of Mailman that could be used to post to
    MF> moderated newsgroups. (I should probably have been more vocal
    MF> about that, or even should have started hacking Mailman
    MF> myself...

All this is in place now.
    
    MF> I *did* start to write something that would grab new
    MF> announcements daily from Parnassus and post them to c.l.py.a,
    MF> and I may even come to finish this in September, but that
    MF> doesn't substitute for a "real" moderation tool for
    MF> user-supplied postings. Also, it would probably be a lot
    MF> easier for Parnassus postings to be built directly from the
    MF> Parnassus database, instead from its [generated] HTML pages -
    MF> the Parnassus author intended to supply such functionality,
    MF> but I didn't hear from him yet, either.)

I think that would be a cool thing to work on.  As I mentioned to
Markus in private email, it would be great if the Parnassus->news tool
added the special c.l.py.a footer so that automated scripts on the
/other/ end could pull the messages off the newsgroup, search for the
footer, and post them to web pages, etc.

    MF> So what's needed now? Primarily, a Mailman installation that
    MF> can post to moderated newsgroups (and maybe also do the
    MF> mail2list gatewaying for c.l.py.a), and a mail alias that
    MF> forwards mail for python-announce at python.org to that Mailman
    MF> address. Some "daily digest" generator for Parnassus
    MF> announcements would be nice to have, too, but that can only
    MF> come once the other two things work.

All this is in place, as MAL said.  Markus, if you'd like to be a
moderator, email me and I'd be happy to add you.

And let's start encouraging people to post to c.l.py.a and
python-announce at Python.org again!

-Barry




From bwarsaw at beopen.com  Mon Aug 28 17:01:24 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Mon, 28 Aug 2000 11:01:24 -0400 (EDT)
Subject: [Python-Dev] New dictionaries patch on SF
References: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>
Message-ID: <14762.32452.579356.483473@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake, Jr <fdrake at beopen.com> writes:

    Fred> None the less, performance is an issue for dictionaries, so
    Fred> I came up with the idea to use a specialized version for
    Fred> string keys.

Note that JPython does something similar for dictionaries that are
used for namespaces.  See PyStringMap.java.

-Barry



From fdrake at beopen.com  Mon Aug 28 17:19:44 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 28 Aug 2000 11:19:44 -0400 (EDT)
Subject: [Python-Dev] New dictionaries patch on SF
In-Reply-To: <14762.32452.579356.483473@anthem.concentric.net>
References: <14759.22577.303296.239155@cj42289-a.reston1.va.home.com>
	<14762.32452.579356.483473@anthem.concentric.net>
Message-ID: <14762.33552.622374.428515@cj42289-a.reston1.va.home.com>

Barry A. Warsaw writes:
 > Note that JPython does something similar for dictionaries that are
 > used for namespaces.  See PyStringMap.java.

  The difference is that there are no code changes outside
dictobject.c to make this useful for my proposal -- there isn't a new
object type involved.  The PyStringMap class is actually a different
implementation (which I did dig into a bit at one point, to create
versions that weren't bound to JPython).
  My modified dictionary objects are just dictionary objects that
auto-degrade themselves as soon as a non-string key is looked up
(including while setting values).  But the approach and rationale are
very similar.
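A rough Python-level analogue of the behaviour Fred describes (the real patch is C code inside dictobject.c; the class and attribute names here are purely illustrative):

```python
# Sketch: a mapping that takes a string-key fast path until the first
# non-string key appears, then permanently "degrades" to the general
# path -- mirroring the auto-degrade idea, not the actual C patch.

class AutoDegradingDict:
    def __init__(self):
        self._data = {}            # backing store
        self._strings_only = True  # fast-path flag; one-way switch

    def _degrade(self):
        # From here on, every lookup takes the general path.
        self._strings_only = False

    def __setitem__(self, key, value):
        if self._strings_only and not isinstance(key, str):
            self._degrade()
        self._data[key] = value

    def __getitem__(self, key):
        if self._strings_only and not isinstance(key, str):
            self._degrade()
        return self._data[key]
```

The point of the one-way switch is that namespace dictionaries almost never see non-string keys, so the common case stays on the fast path forever.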


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From guido at beopen.com  Mon Aug 28 19:09:30 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 12:09:30 -0500
Subject: [Python-Dev] Pragma-style declaration syntax
In-Reply-To: Your message of "Mon, 28 Aug 2000 14:33:59 +0200."
             <39AA5C37.2F1846B3@lemburg.com> 
References: <39AA5C37.2F1846B3@lemburg.com> 
Message-ID: <200008281709.MAA24142@cj20424-a.reston1.va.home.com>

> I've been tossing some ideas around w/r to adding pragma style
> declarations to Python and would like to hear what you think
> about these:
> 
> 1. Embed pragma declarations in comments:
> 
> 	#pragma: name = value
> 
>    Problem: comments are removed by the tokenizer, yet the compiler
>    will have to make use of them, so some logic would be needed
>    to carry them along.
> 
> 2. Reusing a Python keyword to build a new form of statement:
> 
> 	def name = value
> 
>    Problem: not sure whether the compiler and grammar could handle
>    this.
> 
>    The nice thing about this kind of declaration is that it would
>    generate a node which the compiler could actively use. Furthermore,
>    scoping would come for free. This one is my favourite.
> 
> 3. Add a new keyword:
> 
> 	decl name = value
> 
>    Problem: possible code breakage.
> 
> This is only a question regarding the syntax of these meta-
> information declarations. The semantics remain to be solved
> in a different discussion.

I say add a new reserved word pragma and accept the consequences.  The
other solutions are just too ugly.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From jeremy at beopen.com  Mon Aug 28 18:36:33 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Mon, 28 Aug 2000 12:36:33 -0400 (EDT)
Subject: [Python-Dev] need help with build on HP-UX
Message-ID: <14762.38161.405971.414152@bitdiddle.concentric.net>

We have a bug report for Python 1.5.2 that says building with threads
enabled causes a core dump when the interpreter is started.

#110650:
http://sourceforge.net/bugs/?func=detailbug&bug_id=110650&group_id=5470

I don't have access to an HP-UX box on which to test this problem.  If
anyone does, could they verify whether the problem exists with the
current code?

Jeremy



From nathan at islanddata.com  Mon Aug 28 18:51:24 2000
From: nathan at islanddata.com (Nathan Clegg)
Date: Mon, 28 Aug 2000 09:51:24 -0700 (PDT)
Subject: [Python-Dev] RE: need help with build on HP-UX
In-Reply-To: <14762.38161.405971.414152@bitdiddle.concentric.net>
Message-ID: <XFMail.20000828095124.nathan@islanddata.com>

I can't say for current code, but I ran into this problem with 1.5.2.  I
resolved it by installing pthreads instead of HP's native threads. Is/should this be a
prerequisite?



On 28-Aug-2000 Jeremy Hylton wrote:
> We have a bug report for Python 1.5.2 that says building with threads
> enabled causes a core dump when the interpreter is started.
> 
>#110650:
> http://sourceforge.net/bugs/?func=detailbug&bug_id=110650&group_id=5470
> 
> I don't have access to an HP-UX box on which to test this problem.  If
> anyone does, could they verify whether the problem exists with the
> current code?
> 
> Jeremy
> 
> -- 
> http://www.python.org/mailman/listinfo/python-list



----------------------------------
Nathan Clegg
 nathan at islanddata.com





From guido at beopen.com  Mon Aug 28 20:34:55 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 13:34:55 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: Your message of "Mon, 28 Aug 2000 10:20:08 MST."
             <200008281720.KAA09138@slayer.i.sourceforge.net> 
References: <200008281720.KAA09138@slayer.i.sourceforge.net> 
Message-ID: <200008281834.NAA24777@cj20424-a.reston1.va.home.com>

How about popen4?  Or is that Windows specific?

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Mon Aug 28 19:36:06 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 19:36:06 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <39A68B42.4E3F8A3D@lemburg.com> <B5CEE3D9.81F2%dgoodger@bigfoot.com> <39AA22A0.D533598A@lemburg.com> <200008281515.IAA27799@netcom.com>
Message-ID: <39AAA306.2CBD5383@lemburg.com>

Aahz Maruch wrote:
> 
> [p&e]
> 
> In article <39AA22A0.D533598A at lemburg.com>,
> M.-A. Lemburg <mal at lemburg.com> wrote:
> >
> >>     >>> docs(instance)
> >>     {'a': 'Description of a.', 'b': 'Description of b.'}
> >>
> >> There are repercussions here. A module containing the example from (3) above
> >> would have a __docs__ dictionary containing mappings for docstrings for each
> >> top-level class and function defined, in addition to docstrings for each
> >> global variable.
> >
> >This would not work well together with class inheritance.
> 
> Could you provide an example explaining this?  Using a dict *seems* like
> a good idea to me, too.

class A:
    " Base class for database "

    x = "???"
    " name of the database; override in subclasses ! "

    y = 1
    " run in auto-commit ? "

class D(A):

    x = "mydb"
    """ name of the attached database; note that this must support
        transactions 
    """

This will give you:

A.__doc__x__ == " name of the database; override in subclasses ! "
A.__doc__y__ == " run in auto-commit ? "
D.__doc__x__ == """ name of the attached database; note that this must support
        transactions 
    """
D.__doc__y__ == " run in auto-commit ? "

There's no way you are going to achieve this using dictionaries.

Note: You can always build dictionaries of docstrings by using
the existing Python introspection features. This PEP is
meant to provide the data -- not the extraction tools.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From gvwilson at nevex.com  Mon Aug 28 19:43:29 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Mon, 28 Aug 2000 13:43:29 -0400 (EDT)
Subject: [Python-Dev] Pragma-style declaration syntax
In-Reply-To: <200008281709.MAA24142@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>

> > Marc-Andre Lemburg:
> > 1. Embed pragma declarations in comments:
> > 	#pragma: name = value
> > 
> > 2. Reusing a Python keyword to build a new form of statement:
> > 	def name = value
> > 
> > 3. Add a new keyword:
> > 	decl name = value

> Guido van Rossum:
> I say add a new reserved word pragma and accept the consequences.  
> The other solutions are just too ugly.

Greg Wilson:
Will pragma values be available at run-time, e.g. in a special
module-level dictionary variable '__pragma__', so that:

    pragma "encoding" = "UTF8"
    pragma "division" = "fractional"

has the same effect as:

    __pragma__["encoding"] = "UTF8"
    __pragma__["division"] = "fractional"

If that's the case, would it be better to use the dictionary syntax?  Or
does the special form simplify pragma detection so much as to justify
adding new syntax?
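For what it's worth, the dictionary spelling asked about above can be emulated today with an ordinary module-level dict; everything below is hypothetical, since no __pragma__ variable exists in any Python:

```python
# Hypothetical sketch of the dictionary spelling: a module declares its
# pragmas in a plain dict, and tools inspect it.  No compiler support
# is implied; the names are invented for illustration.

__pragma__ = {}
__pragma__["encoding"] = "UTF8"
__pragma__["division"] = "fractional"

def get_pragma(namespace, name, default=None):
    """Look up a pragma value in a module namespace dict, if present."""
    return namespace.get("__pragma__", {}).get(name, default)
```

The open question is exactly the one raised: a plain dict assignment is ordinary runtime code, so the compiler would have to special-case it to act on it at compile time, whereas a dedicated statement is unambiguous.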

Also, what's the effect of putting a pragma in the middle of a file,
rather than at the top?  Does 'import' respect pragmas, or are they
per-file?  I've seen Fortran files that start with 20 lines of:

    C$VENDOR PROPERTY DEFAULT

to disable any settings that might be in effect when the file is included
in another, just so that the author of the include'd file could be sure of
the semantics of the code he was writing.

Thanks,

Greg




From fdrake at beopen.com  Mon Aug 28 20:00:43 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 28 Aug 2000 14:00:43 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: <200008281834.NAA24777@cj20424-a.reston1.va.home.com>
References: <200008281720.KAA09138@slayer.i.sourceforge.net>
	<200008281834.NAA24777@cj20424-a.reston1.va.home.com>
Message-ID: <14762.43211.814471.424886@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > How about popen4?  Or is that Windows specific?

  Haven't written it yet.  It's a little different from just wrappers
around popen2 module functions.  The Popen3 class doesn't support it
yet.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From skip at mojam.com  Mon Aug 28 20:06:49 2000
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 28 Aug 2000 13:06:49 -0500 (CDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: <200008281834.NAA24777@cj20424-a.reston1.va.home.com>
References: <200008281720.KAA09138@slayer.i.sourceforge.net>
	<200008281834.NAA24777@cj20424-a.reston1.va.home.com>
Message-ID: <14762.43577.780248.889686@beluga.mojam.com>

    Guido> How about popen4?  Or is that Windows specific?

This is going to sound really dumb, but for all N where N >= 2, how many
popenN routines are there?  Do they represent a subclass of rabbits?  Until
the thread about Windows and os.popen2 started, I, living in a dream world
where my view of libc approximated 4.2BSD, wasn't even aware any popenN
routines existed.  In fact, on my Mandrake box that seems to still be the
case:

    % man -k popen
    popen, pclose (3)    - process I/O
    % nm -a /usr/lib/libc.a | egrep popen
    iopopen.o:
    00000188 T _IO_new_popen
    00000188 W _IO_popen
    00000000 a iopopen.c
    00000188 T popen

In fact, the os module documentation only describes popen, not popenN.

Where'd all these other popen variants come from?  Where can I find them
documented online?

Skip



From fdrake at beopen.com  Mon Aug 28 20:22:27 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Mon, 28 Aug 2000 14:22:27 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/test/output test_popen2,1.2,1.3
In-Reply-To: <14762.43577.780248.889686@beluga.mojam.com>
References: <200008281720.KAA09138@slayer.i.sourceforge.net>
	<200008281834.NAA24777@cj20424-a.reston1.va.home.com>
	<14762.43577.780248.889686@beluga.mojam.com>
Message-ID: <14762.44515.597067.695634@cj42289-a.reston1.va.home.com>

Skip Montanaro writes:
 > In fact, the os module documentation only describes popen, not popenN.

  This will be fixed.

 > Where'd all these other popen variants come from?  Where can I find them
 > documented online?

  In the popen2 module docs, there are descriptions for popen2() and
popen3().  popen4() is new from the Windows world.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From mal at lemburg.com  Mon Aug 28 20:57:26 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 20:57:26 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
References: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>
Message-ID: <39AAB616.460FA0A8@lemburg.com>

Greg Wilson wrote:
> 
> > > Marc-Andre Lemburg:
> > > 1. Embed pragma declarations in comments:
> > >     #pragma: name = value
> > >
> > > 2. Reusing a Python keyword to build a new form of statement:
> > >     def name = value
> > >
> > > 3. Add a new keyword:
> > >     decl name = value
> 
> > Guido van Rossum:
> > I say add a new reserved word pragma and accept the consequences.
> > The other solutions are just too ugly.
> 
> Greg Wilson:
> Will pragma values be available at run-time, e.g. in a special
> module-level dictionary variable '__pragma__', so that:
> 
>     pragma "encoding" = "UTF8"
>     pragma "division" = "fractional"
> 
> has the same effect as:
> 
>     __pragma__["encoding"] = "UTF8"
>     __pragma__["division"] = "fractional"
> 
> If that's the case, would it be better to use the dictionary syntax?  Or
> does the special form simplify pragma detection so much as to justify
> adding new syntax?

Pragmas tell the compiler to make certain assumptions about the
scope they appear in. It may be useful to have their values available
as a __pragma__ dict too, but only for introspection purposes and
then only for objects which support the attribute.

If we were to use a convention such as your proposed dictionary
assignment for these purposes, the compiler would have to treat
these assignments in special ways. Adding a new reserved word is
much cleaner.

> Also, what's the effect of putting a pragma in the middle of a file,
> rather than at the top?  Does 'import' respect pragmas, or are they
> per-file?  I've seen Fortran files that start with 20 lines of:
> 
>     C$VENDOR PROPERTY DEFAULT
> 
> to disable any settings that might be in effect when the file is included
> in another, just so that the author of the include'd file could be sure of
> the semantics of the code he was writing.

The compiler will see the pragma definition as soon as it reaches
it during compilation. All subsequent compilation (up to where
the compilation block ends, i.e. up to module, function and class
boundaries) will be influenced by the setting.

This is in line with all other declarations in Python, e.g. those
of global variables, functions and classes.

Imports do not affect pragmas since pragmas are a compile
time thing.

Here are some possible applications of pragmas (just to toss in
a few ideas):

# Cause global lookups to be cached in function's locals for future
# reuse.
pragma globals = 'constant'

# Cause all Unicode literals in the current scope to be
# interpreted as UTF-8.
pragma encoding = 'utf-8'

# Use -OO style optimizations
pragma optimization = 2

# Default division mode
pragma division = 'float'

The basic syntax in the above examples is:

	"pragma" NAME "=" (NUMBER | STRING+)

It has to be that simple to allow the compiler to use the information
at compilation time.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From skip at mojam.com  Mon Aug 28 21:17:47 2000
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 28 Aug 2000 14:17:47 -0500 (CDT)
Subject: [Python-Dev] Pragma-style declaration syntax
In-Reply-To: <39AAB616.460FA0A8@lemburg.com>
References: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>
	<39AAB616.460FA0A8@lemburg.com>
Message-ID: <14762.47835.129388.512169@beluga.mojam.com>

    MAL> Here are some possible applications of pragmas (just to toss in
    MAL> a few ideas):

    MAL> # Cause global lookups to be cached in function's locals for future
    MAL> # reuse.
    MAL> pragma globals = 'constant'

    MAL> # Cause all Unicode literals in the current scope to be
    MAL> # interpreted as UTF-8.
    MAL> pragma encoding = 'utf-8'

    MAL> # Use -OO style optimizations
    MAL> pragma optimization = 2

    MAL> # Default division mode
    MAL> pragma division = 'float'

Marc-Andre,

My interpretation of the word "pragma" (and I think a probably common
interpretation) is that it is a "hint to the compiler" which the compiler
can ignore if it chooses.  See

    http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?query=pragma

Your use of the word suggests that you propose to implement something more
akin to a "directive", that is, something the compiler is not free to
ignore.  Ignoring the pragma in the first and third examples will likely
only make the program run slower.  Ignoring the second or fourth pragmas
would likely result in incorrect compilation of the source.

Whatever you come up with, I think the distinction between hint and
directive will have to be made clear in the documentation.

Skip




From tanzer at swing.co.at  Mon Aug 28 18:27:44 2000
From: tanzer at swing.co.at (Christian Tanzer)
Date: Mon, 28 Aug 2000 18:27:44 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings 
In-Reply-To: Your message of "Mon, 28 Aug 2000 10:55:15 +0200."
             <39AA28F3.1968E27@lemburg.com> 
Message-ID: <m13TRlA-000wcEC@swing.co.at>

"M.-A. Lemburg" <mal at lemburg.com> wrote:

> > IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
> > __docs__ dictionary is a better solution:
> > 
> > - It provides all docstrings for the attributes of an object in a
> >   single place.
> > 
> >   * Handy in interactive mode.
> >   * This simplifies the generation of documentation considerably.
> > 
> > - It is easier to explain in the documentation
> 
> The downside is that it doesn't work well together with
> class inheritance: docstrings of the above form can
> be overridden or inherited just like any other class
> attribute.

Yep. That's why David also proposed a `doc' function combining the
`__docs__' of a class with all its ancestors' __docs__.

> > Normally, Python concatenates adjacent strings. It doesn't do this
> > with docstrings. I think Python's behavior would be more consistent
> > if docstrings were concatenated like any other strings.
> 
> Huh ? It does...
> 
> >>> class C:
> ...     "first line"\
> ...     "second line"
> ... 
> >>> C.__doc__
> 'first linesecond line'
> 
> And the same works for the attribute doc strings too.

Surprise. I tried it this morning. Didn't use a backslash, though. And almost 
overlooked it now.

> > >             b = 2
> > >
> > >             def x(self):
> > >                 "C.x doc string"
> > >                 y = 3
> > >                 return 1
> > >
> > >             "b's doc string"
> > >
> > >     Since the definition of method "x" currently does not reset the
> > >     used assignment name variable, it is still valid when the compiler
> > >     reaches the docstring "b's doc string" and thus assigns the string
> > >     to __doc__b__.
> > 
> > This is rather surprising behavior. Does this mean that a string in
> > the middle of a function definition would be interpreted as the
> > docstring of the function?
> 
> No, since at the beginning of the function the name variable
> is set to NULL.

Fine. Could the attribute docstrings follow the same pattern, then?

> > >     A possible solution to this problem would be resetting the name
> > >     variable for all non-expression nodes.
> > 
> > IMHO, David Goodger's proposal of indenting the docstring relative to the
> > attribute it refers to is a better solution.
> > 
> > If that requires too many changes to the parser, the name variable
> > should be reset for all statement nodes.
> 
> See my other mail: indenting is only allowed for blocks of
> code and these are usually started with a colon -- doesn't
> work here.

Too bad.

It's-still-a-great-addition-to-Python ly, 
Christian

-- 
Christian Tanzer                                         tanzer at swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92




From mal at lemburg.com  Mon Aug 28 21:29:04 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 21:29:04 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
References: <Pine.LNX.4.10.10008281331590.1863-100000@akbar.nevex.com>
		<39AAB616.460FA0A8@lemburg.com> <14762.47835.129388.512169@beluga.mojam.com>
Message-ID: <39AABD80.5089AEAF@lemburg.com>

Skip Montanaro wrote:
> 
>     MAL> Here are some possible applications of pragmas (just to toss in
>     MAL> a few ideas):
> 
>     MAL> # Cause global lookups to be cached in function's locals for future
>     MAL> # reuse.
>     MAL> pragma globals = 'constant'
> 
>     MAL> # Cause all Unicode literals in the current scope to be
>     MAL> # interpreted as UTF-8.
>     MAL> pragma encoding = 'utf-8'
> 
>     MAL> # Use -OO style optimizations
>     MAL> pragma optimization = 2
> 
>     MAL> # Default division mode
>     MAL> pragma division = 'float'
> 
> Marc-Andre,
> 
> My interpretation of the word "pragma" (and I think a probably common
> interpretation) is that it is a "hint to the compiler" which the compiler
> can ignore if it chooses.  See
> 
>     http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?query=pragma
> 
> Your use of the word suggests that you propose to implement something more
> akin to a "directive", that is, something the compiler is not free to
> ignore.  Ignoring the pragma in the first and third examples will likely
> only make the program run slower.  Ignoring the second or fourth pragmas
> would likely result in incorrect compilation of the source.
> 
> Whatever you come up with, I think the distinction between hint and
> directive will have to be made clear in the documentation.

True, I see the pragma statement as a directive. Perhaps it's not
the best name after all -- but then it is unlikely to be in use
as an identifier in existing Python programs, so perhaps we
just need to make it clear in the documentation that some
pragma statements will carry important information, not only hints.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Aug 28 21:35:58 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 28 Aug 2000 21:35:58 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13TRlA-000wcEC@swing.co.at>
Message-ID: <39AABF1E.171BFD00@lemburg.com>

Christian Tanzer wrote:
> 
> "M.-A. Lemburg" <mal at lemburg.com> wrote:
> 
> > > IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
> > > __docs__ dictionary is a better solution:
> > >
> > > - It provides all docstrings for the attributes of an object in a
> > >   single place.
> > >
> > >   * Handy in interactive mode.
> > >   * This simplifies the generation of documentation considerably.
> > >
> > > - It is easier to explain in the documentation
> >
> > The downside is that it doesn't work well together with
> > class inheritance: docstrings of the above form can
> > be overridden or inherited just like any other class
> > attribute.
> 
> Yep. That's why David also proposed a `doc' function combining the
> `__docs__' of a class with all its ancestor's __docs__.

The same can be done for __doc__<attrname>__ style attributes:
a helper function would just need to look at dir(Class) and then
extract the attribute doc strings it finds. It could also do
a DFS search to find a complete API description of the class
by emulating attribute lookup and combine method and attribute
docstrings to produce some nice online documentation output.
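A hedged sketch of such a helper. The __doc__<attr>__ attributes are written out by hand here, since only the proposed PEP 224 compiler support would generate them automatically; the strings are shortened from the earlier example:

```python
# Simulate what PEP 224 would generate for the A/D example above.
class A:
    __doc__x__ = " name of the database; override in subclasses ! "
    __doc__y__ = " run in auto-commit ? "

class D(A):
    __doc__x__ = " name of the attached database "

def attribute_docs(klass):
    """Collect __doc__<attr>__ strings from a class and its bases.

    dir() already walks the base classes, so inherited attribute
    docstrings are picked up for free.
    """
    docs = {}
    for name in dir(klass):
        if (name.startswith("__doc__") and name.endswith("__")
                and name != "__doc__"):
            attr = name[len("__doc__"):-2]
            docs[attr] = getattr(klass, name)
    return docs
```

Because dir() and getattr() follow normal attribute lookup, D's dict ends up with its own docstring for x and the inherited one for y, which is exactly the inheritance behaviour a static __docs__ dictionary would lose.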
 
> > > Normally, Python concatenates adjacent strings. It doesn't do this
> > > with docstrings. I think Python's behavior would be more consistent
> > > if docstrings were concatenated like any other strings.
> >
> > Huh ? It does...
> >
> > >>> class C:
> > ...     "first line"\
> > ...     "second line"
> > ...
> > >>> C.__doc__
> > 'first linesecond line'
> >
> > And the same works for the attribute doc strings too.
> 
> Surprise. I tried it this morning. Didn't use a backslash, though. And almost
> overlooked it now.

You could also wrap the doc string in parentheses or use a
triple-quoted string.
 
> > > >             b = 2
> > > >
> > > >             def x(self):
> > > >                 "C.x doc string"
> > > >                 y = 3
> > > >                 return 1
> > > >
> > > >             "b's doc string"
> > > >
> > > >     Since the definition of method "x" currently does not reset the
> > > >     used assignment name variable, it is still valid when the compiler
> > > >     reaches the docstring "b's doc string" and thus assigns the string
> > > >     to __doc__b__.
> > >
> > > This is rather surprising behavior. Does this mean that a string in
> > > the middle of a function definition would be interpreted as the
> > > docstring of the function?
> >
> > No, since at the beginning of the function the name variable
> > is set to NULL.
> 
> Fine. Could the attribute docstrings follow the same pattern, then?

They could, and probably should, by resetting the variable
after all constructs which do not assign attributes.
 
> > > >     A possible solution to this problem would be resetting the name
> > > >     variable for all non-expression nodes.
> > >
> > > IMHO, David Goodger's proposal of indenting the docstring relative to the
> > > attribute it refers to is a better solution.
> > >
> > > If that requires too many changes to the parser, the name variable
> > > should be reset for all statement nodes.
> >
> > See my other mail: indenting is only allowed for blocks of
> > code and these are usually started with a colon -- doesn't
> > work here.
> 
> Too bad.
> 
> It's-still-a-great-addition-to-Python ly,
> Christian

Me thinks so too ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at beopen.com  Mon Aug 28 23:59:36 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 16:59:36 -0500
Subject: [Python-Dev] Lukewarm about range literals
Message-ID: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>

I chatted with some PythonLabs folks this morning and nobody had any
real enthusiasm for range literals.  I notice that:

  for i in [:100]: print i

looks a bit too much like line noise.  I remember that Randy Pausch
once mentioned that a typical newbie will read this as:

  for i in 100 print i

and they will have a heck of a time to reconstruct the punctuation,
with all sorts of errors lurking, e.g.:

  for i in [100]: print i
  for i in [100:]: print i
  for i in :[100]: print i

Is there anyone who wants to champion this?

Sorry, Thomas!  I'm not doing this to waste your time!  It honestly
only occurred to me this morning, after Tim mentioned he was at most
lukewarm about it...

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From thomas at xs4all.net  Mon Aug 28 23:06:31 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 23:06:31 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Mon, Aug 28, 2000 at 04:59:36PM -0500
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
Message-ID: <20000828230630.I500@xs4all.nl>

On Mon, Aug 28, 2000 at 04:59:36PM -0500, Guido van Rossum wrote:

> Sorry, Thomas!  I'm not doing this to waste your time!  It honestly
> only occurred to me this morning, after Tim mentioned he was at most
> lukewarm about it...

Heh, no problem. It was good practice, and if you remember (or search your
mail archive) I was only lukewarm about it, too, back when you asked me to
write it! And I've been modulating between 'lukewarm' and 'stone-cold'
in between, what with generators, tuple-ranges that look like
hardware addresses, non-int ranges and what not.

Less-docs-to-write-if-noone-champions-this-then-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at beopen.com  Tue Aug 29 00:30:10 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 17:30:10 -0500
Subject: [Python-Dev] Python 2.0 License Discussion Mailing List Created
Message-ID: <200008282230.RAA30148@cj20424-a.reston1.va.home.com>

Now that the CNRI license issues are nearly settled, BeOpen.com needs
to put its own license on Python 2.0 (as a derivative work of CNRI's
Python 1.6) too.  We want an open discussion about the new license
with the Python community, and have established a mailing list for
this purpose.  To participate, go to

   http://mailman.beopen.com/mailman/listinfo/license-py20

and follow the instructions for subscribing.  The mailing list is
unmoderated, open to all, and archived
(at http://mailman.beopen.com/pipermail/license-py20/).

Your questions, concerns and suggestions are welcome!

Our initial thoughts are to use a slight adaptation of the CNRI
license for Python 1.6, adding an "or GPL" clause, meaning that Python
2.0 can be redistributed under the Python 2.0 license or under the GPL
(like Perl can be redistributed under the Artistic license or under
the GPL).

Note that I don't want this list to degenerate into flaming about the
CNRI license (except as it pertains directly to the 2.0 license) --
there's little we can do about the CNRI license, and it has been
beaten to death on comp.lang.python.

In case you're in the dark about the CNRI license, please refer to
http://www.python.org/1.6/download.html for the license text and to
http://www.python.org/1.6/license_faq.html for a list of frequently
asked questions about the license and CNRI's answers.

Note that we're planning to release the first beta release of Python
2.0 on September 4 -- however we can change the license for the final
release.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Mon Aug 28 23:46:19 2000
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 28 Aug 2000 16:46:19 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
Message-ID: <14762.56747.826063.269390@beluga.mojam.com>

    Guido> I notice that:

    Guido>   for i in [:100]: print i

    Guido> looks a bit too much like line noise.  I remember that Randy
    Guido> Pausch once mentioned that a typical newbie will read this as:

    Guido>   for i in 100 print i

Just tossing out a couple ideas here.  I don't see either mentioned in the
current version of the PEP.

    1. Would it help readability if there were no optional elements in range
       literals?  That way you'd have to write

	for i in [0:100]: print i

    2. Would it be more visually obvious to use ellipsis notation to
       separate the start and end indices?

        >>> for i in [0...100]: print i
	0
	1
	...
	99

	>>> for i in [0...100:2]: print i
	0
	2
	...
	98

I don't know if either are possible syntactically.

Skip



From thomas at xs4all.net  Mon Aug 28 23:55:36 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 28 Aug 2000 23:55:36 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14762.56747.826063.269390@beluga.mojam.com>; from skip@mojam.com on Mon, Aug 28, 2000 at 04:46:19PM -0500
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com> <14762.56747.826063.269390@beluga.mojam.com>
Message-ID: <20000828235536.J500@xs4all.nl>

On Mon, Aug 28, 2000 at 04:46:19PM -0500, Skip Montanaro wrote:

> I don't know if either are possible syntactically.

They are perfectly possible (in fact, more easily so than the current
solution, if it hadn't already been written.) I like the ellipsis syntax
myself, but mostly because I have *no* use for ellipses, currently. It's also
reminiscent of the range-creating '..' syntax I learned in MOO, a long time
ago ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gvwilson at nevex.com  Tue Aug 29 00:04:41 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Mon, 28 Aug 2000 18:04:41 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000828235536.J500@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>

> Thomas Wouters wrote:
> They are perfectly possible (in fact, more easily so than the current
> solution, if it hadn't already been written.) I like the ellipsis
> syntax myself, but mostly because I have *no* use for ellipses,
> currently. It's also reminiscent of the range-creating '..' syntax I
> learned in MOO, a long time ago ;)

I would vote -1 on [0...100:10] --- even range(0, 100, 10) reads better,
IMHO.  I understand Guido et al's objections to:

    for i in [:100]:

but in my experience, students coming to Python from other languages seem
to expect to be able to say "do this N times" very simply.  Even:

    for i in range(100):

raises eyebrows.  I know it's all syntactic sugar, but it comes up in the
first hour of every course I've taught...

Thanks,

Greg




From nowonder at nowonder.de  Tue Aug 29 02:41:57 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Tue, 29 Aug 2000 00:41:57 +0000
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
Message-ID: <39AB06D5.BD99855@nowonder.de>

Greg Wilson wrote:
> 
> I would vote -1 on [0...100:10] --- even range(0, 100, 10) reads better,

I don't like [0...100] either. It just looks bad.
But I really *do* like [0..100] (maybe that's Pascal being my first
serious language).

That said, I prefer almost any form of range literals over the current
situation. range(0,100) has no meaning to me (maybe because English is
not my mother tongue), but [0..100] looks like "from 0 to 100"
(although one might expect len([1..100]) == 100).

> but in my experience, students coming to Python from other languages seem
> to expect to be able to say "do this N times" very simply.  Even:
> 
>     for i in range(100):
> 
> raises eyebrows.  I know it's all syntactic sugar, but it comes up in the
> first hour of every course I've taught...

I fully agree on that one, although I think range(N) to
iterate N times isn't as bad as range(len(SEQUENCE)) to
iterate over the indices of a sequence.

not-voting---but-you-might-be-able-to-guess-ly y'rs
Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From cgw at fnal.gov  Tue Aug 29 00:47:30 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Mon, 28 Aug 2000 17:47:30 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
References: <200008282159.QAA29918@cj20424-a.reston1.va.home.com>
Message-ID: <14762.60418.53633.223999@buffalo.fnal.gov>

I guess I'm in the minority here because I kind of like the range
literal syntax.

Guido van Rossum writes:
 > I notice that:
 > 
 >   for i in [:100]: print i
 > 
 > looks a bit too much like line noise.  I remember that Randy Pausch
 > once mentioned that a typical newbie will read this as:
 > 
 >   for i in 100 print i

When I was a complete Python newbie (back around 1994) I thought that
the syntax

l2 = l1[:]

for copying lists looked pretty mysterious and weird.  But after
spending some time programming Python I've come to think that the
slice syntax is perfectly natural.  Should constructs be banned from
the language simply because they might confuse newbies?  I don't think
so.

I for one like Thomas' range literals.  They fit very naturally into
the existing Python concept of slices.

 > and they will have a heck of a time to reconstruct the punctuation,
 > with all sorts of errors lurking, e.g.:
 > 
 >   for i in [100]: print i
 >   for i in [100:]: print i
 >   for i in :[100]: print i

This argument seems a bit weak to me; you could take just about any
Python expression and mess up the punctuation with misplaced colons.

 > Is there anyone who wants to champion this?

I don't know about "championing" it but I'll give it a +1, if that
counts for anything.




From gvwilson at nevex.com  Tue Aug 29 01:02:29 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Mon, 28 Aug 2000 19:02:29 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14762.60418.53633.223999@buffalo.fnal.gov>
Message-ID: <Pine.LNX.4.10.10008281901370.12053-100000@akbar.nevex.com>

> Charles wrote:
> When I was a complete Python newbie (back around 1994) I thought that
> the syntax
> 
> l2 = l1[:]
> 
> for copying lists looked pretty mysterious and weird.  But after
> spending some time programming Python I've come to think that the
> slice syntax is perfectly natural.  Should constructs be banned from
> the language simply because they might confuse newbies?

Greg writes:
Well, it *is* the reason we switched from Perl to Python in our software
engineering course...

Greg




From guido at beopen.com  Tue Aug 29 02:33:01 2000
From: guido at beopen.com (Guido van Rossum)
Date: Mon, 28 Aug 2000 19:33:01 -0500
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: Your message of "Mon, 28 Aug 2000 19:02:29 -0400."
             <Pine.LNX.4.10.10008281901370.12053-100000@akbar.nevex.com> 
References: <Pine.LNX.4.10.10008281901370.12053-100000@akbar.nevex.com> 
Message-ID: <200008290033.TAA30757@cj20424-a.reston1.va.home.com>

> > Charles wrote:
> > When I was a complete Python newbie (back around 1994) I thought that
> > the syntax
> > 
> > l2 = l1[:]
> > 
> > for copying lists looked pretty mysterious and weird.  But after
> > spending some time programming Python I've come to think that the
> > slice syntax is perfectly natural.  Should constructs be banned from
> > the language simply because they might confuse newbies?
> 
> Greg writes:
> Well, it *is* the reason we switched from Perl to Python in our software
> engineering course...

And the original proposal for range literals also came from the
Numeric corner of the world (I believe Paul Dubois first suggested it
to me).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From tim_one at email.msn.com  Tue Aug 29 03:51:33 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Mon, 28 Aug 2000 21:51:33 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000828230630.I500@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>

Just brain-dumping here:

Thomas did an excellent job on the patch!  It's clean & crisp and, I think,
bulletproof.  Just want that to be clear.

As the reviewer, I spent about 2 hours playing with it, trying it out in my
code.  And I simply liked it less the more I used it; e.g.,

for i in [:len(a)]:
    a[i] += 1

struck me as clumsier and uglier than

for i in range(len(a)):
    a[i] += 1

at once-- which I expected due to the novelty --but didn't grow on me at
*all*.  Which is saying something, since I'm the world's longest-standing
fan of "for i indexing a" <wink>; i.e., I'm *no* fan of the range(len(...))
business, and this seems even worse.  Despite that I should know 100x better
at all levels, I kept finding myself trying to write stuff like

for i in [:a]:  # or [len(a)] a couple times, even [a:] once
    a[i] += 1

Charles likes slices.  Me too!  I *love* them.  But as a standalone notation
(i.e., not as a subscript), part of the glory of slicing breaks down:  for
the list a, a[:] makes good sense, but when *iterating* over a,  it's
suddenly [:len(a)] because there's no context to supply a correct upper
bound.

For 2.0, the question is solely yes-or-no on this specific notation.  If it
goes in, it will never go away.  I was +0 at first, at best -0 now.  It does
nothing for me I can't do just as easily-- and I think more clearly --with
range.  The kinds of "extensions"/variations mentioned in the PEP make me
shiver, too.

Post 2.0, who knows.  I'm not convinced Python actually needs another
arithmetic-progression *list* notation.  If it does, I've always been fond
of Haskell's range literals (but note that they include the endpoint):

Prelude> [1..10]
[1,2,3,4,5,6,7,8,9,10]
Prelude> [1, 3 .. 10]
[1,3,5,7,9]
Prelude> [10, 9 .. 1]
[10,9,8,7,6,5,4,3,2,1]
Prelude> [10, 7 .. -5]
[10,7,4,1,-2,-5]
Prelude>

Of course Haskell is profoundly lazy too, so "infinite" literals are just as
normal there:

Prelude> take 5 [1, 100 ..]
[1,100,199,298,397]
Prelude> take 5 [3, 2 ..]
[3,2,1,0,-1]

It's often easier to just list the first two terms than to figure out the
*last* term and name the stride.  I like notations that let me chuckle "hey,
you're the computer, *you* figure out the silly details" <wink>.
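Those inclusive Haskell ranges are easy enough to sketch as a plain
function, for anyone who wants to play with the semantics (the name
and signature below are invented for illustration, not a proposal):

```python
def haskell_range(first, last, second=None):
    # [first .. last] or [first, second .. last]: the stride is taken
    # from the first two terms, and the endpoint is included whenever
    # the progression lands on it exactly.
    step = 1 if second is None else second - first
    if step == 0:
        raise ValueError("stride must be nonzero")
    result, n = [], first
    while (n <= last) if step > 0 else (n >= last):
        result.append(n)
        n += step
    return result
```

haskell_range(10, -5, second=7) gives [10, 7, 4, 1, -2, -5], matching
the Prelude session above.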





From dgoodger at bigfoot.com  Tue Aug 29 05:05:41 2000
From: dgoodger at bigfoot.com (David Goodger)
Date: Mon, 28 Aug 2000 23:05:41 -0400
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <39AABF1E.171BFD00@lemburg.com>
References: <m13TRlA-000wcEC@swing.co.at><39AABF1E.171BFD00@lemburg.com>
Message-ID: <B5D0A0C4.82E1%dgoodger@bigfoot.com>

on 2000-08-28 15:35, M.-A. Lemburg (mal at lemburg.com) wrote:

> Christian Tanzer wrote:
>> 
>> "M.-A. Lemburg" <mal at lemburg.com> wrote:
>> 
>>>> IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
>>>> __docs__ dictionary is a better solution:
>>>> 
>>>> - It provides all docstrings for the attributes of an object in a
>>>> single place.
>>>> 
>>>> * Handy in interactive mode.
>>>> * This simplifies the generation of documentation considerably.
>>>> 
>>>> - It is easier to explain in the documentation
>>> 
>>> The downside is that it doesn't work well together with
>>> class inheritance: docstrings of the above form can
>>> be overridden or inherited just like any other class
>>> attribute.
>> 
>> Yep. That's why David also proposed a `doc' function combining the
>> `__docs__' of a class with all its ancestor's __docs__.
> 
> The same can be done for __doc__<attrname>__ style attributes:
> a helper function would just need to look at dir(Class) and then
> extract the attribute doc strings it finds. It could also do
> a DFS search to find a complete API description of the class
> by emulating attribute lookup and combine method and attribute
> docstrings to produce some nice online documentation output.

Using dir(Class) wouldn't find any inherited attributes of the class. A
depth-first search would be required for any use of attribute docstrings.


From cgw at fnal.gov  Tue Aug 29 06:38:41 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Mon, 28 Aug 2000 23:38:41 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
References: <20000828230630.I500@xs4all.nl>
	<LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <14763.15953.563107.722452@buffalo.fnal.gov>

Tim Peters writes:

 > As the reviewer, I spent about 2 hours playing with it, trying it out in my
 > code.  And I simply liked it less the more I used it

That's 2 hours more than I (and probably most other people) spent
trying it out.

 > For 2.0, the question is solely yes-or-no on this specific notation.  If it
 > goes in, it will never go away.

This strikes me as an extremely strong argument.  If the advantages
aren't really all that clear, then adopting this syntax for range
literals now removes the possibility to come up with a better way at a
later date ("opportunity cost", as the economists say).

The Haskell examples you shared are pretty neat.

FWIW, I retract my earlier +1.




From ping at lfw.org  Tue Aug 29 07:09:39 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 00:09:39 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>

On Mon, 28 Aug 2000, Tim Peters wrote:
> Post 2.0, who knows.  I'm not convinced Python actually needs another
> arithmetic-progression *list* notation.  If it does, I've always been fond
> of Haskell's range literals (but note that they include the endpoint):
> 
> Prelude> [1..10]
> [1,2,3,4,5,6,7,8,9,10]
> Prelude> [1, 3 .. 10]
> [1,3,5,7,9]
> Prelude> [10, 9 .. 1]
> [10,9,8,7,6,5,4,3,2,1]
> Prelude> [10, 7 .. -5]
> [10,7,4,1,-2,-5]

I think these examples are beautiful.  There is no reason why we couldn't
fit something like this into Python.  Imagine this:

    - The ".." operator produces a tuple (or generator) of integers.
      It should probably have precedence just above "in".
    
    - "a .. b", where a and b are integers, produces the sequence
      of integers (a, a+1, a+2, ..., b).

    - If the left argument is a tuple of two integers, as in
      "a, b .. c", then we get the sequence of integers from
      a to c with step b-a, up to and including c if c-a happens
      to be a multiple of b-a (exactly as in Haskell).

And, optionally:

    - The "..!" operator produces a tuple (or generator) of integers.
      It functions exactly like the ".." operator except that the
      resulting sequence does not include the endpoint.  (If you read
      "a .. b" as "go from a up to b", then read "a ..! b" as "go from
      a up to, but not including b".)

If this operator existed, we could then write:

    for i in 2, 4 .. 20:
        print i

    for i in 1 .. 10:
        print i*i

    for i in 0 ..! len(a):
        a[i] += 1

...and these would all do the obvious things.
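In terms of the existing range() builtin (the ".." and "..!" operators
themselves are only proposed, not real), those three loops would read
roughly:

```python
a = [0, 0, 0]

for i in range(2, 21, 2):   # 2, 4 .. 20: inclusive endpoint
    print(i)

for i in range(1, 11):      # 1 .. 10: inclusive endpoint
    print(i * i)

for i in range(0, len(a)):  # 0 ..! len(a): exclusive endpoint
    a[i] += 1
```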


-- ?!ng





From greg at cosc.canterbury.ac.nz  Tue Aug 29 07:04:05 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 29 Aug 2000 17:04:05 +1200 (NZST)
Subject: [Python-Dev] Python 2.0 License Discussion Mailing List Created
In-Reply-To: <200008282230.RAA30148@cj20424-a.reston1.va.home.com>
Message-ID: <200008290504.RAA17003@s454.cosc.canterbury.ac.nz>

> meaning that Python
> 2.0 can be redistributed under the Python 2.0 license or under the
> GPL

Are you sure that's possible? Doesn't the CNRI license
require that its terms be passed on to users of derivative
works? If so, a user of Python 2.0 couldn't just remove the
CNRI license and replace it with the GPL.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Tue Aug 29 07:17:38 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 29 Aug 2000 17:17:38 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>
Message-ID: <200008290517.RAA17013@s454.cosc.canterbury.ac.nz>

Ka-Ping Yee <ping at lfw.org>:

>    for i in 1 .. 10:
>        print i*i

That looks quite nice to me!

>    for i in 0 ..! len(a):
>        a[i] += 1

And that looks quite ugly. Couldn't it just as well be

    for i in 0 .. len(a)-1:
        a[i] += 1

and be vastly clearer?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+




From bwarsaw at beopen.com  Tue Aug 29 07:30:31 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 29 Aug 2000 01:30:31 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>
	<200008290517.RAA17013@s454.cosc.canterbury.ac.nz>
Message-ID: <14763.19063.973751.122546@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    GE> Ka-Ping Yee <ping at lfw.org>:

    >> for i in 1 .. 10: print i*i

    GE> That looks quite nice to me!

Indeed.

    >> for i in 0 ..! len(a): a[i] += 1

    GE> And that looks quite ugly. Couldn't it just as well be

    |     for i in 0 .. len(a)-1:
    |         a[i] += 1

    GE> and be vastly clearer?

I agree.  While I read 1 ..! 10 as "from one to not 10" that doesn't
exactly tell me what the sequence /does/ run to. ;)

-Barry



From effbot at telia.com  Tue Aug 29 09:09:02 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 09:09:02 +0200
Subject: [Python-Dev] Lukewarm about range literals
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <01c501c01188$a21232e0$766940d5@hagrid>

tim peters wrote:
> Charles likes slices.  Me too!  I *love* them.  But as a standalone notation
> (i.e., not as a subscript), part of the glory of slicing breaks down:  for
> the list a, a[:] makes good sense, but when *iterating* over a,  it's
> suddenly [:len(a)] because there's no context to supply a correct upper
> bound.

agreed.  ranges and slices are two different things.  giving
them the same syntax is a lousy idea.

> Post 2.0, who knows.  I'm not convinced Python actually needs another
> arithmetic-progression *list* notation.  If it does, I've always been fond
> of Haskell's range literals (but note that they include the endpoint):
> 
> Prelude> [1..10]
> [1,2,3,4,5,6,7,8,9,10]
> Prelude> [1, 3 .. 10]
> [1,3,5,7,9]

isn't that taken from SETL?

(the more I look at SETL, the more Pythonic it looks.  not too
bad for something that was designed in the late sixties ;-)

talking about SETL, now that the range literals are gone, how
about revisiting an old proposal:

    "...personally, I prefer their "tuple former" syntax over the the
    current PEP202 proposal:

        [expression : iterator]

        [n : n in range(100)]
        [(x**2, x) : x in range(1, 6)]
        [a : a in y if a > 5]

    (all examples are slightly pythonified; most notably, they
    use "|" or "st" (such that) instead of "if")

    the expression can be omitted if it's the same thing as the
    loop variable, *and* there's at least one "if" clause:

        [a in y if a > 5]

    also note that their "for-in" statement can take qualifiers:

        for a in y if a > 5:
            ...
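For comparison, those tuple-former examples come out like this in the
PEP 202 comprehension syntax that is actually on the table (`y` is
just a placeholder sequence):

```python
y = [3, 6, 9, 2]

ns = [n for n in range(100)]                # [n : n in range(100)]
pairs = [(x ** 2, x) for x in range(1, 6)]  # [(x**2, x) : x in range(1, 6)]
big = [a for a in y if a > 5]               # [a : a in y if a > 5]
```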

</F>




From tanzer at swing.co.at  Tue Aug 29 08:42:17 2000
From: tanzer at swing.co.at (Christian Tanzer)
Date: Tue, 29 Aug 2000 08:42:17 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings 
In-Reply-To: Your message of "Mon, 28 Aug 2000 21:35:58 +0200."
             <39AABF1E.171BFD00@lemburg.com> 
Message-ID: <m13Tf69-000wcDC@swing.co.at>

"M.-A. Lemburg" <mal at lemburg.com> wrote:

> > > > IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
> > > > __docs__ dictionary is a better solution:
(snip)
> > > The downside is that it doesn't work well together with
> > > class inheritance: docstrings of the above form can
> > > be overridden or inherited just like any other class
> > > attribute.
> > 
> > Yep. That's why David also proposed a `doc' function combining the
> > `__docs__' of a class with all its ancestor's __docs__.
> 
> The same can be done for __doc__<attrname>__ style attributes:
> a helper function would just need to look at dir(Class) and then
> extract the attribute doc strings it finds. It could also do
> a DFS search to find a complete API description of the class
> by emulating attribute lookup and combine method and attribute
> docstrings to produce some nice online documentation output.

Of course, one can get at all docstrings by using `dir'. But it is a
pain and slow as hell. And nothing one would use in interactive mode.

As Python already handles the analogous case for `__dict__' and
`getattr', it seems to be just a SMOP to do it for `__docs__', too. 

> > > > Normally, Python concatenates adjacent strings. It doesn't do this
> > > > with docstrings. I think Python's behavior would be more consistent
> > > > if docstrings were concatenated like any other strings.
> > >
> > > Huh ? It does...
> > >
> > > >>> class C:
> > > ...     "first line"\
> > > ...     "second line"
> > > ...
> > > >>> C.__doc__
> > > 'first linesecond line'
> > >
> > > And the same works for the attribute doc strings too.
> > 
> > Surprise. I tried it this morning. Didn't use a backslash, though. And almost
> > overlooked it now.
> 
> You could also wrap the doc string in parentheses or use a
> triple-quoted string.

Wrapping a docstring in parentheses doesn't work in 1.5.2:

Python 1.5.2 (#5, Jan  4 2000, 11:37:02)  [GCC 2.7.2.1] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> class C:
...   ("first line"
...    "second line")
... 
>>> C.__doc__ 
>>> 

Triple-quoted strings work -- that's what I'm constantly using. The
downside is that the docstrings either contain spurious whitespace
or mess up the layout of the code (if you start subsequent lines
in the first column).
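For what it's worth, the spurious whitespace can also be stripped
after the fact; the standard library later grew inspect.cleandoc()
for exactly this job. A quick demonstration:

```python
import inspect

class C:
    """first line
       second line, indented to line up with the code"""

# cleandoc() drops the indentation that triple-quoted docstrings
# inherit from the surrounding code layout.
print(inspect.cleandoc(C.__doc__))
```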

-- 
Christian Tanzer                                         tanzer at swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92




From mal at lemburg.com  Tue Aug 29 11:00:49 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 11:00:49 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13TRlA-000wcEC@swing.co.at><39AABF1E.171BFD00@lemburg.com> <B5D0A0C4.82E1%dgoodger@bigfoot.com>
Message-ID: <39AB7BC1.5670ACDC@lemburg.com>

David Goodger wrote:
> 
> on 2000-08-28 15:35, M.-A. Lemburg (mal at lemburg.com) wrote:
> 
> > Christian Tanzer wrote:
> >>
> >> "M.-A. Lemburg" <mal at lemburg.com> wrote:
> >>
> >>>> IMHO, David Goodger's (<dgoodger at bigfoot.com>) idea of using a
> >>>> __docs__ dictionary is a better solution:
> >>>>
> >>>> - It provides all docstrings for the attributes of an object in a
> >>>> single place.
> >>>>
> >>>> * Handy in interactive mode.
> >>>> * This simplifies the generation of documentation considerably.
> >>>>
> >>>> - It is easier to explain in the documentation
> >>>
> >>> The downside is that it doesn't work well together with
> >>> class inheritance: docstrings of the above form can
> >>> be overridden or inherited just like any other class
> >>> attribute.
> >>
> >> Yep. That's why David also proposed a `doc' function combining the
> >> `__docs__' of a class with all its ancestor's __docs__.
> >
> > The same can be done for __doc__<attrname>__ style attributes:
> > a helper function would just need to look at dir(Class) and then
> > extract the attribute doc strings it finds. It could also do
> > a DFS search to find a complete API description of the class
> > by emulating attribute lookup and combine method and attribute
> > docstrings to produce some nice online documentation output.
> 
> Using dir(Class) wouldn't find any inherited attributes of the class. A
> depth-first search would be required for any use of attribute docstrings.

Uhm, yes... that's what I wrote in the last paragraph.

> The advantage of the __doc__attribute__ name-mangling scheme (over __docs__
> dictionaries) would be that the attribute docstrings would be accessible
> from subclasses and class instances. But since "these attributes are meant
> for tools to use, not humans," this is not an issue.

I understand that you would rather like a "frozen" version
of the class docs, but this simply doesn't work out for
the common case of mixin classes and classes which are built
at runtime.

The name mangling is meant for internal use and just to give
the beast a name ;-) 

Doc tools can then take whatever action
they find necessary and apply the needed lookup, formatting
and content extraction. They might even add a frozen __docs__
attribute to classes which are known not to change after
creation.

I use such a function which I call freeze() to optimize many
static classes in my applications: the function scans all
available attributes in the inheritance tree and adds them
directly to the class in question. This gives some noticeable
speedups for deeply nested class structures or ones which
use many mixin classes.
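The freeze() optimization described here might look something like the following sketch; the name and the copy-everything-into-the-class strategy come from the text, while the implementation details are assumptions:

```python
# Hedged sketch of a freeze() helper: copy every ordinary attribute found
# anywhere in the inheritance tree directly into the class itself, so
# later lookups never have to walk the bases.

def freeze(klass):
    """Flatten the inheritance tree of klass into its own __dict__."""
    for base in klass.__mro__[1:]:         # ancestors, nearest first
        for name, value in vars(base).items():
            # leave special attributes alone; copy ordinary methods/attrs
            if name.startswith('__') and name.endswith('__'):
                continue
            if name not in vars(klass):
                setattr(klass, name, value)
    return klass

class Mixin:
    def greet(self):
        return "hello"

class App(Mixin):
    pass

freeze(App)
print('greet' in vars(App))   # True -- found without walking the bases
```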

> Just to *find* all attribute names, in order to extract the docstrings, you
> would *have* to go through a depth-first search of all base classes. Since
> you're doing that anyway, why not collect docstrings as you collect
> attributes? There would be no penalty. In fact, such an optimized function
> could be written and included in the standard distribution.
> 
> A perfectly good model exists in __dict__ and dir(). Why not imitate it?

Sure, but let's do that in a doc() utility function.

I want to keep the implementation of this PEP clean and simple.
All meta-logic should be applied by external helpers.

> on 2000-08-28 04:28, M.-A. Lemburg (mal at lemburg.com) wrote:
> > This would not work well together with class inheritance.
> 
> It seems to me that it would work *exactly* as does class inheritance,
> cleanly and elegantly.

Right, and that's why I'm proposing to use attributes for the
docstrings as well: the docstrings will then behave just like
the attributes they describe.

> The __doc__attribute__ name-mangling scheme strikes
> me as un-Pythonic, to be honest.

It may look a bit strange, but it's certainly not un-Pythonic:
just look at private name mangling or the many __xxx__ hooks
which Python uses.
 
> Let me restate: I think the idea of attribute docstring is great. It brings
> a truly Pythonic, powerful auto-documentation system (a la POD or JavaDoc)
> closer. And I'm willing to help!

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Tue Aug 29 11:41:15 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 11:41:15 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13Tf69-000wcDC@swing.co.at>
Message-ID: <39AB853B.217402A2@lemburg.com>

Christian Tanzer wrote:
> 
> > > > >>> class C:
> > > > ...     "first line"\
> > > > ...     "second line"
> > > > ...
> > > > >>> C.__doc__
> > > > 'first linesecond line'
> > > >
> > > > And the same works for the attribute doc strings too.
> > >
> > > Surprise. I tried it this morning. Didn't use a backslash, though. And almost
> > > overlooked it now.
> >
> > You could also wrap the doc string in parenthesis or use a triple
> > quote string.
> 
> Wrapping a docstring in parentheses doesn't work in 1.5.2:
> 
> Python 1.5.2 (#5, Jan  4 2000, 11:37:02)  [GCC 2.7.2.1] on linux2
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> >>> class C:
> ...   ("first line"
> ...    "second line")
> ...
> >>> C.__doc__
> >>>

Hmm, looks like you're right... the parentheses probably only
work for "if" and function calls. This works:

function("firstline"
	 "secondline")

> Triple quoted strings work -- that's what I'm constantly using. The
> downside is, that the docstrings either contain spurious white space
> or it messes up the layout of the code (if you start subsequent lines
> in the first column).

Just a question of how smart your docstring extraction
tools are. Have a look at hack.py:

	http://starship.python.net/~lemburg/hack.py

and its docs() API:

>>> class C:
...     """ first line
...         second line
...         third line
...     """
... 
>>> import hack 
>>> hack.docs(C)
Class  :
    first line
    second line
    third line
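The whitespace-stripping part of such an extraction tool is straightforward; this sketch uses `inspect.cleandoc` from the modern standard library (not available in 1.5.2, and not what hack.py itself uses) to remove the common leading indentation:

```python
# Strip the spurious indentation from a triple-quoted docstring in the
# extraction tool rather than in the source layout.

import inspect

class C:
    """ first line
        second line
        third line
    """

print(inspect.cleandoc(C.__doc__))
# first line
# second line
# third line
```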

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jack at oratrix.nl  Tue Aug 29 11:44:30 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 29 Aug 2000 11:44:30 +0200
Subject: [Python-Dev] Pragma-style declaration syntax 
In-Reply-To: Message by "M.-A. Lemburg" <mal@lemburg.com> ,
	     Mon, 28 Aug 2000 20:57:26 +0200 , <39AAB616.460FA0A8@lemburg.com> 
Message-ID: <20000829094431.AA5DB303181@snelboot.oratrix.nl>

> The basic syntax in the above examples is:
> 
> 	"pragma" NAME "=" (NUMBER | STRING+)
> 
> It has to be that simple to allow the compiler to use the information
> at compilation time.

Can we have a bit more syntax, so other packages that inspect the source 
(freeze and friends come to mind) can also use the pragma scheme?

Something like
	"pragma" NAME ("." NAME)+ "=" (NUMBER | STRING+)
should allow freeze to use something like

pragma freeze.exclude = "win32ui, sunaudiodev, linuxaudiodev"

which would be ignored by the compiler but interpreted by freeze.
And, if they're stored in the __pragma__ dictionary too, as was suggested 
here, you can also add pragmas specific for class browsers, debuggers and such.
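A tool like freeze could scan for the dotted form with a simple regular expression, without executing the module. This is a sketch under the assumed grammar `"pragma" NAME ("." NAME)* "=" (NUMBER | STRING+)`; the function name is made up:

```python
# Sketch: scan a module's source text for pragma lines and collect them
# into a dictionary keyed by the dotted pragma name.

import re

PRAGMA = re.compile(
    r'^\s*pragma\s+([A-Za-z_][\w.]*)\s*=\s*(.+)$', re.MULTILINE)

def scan_pragmas(source):
    """Return a {dotted-name: raw-value} dict of pragmas in the source."""
    return {name: value.strip() for name, value in PRAGMA.findall(source)}

src = '''
pragma freeze.exclude = "win32ui, sunaudiodev, linuxaudiodev"
pragma encoding = "UTF-16"
'''
print(scan_pragmas(src))
```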
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From tim_one at email.msn.com  Tue Aug 29 11:45:24 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 29 Aug 2000 05:45:24 -0400
Subject: Actually about PEP 202 (listcomps), not (was RE: [Python-Dev] Lukewarm about range literals)
In-Reply-To: <01c501c01188$a21232e0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com>

[/F]
> agreed.  ranges and slices are two different things.  giving
> them the same syntax is a lousy idea.

Don't know about *that*, but it doesn't appear to work as well as was hoped.

[Tim]
>> Post 2.0, who knows.  I'm not convinced Python actually needs
>> another arithmetic-progression *list* notation.  If it does, I've
>> always been fond of Haskell's range literals (but note that they
>> include the endpoint):
>>
>> Prelude> [1..10]
>> [1,2,3,4,5,6,7,8,9,10]
>> Prelude> [1, 3 .. 10]
>> [1,3,5,7,9]

> isn't that taken from SETL?

Sure looks like it to me.  The Haskell designers explicitly credited SETL
for list comprehensions, but I don't know that they do for this gimmick too.
Of course Haskell's "infinite" list builders weren't in SETL, and, indeed,
expressions like [1..] are pretty common in Haskell programs.  One of the
prettiest programs ever in any language ever:

primes = sieve [2..]
         where sieve (x:xs) = x :
                              sieve [n | n <- xs, n `mod` x /= 0]

which defines the list of all primes.
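For comparison, here is a rough Python rendering of that Haskell sieve, using generators (a feature Python gained only later) in place of lazy lists:

```python
# Lazy sieve of Eratosthenes: each recursive sieve() filters out the
# multiples of the prime it just yielded, mirroring the Haskell version.

from itertools import count

def sieve(stream):
    x = next(stream)
    yield x
    yield from sieve(n for n in stream if n % x != 0)

primes = sieve(count(2))
print([next(primes) for _ in range(10)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```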

> (the more I look at SETL, the more Pythonic it looks.  not too
> bad for something that was designed in the late sixties ;-)

It was way ahead of its time.  Still is!  Check out its general loop
construct, though -- now *that's* a kitchen sink.  Guido mentioned that
ABC's Lambert Meertens spent a year's sabbatical at NYU when SETL was in its
heyday, and I figure that's where ABC got quantifiers in boolean expressions
(if each x in list has p(x); if no x in list has p(x); if some x in list has
p(x)).  Have always wondered why Python didn't have that too; I ask that
every year, but so far Guido has never answered it <wink>.

> talking about SETL, now that the range literals are gone, how
> about revisiting an old proposal:
>
>     "...personally, I prefer their "tuple former" syntax over the the
>     current PEP202 proposal:
>
>         [expression : iterator]
>
>         [n : n in range(100)]
>         [(x**2, x) : x in range(1, 6)]
>         [a : a in y if a > 5]
>
>     (all examples are slightly pythonified; most notably, they
>     use "|" or "st" (such that) instead of "if")
>
>     the expression can be omitted if it's the same thing as the
>     loop variable, *and* there's at least one "if" clause:
>
>         [a in y if a > 5]
>
>     also note that their "for-in" statement can take qualifiers:
>
>         for a in y if a > 5:
>             ...

You left off the last sentence from the first time you posted this:

>     is there any special reason why we cannot use colon instead
>     of "for"?

Guido then said we couldn't use a colon because that would make [x : y] too
hard to parse, because range literals were of the same form.  Thomas went on
to point out that it's worse than that, it's truly ambiguous.

Now I expect you prefaced this with "now that the range literals are gone"
expecting that everyone would just remember all that <wink>.  Whether they
did or not, now they should.

I counted two replies beyond those.  One from Peter Schneider-Kamp was
really selling another variant.  The other from Marc-Andre Lemburg argued
that while the shorthand is convenient for mathematicians, "I doubt
that CP4E users get the grasp of this".

Did I miss anything?

Since Guido didn't chime in again, I assumed he was happy with how things
stood.  I further assume he picked on a grammar technicality to begin with
because that's the way he usually shoots down a proposal he doesn't want to
argue about -- "no new keywords" has served him extremely well that way
<wink>.  That is, I doubt that "now that the range literals are gone" (if
indeed they are!) will make any difference to him, and with the release one
week away he'd have to get real excited real fast.

I haven't said anything about it, but I'm with Marc-Andre on this:  sets
were *extremely* heavily used in SETL, and brevity in their expression was a
great virtue there because of it.  listcomps won't be that heavily used in
Python, and I think it's downright Pythonic to leave them wordy in order to
*discourage* fat hairy listcomp expressions.  They've been checked in for
quite a while now, and I like them fine as they are in practice.

I've also got emails like this one in pvt:

    The current explanation "[for and if clauses] nest in the same way
    for loops and if statements nest now." is pretty clear and easy to
    remember.

That's important too, because despite pockets of hysteria to the contrary on
c.l.py, this is still Python.  When I first saw your first example:

     [n : n in range(100)]

I immediately read "n in range(100)" as a true/false expression, because
that's what it *is* in 1.6 unless immediately preceded by "for".  The
current syntax preserves that.  Saving two characters (":" vs "for") isn't
worth it in Python.  The vertical bar *would* be "worth it" to me, because
that's what's used in SETL, Haskell *and* common mathematical practice for
"such that".  Alas, as Guido is sure to point out, that's too hard to parse
<0.9 wink>.

consider-it-channeled-unless-he-thunders-back-ly y'rs  - tim





From mal at lemburg.com  Tue Aug 29 12:40:11 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 12:40:11 +0200
Subject: [Python-Dev] Pragma-style declaration syntax
References: <20000829094431.AA5DB303181@snelboot.oratrix.nl>
Message-ID: <39AB930B.F34673AB@lemburg.com>

Jack Jansen wrote:
> 
> > The basic syntax in the above examples is:
> >
> >       "pragma" NAME "=" (NUMBER | STRING+)
> >
> > It has to be that simple to allow the compiler to use the information
> > at compilation time.
> 
> Can we have a bit more syntax, so other packages that inspect the source
> (freeze and friends come to mind) can also use the pragma scheme?
> 
> Something like
>         "pragma" NAME ("." NAME)+ "=" (NUMBER | STRING+)
> should allow freeze to use something like
> 
> pragma freeze.exclude = "win32ui, sunaudiodev, linuxaudiodev"
> 
> which would be ignored by the compiler but interpreted by freeze.
> And, if they're stored in the __pragma__ dictionary too, as was suggested
> here, you can also add pragmas specific for class browsers, debuggers and such.

Hmm, freeze_exclude would have also done the trick.

The only thing that will have to be assured is that the
arguments are readily available at compile time. Adding
a dot shouldn't hurt ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Tue Aug 29 13:02:14 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 13:02:14 +0200
Subject: Actually about PEP 202 (listcomps), not (was RE: [Python-Dev] Lukewarm about range literals)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Tue, Aug 29, 2000 at 05:45:24AM -0400
References: <01c501c01188$a21232e0$766940d5@hagrid> <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com>
Message-ID: <20000829130214.L500@xs4all.nl>

On Tue, Aug 29, 2000 at 05:45:24AM -0400, Tim Peters wrote:

> Saving two characters (":" vs "for") isn't worth it in Python.  The
> vertical bar *would* be "worth it" to me, because that's what's used in
> SETL, Haskell *and* common mathematical practice for "such that".  Alas,
> as Guido is sure to point out, that's too hard to parse

It's impossible to parse, of course, unless you require the parentheses
around the expression preceding it :)

[ (n) | n in range(100) if n%2 ]

I-keep-writing-'where'-instead-of-'if'-in-those-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gvwilson at nevex.com  Tue Aug 29 13:28:58 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 07:28:58 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008290517.RAA17013@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>

> > Ka-Ping Yee <ping at lfw.org>:
> >    for i in 1 .. 10:
> >        print i*i
> >    for i in 0 ..! len(a):
> >        a[i] += 1

Greg Wilson writes:

The problem with using ellipsis is that there's no obvious way to include
a stride --- how do you hit every second (or n'th) element, rather than
every element?  I'd rather stick to range() than adopt:

    for i in [1..10:5]

Thanks,
Greg

BTW, I understand from side conversations that adding a 'keys()' method to
sequences, so that arbitrary collections could be iterated over using:

    for i in S.keys():
        print i, S[i]

was considered and rejected.  If anyone knows why, I'd be grateful for a
recap.
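For the record, the effect asked for here is already available for sequences without a keys() method, by iterating over the indices explicitly:

```python
# Iterate a sequence together with its indices -- the behaviour a
# hypothetical S.keys() would have provided.

S = ['a', 'b', 'c']
pairs = [(i, S[i]) for i in range(len(S))]
print(pairs)   # [(0, 'a'), (1, 'b'), (2, 'c')]
```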





From guido at beopen.com  Tue Aug 29 14:36:38 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 07:36:38 -0500
Subject: [Python-Dev] Python 2.0 License Discussion Mailing List Created
In-Reply-To: Your message of "Tue, 29 Aug 2000 17:04:05 +1200."
             <200008290504.RAA17003@s454.cosc.canterbury.ac.nz> 
References: <200008290504.RAA17003@s454.cosc.canterbury.ac.nz> 
Message-ID: <200008291236.HAA32070@cj20424-a.reston1.va.home.com>

[Greg Ewing]
> > meaning that Python
> > 2.0 can be redistributed under the Python 2.0 license or under the
> > GPL
> 
> Are you sure that's possible? Doesn't the CNRI license
> require that its terms be passed on to users of derivative
> works? If so, a user of Python 2.0 couldn't just remove the
> CNRI license and replace it with the GPL.

I don't know the answer to this, but Bob Weiner, BeOpen's CTO, claims
that according to BeOpen's lawyer this is okay.  I'll ask him about
it.

I'll post his answer (when I get it) on the license-py20 list.  I
encourage you to subscribe and repost this question there for the
archives!

(There were some early glitches with the list address, but they have
been fixed.)

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Tue Aug 29 14:41:53 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 07:41:53 -0500
Subject: [Python-Dev] SETL (was: Lukewarm about range literals)
In-Reply-To: Your message of "Tue, 29 Aug 2000 09:09:02 +0200."
             <01c501c01188$a21232e0$766940d5@hagrid> 
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>  
            <01c501c01188$a21232e0$766940d5@hagrid> 
Message-ID: <200008291241.HAA32136@cj20424-a.reston1.va.home.com>

> isn't that taken from SETL?
> 
> (the more I look at SETL, the more Pythonic it looks.  not too
> bad for something that was designed in the late sixties ;-)

You've got it backwards: Python's predecessor, ABC, was inspired by
SETL -- Lambert Meertens spent a year with the SETL group at NYU
before coming up with the final ABC design!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From nowonder at nowonder.de  Tue Aug 29 15:41:30 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Tue, 29 Aug 2000 13:41:30 +0000
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>
Message-ID: <39ABBD8A.B9B3136@nowonder.de>

Greg Wilson wrote:
> 
> BTW, I understand from side conversations that adding a 'keys()' method to
> sequences, so that arbitrary collections could be iterated over using:
> 
>     for i in S.keys():
>         print i, S[i]
> 
> was considered and rejected.  If anyone knows why, I'd be grateful for a
> recap.

If I remember correctly, it was rejected because adding
keys(), items() etc. methods to sequences would make all
objects (in this case sequences and mappings) look the same.

More accurate information from:
http://sourceforge.net/patch/?func=detailpatch&patch_id=101178&group_id=5470

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From fredrik at pythonware.com  Tue Aug 29 13:49:17 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 13:49:17 +0200
Subject: Actually about PEP 202 (listcomps), not (was RE: [Python-Dev] Lukewarm about range literals)
References: <01c501c01188$a21232e0$766940d5@hagrid> <LNBBLJKPBEHFEDALKOLCMEKBHCAA.tim_one@email.msn.com> <20000829130214.L500@xs4all.nl>
Message-ID: <01ae01c011af$2a3550a0$0900a8c0@SPIFF>

thomas wrote:
> > Saving two characters (":" vs "for") isn't worth it in Python.  The
> > vertical bar *would* be "worth it" to me, because that's what's used in
> > SETL, Haskell *and* common mathematical practice for "such that".  Alas,
> > as Guido is sure to point out, that's too hard to parse
>
> It's impossible to parse, of course, unless you require the parentheses
> around the expression preceding it :)
>
> [ (n) | n in range(100) if n%2 ]

I'm pretty sure Tim meant "|" instead of "if".  the SETL syntax is:

    [ n : n in range(100) | n%2 ]

(that is, ":" instead of for, and "|" or "st" instead of "if".  and yes,
they have nice range literals too, so don't take that "range" too
literal ;-)

in SETL, that can also be abbreviated to:

    [ n in range(100) | n%2 ]

which, of course, is a perfectly valid (though slightly obscure)
python expression...

</F>




From guido at beopen.com  Tue Aug 29 14:53:32 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 07:53:32 -0500
Subject: [Python-Dev] SETL (was: Lukewarm about range literals)
In-Reply-To: Your message of "Tue, 29 Aug 2000 07:41:53 EST."
             <200008291241.HAA32136@cj20424-a.reston1.va.home.com> 
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <01c501c01188$a21232e0$766940d5@hagrid>  
            <200008291241.HAA32136@cj20424-a.reston1.va.home.com> 
Message-ID: <200008291253.HAA32332@cj20424-a.reston1.va.home.com>

> It was way ahead of its time.  Still is!  Check out its general loop
> construct, though -- now *that's* a kitchen sink.  Guido mentioned that
> ABC's Lambert Meertens spent a year's sabbatical at NYU when SETL was in its
> heyday, and I figure that's where ABC got quantifiers in boolean expressions
> (if each x in list has p(x); if no x in list has p(x); if some x in list has
> p(x)).  Have always wondered why Python didn't have that too; I ask that
> every year, but so far Guido has never answered it <wink>.

I don't recall you asking me that even *once* before now.  Proof,
please?

Anyway, the answer is that I saw diminishing returns from adding more
keywords and syntax.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From skip at mojam.com  Tue Aug 29 16:46:23 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 09:46:23 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <39AB06D5.BD99855@nowonder.de>
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
	<39AB06D5.BD99855@nowonder.de>
Message-ID: <14763.52415.747655.334938@beluga.mojam.com>

    Peter> I don't like [0...100] either. It just looks bad.  But I really
    Peter> *do* like [0..100] (maybe that's Pascal being my first serious
    Peter> language).

Which was why I proposed "...".  It's sort of like "..", but has the
advantage of already being a recognized token.  I doubt there would be much
problem adding ".." as a token either.

What we really want I think is something that evokes the following in the
mind of the reader

    for i from START to END incrementing by STEP:

without gobbling up all those keywords.  That might be one of the following:

    for i in [START..END,STEP]:
    for i in [START:END:STEP]:
    for i in [START..END:STEP]:

I'm sure there are other possibilities, but given the constraints of putting
the range literal in square brackets and not allowing a comma as the first
separator, the choices seem limited.
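All three spellings denote what a plain range() call already expresses; assuming ".." is inclusive (as in Pascal) and ":" half-open (as in slices), the correspondence is roughly:

```python
# Assumed mapping of the proposed literals onto range():
#     [START..END,STEP]  ~  range(START, END + 1, STEP)   (".." inclusive)
#     [START:END:STEP]   ~  range(START, END, STEP)       (":" half-open)

START, END, STEP = 1, 10, 2
print(list(range(START, END, STEP)))       # [1, 3, 5, 7, 9]
print(list(range(START, END + 1, STEP)))   # [1, 3, 5, 7, 9]
```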

Perhaps it will just have to wait until Py3K when a little more grammar
fiddling is possible.

Skip



From thomas at xs4all.net  Tue Aug 29 16:52:21 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 16:52:21 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52415.747655.334938@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 29, 2000 at 09:46:23AM -0500
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com> <39AB06D5.BD99855@nowonder.de> <14763.52415.747655.334938@beluga.mojam.com>
Message-ID: <20000829165221.N500@xs4all.nl>

On Tue, Aug 29, 2000 at 09:46:23AM -0500, Skip Montanaro wrote:

> Which was why I proposed "...".  It's sort of like "..", but has the
> advantage of already being a recognized token.  I doubt there would be much
> problem adding ".." as a token either.

"..." is not a token, it's three tokens:

subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]

So adding ".." should be no problem.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gvwilson at nevex.com  Tue Aug 29 16:55:34 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 10:55:34 -0400 (EDT)
Subject: [Python-Dev] pragmas as callbacks
In-Reply-To: <39AAB616.460FA0A8@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008291023160.21280-100000@akbar.nevex.com>

If a mechanism for providing meta-information about code is going to be
added to Python, then I would like it to be flexible enough for developers
to define/add their own.  It's just like allowing developers to extend the
type system with new classes, rather than handing them a fixed set of
built-in types and saying, "Good luck".  (Most commercial Fortran
compilers take the second approach, by providing a bunch of inflexible,
vendor-specific pragmas.  It's a nightmare...)

I think that pragmas are essentially callbacks into the interpreter.  When
I put:

    pragma encoding = "UTF-16"

I am telling the interpreter to execute its 'setEncoding()' method right
away.

So, why not present pragmas in that way?  I.e., why not expose the Python
interpreter as a callable object while the source is being parsed and
compiled?  I think that:

    __python__.setEncoding("UTF-16")

is readable, and can be extended in lots of well-structured ways by
exposing exactly as much of the interpreter as is deemed safe. Arguments
could be restricted to constants, or built-in operations on constants, to
start with, without compromising future extensibility.

Greg





From skip at mojam.com  Tue Aug 29 16:55:49 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 09:55:49 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
References: <20000828230630.I500@xs4all.nl>
	<LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com>
Message-ID: <14763.52981.603640.415652@beluga.mojam.com>

One of the original arguments for range literals as I recall was that
indexing of loops could get more efficient.  The compiler would know that
[0:100:2] represents a series of integers and could conceivably generate
more efficient loop indexing code (and so could Python2C and other compilers
that generated C code).  This argument doesn't seem to be showing up here at
all.  Does it carry no weight in the face of the relative inscrutability of
the syntax?

Skip



From cgw at fnal.gov  Tue Aug 29 17:29:20 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 10:29:20 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000829165221.N500@xs4all.nl>
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
	<39AB06D5.BD99855@nowonder.de>
	<14763.52415.747655.334938@beluga.mojam.com>
	<20000829165221.N500@xs4all.nl>
Message-ID: <14763.54992.458188.483296@buffalo.fnal.gov>

Thomas Wouters writes:
 > On Tue, Aug 29, 2000 at 09:46:23AM -0500, Skip Montanaro wrote:
 > 
 > > Which was why I proposed "...".  It's sort of like "..", but has the
 > > advantage of already being a recognized token.  I doubt there would be much
 > > problem adding ".." as a token either.
 > 
 > "..." is not a token, it's three tokens:
 > 
 > subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
 > 
 > So adding ".." should be no problem.

I have another idea.  I don't think it's been discussed previously,
but I came late to this party.  Sorry if this is old hat.


How about a:b to indicate the range starting at a and ending with b-1?

I claim that this syntax is already implicit in Python.

Think about the following:  if S is a sequence and i an index,

     S[i]

means the pairing of the sequence S with the index i.  Sequences and
indices are `dual' in the sense that pairing them together yields a
value.  I am amused by the fact that in the C language, 

     S[i] = *(S+i) = *(i+S) = i[S] 

which really shows this duality.

Now we already have

     S[a:b]

to denote the slice operation, but this can also be described as the
pairing of S with the range literal a:b

According to this view, the square braces indicate the pairing or
mapping operation itself, they are not part of the range literal.
They shouldn't be part of the range literal syntax.  Thinking about
this gets confused by the additional use of `[' for list construction.
If you take them away, you could even defend having 1:5 create an
xrange-like object rather than a list.

I think this also shows why [a:b] is *not* the natural syntax for a
range literal.

This is beautifully symmetric to me - 1..3 looks like it should be a
closed interval (including the endpoints), but it's very natural and
Pythonic that a:b is semi-open: the existing "slice invariance" 

     S[a:b] + S[b:c] = S[a:c] 

could be expressed as 

     a:b + b:c = a:c

which is very attractive to me, but of course there are problems.
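The slice invariance above is easy to check on any list S with indices a <= b <= c:

```python
# Verify the identity S[a:b] + S[b:c] == S[a:c] for half-open slices.

S = list(range(10))
a, b, c = 2, 5, 8
assert S[a:b] + S[b:c] == S[a:c]
print(S[a:b] + S[b:c])   # [2, 3, 4, 5, 6, 7]
```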


The syntax Tim disfavored:

     for i in [:len(a)]:

now becomes

     for i in 0:len(a):  
     #do not allow elided endpoints outside of a [ context

which doesn't look so bad to me, but is probably ambiguous.  Hmmm,
could this possibly work or is it too much of a collision with the use
of `:' to indicate block structure?

Tim - I agree that the Haskell prime-number printing program is indeed
one of the prettiest programs ever.  Thanks for posting it.

Hold-off-on-range-literals-for-2.0-ly yr's,
				-C




From mal at lemburg.com  Tue Aug 29 17:37:39 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 17:37:39 +0200
Subject: [Python-Dev] pragmas as callbacks
References: <Pine.LNX.4.10.10008291023160.21280-100000@akbar.nevex.com>
Message-ID: <39ABD8C3.DABAAA6B@lemburg.com>

Greg Wilson wrote:
> 
> If a mechanism for providing meta-information about code is going to be
> added to Python, then I would like it to be flexible enough for developers
> to define/add their own.  It's just like allowing developers to extend the
> type system with new classes, rather than handing them a fixed set of
> built-in types and saying, "Good luck".  (Most commercial Fortran
> compilers take the second approach, by providing a bunch of inflexible,
> vendor-specific pragmas.  It's a nightmare...)

I don't think that Python will move in that direction. pragmas are
really only meant to add some form of meta-information to a Python
source file which would otherwise have to be passed to the compiler
in order to produce correct output. It's merely a way of defining
compile time flags for Python modules which allow more flexible
compilation.

Other tools might also make use of these pragmas, e.g. freeze,
to allow inspection of a module without having to execute it.

> I think that pragmas are essentially callbacks into the interpreter.  When
> I put:
> 
>     pragma encoding = "UTF-16"
> 
> I am telling the interpreter to execute its 'setEncoding()' method right
> away.

pragmas have a different target: they tell the compiler (or some
other non-executing tool) to make a certain assumption about the
code it is currently busy compiling.

The compiler is not expected to execute any Python code when it
sees a pragma, it will only set a few internal variables according
to the values stated in the pragma or simply ignore it if the
pragma uses an unknown key and then proceed with compiling.
 
> So, why not present pragmas in that way?  I.e., why not expose the Python
> interpreter as a callable object while the source is being parsed and
> compiled?  I think that:
> 
>     __python__.setEncoding("UTF-16")
> 
> is readable, and can be extended in lots of well-structured ways by
> exposing exactly as much of the interpreter as is deemed safe. Arguments
> could be restricted to constants, or built-in operations on constants, to
> start with, without compromising future extensibility.

The natural place for these APIs would be the sys module... 
no need for an extra __python__ module or object.

I'd rather not add complicated semantics to pragmas -- they
should be able to set flags, but not much more.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From moshez at math.huji.ac.il  Tue Aug 29 17:40:39 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Tue, 29 Aug 2000 18:40:39 +0300 (IDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.54992.458188.483296@buffalo.fnal.gov>
Message-ID: <Pine.GSO.4.10.10008291838220.13338-100000@sundial>

On Tue, 29 Aug 2000, Charles G Waldman wrote:

> I have another idea.  I don't think it's been discussed previously,
> but I came late to this party.  Sorry if this is old hat.
> 
> How about a:b to indicate the range starting at a and ending with b-1?

I think it's nice. I'm not sure I like it yet, but it's an interesting
idea. Someone's gonna yell ": is ambiguous". Well, you know how, when
you know Python, you go around telling people "() don't create tuples,
commas do" and feeling all wonderful? Well, we can do the same with
ranges <wink>.

(:)-ly y'rs, Z.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From guido at beopen.com  Tue Aug 29 18:37:40 2000
From: guido at beopen.com (Guido van Rossum)
Date: Tue, 29 Aug 2000 11:37:40 -0500
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: Your message of "Tue, 29 Aug 2000 10:29:20 EST."
             <14763.54992.458188.483296@buffalo.fnal.gov> 
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com> <39AB06D5.BD99855@nowonder.de> <14763.52415.747655.334938@beluga.mojam.com> <20000829165221.N500@xs4all.nl>  
            <14763.54992.458188.483296@buffalo.fnal.gov> 
Message-ID: <200008291637.LAA04186@cj20424-a.reston1.va.home.com>

> How about a:b to indicate the range starting at a and ending with b-1?

I believe this is what the Nummies originally suggested.

> which doesn't look so bad to me, but is probably ambiguous.  Hmmm,
> could this possibly work or is it too much of a collision with the use
> of `:' to indicate block structure?

Alas, it could never work.  Look at this:

  for i in a:b:c

Does it mean

  for i in (a:b) : c

or

  for i in a: (b:c)

?

So we're back to requiring *some* form of parentheses.
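A sketch of the two readings, spelled with explicit range() calls so they
actually parse (the values of a, b, c are made up for illustration):

```python
a, b, c = 0, 4, 2

# Reading 1: "for i in (a:b) : c" -- a:b is the range, c is the loop body.
result1 = [c for i in range(a, b)]

# Reading 2: "for i in a : (b:c)" -- a is the iterable, b:c is the body.
result2 = [list(range(b, c)) for i in [a]]
```

The two readings produce entirely different results, which is why the
bare colon syntax can't be made to work.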

I'm postponing this discussion until after Python 2.0 final is
released -- the feature freeze is real!

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From gvwilson at nevex.com  Tue Aug 29 17:54:23 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 11:54:23 -0400 (EDT)
Subject: [Python-Dev] pragmas as callbacks
In-Reply-To: <39ABD8C3.DABAAA6B@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008291149040.23391-100000@akbar.nevex.com>

> Marc-Andre Lemburg wrote:
> I'd rather not add complicated semantics to pragmas -- they should be
> able to set flags, but not much more.

Greg Wilson writes:

That's probably what every Fortran compiler vendor said at first --- "just
a couple of on/off flags".  Then it was, "Set numeric values (like the
debugging level)".  A full-blown HPF compiler's pragmas are now a complete
programming language, so that you can (for example) specify how to
partition one array based on the partitioning in another.

Same thing happened with the C preprocessor --- more and more directives
crept in over time.  And the Microsoft C++ compiler.  And I'm sure this
list's readers could come up with dozens more examples.

Pragmas are a way to give instructions to the interpreter; when you let
people give something instructions, you're letting them program it, and I
think it's best to design your mechanism from the start to support that.

Greg "oh no, not another parallelization directive" Wilson





From skip at mojam.com  Tue Aug 29 18:19:27 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 11:19:27 -0500 (CDT)
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
Message-ID: <14763.57999.57444.678054@beluga.mojam.com>

Don't know if this should concern us in preparation for 2.0b1 release, but
the following came across c.l.py this morning.  

FYI.

Skip

------- start of forwarded message (RFC 934 encapsulation) -------
X-Digest: Python-list digest, Vol 1 #3344 - 13 msgs
Message: 11
Newsgroups: comp.lang.python
Organization: Concentric Internet Services
Lines: 41
Message-ID: <39ABD9A1.A8ECDEC8 at faxnet.com>
NNTP-Posting-Host: 208.36.195.178
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Path: news!uunet!ffx.uu.net!newsfeed.mathworks.com!feeder.via.net!newshub2.rdc1.sfba.home.com!news.home.com!newsfeed.concentric.net!global-news-master
Xref: news comp.lang.python:110026
Precedence: bulk
List-Id: General discussion list for the Python programming language <python-list.python.org>
From: Jon LaCour <jal at faxnet.com>
Sender: python-list-admin at python.org
To: python-list at python.org
Subject: Python Problem - Important!
Date: 29 Aug 2000 15:41:00 GMT
Reply-To: jal at faxnet.com

I am beginning development of a very large web application, and I would
like to use Python (no, Zope is not an option).  PyApache seems to be my
best bet, but there is a MASSIVE problem with Python/PyApache that
prevents it from being even marginally useful to me, and to most major
software companies.

My product requires database access, and the database module that I use
for connecting depends on a library called mxDateTime.  This is a very
robust library that is in use all over our system (it has never given us
problems).  Yet, when I use PyApache to connect to a database, I have
major issues.

I have seen this same problem posted to this newsgroup and to the
PyApache mailing list several times from over a year ago, and it appears
to be unresolved.  The essential problem is this: the second time a
module is loaded, Python has cleaned up its dictionaries in its cleanup
mechanism, and does not allow a re-init.  With mxDateTime this gives an
error:

    "TypeError:  call of non-function (type None)"

Essentially, this is a major problem in either the Python internals, or
in PyApache.  After tracing the previous discussions on this issue, it
appears that this is a Python problem.  I am very serious when I say
that this problem *must* be resolved before Python can be taken
seriously for use in web applications, especially when Zope is not an
option.  I require the use of Apache's security features, and several
other Apache extensions.  If anyone knows how to resolve this issue, or
can even point out a way that I can resolve this *myself* I would love
to hear it.

This is the single stumbling block standing in the way of my company
converting almost entirely to Python development, and I am hoping that
python developers will take this bug and smash it quickly.

Thanks in advance, please cc: all responses to my email address at
jal at faxnet.com.

Jonathan LaCour
Developer, VertiSoft

------- end -------



From effbot at telia.com  Tue Aug 29 18:50:24 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 18:50:24 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <14763.57999.57444.678054@beluga.mojam.com>
Message-ID: <003301c011d9$3c1bbc80$766940d5@hagrid>

skip wrote:
> Don't know if this should concern us in preparation for 2.0b1 release, but
> the following came across c.l.py this morning.  

http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470

    "The problem you describe is an artifact of the way mxDateTime 
    tries to reuse the time.time() API available through the 
    standard Python time module"

> Essentially, this is a major problem in either the Python internals, or
> in PyApache.

ah, the art of debugging...

</F>




From mal at lemburg.com  Tue Aug 29 18:46:29 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 18:46:29 +0200
Subject: [Python-Dev] pragmas as callbacks
References: <Pine.LNX.4.10.10008291149040.23391-100000@akbar.nevex.com>
Message-ID: <39ABE8E5.44073A09@lemburg.com>

Greg Wilson wrote:
> 
> > Marc-Andre Lemburg wrote:
> > I'd rather not add complicated semantics to pragmas -- they should be
> > able to set flags, but not much more.
> 
> Greg Wilson writes:
> 
> That's probably what every Fortran compiler vendor said at first --- "just
> a couple of on/off flags".  Then it was, "Set numeric values (like the
> debugging level)".  A full-blown HPF compiler's pragmas are now a complete
> programming language, so that you can (for example) specify how to
> partition one array based on the partitioning in another.
> 
> Same thing happened with the C preprocessor --- more and more directives
> crept in over time.  And the Microsoft C++ compiler.  And I'm sure this
> list's readers could come up with dozens more examples.
>
> Pragmas are a way to give instructions to the interpreter; when you let
> people give something instructions, you're letting them program it, and I
> think it's best to design your mechanism from the start to support that.

I don't get your point: you can "program" the interpreter by
calling various sys module APIs to set interpreter flags already.

Pragmas are needed to tell the compiler what to do with a
source file. They extend the command line flags which are already
available to a more fine-grained mechanism. That's all -- nothing
more.

If a programmer wants to influence compilation globally,
then she would have to set the sys module flags prior to invoking
compile().

(This is already possible using mx.Tools additional sys builtins,
e.g. you can tell the compiler to work in optimizing mode prior
to invoking compile(). Some version of these will most likely go
into 2.1.)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From cgw at fnal.gov  Tue Aug 29 18:48:43 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 11:48:43 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008291637.LAA04186@cj20424-a.reston1.va.home.com>
References: <Pine.LNX.4.10.10008281801430.9081-100000@akbar.nevex.com>
	<39AB06D5.BD99855@nowonder.de>
	<14763.52415.747655.334938@beluga.mojam.com>
	<20000829165221.N500@xs4all.nl>
	<14763.54992.458188.483296@buffalo.fnal.gov>
	<200008291637.LAA04186@cj20424-a.reston1.va.home.com>
Message-ID: <14763.59755.137579.785257@buffalo.fnal.gov>

Guido van Rossum writes:

 > Alas, it could never work.  Look at this:
 > 
 >   for i in a:b:c
 > 
 > Does it mean
 > 
 >   for i in (a:b) : c
 > 
 > or
 > 
 >   for i in a: (b:c)

Of course, it means "for i in the range from a to b-1 with stride c", but as written it's
illegal because you'd need another `:' after the c.  <wink>

 > I'm postponing this discussion until after Python 2.0 final is
 > released -- the feature freeze is real!

Absolutely.  I won't bring this up again until the appropriate time.




From mal at lemburg.com  Tue Aug 29 18:54:49 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 18:54:49 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <14763.57999.57444.678054@beluga.mojam.com> <003301c011d9$3c1bbc80$766940d5@hagrid>
Message-ID: <39ABEAD9.B106E53E@lemburg.com>

Fredrik Lundh wrote:
> 
> skip wrote:
> > Don't know if this should concern us in preparation for 2.0b1 release, but
> > the following came across c.l.py this morning.
> 
> http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470
> 
>     "The problem you describe is an artifact of the way mxDateTime
>     tries to reuse the time.time() API available through the
>     standard Python time module"
> 

Here is a pre-release version of mx.DateTime which should fix
the problem (the new release will use the top-level mx package
-- it does contain a backward compatibility hack though):

http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip

Please let me know if it fixes your problem... I don't use PyApache.

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jal at ns1.quickrecord.com  Tue Aug 29 19:17:17 2000
From: jal at ns1.quickrecord.com (Jonathan LaCour)
Date: Tue, 29 Aug 2000 13:17:17 -0400 (EDT)
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
In-Reply-To: <39ABEAD9.B106E53E@lemburg.com>
Message-ID: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com>

Well, it appears that this version raises a different problem. Do I need
to be running anything higher than python-1.5.2?  Possibly this has
something to do with how I installed this pre-release.  I simply moved 
the old DateTime directory out of the site-packages directory, and
then moved the mx, and DateTime directories from the zip that was
provided into the site-packages directory, and restarted. Here is the
traceback from the apache error log:

patientSearchResults.py failed for 192.168.168.130, reason: the script
raised an unhandled exception. Script's traceback follows:
Traceback (innermost last):
  File "/home/httpd/html/py-bin/patientSearchResults.py", line 3, in ?
    import ODBC.Solid
  File "/usr/lib/python1.5/site-packages/ODBC/__init__.py", line 21, in ?
    import DateTime # mxDateTime package must be installed first !
  File "/usr/lib/python1.5/site-packages/DateTime/__init__.py", line 17, in ?
    from mx.DateTime import *
  File "/usr/lib/python1.5/site-packages/mx/DateTime/__init__.py", line 20, in ?
    from DateTime import *
  File "/usr/lib/python1.5/site-packages/mx/DateTime/DateTime.py", line 8, in ?
    from mxDateTime import *
  File "/usr/lib/python1.5/site-packages/mx/DateTime/mxDateTime/__init__.py", line 12, in ?
    setnowapi(time.time)
NameError: setnowapi


On Tue, 29 Aug 2000, M.-A. Lemburg wrote:

> Fredrik Lundh wrote:
> > 
> > skip wrote:
> > > Don't know if this should concern us in preparation for 2.0b1 release, but
> > > the following came across c.l.py this morning.
> > 
> > http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470
> > 
> >     "The problem you describe is an artifact of the way mxDateTime
> >     tries to reuse the time.time() API available through the
> >     standard Python time module"
> > 
> 
> Here is a pre-release version of mx.DateTime which should fix
> the problem (the new release will use the top-level mx package
> -- it does contain a backward compatibility hack though):
> 
> http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
> 
> Please let me know if it fixes your problem... I don't use PyApache.
> 
> Thanks,
> -- 
> Marc-Andre Lemburg
> ______________________________________________________________________
> Business:                                      http://www.lemburg.com/
> Python Pages:                           http://www.lemburg.com/python/
> 




From gvwilson at nevex.com  Tue Aug 29 19:21:52 2000
From: gvwilson at nevex.com (Greg Wilson)
Date: Tue, 29 Aug 2000 13:21:52 -0400 (EDT)
Subject: [Python-Dev] Re: pragmas as callbacks
In-Reply-To: <39ABE8E5.44073A09@lemburg.com>
Message-ID: <Pine.LNX.4.10.10008291316590.23391-100000@akbar.nevex.com>

> > > Marc-Andre Lemburg wrote:
> > > I'd rather not add complicated semantics to pragmas -- they should be
> > > able to set flags, but not much more.

> > Greg Wilson writes:
> > Pragmas are a way to give instructions to the interpreter; when you let
> > people give something instructions, you're letting them program it, and I
> > think it's best to design your mechanism from the start to support that.

> Marc-Andre Lemburg:
> I don't get your point: you can "program" the interpreter by
> calling various sys module APIs to set interpreter flags already.
> 
> Pragmas are needed to tell the compiler what to do with a
> source file. They extend the command line flags which are already
> available to a more fine-grained mechanism. That's all -- nothing
> more.

Greg Wilson writes:
I understand, but my experience with other languages indicates that once
you have a way to set the parser's flags from within the source file being
parsed, people are going to want to be able to do it conditionally, i.e.
to set one flag based on the value of another.  Then they're going to want
to see if particular flags have been set to something other than their
default values, and so on.  Pragmas are a way to embed programs for the
parser in the file being parsed.  If we're going to allow this at all, we
will save ourselves a lot of future grief by planning for this now.

Thanks,
Greg




From mal at lemburg.com  Tue Aug 29 19:24:08 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 19:24:08 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com>
Message-ID: <39ABF1B8.426B7A6@lemburg.com>

Jonathan LaCour wrote:
> 
> Well, it appears that this version raises a different problem. Do I need
> to be running anything higher than python-1.5.2?  Possibly this has
> something to do with how I installed this pre-release.  I simply moved
> the old DateTime directory out of the site-packages directory, and
> then moved the mx, and DateTime directories from the zip that was
> provided into the site-packages directory, and restarted. Here is the
> traceback from the apache error log:
> 
> patientSearchResults.py failed for 192.168.168.130, reason: the script
> raised an unhandled exception. Script's traceback follows:
> Traceback (innermost last):
>   File "/home/httpd/html/py-bin/patientSearchResults.py", line 3, in ?
>     import ODBC.Solid
>   File "/usr/lib/python1.5/site-packages/ODBC/__init__.py", line 21, in ?
>     import DateTime # mxDateTime package must be installed first !
>   File "/usr/lib/python1.5/site-packages/DateTime/__init__.py", line 17, in ?
>     from mx.DateTime import *
>   File "/usr/lib/python1.5/site-packages/mx/DateTime/__init__.py", line 20, in ?
>     from DateTime import *
>   File "/usr/lib/python1.5/site-packages/mx/DateTime/DateTime.py", line 8, in ?
>     from mxDateTime import *
>   File "/usr/lib/python1.5/site-packages/mx/DateTime/mxDateTime/__init__.py", line 12, in ?
>     setnowapi(time.time)
> NameError: setnowapi

This API is new... could it be that you didn't recompile the
mxDateTime C extension inside the package ?

> On Tue, 29 Aug 2000, M.-A. Lemburg wrote:
> 
> > Fredrik Lundh wrote:
> > >
> > > skip wrote:
> > > > Don't know if this should concern us in preparation for 2.0b1 release, but
> > > > the following came across c.l.py this morning.
> > >
> > > http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470
> > >
> > >     "The problem you describe is an artifact of the way mxDateTime
> > >     tries to reuse the time.time() API available through the
> > >     standard Python time module"
> > >
> >
> > Here is a pre-release version of mx.DateTime which should fix
> > the problem (the new release will use the top-level mx package
> > -- it does contain a backward compatibility hack though):
> >
> > http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
> >
> > Please let me know if it fixes your problem... I don't use PyApache.
> >
> > Thanks,
> > --
> > Marc-Andre Lemburg
> > ______________________________________________________________________
> > Business:                                      http://www.lemburg.com/
> > Python Pages:                           http://www.lemburg.com/python/
> >

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From mwh21 at cam.ac.uk  Tue Aug 29 19:34:15 2000
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 29 Aug 2000 18:34:15 +0100
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: Skip Montanaro's message of "Tue, 29 Aug 2000 09:55:49 -0500 (CDT)"
References: <20000828230630.I500@xs4all.nl> <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <14763.52981.603640.415652@beluga.mojam.com>
Message-ID: <m38ztgyns8.fsf@atrus.jesus.cam.ac.uk>

Skip Montanaro <skip at mojam.com> writes:

> One of the original arguments for range literals as I recall was that
> indexing of loops could get more efficient.  The compiler would know that
> [0:100:2] represents a series of integers and could conceivably generate
> more efficient loop indexing code (and so could Python2C and other compilers
> that generated C code).  This argument doesn't seem to be showing up here at
> all.  Does it carry no weight in the face of the relative inscrutability of
> the syntax?

IMHO, no.  A compiler sufficiently smart to optimize range literals
ought to be sufficiently smart to optimize most calls to "range".  At
least, I think so.  I also think the inefficiency of list construction
in Python loops is a red herring; executing the list body involves
going round & round the eval loop, and I'd be amazed if that didn't
dominate (note that - on my system at least - loops involving range
are often (marginally) faster than ones using xrange, presumably due
to the special casing of list[integer] in eval_code2).

Sure, it would be nice if this aspect got optimized, but let's speed up
the rest of the interpreter enough so you can notice first!
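A rough probe of that claim, sketched with the timeit module (in current
Python, range itself is lazy, so xrange's role is folded in): compare
building the sequence against running a loop body through the eval loop.

```python
import timeit

# Building the list happens in C; iterating it runs a Python-level
# body through the eval loop once per element.
build = timeit.timeit('list(range(1000))', number=200)
loop = timeit.timeit('for i in range(1000): x = i * i', number=200)
# On most machines the bytecode loop costs several times the construction.
```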

Cheers,
Michael

-- 
  Very rough; like estimating the productivity of a welder by the
  amount of acetylene used.         -- Paul Svensson, comp.lang.python
    [on the subject of the measuring programmer productivity by LOC]




From mal at lemburg.com  Tue Aug 29 19:41:25 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 19:41:25 +0200
Subject: [Python-Dev] Re: pragmas as callbacks
References: <Pine.LNX.4.10.10008291316590.23391-100000@akbar.nevex.com>
Message-ID: <39ABF5C5.6605CF55@lemburg.com>

Greg Wilson wrote:
> 
> > > > Marc-Andre Lemburg wrote:
> > > > I'd rather not add complicated semantics to pragmas -- they should be
> > > > able to set flags, but not much more.
> 
> > > Greg Wilson writes:
> > > Pragmas are a way to give instructions to the interpreter; when you let
> > > people give something instructions, you're letting them program it, and I
> > > think it's best to design your mechanism from the start to support that.
> 
> > Marc-Andre Lemburg:
> > I don't get your point: you can "program" the interpreter by
> > calling various sys module APIs to set interpreter flags already.
> >
> > Pragmas are needed to tell the compiler what to do with a
> > source file. They extend the command line flags which are already
> > available to a more fine-grained mechanism. That's all -- nothing
> > more.
> 
> Greg Wilson writes:
> I understand, but my experience with other languages indicates that once
> you have a way to set the parser's flags from within the source file being
> parsed, people are going to want to be able to do it conditionally, i.e.
> to set one flag based on the value of another.  Then they're going to want
> to see if particular flags have been set to something other than their
> default values, and so on.  Pragmas are a way to embed programs for the
> parser in the file being parsed.  If we're going to allow this at all, we
> will save ourselves a lot of future grief by planning for this now.

I don't think mixing compilation with execution is a good idea.

If we ever want to add this feature, we can always use a
pragma for it ;-) ...

def mysettings(compiler, locals, globals, target):
    compiler.setoptimization(2)

# Call the above hook for every new compilation block
pragma compiler_hook = "mysettings"

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Tue Aug 29 20:42:41 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 29 Aug 2000 14:42:41 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <14764.1057.909517.977904@bitdiddle.concentric.net>

Does anyone have suggestions for how to detect unbounded recursion in
the Python core on Unix platforms?

Guido assigned me bug 112943 yesterday and gave it priority 9.
http://sourceforge.net/bugs/?func=detailbug&bug_id=112943&group_id=5470

The bug in question causes a core dump on Unix because of a broken
__radd__.  There's another bug (110615) that does that same thing with
recursive invocations of __repr__.

And, of course, there's:
def foo(x): 
    return foo(x)
foo(None)

I believe that these bugs have been fixed on Windows.  Fredrik
confirmed this for one of them, but I don't remember which one.  Would
someone mind confirming and updating the records in the bug tracker?

I don't see an obvious solution.  Is there any way to implement
PyOS_CheckStack on Unix?  I imagine that each platform would have its
own variant and that there is no hope of getting them debugged before
2.0b1. 

We could add some counters in eval_code2 and raise an exception after
some arbitrary limit is reached.  Arbitrary limits seem bad -- and any
limit would have to be fairly restrictive because each variation on
the bug involves a different number of C function calls between
eval_code2 invocations.
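For what it's worth, a counter of this sort is exactly what the
interpreter's recursion limit amounts to; a sketch of the behaviour,
assuming a modern interpreter where the overflow surfaces as a catchable
RecursionError rather than a core dump:

```python
import sys

def foo(x):
    return foo(x)

# A counter-based limit turns runaway recursion into a catchable
# exception instead of exhausting the C stack.
old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(150)
try:
    foo(None)
    caught = False
except RecursionError:
    caught = True
finally:
    sys.setrecursionlimit(old_limit)  # restore the previous limit
```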

We could special case each of the __special__ methods in C to raise an
exception upon recursive calls with the same arguments, but this is
complicated and expensive.  It does not catch the simplest version, 
the foo function above.

Does stackless raise exceptions cleanly on each of these bugs?  That
would be an argument worth mentioning in the PEP, eh?

Any other suggestions are welcome.

Jeremy



From effbot at telia.com  Tue Aug 29 21:15:30 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 21:15:30 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <00af01c011ed$86671dc0$766940d5@hagrid>

jeremy wrote:
>  Guido assigned me bug 112943 yesterday and gave it priority 9.
> http://sourceforge.net/bugs/?func=detailbug&bug_id=112943&group_id=5470
> 
> The bug in question causes a core dump on Unix because of a broken
> __radd__.  There's another bug (110615) that does that same thing with
> recursive invocations of __repr__.
> 
> And, of course, there's:
> def foo(x): 
>     return foo(x)
> foo(None)
> 
> I believe that these bugs have been fixed on Windows.  Fredrik
> confirmed this for one of them, but I don't remember which one.  Would
> someone mind confirming and updating the records in the bug tracker?

my checkstack patch fixes #110615 and #112943 on windows.
cannot login to sourceforge right now, so I cannot update the
descriptions.

> I don't see an obvious solution.  Is there any way to implement
> PyOS_CheckStack on Unix?

not that I know...  you better get a real operating system ;-)

</F>




From mal at lemburg.com  Tue Aug 29 21:26:52 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 21:26:52 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <39AC0E7C.922536AA@lemburg.com>

Jeremy Hylton wrote:
> 
> Does anyone have suggestions for how to detect unbounded recursion in
> the Python core on Unix platforms?
> 
> Guido assigned me bug 112943 yesterday and gave it priority 9.
> http://sourceforge.net/bugs/?func=detailbug&bug_id=112943&group_id=5470
> 
> The bug in question causes a core dump on Unix because of a broken
> __radd__.  There's another bug (110615) that does that same thing with
> recursive invocations of __repr__.
> 
> And, of course, there's:
> def foo(x):
>     return foo(x)
> foo(None)
> 
> I believe that these bugs have been fixed on Windows.  Fredrik
> confirmed this for one of them, but I don't remember which one.  Would
> someone mind confirming and updating the records in the bug tracker?
> 
> I don't see an obvious solution.  Is there any way to implement
> PyOS_CheckStack on Unix?  I imagine that each platform would have its
> own variant and that there is no hope of getting them debugged before
> 2.0b1.

I've looked around in the include files for Linux but haven't
found any APIs which could be used to check the stack size.
Not even getrusage() returns anything useful for the current
stack size.

For the foo() example I found that on my machine the core dump
happens at depth 9821 (counted from 0), so setting the recursion
limit to something around 9000 should fix it at least for
Linux2.
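A depth probe in the same spirit, sketched against a modern interpreter
(which raises RecursionError at the configured limit instead of dumping
core):

```python
import sys

def measured_depth():
    """Count how many Python frames can be pushed before the limit trips."""
    depth = 0
    def rec():
        nonlocal depth
        depth += 1
        rec()
    try:
        rec()
    except RecursionError:
        pass
    return depth

# measured_depth() comes out a little below sys.getrecursionlimit(),
# since the limit counts the frames already on the stack.
```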

> We could add some counters in eval_code2 and raise an exception after
> some arbitrary limit is reached.  Arbitrary limits seem bad -- and any
> limit would have to be fairly restrictive because each variation on
> the bug involves a different number of C function calls between
> eval_code2 invocations.
> 
> We could special case each of the __special__ methods in C to raise an
> exception upon recursive calls with the same arguments, but this is
> complicated and expensive.  It does not catch the simplest version,
> the foo function above.
> 
> Does stackless raise exceptions cleanly on each of these bugs?  That
> would be an argument worth mentioning in the PEP, eh?
> 
> Any other suggestions are welcome.
> 
> Jeremy
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From jeremy at beopen.com  Tue Aug 29 21:40:49 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Tue, 29 Aug 2000 15:40:49 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AC0E7C.922536AA@lemburg.com>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
Message-ID: <14764.4545.972459.760991@bitdiddle.concentric.net>

>>>>> "MAL" == M -A Lemburg <mal at lemburg.com> writes:

  >> I don't see an obvious solution.  Is there any way to implement
  >> PyOS_CheckStack on Unix?  I imagine that each platform would have
  >> its own variant and that there is no hope of getting them
  >> debugged before 2.0b1.

  MAL> I've looked around in the include files for Linux but haven't
  MAL> found any APIs which could be used to check the stack size.
  MAL> Not even getrusage() returns anything useful for the current
  MAL> stack size.

Right.  

  MAL> For the foo() example I found that on my machine the core dump
  MAL> happens at depth 9821 (counted from 0), so setting the
  MAL> recursion limit to something around 9000 should fix it at least
  MAL> for Linux2.

Right.  I had forgotten about the MAX_RECURSION_LIMIT.  It would
probably be better to set the limit lower on Linux only, right?  If
so, what's the cleanest way to make the value depend on the platform?

Jeremy



From mal at lemburg.com  Tue Aug 29 21:42:08 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 21:42:08 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
		<39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net>
Message-ID: <39AC1210.18703B0B@lemburg.com>

Jeremy Hylton wrote:
> 
> >>>>> "MAL" == M -A Lemburg <mal at lemburg.com> writes:
> 
>   >> I don't see an obvious solution.  Is there any way to implement
>   >> PyOS_CheckStack on Unix?  I imagine that each platform would have
>   >> its own variant and that there is no hope of getting them
>   >> debugged before 2.0b1.
> 
>   MAL> I've looked around in the include files for Linux but haven't
>   MAL> found any APIs which could be used to check the stack size.
>   MAL> Not even getrusage() returns anything useful for the current
>   MAL> stack size.
> 
> Right.
> 
>   MAL> For the foo() example I found that on my machine the core dump
>   MAL> happens at depth 9821 (counted from 0), so setting the
>   MAL> recursion limit to something around 9000 should fix it at least
>   MAL> for Linux2.
> 
> Right.  I had forgotten about the MAX_RECURSION_LIMIT.  It would
> probably be better to set the limit lower on Linux only, right?  If
> so, what's the cleanest way to make the value depend on the platform?

Perhaps a naive test in the configure script might help. I used
the following script to determine the limit:

import resource
i = 0    
def foo(x):
    global i
    print i,resource.getrusage(resource.RUSAGE_SELF)   
    i = i + 1
    foo(x)
foo(None)

Perhaps a configure script could emulate the stack requirements
of eval_code2() by declaring a buffer of a certain size.  The
script would then run in a similar way to the one above, printing
the current stack depth, and then dump core at some point.  The
configure script would then have to remove the core file and use
the last written depth number as the basis for setting
MAX_RECURSION_LIMIT.

E.g. for the above Python script I get:

9818 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
9819 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
9820 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
9821 (4.2199999999999998, 0.48999999999999999, 0, 0, 0, 0, 1432, 627, 0, 0, 0, 0, 0, 0, 0, 0)
Segmentation fault (core dumped)
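Rather than probing until the core dump, getrlimit() can report the
administrative stack ceiling directly. A minimal sketch (modern Python
shown; availability of RLIMIT_STACK varies by platform):

```python
import resource

# Query the current (soft) and maximum (hard) stack size limits, in bytes.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

# RLIM_INFINITY means no administrative limit is set.
if soft == resource.RLIM_INFINITY:
    print("no soft stack limit")
else:
    print("soft stack limit: %d bytes" % soft)
```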

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From ping at lfw.org  Tue Aug 29 22:09:46 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:09:46 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>
Message-ID: <Pine.LNX.4.10.10008291508500.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Greg Wilson wrote:
> The problem with using ellipsis is that there's no obvious way to include
> a stride --- how do you hit every second (or n'th) element, rather than
> every element?

As explained in the examples i posted,

    1, 3 .. 20

could produce

    (1, 3, 5, 7, 9, 11, 13, 15, 17, 19)
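The proposed literal never became real syntax, but its intended semantics
are easy to emulate with range(); stride_range below is an illustrative
helper, not an actual API, and it treats the right-hand bound as inclusive:

```python
def stride_range(first, second, last):
    # Hypothetical helper mimicking the proposed "first, second .. last"
    # literal: the stride is the gap between the first two elements.
    step = second - first
    # Adjust the bound so that `last` itself is reachable (inclusive end).
    return tuple(range(first, last + (1 if step > 0 else -1), step))

print(stride_range(1, 3, 20))   # (1, 3, 5, 7, 9, 11, 13, 15, 17, 19)
```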


-- ?!ng




From thomas at xs4all.net  Tue Aug 29 21:49:12 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 21:49:12 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.1057.909517.977904@bitdiddle.concentric.net>; from jeremy@beopen.com on Tue, Aug 29, 2000 at 02:42:41PM -0400
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <20000829214912.O500@xs4all.nl>

On Tue, Aug 29, 2000 at 02:42:41PM -0400, Jeremy Hylton wrote:

> Is there any way to implement PyOS_CheckStack on Unix?  I imagine that
> each platform would have its own variant and that there is no hope of
> getting them debugged before 2.0b1.

I can think of three mechanisms. Using getrusage() and getrlimit() to find
out the current stack size and the stack limit is the most likely to give
accurate numbers, but those calls are only available on most UNIX systems,
not all of them. (I hear there are systems that don't implement
getrusage/getrlimit ;-)

int PyOS_CheckStack(void)
{
    struct rlimit rlim;
    struct rusage rusage;

    if (getrusage(RUSAGE_SELF, &rusage) != 0) {
        /* getrusage failed; ignore, or raise an error? */
    }
    if (getrlimit(RLIMIT_STACK, &rlim) != 0) {
        /* ditto */
    }
    return rlim.rlim_cur > rusage.ru_isrss + PYOS_STACK_MARGIN;
}

(Note that it's probably necessary to repeat the getrlimit as well as the
getrusage, because even the 'hard' limit can change -- a Python program can
change the limits using the 'resource' module.) There are currently no
autoconf checks for rlimit/rusage, but we can add those without problem.
(and enable the resource module automagically while we're at it ;)

If that fails, I don't think there is a way to get the stack limit (unless
it's in platform-dependent ways), but there might be a way to get the
approximate size of the stack by comparing the address of a local variable
with the stored address of a local variable set at the start of Python.
Something like

static long stack_start_addr;

[... in some init function ...]
    int dummy;
    stack_start_addr = (long) &dummy;
[ or better yet, use a real variable from that function, but one that won't
get optimized away (or you might lose that optimization) ]

#define PY_STACK_LIMIT 0x200000 /* 2Mbyte */

int PyOS_CheckStack(void)
{
    int dummy;
    return abs(stack_start_addr - (long)&dummy) < PY_STACK_LIMIT;
}

This is definitely sub-optimal, with its fixed stack limit, which might
turn out both too high and too low. Note that the abs() is necessary to
accommodate both
stacks that grow downwards and those that grow upwards, though I'm
hard-pressed at the moment to name a UNIX system with an upwards growing
stack. And this solution is likely to get bitten in the unshapely behind by
optimizing, too-smart-for-their-own-good compilers, possibly requiring a
'volatile' qualifier to make them keep their hands off of it.

But the final solution, using alloca() like the Windows check does, is even
less portable... alloca() isn't available on some systems (on more of them
than lack getrlimit, I think, but the two sets are likely to intersect)
and I've heard rumours that on some systems it's even an alias
for malloc(), leading to memory leaks and other weird behaviour.

I'm thinking that a combination of #1 and #2 is best, where #1 is used when
getrlimit/getrusage are available, and #2 when they are not. However, I'm not
sure either works, so it's a bit soon for that kind of thinking :-)

> Does stackless raise exceptions cleanly on each of these bugs?  That
> would be an argument worth mentioning in the PEP, eh?

No, I don't think it does. Stackless gets bitten much later by recursive
behaviour, though, and just retains the current 'recursion depth' counter,
possibly set a bit higher. (I'm not sure, but I'm sure a true stackophobe
will clarify ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From bwarsaw at beopen.com  Tue Aug 29 21:52:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 29 Aug 2000 15:52:32 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
	<14764.4545.972459.760991@bitdiddle.concentric.net>
	<39AC1210.18703B0B@lemburg.com>
Message-ID: <14764.5248.979275.341242@anthem.concentric.net>

>>>>> "M" == M  <mal at lemburg.com> writes:

    |     print i,resource.getrusage(resource.RUSAGE_SELF)   

My experience echoes yours here, MAL -- I've never seen anything
from getrusage() that would be useful in this context. :/

A configure script test would be useful, but you'd have to build a
minimal Python interpreter first to run the script, wouldn't you?

-Barry



From bwarsaw at beopen.com  Tue Aug 29 21:53:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Tue, 29 Aug 2000 15:53:32 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
References: <Pine.LNX.4.10.10008290718120.18546-100000@akbar.nevex.com>
	<Pine.LNX.4.10.10008291508500.302-100000@server1.lfw.org>
Message-ID: <14764.5308.529148.181749@anthem.concentric.net>

>>>>> "KY" == Ka-Ping Yee <ping at lfw.org> writes:

    KY> As explained in the examples i posted,

    KY>     1, 3 .. 20

What would

    1, 3, 7 .. 99

do? :)

-Barry



From ping at lfw.org  Tue Aug 29 22:20:03 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:20:03 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14764.5308.529148.181749@anthem.concentric.net>
Message-ID: <Pine.LNX.4.10.10008291517380.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Barry A. Warsaw wrote:
> 
> What would
> 
>     1, 3, 7 .. 99
> 
> do? :)

    ValueError: too many elements on left side of ".." operator

or

    ValueError: at most two elements permitted on left side of ".."

You get the idea.


-- ?!ng




From prescod at prescod.net  Tue Aug 29 22:00:55 2000
From: prescod at prescod.net (Paul)
Date: Tue, 29 Aug 2000 15:00:55 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14764.5308.529148.181749@anthem.concentric.net>
Message-ID: <Pine.LNX.4.21.0008291457410.6330-100000@amati.techno.com>

On Tue, 29 Aug 2000, Barry A. Warsaw wrote:

> 
> >>>>> "KY" == Ka-Ping Yee <ping at lfw.org> writes:
> 
>     KY> As explained in the examples i posted,
> 
>     KY>     1, 3 .. 20
> 
> What would
> 
>     1, 3, 7 .. 99

consider:

rangeRecognizers.register( primeHandler )
rangeRecognizers.register( fibHandler )
rangeRecognizers.register( compositeHandler )
rangeRecognizers.register( randomHandler )

(you want to fall back on random handler last so it needs to be registered
last)

 Paul Prescod





From thomas at xs4all.net  Tue Aug 29 22:02:27 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:02:27 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.5248.979275.341242@anthem.concentric.net>; from bwarsaw@beopen.com on Tue, Aug 29, 2000 at 03:52:32PM -0400
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net> <39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net>
Message-ID: <20000829220226.P500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:52:32PM -0400, Barry A. Warsaw wrote:

> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     |     print i,resource.getrusage(resource.RUSAGE_SELF)   
> 
> My experience echoes yours here, MAL -- I've never seen anything
> from getrusage() that would be useful in this context. :/

Ack. indeed. Nevermind my longer post then, getrusage() is usageless. (At
least on Linux.)

> A configure script test would be useful, but you'd have to build a
> minimal Python interpreter first to run the script, wouldn't you?

Nah, as long as you can test how many recursions it would take to run out of
stack... But it's still not optimal: we're doing a check at compiletime (or
rather, configure-time) on a limit which can change during the course of a
single process, nevermind a single installation ;P And I don't really like
doing a configure test that's just a program that tries to run out of
memory... it might turn out troublesome for systems with decent sized
stacks.

(getrlimit *does* work, so if we have getrlimit, we can 'calculate' the
maximum number of recursions from that.)
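That calculation can be sketched as follows (modern Python; the per-frame
byte cost is an assumed figure, not a measured one -- which is exactly the
weakness of any counter-based scheme):

```python
import resource

# Rough, assumed per-recursion stack cost (eval loop frame plus C overhead).
FRAME_COST = 2048  # bytes; a guess, not a measured value

soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
if soft == resource.RLIM_INFINITY:
    max_recursions = 10000  # no administrative limit: fall back to a default
else:
    # Leave 10% headroom so we can raise an exception before the crash.
    max_recursions = int(soft * 0.9) // FRAME_COST
print("estimated recursion limit:", max_recursions)
```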

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Tue Aug 29 22:05:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:05:13 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000829220226.P500@xs4all.nl>; from thomas@xs4all.net on Tue, Aug 29, 2000 at 10:02:27PM +0200
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net> <39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net> <20000829220226.P500@xs4all.nl>
Message-ID: <20000829220513.Q500@xs4all.nl>

On Tue, Aug 29, 2000 at 10:02:27PM +0200, Thomas Wouters wrote:

> Ack. indeed. Nevermind my longer post then, getrusage() is usageless. (At
> least on Linux.)

And on BSDI, too.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Tue Aug 29 22:05:32 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 29 Aug 2000 16:05:32 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008291517380.302-100000@server1.lfw.org>
References: <14764.5308.529148.181749@anthem.concentric.net>
	<Pine.LNX.4.10.10008291517380.302-100000@server1.lfw.org>
Message-ID: <14764.6028.121193.410374@cj42289-a.reston1.va.home.com>

On Tue, 29 Aug 2000, Barry A. Warsaw wrote:
 > What would
 > 
 >     1, 3, 7 .. 99
 > 
 > do? :)

Ka-Ping Yee writes:
 >     ValueError: too many elements on left side of ".." operator
...
 >     ValueError: at most two elements permitted on left side of ".."

  Looks like a SyntaxError to me.  ;)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From mal at lemburg.com  Tue Aug 29 22:10:02 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 22:10:02 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
		<39AC0E7C.922536AA@lemburg.com>
		<14764.4545.972459.760991@bitdiddle.concentric.net>
		<39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net>
Message-ID: <39AC189A.95846E0@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "M" == M  <mal at lemburg.com> writes:
> 
>     |     print i,resource.getrusage(resource.RUSAGE_SELF)
> 
> My experience echoes yours here, MAL -- I've never seen anything
> from getrusage() that would be useful in this context. :/
> 
> A configure script test would be useful, but you'd have to build a
> minimal Python interpreter first to run the script, wouldn't you?

I just experimented with this a bit: I can't seem to get
a plain C program to behave like the Python interpreter.

The C program can suck memory in large chunks and consume
great amounts of stack, it just doesn't dump core... (don't
know what I'm doing wrong here).

Yet the Python 2.0 interpreter only uses about 5MB of
memory at the time it dumps core -- seems strange to me,
since the plain C program can easily consume more than 20Megs
and still continues to run.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From fdrake at beopen.com  Tue Aug 29 22:09:29 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 29 Aug 2000 16:09:29 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000829220226.P500@xs4all.nl>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
	<14764.4545.972459.760991@bitdiddle.concentric.net>
	<39AC1210.18703B0B@lemburg.com>
	<14764.5248.979275.341242@anthem.concentric.net>
	<20000829220226.P500@xs4all.nl>
Message-ID: <14764.6265.460762.479910@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > (getrlimit *does* work, so if we have getrlimit, we can 'calculate' the
 > maximum number of recursions from that.)

  Still no go -- we can calculate the number of recursions for a
particular call frame size (or expected mix of frame sizes, which is
really the same), but we can't predict recursive behavior inside a C
extension, which is a significant part of the problem (witness the SRE
experience).  That's why PyOS_StackCheck() actually has to do more
than test a counter -- if the counter is low but the call frames are
larger than our estimate, it won't help.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From skip at mojam.com  Tue Aug 29 22:12:57 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 15:12:57 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.1057.909517.977904@bitdiddle.concentric.net>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <14764.6473.859814.216436@beluga.mojam.com>

    Jeremy> Does anyone have suggestions for how to detect unbounded
    Jeremy> recursion in the Python core on Unix platforms?

On most (all?) processors in common usage, the stack grows down toward the
heap and the heap grows upward, so what you really want to do is detect that
collision.  brk and sbrk are used to manipulate the end of the heap.  A
local variable in the current scope should be able to tell you roughly where
the top of stack is.

Of course, you really can't call brk or sbrk safely.  You have to leave that
to malloc.  You might get some ideas of how to estimate the current end of
the heap by peering at the GNU malloc code.

This might be a good reason to experiment with Vladimir's obmalloc package.
It could easily be modified to remember the largest machine address it
returns via malloc or realloc calls.  That value could be compared with the
current top of stack.  If obmalloc brks() memory back to the system (I've
never looked at it - I'm just guessing) it could lower the saved value to
the last address in the block below the just recycled block.

(I agree this probably won't be done very well before 2.0 release.)

Skip



From tim_one at email.msn.com  Tue Aug 29 22:14:16 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 29 Aug 2000 16:14:16 -0400
Subject: [Python-Dev] SETL (was: Lukewarm about range literals)
In-Reply-To: <200008291253.HAA32332@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEMEHCAA.tim_one@email.msn.com>

[Tim]
> ...
> Have always wondered why Python didn't have that [ABC's boolean
> quantifiers] too; I ask that every year, but so far Guido has never
> answered it <wink>.

[Guido]
> I don't recall you asking me that even *once* before now.  Proof,
> please?

That's too time-consuming until DejaNews regains its memory.  I never asked
*directly*, it simply comes up at least once a year on c.l.py (and has since
the old days!), and then I always mention that it comes up every year but
that Guido never jumps into those threads <wink>.  The oldest reference I
can find in DejaNews today is just from January 1st of this year, at the end
of

    http://www.deja.com/getdoc.xp?AN=567219971

There it got mentioned offhandedly.  Much earlier threads were near-PEP'ish
in their development of how this could work in Python.  I'll attach the
earliest one I have in my personal email archive, from a bit over 4 years
ago.  All my personal email much before that got lost in KSR's bankruptcy
bit bucket.

> Anyway, the answer is that I saw diminishing returns from adding more
> keywords and syntax.

Yes, I've channeled that too -- that's why I never bugged you directly
<wink>.



-----Original Message-----
From: python-list-request at cwi.nl [mailto:python-list-request at cwi.nl]
Sent: Saturday, August 03, 1996 4:42 PM
To: Marc-Andre Lemburg; python-list at cwi.nl
Subject: RE: \exists and \forall in Python ?!


> [Marc-Andre Lemburg]
> ... [suggesting "\exists" & "\forall" quantifiers] ...

Python took several ideas from CWI's ABC language, and this is one that
didn't make the cut.  I'd be interested to hear Guido's thoughts on this!
They're certainly very nice to have, although I wouldn't say they're of
core importance.  But then a lot of "nice to have but hardly crucial"
features did survive the cut (like, e.g., "x < y < z" as shorthand for
"x < y and y < z"), and it's never clear where to draw the line.

In ABC, the additional keywords were "some", "each", "no" and "has", as in
(importing the ABC semantics into a virtual Python):

if some d in range(2,n) has n % d == 0:
    print n, "not prime; it's divisible by", d
else:
    print n, "is prime"

or

if no d in range(2,n) has n % d == 0:
    print n, "is prime"
else:
    print n, "not prime; it's divisible by", d

or

if each d in range(2,n) has n % d == 0:
    print n, "is <= 2; test vacuously true"
else:
    print n, "is not divisible by, e.g.,", d

So "some" is a friendly spelling of "there exists", "no" of "not there
exists", and "each" of "for all".  In addition to testing the condition,
"some" also bound the test vrbls to "the first"  witness if there was one,
and
"no" and "each" to the first counterexample if there was one.  I think ABC
got
that all exactly right, so (a) it's the right model to follow if Python were
to add this, and (b) the (very useful!) business of binding the test vrbls
if
& only if the test succeeds (for "some") or fails (for "no" and "each")
makes
it much harder to fake (comprehensibly & efficiently) via map & reduce
tricks.
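The witness-binding behaviour described above can be approximated in plain
Python with a helper that returns the first witness alongside the truth
value (some_in is an illustrative name, not an actual API):

```python
def some_in(iterable, pred):
    """Return (True, witness) for the first item satisfying pred,
    or (False, None) if no item does."""
    for item in iterable:
        if pred(item):
            return True, item
    return False, None

n = 15
found, d = some_in(range(2, n), lambda d: n % d == 0)
if found:
    print(n, "not prime; it's divisible by", d)   # divisible by 3
else:
    print(n, "is prime")
```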

side-effects-are-your-friends-ly y'rs  - tim

Tim Peters    tim_one at msn.com, tim at dragonsys.com
not speaking for Dragon Systems Inc.





From skip at mojam.com  Tue Aug 29 22:17:10 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 15:17:10 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000829220226.P500@xs4all.nl>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
	<14764.4545.972459.760991@bitdiddle.concentric.net>
	<39AC1210.18703B0B@lemburg.com>
	<14764.5248.979275.341242@anthem.concentric.net>
	<20000829220226.P500@xs4all.nl>
Message-ID: <14764.6726.985174.85964@beluga.mojam.com>

    Thomas> Nah, as long as you can test how many recursions it would take
    Thomas> to run out of stack... But it's still not optimal: we're doing a
    Thomas> check at compiletime (or rather, configure-time) on a limit
    Thomas> which can change during the course of a single process,
    Thomas> nevermind a single installation ;P And I don't really like doing
    Thomas> a configure test that's just a program that tries to run out of
    Thomas> memory... it might turn out troublesome for systems with decent
    Thomas> sized stacks.

Not to mention that you'll get different responses depending on how heavily
the system is using VM, right?  If you are unlucky enough to build on a
memory-rich system then copy the python interpreter over to a memory-starved
system (or just run the interpreter while you have Emacs, StarOffice and
Netscape running), you may well run out of virtual memory a lot sooner than
your configure script thought.

Skip



From skip at mojam.com  Tue Aug 29 22:19:16 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 15:19:16 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AC189A.95846E0@lemburg.com>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
	<14764.4545.972459.760991@bitdiddle.concentric.net>
	<39AC1210.18703B0B@lemburg.com>
	<14764.5248.979275.341242@anthem.concentric.net>
	<39AC189A.95846E0@lemburg.com>
Message-ID: <14764.6852.672716.587046@beluga.mojam.com>

    MAL> The C program can suck memory in large chunks and consume great
    MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
    MAL> doing wrong here).

Are you overwriting all that memory you malloc with random junk?  If not,
the stack and the heap may have collided but not corrupted each other.

Skip



From ping at lfw.org  Tue Aug 29 22:43:23 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:43:23 -0500 (CDT)
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings 
In-Reply-To: <m13Tf69-000wcDC@swing.co.at>
Message-ID: <Pine.LNX.4.10.10008291524310.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Christian Tanzer wrote:
> Triple quoted strings work -- that's what I'm constantly using. The
> downside is, that the docstrings either contain spurious white space
> or it messes up the layout of the code (if you start subsequent lines
> in the first column).

The "inspect" module (see http://www.lfw.org/python/) handles this nicely.

    Python 1.5.2 (#4, Jul 21 2000, 18:28:23) [C] on sunos5
    Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
    >>> import inspect
    >>> class Foo:
    ...     """First line.
    ...        Second line.
    ...            An indented line.
    ...        Some more text."""
    ... 
    >>> inspect.getdoc(Foo)
    'First line.\012Second line.\012    An indented line.\012Some more text.'
    >>> print _
    First line.
    Second line.
        An indented line.
    Some more text.
    >>>        

I suggested "inspect.py" for the standard library quite some time ago
(long before the feature freeze, and before ascii.py, which has since
made it in).  MAL responded pretty enthusiastically
(http://www.python.org/pipermail/python-dev/2000-July/013511.html).
Could i request a little more feedback from others?

It's also quite handy for other purposes.  It can get the source
code for a given function or class:

    >>> func = inspect.getdoc
    >>> inspect.getdoc(func)
    'Get the documentation string for an object.'
    >>> inspect.getfile(func)
    'inspect.py'
    >>> lines, lineno = inspect.getsource(func)
    >>> print string.join(lines)
    def getdoc(object):
         """Get the documentation string for an object."""
         if not hasattr(object, "__doc__"):
             raise TypeError, "arg has no __doc__ attribute"
         if object.__doc__:
             lines = string.split(string.expandtabs(object.__doc__), "\n")
             margin = None
             for line in lines[1:]:
                 content = len(string.lstrip(line))
                 if not content: continue
                 indent = len(line) - content
                 if margin is None: margin = indent
                 else: margin = min(margin, indent)
             if margin is not None:
                 for i in range(1, len(lines)): lines[i] = lines[i][margin:]
             return string.join(lines, "\n")

And it can get the argument spec for a function:

    >>> inspect.getargspec(func)
    (['object'], None, None, None)
    >>> apply(inspect.formatargspec, _)
    '(object)'

Here's a slightly more challenging example:

    >>> def func(a, (b, c), (d, (e, f), (g,)), h=3): pass
    ... 
    >>> inspect.getargspec(func)
    (['a', ['b', 'c'], ['d', ['e', 'f'], ['g']], 'h'], None, None, (3,))
    >>> apply(inspect.formatargspec, _)
    '(a, (b, c), (d, (e, f), (g,)), h=3)'
    >>> 
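(inspect.py did eventually land in the standard library, in Python 2.1.
In current Python the same argspec query -- minus the since-removed nested
tuple parameters -- reads:)

```python
import inspect

def func(a, b, h=3):
    pass

# getfullargspec replaces the old getargspec in modern Python.
spec = inspect.getfullargspec(func)
print(spec.args, spec.defaults)   # ['a', 'b', 'h'] (3,)
```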



-- ?!ng




From cgw at fnal.gov  Tue Aug 29 22:22:03 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 15:22:03 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6265.460762.479910@cj42289-a.reston1.va.home.com>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
	<39AC0E7C.922536AA@lemburg.com>
	<14764.4545.972459.760991@bitdiddle.concentric.net>
	<39AC1210.18703B0B@lemburg.com>
	<14764.5248.979275.341242@anthem.concentric.net>
	<20000829220226.P500@xs4all.nl>
	<14764.6265.460762.479910@cj42289-a.reston1.va.home.com>
Message-ID: <14764.7019.100780.127130@buffalo.fnal.gov>

The situation on Linux is damn annoying, because, from a few minutes
of rummaging around in the kernel it's clear that this information
*is* available to the kernel, just not exposed to the user in a useful
way.  The file /proc/<pid>/statm [1] gives as field 5 "drs", which is
"number of pages of data/stack".  If only the data and stack weren't
lumped together in this number, we could actually do something with
it!

[1]: Present on Linux 2.2 only.  See /usr/src/linux/Documentation/proc.txt
for description of this (fairly obscure) file.




From mal at lemburg.com  Tue Aug 29 22:24:00 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 22:24:00 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
		<39AC0E7C.922536AA@lemburg.com>
		<14764.4545.972459.760991@bitdiddle.concentric.net>
		<39AC1210.18703B0B@lemburg.com>
		<14764.5248.979275.341242@anthem.concentric.net>
		<39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com>
Message-ID: <39AC1BE0.FFAA9100@lemburg.com>

Skip Montanaro wrote:
> 
>     MAL> The C program can suck memory in large chunks and consume great
>     MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
>     MAL> doing wrong here).
> 
> Are you overwriting all that memory you malloc with random junk?  If not,
> the stack and the heap may have collided but not corrupted each other.

Not random junk, but all 1s:

#include <stdio.h>
#include <string.h>

int recurse(int depth)
{
    char buffer[2048];
    memset(buffer, 1, sizeof(buffer));

    /* Call recursively */
    printf("%d\n", depth);
    return recurse(depth + 1);
}

int main(void)
{
    recurse(0);
    return 0;
}

Perhaps I need to go up a bit on the stack to trigger the
collision (i.e. go down two levels, then up one, etc.) ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From ping at lfw.org  Tue Aug 29 22:49:28 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 29 Aug 2000 15:49:28 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14764.6028.121193.410374@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10008291545410.302-100000@server1.lfw.org>

On Tue, 29 Aug 2000, Fred L. Drake, Jr. wrote:
> Ka-Ping Yee writes:
>  >     ValueError: too many elements on left side of ".." operator
> ...
>  >     ValueError: at most two elements permitted on left side of ".."
> 
>   Looks like a SyntaxError to me.  ;)

I would have called "\xgh" a SyntaxError too, but Guido argued
convincingly that it's consistently ValueError for bad literals.
So i'm sticking with that.  See the thread of replies to

    http://www.python.org/pipermail/python-dev/2000-August/014629.html


-- ?!ng




From thomas at xs4all.net  Tue Aug 29 22:26:53 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:26:53 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6852.672716.587046@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 29, 2000 at 03:19:16PM -0500
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <39AC0E7C.922536AA@lemburg.com> <14764.4545.972459.760991@bitdiddle.concentric.net> <39AC1210.18703B0B@lemburg.com> <14764.5248.979275.341242@anthem.concentric.net> <39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com>
Message-ID: <20000829222653.R500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:19:16PM -0500, Skip Montanaro wrote:

>     MAL> The C program can suck memory in large chunks and consume great
>     MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
>     MAL> doing wrong here).

Are you sure you are consuming *stack* ?

> Are you overwriting all that memory you malloc with random junk?  If not,
> the stack and the heap may have collided but not corrupted each other.

malloc() does not consume stack space, it consumes heap space. Don't bother
using malloc(). You have to allocate huge tracts o' land in 'automatic'
variables, or use alloca() (which isn't portable.) Depending on your arch,
you might need to actually write to every, ooh, 1024th int or so.

{
    int *spam[0x2000];
    /* (etc) */
}

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Tue Aug 29 22:26:43 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Tue, 29 Aug 2000 16:26:43 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008291545410.302-100000@server1.lfw.org>
References: <14764.6028.121193.410374@cj42289-a.reston1.va.home.com>
	<Pine.LNX.4.10.10008291545410.302-100000@server1.lfw.org>
Message-ID: <14764.7299.991437.132621@cj42289-a.reston1.va.home.com>

Ka-Ping Yee writes:
 > I would have called "\xgh" a SyntaxError too, but Guido argued
 > convincingly that it's consistently ValueError for bad literals.

  I understand the idea about bad literals.  I don't think that's what
this is.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From effbot at telia.com  Tue Aug 29 22:40:16 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 22:40:16 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>	<39AC0E7C.922536AA@lemburg.com>	<14764.4545.972459.760991@bitdiddle.concentric.net>	<39AC1210.18703B0B@lemburg.com>	<14764.5248.979275.341242@anthem.concentric.net>	<39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com> <39AC1BE0.FFAA9100@lemburg.com>
Message-ID: <00eb01c011f9$59d47a80$766940d5@hagrid>

mal wrote:
> int recurse(int depth)
> {
>     char buffer[2048];
>     memset(buffer, 1, sizeof(buffer));
> 
>     /* Call recursively */
>     printf("%d\n",depth);
>     recurse(depth + 1);
> }
> 
> main()
> {
>     recurse(0);
> }
> 
> Perhaps I need to go up a bit on the stack to trigger the
> collision (i.e. go down two levels, then up one, etc.) ?!

or maybe the optimizer removed your buffer variable?

try printing the buffer address, to see how much memory
you're really consuming here.

     printf("%p %d\n", buffer, depth);

</F>




From thomas at xs4all.net  Tue Aug 29 22:31:08 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 22:31:08 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6473.859814.216436@beluga.mojam.com>; from skip@mojam.com on Tue, Aug 29, 2000 at 03:12:57PM -0500
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <14764.6473.859814.216436@beluga.mojam.com>
Message-ID: <20000829223108.S500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:12:57PM -0500, Skip Montanaro wrote:

> On most (all?) processors in common usage, the stack grows down toward the
> heap and the heap grows upward, so what you really want to do is detect that
> collision.  brk and sbrk are used to manipulate the end of the heap.  A
> local variable in the current scope should be able to tell you roughly where
> the top of stack is.

I don't think that'll help, because the limit isn't the actual (physical)
memory limit, but mostly just administrative limits. 'limit' or 'limits',
depending on your shell.

> current top of stack.  If obmalloc brks() memory back to the system (I've
> never looked at it - I'm just guessing) it could lower the saved value to
> the last address in the block below the just recycled block.

Last I looked, obmalloc() worked on top of the normal system malloc (or its
replacement if you provide one) and doesn't brk/sbrk itself (thank god --
that would mean nastiness if extension modules or such used malloc, or if
python were embedded into a system using malloc!)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Tue Aug 29 22:39:18 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 22:39:18 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <14764.1057.909517.977904@bitdiddle.concentric.net>	<39AC0E7C.922536AA@lemburg.com>	<14764.4545.972459.760991@bitdiddle.concentric.net>	<39AC1210.18703B0B@lemburg.com>	<14764.5248.979275.341242@anthem.concentric.net>	<39AC189A.95846E0@lemburg.com> <14764.6852.672716.587046@beluga.mojam.com> <39AC1BE0.FFAA9100@lemburg.com> <00eb01c011f9$59d47a80$766940d5@hagrid>
Message-ID: <39AC1F76.41CCED9@lemburg.com>

Fredrik Lundh wrote:
> 
> ...
> 
> or maybe the optimizer removed your buffer variable?
> 
> try printing the buffer address, to see how much memory
> you're really consuming here.
> 
>      printf("%p %d\n", buffer, depth);

I got some more insight using:

int checkstack(int depth)
{
    if (depth <= 0)
        return 0;
    return checkstack(depth - 1);
}

int recurse(int depth)
{
    char stack[2048];
    char *heap;
    
    memset(stack, depth % 256, sizeof(stack));
    heap = (char*) malloc(2048);

    /* Call recursively */
    printf("stack %p heap %p depth %d\n", stack, heap, depth);
    checkstack(depth);
    recurse(depth + 1);
    return 0;
}

main()
{
    recurse(0);
}

This prints lines like these:
stack 0xbed4b118 heap 0x92a1cb8 depth 9356

... or in other words over 3GB of space between the stack and
the heap. No wonder I'm not seeing any core dumps.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From tim_one at email.msn.com  Tue Aug 29 22:44:18 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 29 Aug 2000 16:44:18 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52981.603640.415652@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEMIHCAA.tim_one@email.msn.com>

[Skip Montanaro]
> One of the original arguments for range literals as I recall was that
> indexing of loops could get more efficient.  The compiler would know
> that [0:100:2] represents a series of integers and could conceivably
> generate more efficient loop indexing code (and so could Python2C and
> other compilers that generated C code).  This argument doesn't seem to
> be showing up here at all.  Does it carry no weight in the face of the
> relative inscrutability of the syntax?

It carries no weight at all *for 2.0* because the patch didn't exploit the
efficiency possibilities.

Which I expect are highly overrated (maybe 3% in a "good case" real-life
loop) anyway.  Even if they aren't, the same argument would apply to any
other new syntax for this too, so in no case is it an argument in favor of
this specific new syntax over alternative new syntaxes.

There are also well-known ways to optimize the current "range" exactly the
way Python works today; e.g., compile two versions of the loop, one assuming
range is the builtin, the other assuming it may be anything, then a quick
runtime test to jump to the right one.  Guido hates that idea just because
it's despicable <wink>, but that's the kind of stuff optimizing compilers
*do*, and if we're going to get excited about efficiency then we need to
consider *all sorts of* despicable tricks like that.

In any case, I've spent 5 hours straight now digging thru Python email, have
more backed up than when I started, and have gotten nothing done today
toward moving 2.0b1 along.  I'd love to talk more about all this, but there
simply isn't the time for it now ...





From cgw at fnal.gov  Tue Aug 29 23:05:21 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 29 Aug 2000 16:05:21 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.1057.909517.977904@bitdiddle.concentric.net>
References: <14764.1057.909517.977904@bitdiddle.concentric.net>
Message-ID: <14764.9617.203071.639126@buffalo.fnal.gov>

Jeremy Hylton writes:
 > Does anyone have suggestions for how to detect unbounded recursion in
 > the Python core on Unix platforms?

Hey, check this out! - it's not portable in general, but it works for Linux,
which certainly covers a large number of the systems out there in the world.

#!/usr/bin/env python

def getstack():
    for l in open("/proc/self/status").readlines():
        if l.startswith('VmStk'):
            t = l.split()
            return 1024 * int(t[1])


def f():
    print getstack()
    f()

f()


I'm working up a version of this in C; you can do a "getrlimit" to
find the maximum stack size, then read /proc/self/status to get
current stack usage, and compare these values.

As far as people using systems that have a broken getrusage and also
no /proc niftiness, well... get yourself a real operating system ;-)
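
The C version described above could be sketched along these lines (hedged:
Linux-only, and the function names here are mine, not from the
work-in-progress mentioned):

```c
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>

/* Return current stack usage in bytes, parsed from the VmStk: line of
   /proc/self/status (Linux-specific; -1 elsewhere or on failure). */
static long stack_usage(void)
{
    FILE *fp = fopen("/proc/self/status", "r");
    char line[256];
    long kb = -1;

    if (fp == NULL)
        return -1;
    while (fgets(line, sizeof(line), fp) != NULL) {
        if (strncmp(line, "VmStk:", 6) == 0) {
            sscanf(line + 6, "%ld", &kb);   /* value is in kB */
            break;
        }
    }
    fclose(fp);
    return kb < 0 ? -1 : kb * 1024;
}

/* Nonzero if usage is within 'margin' bytes of the getrlimit() stack
   limit; 0 when we cannot tell (no /proc, unlimited stack). */
static int stack_nearly_full(long margin)
{
    struct rlimit rl;
    long used = stack_usage();

    if (used < 0 || getrlimit(RLIMIT_STACK, &rl) != 0 ||
        rl.rlim_cur == RLIM_INFINITY)
        return 0;
    return (long)rl.rlim_cur - used < margin;
}
```

The recursion check would then call stack_nearly_full() with a safety
margin every N levels deep.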







From effbot at telia.com  Tue Aug 29 23:03:43 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Tue, 29 Aug 2000 23:03:43 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com> <39ABF1B8.426B7A6@lemburg.com>
Message-ID: <012d01c011fe$31d23900$766940d5@hagrid>

mal wrote:
> Here is a pre-release version of mx.DateTime which should fix
> the problem (the new release will use the top-level mx package
> -- it does contain a backward compatibility hack though):
>
> http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
>
> Please let me know if it fixes your problem... I don't use PyApache.

mal, can you update the bug database.  this bug is still listed
as an open bug in the python core...

http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470

</F>




From mal at lemburg.com  Tue Aug 29 23:12:45 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 29 Aug 2000 23:12:45 +0200
Subject: [Python-Dev] Problem reloading mx.DateTime in PyApache
References: <Pine.LNX.3.96.1000829131358.21671A-100000@ns1.quickrecord.com> <39ABF1B8.426B7A6@lemburg.com> <012d01c011fe$31d23900$766940d5@hagrid>
Message-ID: <39AC274D.AD9856C7@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > Here is a pre-release version of mx.DateTime which should fix
> > the problem (the new release will use the top-level mx package
> > -- it does contain a backward compatibility hack though):
> >
> > http://starship.python.net/~lemburg/mxDateTime-1.4.0-prerelease.zip
> >
> > Please let me know if it fixes your problem... I don't use PyApache.
> 
> mal, can you update the bug database.  this bug is still listed
> as an open bug in the python core...
> 
> http://sourceforge.net/bugs/?func=detailbug&bug_id=110601&group_id=5470

Hmm, I thought I had already closed it... done.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From barry at scottb.demon.co.uk  Tue Aug 29 23:21:04 2000
From: barry at scottb.demon.co.uk (Barry Scott)
Date: Tue, 29 Aug 2000 22:21:04 +0100
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AC1F76.41CCED9@lemburg.com>
Message-ID: <000e01c011ff$09e3ca70$060210ac@private>

Use the problem as the solution.

The problem is that you get a SIGSEGV after you fall off the end of the stack.
(I'm assuming you always have guard pages between the stack end and other memory
zones. Otherwise you will not get the SEGV).

If you probe ahead of the stack to trigger the SIGSEGV you can use the signal
handler to trap the probe and recover gracefully. Use posix signal handling
everywhere for portability (don't mix posix and non-posix signal handling and
expect signals to work, BTW).

#include <setjmp.h>
#include <signal.h>

jmp_buf probe_env;

int CheckStack()	/* untested */
	{
	if( setjmp( probe_env ) == 0 )
		{
		char buf[32];
		/* need code to deal with direction of stack */
		if( grow_down )
			buf[-65536] = 1;
		else
			buf[65536] = 1;
		return 1; /* stack is fine for 64k */
		}
	else
		{
		return 0; /* will run out of stack soon */
		}
	}

void sigsegv_handler( int sig )
	{
	longjmp( probe_env, 1 );
	}

			Barry (not just a Windows devo <wink>)





From thomas at xs4all.net  Tue Aug 29 23:43:29 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 29 Aug 2000 23:43:29 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.9617.203071.639126@buffalo.fnal.gov>; from cgw@fnal.gov on Tue, Aug 29, 2000 at 04:05:21PM -0500
References: <14764.1057.909517.977904@bitdiddle.concentric.net> <14764.9617.203071.639126@buffalo.fnal.gov>
Message-ID: <20000829234329.T500@xs4all.nl>

On Tue, Aug 29, 2000 at 04:05:21PM -0500, Charles G Waldman wrote:

> Jeremy Hylton writes:
>  > Does anyone have suggestions for how to detect unbounded recursion in
>  > the Python core on Unix platforms?

> Hey, check this out! - it's not portable in general, but it works for Linux,
> which certainly covers a large number of the systems out there in the world.

'large' in terms of "number of instances", perhaps, but not very large in
terms of total number of operating system types/versions, I think. I know of
two operating systems that implement that info in /proc (FreeBSD and Linux)
and one where it's optional (but default off and probably untested: BSDI.) I
also think that this is a very costly thing to do every ten (or even every
hundred) recursions.... I would go for the auto-vrbl-address-check, in
combination with either a fixed stack limit, or getrlimit() - which does
seem to work. Or perhaps the alloca() check for systems that have it (which
can be checked) and seems to work properly (which can be checked, too, but
not as reliably.)

The vrbl-address check only does a few integer calculations, and we can
forgo the getrlimit() call if we do it somewhere during Python init, and
after every call of resource.setrlimit(). (Or just do it anyway: it's
probably not *that* expensive, and if we don't do it each time, we can still
run into trouble if another extension module sets limits, or if python is
embedded in something that changes limits on the fly.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Wed Aug 30 01:10:25 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 01:10:25 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
In-Reply-To: <Pine.LNX.4.10.10008291524310.302-100000@server1.lfw.org>; from ping@lfw.org on Tue, Aug 29, 2000 at 03:43:23PM -0500
References: <m13Tf69-000wcDC@swing.co.at> <Pine.LNX.4.10.10008291524310.302-100000@server1.lfw.org>
Message-ID: <20000830011025.V500@xs4all.nl>

On Tue, Aug 29, 2000 at 03:43:23PM -0500, Ka-Ping Yee wrote:

> The "inspect" module (see http://www.lfw.org/python/) handles this nicely.

[snip example]

> I suggested "inspect.py" for the standard library quite some time ago
> (long before the feature freeze, and before ascii.py, which has since
> made it in).  MAL responded pretty enthusiastically
> (http://www.python.org/pipermail/python-dev/2000-July/013511.html).
> Could i request a little more feedback from others?

Looks fine to me, would fit nicely in with the other introspective things we
already have (dis, profile, etc) -- but wasn't it going to be added to the
'help' (or what was it) stdlib module ?

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From greg at cosc.canterbury.ac.nz  Wed Aug 30 04:11:41 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 30 Aug 2000 14:11:41 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52415.747655.334938@beluga.mojam.com>
Message-ID: <200008300211.OAA17125@s454.cosc.canterbury.ac.nz>

> I doubt there would be much
> problem adding ".." as a token either.

If we're going to use any sort of ellipsis syntax here, I
think it would be highly preferable to use the ellipsis
token we've already got. I can't see any justification for
having two different ellipsis-like tokens in the language,
when there would be no ambiguity in using one for both
purposes.

> What we really want I think is something that evokes the following in the
> mind of the reader
> 
>     for i from START to END incrementing by STEP:

Am I right in thinking that the main motivation here is
to clean up the "for i in range(len(a))" idiom? If so,
what's wrong with a built-in:

  def indices(a):
    return range(len(a))

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From skip at mojam.com  Wed Aug 30 05:43:57 2000
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 29 Aug 2000 22:43:57 -0500 (CDT)
Subject: [Python-Dev] MacPython 2.0?
Message-ID: <14764.33533.218103.763531@beluga.mojam.com>

Has Jack or anyone else been building Mac versions of 2.0 and making them
available somewhere?  I seem to have fallen off the MacPython list and
haven't taken the time to investigate (perhaps I set subscription to NOMAIL
and forgot that crucial point).  I have no compilation tools on my Mac, so
while I'd like to try testing things a little bit there, I am entirely
dependent on others to provide me with something runnable.

Thx,

Skip



From tanzer at swing.co.at  Wed Aug 30 08:23:08 2000
From: tanzer at swing.co.at (Christian Tanzer)
Date: Wed, 30 Aug 2000 08:23:08 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings 
In-Reply-To: Your message of "Tue, 29 Aug 2000 11:41:15 +0200."
             <39AB853B.217402A2@lemburg.com> 
Message-ID: <m13U1HA-000wcEC@swing.co.at>

"M.-A. Lemburg" <mal at lemburg.com> wrote:

> > Triple quoted strings work -- that's what I'm constantly using. The
> > downside is, that the docstrings either contain spurious white space
> > or it messes up the layout of the code (if you start subsequent lines
> > in the first column).
> 
> > Just a question of how smart your doc string extraction
> tools are. Have a look at hack.py:

Come on. There are probably hundreds of hacks around to massage
docstrings. I've written one myself. Ka-Ping Yee suggested
inspect.py...

My point was that in such cases it is much better if the language does
it than if everybody does his own kludge. If a change of the Python
parser concerning this point is out of the question, why not have a
standard module providing this functionality (Ka-Ping Yee offered one
<nudge>, <nudge>).

Regards,
Christian

-- 
Christian Tanzer                                         tanzer at swing.co.at
Glasauergasse 32                                       Tel: +43 1 876 62 36
A-1130 Vienna, Austria                                 Fax: +43 1 877 66 92




From mal at lemburg.com  Wed Aug 30 10:35:00 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 30 Aug 2000 10:35:00 +0200
Subject: [Python-Dev] Re: [PEP 224] Attribute Docstrings
References: <m13U1HA-000wcEC@swing.co.at>
Message-ID: <39ACC734.6F436894@lemburg.com>

Christian Tanzer wrote:
> 
> "M.-A. Lemburg" <mal at lemburg.com> wrote:
> 
> > > Triple quoted strings work -- that's what I'm constantly using. The
> > > downside is, that the docstrings either contain spurious white space
> > > or it messes up the layout of the code (if you start subsequent lines
> > > in the first column).
> >
> > Just a question of how smart your doc string extraction
> > tools are. Have a look at hack.py:
> 
> Come on. There are probably hundreds of hacks around to massage
> docstrings. I've written one myself. Ka-Ping Yee suggested
> inspect.py...

That's the point I wanted to make: there's no need to care much
about """-string formatting while writing them as long as you have
tools which do it for you at extraction time.
 
> My point was that in such cases it is much better if the language does
> it than if everybody does his own kludge. If a change of the Python
> parser concerning this point is out of the question, why not have a
> standard module providing this functionality (Ka-Ping Yee offered one
> <nudge>, <nudge>).

Would be a nice addition for Python's stdlib, yes. Maybe for 2.1,
since we are in feature freeze for 2.0...

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From pf at artcom-gmbh.de  Wed Aug 30 10:39:39 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Wed, 30 Aug 2000 10:39:39 +0200 (MEST)
Subject: Memory overcommitment and guessing about stack size (was Re: [Python-Dev] stack check on Unix: any suggestions?)
In-Reply-To: <39AC1BE0.FFAA9100@lemburg.com> from "M.-A. Lemburg" at "Aug 29, 2000 10:24: 0 pm"
Message-ID: <m13U3PH-000Dm9C@artcom0.artcom-gmbh.de>

Hi,

Any attempts to *reliable* predict the amount of virtual memory (stack+heap)
available to a process are *DOOMED TO FAIL* by principle on any unixoid
System.

Some of you might have missed all those repeated threads about virtual memory
allocation and the overcommitment strategy in the various Linux groups.  

M.-A. Lemburg:
> Skip Montanaro wrote:
> > 
> >     MAL> The C program can suck memory in large chunks and consume great
> >     MAL> amounts of stack, it just doesn't dump core... (don't know what I'm
> >     MAL> doing wrong here).
> > 
> > Are you overwriting all that memory you malloc with random junk?  If not,
> > the stack and the heap may have collided but not corrupted each other.
> 
> Not random junk, but all 1s:
[...]

For anyone interested in more details, I attach an email written by
Linus Torvalds in the thread 'Re: Linux is 'creating' memory ?!'
on 'comp.os.linux.development.apps' on Mar 20th 1995, since I was
unable to locate this article on Deja (you know).

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)

From martin at loewis.home.cs.tu-berlin.de  Wed Aug 30 11:12:58 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 30 Aug 2000 11:12:58 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>

> Does anyone have suggestions for how to detect unbounded recursion
> in the Python core on Unix platforms?

I just submitted patch 101352, at

http://sourceforge.net/patch/?func=detailpatch&patch_id=101352&group_id=5470

This patch works on the realistic assumption that reliable stack usage
is not available through getrusage on most systems, so it uses an
estimate instead. The upper stack boundary is determined on thread
creation; the lower stack boundary inside the check. This must allow
for initial stack frames (main, _entry, etc), and for pages allocated
on the stack by the system. At least on Linux, argv and env pages
count towards the stack limit.
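
A rough sketch of that heuristic (all names here are hypothetical -- the
actual patch 101352 may differ in detail):

```c
#include <stddef.h>
#include <sys/resource.h>

static char *stack_base = NULL;

/* Capture a reference address near the bottom of the stack at startup.
   The address is only ever compared, never dereferenced. */
void PyOS_InitStackCheck(void)
{
    char marker;
    stack_base = &marker;
}

/* Nonzero if the estimated stack use plus a safety margin (for the
   initial frames and the argv/env pages) reaches the getrlimit()
   stack limit. */
int PyOS_CheckStackSketch(void)
{
    struct rlimit rl;
    char marker;
    long used, margin = 64 * 1024;

    if (stack_base == NULL || getrlimit(RLIMIT_STACK, &rl) != 0 ||
        rl.rlim_cur == RLIM_INFINITY)
        return 0;                        /* cannot tell -- assume OK */
    used = (long)(stack_base - &marker); /* stack usually grows down */
    if (used < 0)
        used = -used;                    /* stack grows up here */
    return used + margin >= (long)rl.rlim_cur;
}
```

Since this is only integer arithmetic after the getrlimit() call, it is
cheap enough to run every few recursion levels.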

If some systems are known to return good results from getrusage, that
should be used instead.

I have tested this patch on a Linux box to detect recursion in both
the example of bug 112943, as well as the foo() recursion; the latter
would crash with stock CVS python only when I reduced the stack limit
from 8MB to 1MB.

Since the patch uses a heuristic to determine stack exhaustion, it is
probably possible to find cases where it does not work. I.e. it might
diagnose exhaustion, where it could run somewhat longer (rather,
deeper), or it fails to diagnose exhaustion when it is really out of
stack. It is also likely that there are better heuristics. Overall, I
believe this patch is an improvement.

While this patch claims to support all of Unix, it only works where
getrlimit(RLIMIT_STACK) works. Unix(tm) does guarantee this API; it
should work on *BSD and many other Unices as well.

Comments?

Martin



From mal at lemburg.com  Wed Aug 30 11:56:31 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 30 Aug 2000 11:56:31 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
Message-ID: <39ACDA4F.3EF72655@lemburg.com>

"Martin v. Loewis" wrote:
> 
> > Does anyone have suggestions for how to detect unbounded recursion
> > in the Python core on Unix platforms?
> 
> I just submitted patch 101352, at
> 
> http://sourceforge.net/patch/?func=detailpatch&patch_id=101352&group_id=5470
> 
> This patch works on the realistic assumption that reliable stack usage
> is not available through getrusage on most systems, so it uses an
> estimate instead. The upper stack boundary is determined on thread
> creation; the lower stack boundary inside the check. This must allow
> for initial stack frames (main, _entry, etc), and for pages allocated
> on the stack by the system. At least on Linux, argv and env pages
> count towards the stack limit.
> 
> If some systems are known to return good results from getrusage, that
> should be used instead.
> 
> I have tested this patch on a Linux box to detect recursion in both
> the example of bug 112943, as well as the foo() recursion; the latter
> would crash with stock CVS python only when I reduced the stack limit
> from 8MB to 1MB.
> 
> Since the patch uses a heuristic to determine stack exhaustion, it is
> probably possible to find cases where it does not work. I.e. it might
> diagnose exhaustion, where it could run somewhat longer (rather,
> deeper), or it fails to diagnose exhaustion when it is really out of
> stack. It is also likely that there are better heuristics. Overall, I
> believe this patch is an improvement.
> 
> While this patch claims to support all of Unix, it only works where
> getrlimit(RLIMIT_STACK) works. Unix(tm) does guarantee this API; it
> should work on *BSD and many other Unices as well.
> 
> Comments?

See my comments in the patch manager... the patch looks fine
except for two things: getrlimit() should be tested for
usability in the configure script and the call frequency
of PyOS_CheckStack() should be lowered to only use it for
potentially recursive programs.

Apart from that, this looks like the best alternative so far :-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From nowonder at nowonder.de  Wed Aug 30 13:58:51 2000
From: nowonder at nowonder.de (Peter Schneider-Kamp)
Date: Wed, 30 Aug 2000 11:58:51 +0000
Subject: [Python-Dev] Lukewarm about range literals
References: <200008300211.OAA17125@s454.cosc.canterbury.ac.nz>
Message-ID: <39ACF6FB.66BAB739@nowonder.de>

Greg Ewing wrote:
> 
> Am I right in thinking that the main motivation here is
> to clean up the "for i in range(len(a))" idiom? If so,
> what's wrong with a built-in:
> 
>   def indices(a):
>     return range(len(a))

As far as I know adding a builtin indices() has been
rejected as an idea.

Peter
-- 
Peter Schneider-Kamp          ++47-7388-7331
Herman Krags veg 51-11        mailto:peter at schneider-kamp.de
N-7050 Trondheim              http://schneider-kamp.de



From effbot at telia.com  Wed Aug 30 12:27:12 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Wed, 30 Aug 2000 12:27:12 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com>
Message-ID: <000d01c0126c$dfe700c0$766940d5@hagrid>

mal wrote:
> See my comments in the patch manager... the patch looks fine
> except for two things: getrlimit() should be tested for
> usability in the configure script and the call frequency
> of PyOS_CheckStack() should be lowered to only use it for
> potentially recursive programs.

the latter would break windows and mac versions of Python,
where Python can run on very small stacks (not to mention
embedded systems...)

for those platforms, CheckStack is designed to work with an
8k safety margin (PYOS_STACK_MARGIN)

:::

one way to address this is to introduce a scale factor, so that
you can add checks based on the default 8k limit, but auto-
magically apply them less often platforms where the safety
margin is much larger...

/* checkstack, but with a "scale" factor */
#if windows or mac
/* default safety margin */
#define PYOS_CHECKSTACK(v, n)\
    (((v) % (n) == 0) && PyOS_CheckStack())
#elif linux
/* at least 10 times the default safety margin */
#define PYOS_CHECKSTACK(v, n)\
    (((v) % ((n)*10) == 0) && PyOS_CheckStack())
#endif

 if (PYOS_CHECKSTACK(tstate->recursion_depth, 10)
    ...

</F>




From mal at lemburg.com  Wed Aug 30 12:42:39 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 30 Aug 2000 12:42:39 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid>
Message-ID: <39ACE51F.3AEC75AB@lemburg.com>

Fredrik Lundh wrote:
> 
> mal wrote:
> > See my comments in the patch manager... the patch looks fine
> > except for two things: getrlimit() should be tested for
> > usability in the configure script and the call frequency
> > of PyOS_CheckStack() should be lowered to only use it for
> > potentially recursive programs.
> 
> the latter would break windows and mac versions of Python,
> where Python can run on very small stacks (not to mention
> embedded systems...)
> 
> for those platforms, CheckStack is designed to work with an
> 8k safety margin (PYOS_STACK_MARGIN)

Ok, I don't mind calling it every ten levels deep, but I'd
rather not have it start at level 0. The reason is
that many programs probably don't make much use of
recursion anyway and have a maximum call depth of around
10-50 levels (Python programs usually using shallow class hierarchies).
These programs should not be bothered by calling PyOS_CheckStack()
all the time. Recursive programs will easily reach the 100 mark -- 
those should call PyOS_CheckStack often enough to notice the 
stack problems.

So the check would look something like this:

if (tstate->recursion_depth >= 50 &&
    tstate->recursion_depth % 10 == 0 &&
    PyOS_CheckStack()) {
        PyErr_SetString(PyExc_MemoryError, "Stack overflow");
        return NULL;
}

> :::
> 
> one way to address this is to introduce a scale factor, so that
> you can add checks based on the default 8k limit, but auto-
> magically apply them less often platforms where the safety
> margin is much larger...
> 
> /* checkstack, but with a "scale" factor */
> #if windows or mac
> /* default safety margin */
> #define PYOS_CHECKSTACK(v, n)\
>     (((v) % (n) == 0) && PyOS_CheckStack())
> #elif linux
> /* at least 10 times the default safety margin */
> #define PYOS_CHECKSTACK(v, n)\
>     (((v) % ((n)*10) == 0) && PyOS_CheckStack())
> #endif
> 
>  if (PYOS_CHECKSTACK(tstate->recursion_depth, 10)
>     ...

I'm not exactly sure how large the safety margin is with
Martin's patch, but this seems a good idea.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From moshez at math.huji.ac.il  Wed Aug 30 12:49:59 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Wed, 30 Aug 2000 13:49:59 +0300 (IDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14764.6265.460762.479910@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008301348150.2545-100000@sundial>

On Tue, 29 Aug 2000, Fred L. Drake, Jr. wrote:

> 
> Thomas Wouters writes:
>  > (getrlimit *does* work, so if we have getrlimit, we can 'calculate' the
>  > maximum number of recursions from that.)
> 
>   Still no go -- we can calculate the number of recursions for a
> particular call frame size (or expected mix of frame sizes, which is
> really the same), but we can't predict recursive behavior inside a C
> extension, which is a significant part of the problem (witness the SRE
> experience).  That's why PyOS_StackCheck() actually has to do more
> than test a counter -- if the counter is low but the call frames are
> larger than our estimate, it won't help.

Can my trick (which works only if Python has control of the main) of
comparing addresses of local variables against addresses of local 
variables from main() and against the stack limit be used? 99% of the
people are using the plain Python interpreter with extensions, so it'll
solve 99% of the problem?
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From jack at oratrix.nl  Wed Aug 30 13:30:01 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 30 Aug 2000 13:30:01 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions? 
In-Reply-To: Message by Jeremy Hylton <jeremy@beopen.com> ,
	     Tue, 29 Aug 2000 14:42:41 -0400 (EDT) , <14764.1057.909517.977904@bitdiddle.concentric.net> 
Message-ID: <20000830113002.44CE7303181@snelboot.oratrix.nl>

My SGI has getrlimit(RLIMIT_STACK) which should do the trick. But maybe this 
is an sgi-ism? Otherwise RLIMIT_VMEM and subtracting brk() may do the trick.

While thinking about this, though, I suddenly realised that my (new, faster) 
Mac implementation of PyOS_CheckStack will fail miserably in any other than 
the main thread, something I'll have to fix shortly.

Unix code will also have to differentiate between running on the main stack 
and a sub-thread stack, probably. And I haven't looked at the way 
PyOS_CheckStack is implemented on Windows, but it may well also share this 
problem.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From jack at oratrix.nl  Wed Aug 30 13:38:55 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 30 Aug 2000 13:38:55 +0200
Subject: [Python-Dev] MacPython 2.0? 
In-Reply-To: Message by Skip Montanaro <skip@mojam.com> ,
	     Tue, 29 Aug 2000 22:43:57 -0500 (CDT) , <14764.33533.218103.763531@beluga.mojam.com> 
Message-ID: <20000830113855.B1F2F303181@snelboot.oratrix.nl>

> Has Jack or anyone else been building Mac versions of 2.0 and making them
> available somewhere?  I seem to have fallen off the MacPython list and
> haven't taken the time to investigate (perhaps I set subscription to NOMAIL
> and forgot that crucial point).  I have no compilation tools on my Mac, so
> while I'd like to try testing things a little bit there, I am entirely
> dependent on others to provide me with something runnable.

I'm waiting for Guido to release a 2.0 and then I'll quickly follow suit. I 
have almost everything in place for the first alpha/beta.

But, if you're willing to be my guinea pig I'd be happy to build you a 
distribution of the current state of things tonight or tomorrow; let me know.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From jack at oratrix.nl  Wed Aug 30 13:53:32 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 30 Aug 2000 13:53:32 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions? 
In-Reply-To: Message by "M.-A. Lemburg" <mal@lemburg.com> ,
	     Wed, 30 Aug 2000 12:42:39 +0200 , <39ACE51F.3AEC75AB@lemburg.com> 
Message-ID: <20000830115332.5CA4A303181@snelboot.oratrix.nl>

A completely different way to go about getting the stacksize on Unix is by 
actually committing the space once in a while. Something like (typed in as I'm 
making it up):

#define STACK_INCREMENT 128000

void prober() {
    char space[STACK_INCREMENT];

    space[0] = 1;
    /* or maybe for(i=0; i<STACK_INCREMENT; i+=PAGESIZE) or so */
    space[STACK_INCREMENT-1] = 1;
}

jmp_buf buf;

void catcher(int sig) {
    longjmp(buf, 1);
}

int PyOS_CheckStack() {
    static char *known_safe;
    char *here;

    if (we-are-in-a-thread())
	go do different things;
    if ( &here > known_safe )
	return 1;
    keep-old-SIGSEGV-handler;
    if ( setjmp(buf) )
	return 0;
    signal(SIGSEGV, catcher);
    prober();
    restore-old-SIGSEGV-handler;
    known_safe = &here - (STACK_INCREMENT - PYOS_STACK_MARGIN);
    return 1;
}
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From thomas at xs4all.net  Wed Aug 30 14:25:42 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 14:25:42 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000830113002.44CE7303181@snelboot.oratrix.nl>; from jack@oratrix.nl on Wed, Aug 30, 2000 at 01:30:01PM +0200
References: <jeremy@beopen.com> <20000830113002.44CE7303181@snelboot.oratrix.nl>
Message-ID: <20000830142542.A12695@xs4all.nl>

On Wed, Aug 30, 2000 at 01:30:01PM +0200, Jack Jansen wrote:

> My SGI has getrlimit(RLIMIT_STACK) which should do the trick. But maybe this 
> is an sgi-ism? Otherwise RLIMIT_VMEM and subtracting brk() may do the trick.

No, getrlimit(RLIMIT_STACK, &rlim) is the way to go. 'getrlimit' isn't
available everywhere, but the RLIMIT_STACK constant is universal, as far as
I know. And we can use autoconf to figure out if it's available.
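The same limit is visible from Python itself via the resource module, on Unices that define RLIMIT_STACK (a minimal sketch; `resource` is Unix-only):

```python
import resource

# Soft and hard stack limits, in bytes; RLIM_INFINITY means "unlimited".
# This is the same getrlimit(RLIMIT_STACK) call a C-level check would make.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("stack limit: soft=%s hard=%s" % (soft, hard))
```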

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fredrik at pythonware.com  Wed Aug 30 15:30:23 2000
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 30 Aug 2000 15:30:23 +0200
Subject: [Python-Dev] Lukewarm about range literals
References: <200008300211.OAA17125@s454.cosc.canterbury.ac.nz>
Message-ID: <04d101c01286$7444d6c0$0900a8c0@SPIFF>

greg wrote:
> If we're going to use any sort of ellipsis syntax here, I
> think it would be highly preferable to use the ellipsis
> token we've already got. I can't see any justification for
> having two different ellipsis-like tokens in the language,
> when there would be no ambiguity in using one for both
> purposes.

footnote: "..." isn't really a token:

>>> class Spam:
...     def __getitem__(self, index):
...         print index
...
>>> spam = Spam()
>>> spam[...]
Ellipsis
>>> spam[. . .]
Ellipsis
>>> spam[.
... .
... .
... ]
Ellipsis

(etc)

</F>




From amk1 at erols.com  Wed Aug 30 15:26:20 2000
From: amk1 at erols.com (A.M. Kuchling)
Date: Wed, 30 Aug 2000 09:26:20 -0400
Subject: [Python-Dev] Cookie.py security
Message-ID: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>

[CC'ed to python-dev and Tim O'Malley]

The Cookie module recently added to 2.0 provides 3 classes of Cookie:
SimpleCookie, which treats cookie values as simple strings, 
SerialCookie, which treats cookie values as pickles and unpickles them,
and SmartCookie which figures out if the value is a pickle or not.

Unpickling untrusted data is unsafe.  (Correct?)  Therefore,
SerialCookie and SmartCookie really shouldn't be used, and Moshe's
docs for the module say so.
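To make the danger concrete: a pickle stream can name any importable callable and have it invoked during loading. The harmless stand-in below plays the role an attacker would give to os.system or similar (the `Payload` class is purely illustrative):

```python
import os
import pickle

class Payload:
    # __reduce__ tells pickle "to rebuild me, call this callable with
    # these arguments" -- and pickle.loads obliges, whatever it is.
    def __reduce__(self):
        return (os.path.abspath, (".",))

data = pickle.dumps(Payload())
result = pickle.loads(data)   # runs os.path.abspath(".") as a side effect
```

So anyone who can set a cookie can run a callable of their choosing in the server process that unpickles it.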

Question: should SerialCookie and SmartCookie be removed?  If they're
not there, people won't accidentally use them because they didn't read
the docs and missed the warning.

Con: breaks backward compatibility with the existing cookie module and
forks the code.  

(Are marshals safer than pickles?  What if SerialCookie used marshal
instead?)

--amk




From fdrake at beopen.com  Wed Aug 30 16:09:16 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 30 Aug 2000 10:09:16 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
Message-ID: <14765.5516.877559.786344@cj42289-a.reston1.va.home.com>

A.M. Kuchling writes:
 > (Are marshals safer than pickles?  What if SerialCookie used marshal
 > instead?)

  A bit safer, I think, but this maintains the backward compatibility
issue.
  If it is useful to change the API, this is the best time to do it,
but we'd probably want to rename the module as well.  Shared
maintenance is also an issue -- Tim's opinion is very valuable here!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From trentm at ActiveState.com  Wed Aug 30 18:18:29 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 30 Aug 2000 09:18:29 -0700
Subject: [Python-Dev] NetBSD compilation bug - I need help (was: Re: Python bug)
In-Reply-To: <14764.32658.941039.258537@bitdiddle.concentric.net>; from jeremy@beopen.com on Tue, Aug 29, 2000 at 11:29:22PM -0400
References: <14764.32658.941039.258537@bitdiddle.concentric.net>
Message-ID: <20000830091829.C14776@ActiveState.com>

On Tue, Aug 29, 2000 at 11:29:22PM -0400, Jeremy Hylton wrote:
> You have one open Python bug that is assigned to you and given a
> priority seven or higher.  I would like to resolve this bugs before
> the 2.0b1 release.
> 
> The bug is:
> 112289 | NetBSD1.4.2 build issue 
>

Sorry to have let this one get a little stale. I can give it a try. A couple
of questions:

1. Who reported this bug? He talked about providing more information and I
would like to speak with him. I cannot find his email address.
2. Does anybody have a NetBSD1.4.2 (or close) machine that I can get shell
access to? Do you know if they have such a machine at SourceForge that users
can get shell access to? Or failing that can someone with such a machine give
me the full ./configure and make output and maybe run this command:
   find /usr/include -name "*" -type f | xargs -l grep -nH _TELL64
and give me the output.


If I come up blank on both of these then I can't really expect to fix this
bug.


Thanks,
Trent


-- 
Trent Mick
TrentM at ActiveState.com



From pf at artcom-gmbh.de  Wed Aug 30 18:37:16 2000
From: pf at artcom-gmbh.de (Peter Funk)
Date: Wed, 30 Aug 2000 18:37:16 +0200 (MEST)
Subject: os.remove() behaviour on empty directories (was Re: [Python-Dev] If you thought there were too many PEPs...)
In-Reply-To: <200008271828.NAA14847@cj20424-a.reston1.va.home.com> from Guido van Rossum at "Aug 27, 2000  1:28:46 pm"
Message-ID: <m13UArU-000Dm9C@artcom0.artcom-gmbh.de>

Hi,

effbot:
> > btw, Python's remove/unlink implementation is slightly
> > broken -- they both map to unlink, but that's not the
> > right way to do it:
> > 
> > from SUSv2:
> > 
> >     int remove(const char *path);
> > 
> >     If path does not name a directory, remove(path)
> >     is equivalent to unlink(path). 
> > 
> >     If path names a directory, remove(path) is equi-
> >     valent to rmdir(path). 
> > 
> > should I fix this?

BDFL:
> That's a new one -- didn't exist when I learned Unix.

Yes, this 'remove()' was added to Unix relatively late.  It didn't
exist, for example, in SCO XENIX 386 (the first "real" OS available
for relatively inexpensive IBM-PC arch boxes long before the advent
of Linux).

Changing the behaviour of Python's 'os.remove()' on Unices might break 
some existing code (although such code is not portable to WinXX anyway):

pf at artcom0:ttyp3 ~ 7> mkdir emptydir
pf at artcom0:ttyp3 ~ 8> python
Python 1.5.2 (#1, Jul 23 1999, 06:38:16)  [GCC egcs-2.91.66 19990314/Linux (egcs- on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> import os
>>> try:
...     os.remove('emptydir')
... except OSError:
...     print 'emptydir is a directory'
... 
emptydir is a directory
>>> 

> I guess we can fix this in 2.1.

Please don't do this without a heavy duty warning in a section about
expected upgrade problems.  

This change might annoy people, who otherwise don't care about
portability and use Python on Unices only.  I imagine people using
something like this:

    def cleanup_junkfiles(targetdir):
        for n in os.listdir(targetdir):
            try:
                os.remove(n)
            except OSError:
                pass

Regards, Peter
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)



From thomas at xs4all.net  Wed Aug 30 19:39:48 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 19:39:48 +0200
Subject: [Python-Dev] Threads & autoconf
Message-ID: <20000830193948.C12695@xs4all.nl>

I'm trying to clean up the autoconf (and README) mess wrt. threads a bit,
but I think I need some hints ;) I can't figure out why there is a separate
--with-dec-threads option... Is there a reason it can't be autodetected like
we do for other thread systems ? Does DEC Unix do something very different
but functional when leaving out the '-threads' option (which is the only
thing -dec- adds) or is it just "hysterical raisins" ? 

And then the systems that need different library/compiler flags/settings...
I suspect no one here has one of those machines? It'd be nice if we could
autodetect this without trying every combination of flags/libs in autoconf
:P (But then, if we could autodetect, I assume it would've been done long
ago... right ? :)

Do we know if those systems still need those separate flags/libs ? Should we
leave a reference to them in the README, or add a separate README.threads
file with more extensive info about threads and how to disable them ? (I
think README is a bit oversized, but that's probably just me.) And are we
leaving threads on by default ? If not, the README will have to be
re-adjusted again :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gward at mems-exchange.org  Wed Aug 30 19:52:36 2000
From: gward at mems-exchange.org (Greg Ward)
Date: Wed, 30 Aug 2000 13:52:36 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>; from ping@lfw.org on Tue, Aug 29, 2000 at 12:09:39AM -0500
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org>
Message-ID: <20000830135235.A8465@ludwig.cnri.reston.va.us>

On 29 August 2000, Ka-Ping Yee said:
> I think these examples are beautiful.  There is no reason why we couldn't
> fit something like this into Python.  Imagine this:
> 
>     - The ".." operator produces a tuple (or generator) of integers.
>       It should probably have precedence just above "in".
>     
>     - "a .. b", where a and b are integers, produces the sequence
>       of integers (a, a+1, a+2, ..., b).
> 
>     - If the left argument is a tuple of two integers, as in
>       "a, b .. c", then we get the sequence of integers from
>       a to c with step b-a, up to and including c if c-a happens
>       to be a multiple of b-a (exactly as in Haskell).

I guess I haven't been paying much attention, or I would have squawked
at the idea of using *anything* other than ".." for a literal range.

> If this operator existed, we could then write:
> 
>     for i in 2, 4 .. 20:
>         print i
> 
>     for i in 1 .. 10:
>         print i*i

Yup, beauty.  +1 on this syntax.  I'd vote to scuttle the [1..10] patch
and wait for an implementation of The Right Syntax, as illustrated by Ping.


>     for i in 0 ..! len(a):
>         a[i] += 1

Ugh.  I agree with everyone else on this: why not "0 .. len(a)-1"?

        Greg



From thomas at xs4all.net  Wed Aug 30 20:04:03 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 20:04:03 +0200
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <20000830135235.A8465@ludwig.cnri.reston.va.us>; from gward@mems-exchange.org on Wed, Aug 30, 2000 at 01:52:36PM -0400
References: <LNBBLJKPBEHFEDALKOLCMEJBHCAA.tim_one@email.msn.com> <Pine.LNX.4.10.10008282352450.30080-100000@server1.lfw.org> <20000830135235.A8465@ludwig.cnri.reston.va.us>
Message-ID: <20000830200402.E12695@xs4all.nl>

On Wed, Aug 30, 2000 at 01:52:36PM -0400, Greg Ward wrote:
> I'd vote to scuttle the [1..10] patch
> and wait for an implementation of The Right Syntax, as illustrated by Ping.

There *is* no [1..10] patch. There is only the [1:10] patch. See the PEP ;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From martin at loewis.home.cs.tu-berlin.de  Wed Aug 30 20:32:30 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 30 Aug 2000 20:32:30 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39ACE51F.3AEC75AB@lemburg.com> (mal@lemburg.com)
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com>
Message-ID: <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>

> So the check would look something like this:
> 
> if (tstate->recursion_depth >= 50 &&
>     tstate->recursion_depth%10 == 0 &&
>     PyOS_CheckStack()) {
>                 PyErr_SetString(PyExc_MemoryError, "Stack overflow");
>                 return NULL;
>         }

That sounds like a good solution to me. A recursion depth of 50 should
be guaranteed on most systems supported by Python.

> I'm not exactly sure how large the safety margin is with
> Martin's patch, but this seems a good idea.

I chose 3% of the rlimit, which must accommodate the space above the
known start of stack plus a single page. That number was chosen
arbitrarily; on my Linux system, the stack limit is 8MB, so 3% gives
about 240k. Given the limits on environment pages and argv pages, I
felt that this is safe enough. OTOH, if you've used more than 7MB of
stack, it is likely that the last 240k won't help, either.

Regards,
Martin




From martin at loewis.home.cs.tu-berlin.de  Wed Aug 30 20:37:56 2000
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 30 Aug 2000 20:37:56 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <200008301837.UAA00743@loewis.home.cs.tu-berlin.de>

> My SGI has getrlimit(RLIMIT_STACK) which should do the trick

It tells you how much stack you've got; it does not tell you how much
of that is actually in use.

> Unix code will also have to differentiate between running on the
> main stack and a sub-thread stack, probably.

My patch computes (or, rather, estimates) a start-of-stack for each
thread, and then saves that in the thread context.

> And I haven't looked at the way PyOS_CheckStack is implemented on
> Windows

It should work for multiple threads just fine. It tries to allocate 8k
on the current stack, and then catches the error if any.

Regards,
Martin




From timo at timo-tasi.org  Wed Aug 30 20:51:52 2000
From: timo at timo-tasi.org (timo at timo-tasi.org)
Date: Wed, 30 Aug 2000 14:51:52 -0400
Subject: [Python-Dev] Re: Cookie.py security
In-Reply-To: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>; from A.M. Kuchling on Wed, Aug 30, 2000 at 09:26:20AM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
Message-ID: <20000830145152.A24581@illuminatus.timo-tasi.org>

hola.

On Wed, Aug 30, 2000 at 09:26:20AM -0400, A.M. Kuchling wrote:
> Question: should SerialCookie and SmartCookie be removed?  If they're
> not there, people won't accidentally use them because they didn't read
> the docs and missed the warning.
> 
> Con: breaks backward compatibility with the existing cookie module and
> forks the code.  

I had a thought about this - kind of an intermediate solution.

Right now, the shortcut 'Cookie.Cookie()' returns an instance of the
SmartCookie, which uses Pickle.  Most extant examples of using the
Cookie module use this shortcut.

We could change 'Cookie.Cookie()' to return an instance of SimpleCookie,
which does not use Pickle.  Unfortunately, this may break existing code
(like Mailman), but there is a lot of code out there that it won't break.

Also, people could still use the SmartCookie and SerialCookie classes,
but now they would be more likely to read about them in the documentation
because they are "outside the beaten path".




From timo at timo-tasi.org  Wed Aug 30 21:09:13 2000
From: timo at timo-tasi.org (timo at timo-tasi.org)
Date: Wed, 30 Aug 2000 15:09:13 -0400
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.5516.877559.786344@cj42289-a.reston1.va.home.com>; from Fred L. Drake, Jr. on Wed, Aug 30, 2000 at 10:09:16AM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.5516.877559.786344@cj42289-a.reston1.va.home.com>
Message-ID: <20000830150913.B24581@illuminatus.timo-tasi.org>

hola.

On Wed, Aug 30, 2000 at 10:09:16AM -0400, Fred L. Drake, Jr. wrote:
> 
> A.M. Kuchling writes:
>  > (Are marshals safer than pickles?  What if SerialCookie used marshal
>  > instead?)
> 
>   A bit safer, I think, but this maintains the backward compatibility
> issue.

Is this true?
  Marshal is backwards compatible to Pickle?

If it is true, that'd be kinda cool.

>   If it is useful to change the API, this is the best time to do it,
> but we'd probably want to rename the module as well.  Shared
> maintenance is also an issue -- Tim's opinion is very valuable here!

I agree -- if this is the right change, then now is the right time.

If a significant change is warranted, then the name change is probably
the right way to signal this change.  I'd vote for 'httpcookie.py'.

I've been thinking about the shared maintenance issue, too.  The right
thing is for the Cookie.py (or renamed version thereof) to be the 
official version.  I would probably keep the latest version up on
my web site but mark it as 'deprecated' once Python 2.0 gets released.

thoughts..?

e



From thomas at xs4all.net  Wed Aug 30 21:22:22 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 30 Aug 2000 21:22:22 +0200
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830150913.B24581@illuminatus.timo-tasi.org>; from timo@timo-tasi.org on Wed, Aug 30, 2000 at 03:09:13PM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.5516.877559.786344@cj42289-a.reston1.va.home.com> <20000830150913.B24581@illuminatus.timo-tasi.org>
Message-ID: <20000830212222.F12695@xs4all.nl>

On Wed, Aug 30, 2000 at 03:09:13PM -0400, timo at timo-tasi.org wrote:
> hola.
> On Wed, Aug 30, 2000 at 10:09:16AM -0400, Fred L. Drake, Jr. wrote:
> > A.M. Kuchling writes:
> >  > (Are marshals safer than pickles?  What if SerialCookie used marshal
> >  > instead?)

> >   A bit safer, I think, but this maintains the backward compatibility
> > issue.

> Is this true?
>   Marshal is backwards compatible to Pickle?

No, what Fred meant is that it maintains the backward compatibility *issue*,
not compatibility itself. It's still a problem for people who want to read
cookies made by the 'old' version, or otherwise want to read in 'old'
cookies.

I think it would be possible to provide a 'safe' unpickle, that only
unpickles primitives, for example, but that might *still* maintain the
backwards compatibility issue, even if it's less of an issue then. And it's
a bloody lot of work, too :-)
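For the record, one way to sketch such a "primitives only" unpickler is to subclass Unpickler and veto every global lookup, so plain numbers, strings, lists, tuples and dicts load but instances and callables do not (the names `SafeUnpickler` and `safe_loads` are made up for illustration):

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    # Reject every global reference; only primitive containers survive.
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            "global '%s.%s' is forbidden" % (module, name))

def safe_loads(data):
    return SafeUnpickler(io.BytesIO(data)).load()
```

This restricts what a hostile cookie can do, though it leaves the backward compatibility question untouched.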

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at beopen.com  Wed Aug 30 23:45:28 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Wed, 30 Aug 2000 17:45:28 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830150913.B24581@illuminatus.timo-tasi.org>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
	<14765.5516.877559.786344@cj42289-a.reston1.va.home.com>
	<20000830150913.B24581@illuminatus.timo-tasi.org>
Message-ID: <14765.32888.769808.560154@cj42289-a.reston1.va.home.com>

On Wed, Aug 30, 2000 at 10:09:16AM -0400, Fred L. Drake, Jr. wrote:
 >   A bit safer, I think, but this maintains the backward compatibility
 > issue.

timo at timo-tasi.org writes:
 > Is this true?
 >   Marshal is backwards compatible to Pickle?
 > 
 > If it is true, that'd be kinda cool.

  Would be, but my statement wasn't clear: it maintains the *issue*,
not compatibility.  ;(  The data formats are not interchangeable in any
way.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From tim_one at email.msn.com  Thu Aug 31 00:54:25 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Wed, 30 Aug 2000 18:54:25 -0400
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <14763.52415.747655.334938@beluga.mojam.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEPOHCAA.tim_one@email.msn.com>

[Skip Montanaro]
> ...
> What we really want I think is something that evokes the following in the
> mind of the reader
>
>     for i from START to END incrementing by STEP:
>
> without gobbling up all those keywords.

Note that they needn't be keywords, though, any more than "as" became a
keyword in the new "import x as y".  I love the Haskell notation in Haskell
because it fits so nicely with "infinite" lists there too.  I'm not sure
about in Python -- 100s of languages have straightforward integer index
generation, and Python's range(len(seq)) is hard to see as much more than
gratuitous novelty when viewed against that background.

    for i = 1 to 10:           #  1 to 10 inclusive
    for i = 10 to 1 by -1:     #  10 down to 1 inclusive
    for i = 1 upto 10:         #  1 to 9 inclusive
    for i = 10 upto 1 by -1:   #  10 down to 2 inclusive

are all implementable right now without new keywords, and would pretty much
*have* to be "efficient" from the start because they make no pretense at
being just one instance of an infinitely extensible object iteration
protocol.  They are what they are, and that's it -- simplicity isn't
*always* a bad thing <wink>.
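As a sketch of how little machinery the inclusive forms need, here is an `irange` generator (hypothetical name, not a proposal) covering the first two spellings:

```python
def irange(start, stop, step=1):
    """Inclusive integer range: irange(1, 10) -> 1..10,
    irange(10, 1, -1) -> 10..1."""
    i = start
    if step > 0:
        while i <= stop:
            yield i
            i += step
    elif step < 0:
        while i >= stop:
            yield i
            i += step
    else:
        raise ValueError("step must not be zero")
```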

>     for i in [START..END,STEP]:
>     for i in [START:END:STEP]:
>     for i in [START..END:STEP]:

The difference in easy readability should squawk for itself.

>     for i in 0 ..! len(a):
>         a[i] += 1

Looks like everybody hates that, and that's understandable, but I can't
imagine why

     for i in 0 .. len(a)-1:

isn't *equally* hated!  Requiring "-1" in the most common case is simply bad
design.  Check out the Python-derivative CORBAscript, where Python's "range"
was redefined to *include* the endpoint.  Virtually every program I've seen
in it bristles with ugly

    for i in range(len(a)-1)

lines.  Yuck.

but-back-to-2.0-ly y'rs  - tim





From jeremy at beopen.com  Thu Aug 31 01:34:14 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 30 Aug 2000 19:34:14 -0400 (EDT)
Subject: [Python-Dev] Release deadline looming (patches by Aug. 31)
Message-ID: <14765.39414.944199.794554@bitdiddle.concentric.net>

[Apologies for the short notice here; this was lost in a BeOpen mail
server for about 24 hours.]

We are still on schedule to release 2.0b1 on Sept. 5 (Tuesday).  There
are a few outstanding items that we need to resolve.  In order to
leave time for the administrivia necessary to produce a release, we will
need to have a code freeze soon.

Guido says that typically, all the patches should be in two days
before the release.  The two-day deadline may be earlier than
expected, because Monday is a holiday in the US and at BeOpen.  So two
days before the release is midnight Thursday.

That's right.  All patches need to be completed by Aug. 31 at
midnight.  If this deadline is missed, the change won't make it into
2.0b1.

If you've got bugs assigned to you with a priority higher than 5,
please try to take a look at them before the deadline.

Jeremy



From jeremy at beopen.com  Thu Aug 31 03:21:23 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 30 Aug 2000 21:21:23 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
Message-ID: <14765.45843.401319.187156@bitdiddle.concentric.net>

>>>>> "AMK" == A M Kuchling <amk1 at erols.com> writes:

  AMK> (Are marshals safer than pickles?  What if SerialCookie used
  AMK> marshal instead?)

I would guess that pickle makes attacks easier: It has more features,
e.g. creating instances of arbitrary classes (provided that the attacker
knows what classes are available).

But neither marshal nor pickle is safe.  It is possible to cause a
core dump by passing marshal invalid data.  It may also be possible to
launch a stack overflow attack -- not sure.

Jeremy



From gstein at lyra.org  Thu Aug 31 03:53:10 2000
From: gstein at lyra.org (Greg Stein)
Date: Wed, 30 Aug 2000 18:53:10 -0700
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.45843.401319.187156@bitdiddle.concentric.net>; from jeremy@beopen.com on Wed, Aug 30, 2000 at 09:21:23PM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net>
Message-ID: <20000830185310.I3278@lyra.org>

On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
>...
> But neither marshal nor pickle is safe.  It is possible to cause a
> core dump by passing marshal invalid data.  It may also be possible to
> launch a stack overflow attack -- not sure.

I believe those core dumps were fixed. Seems like I remember somebody doing
some work on that.

??


Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From greg at cosc.canterbury.ac.nz  Thu Aug 31 03:47:10 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 13:47:10 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <04d101c01286$7444d6c0$0900a8c0@SPIFF>
Message-ID: <200008310147.NAA17316@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <fredrik at pythonware.com>:

> footnote: "..." isn't really token:

Whatever it is technically, it's an existing part of the
language, and it seems redundant and confusing to introduce
another very similar one.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From jeremy at beopen.com  Thu Aug 31 03:55:24 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Wed, 30 Aug 2000 21:55:24 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830185310.I3278@lyra.org>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
	<14765.45843.401319.187156@bitdiddle.concentric.net>
	<20000830185310.I3278@lyra.org>
Message-ID: <14765.47884.801312.292059@bitdiddle.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

  GS> On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
  >> ...  But neither marshal nor pickle is safe.  It is possible to
  >> cause a core dump by passing marshal invalid data.  It may also
  >> be possible to launch a stack overflow attack -- not sure.

  GS> I believe those core dumps were fixed. Seems like I remember
  GS> somebody doing some work on that.

  GS> ??

Aha!  I hadn't noticed that patch sneaking in.  I brought it up with
Guido a few months ago and he didn't want to make changes to marshal
because, IIRC, marshal exists only because .pyc files need it.

Jeremy



From greg at cosc.canterbury.ac.nz  Thu Aug 31 03:59:34 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 13:59:34 +1200 (NZST)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.32888.769808.560154@cj42289-a.reston1.va.home.com>
Message-ID: <200008310159.NAA17320@s454.cosc.canterbury.ac.nz>

"Fred L. Drake, Jr." <fdrake at beopen.com>:

> it maintains the *issue*, not compatibility.  ;( 

A confusing choice of word! Usually one only talks about
"maintaining" something that one *wants* maintained...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 31 04:33:36 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 14:33:36 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEPOHCAA.tim_one@email.msn.com>
Message-ID: <200008310233.OAA17325@s454.cosc.canterbury.ac.nz>

Tim Peters <tim_one at email.msn.com>:

> I can't imagine why
> 
>     for i in 0 .. len(a)-1:
> 
> isn't *equally* hated!  Requiring "-1" in the most common case is simply bad
> design.

I agree with that. I didn't mean to suggest that I thought it was
a good idea.

The real problem is in defining a..b to include b, which gives
you a construct that is intuitive but not very useful in the
context of the rest of the language.

On the other hand, if a..b *doesn't* include b, it's more
useful, but less intuitive.

(By "intuitive" here, I mean "does what you would expect based
on your experience with similar notations in other programming
languages or in mathematics".)

I rather like the a:b idea, because it ties in with the half-open 
property of slices. Unfortunately, it gives the impression that
you should be able to say

   a = [1,2,3,4,5,6]
   b = 2:5
   c = a[b]

and get c == [3,4,5].
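That impression is in fact nearly satisfiable already: the built-in slice object is a first-class "2:5", lacking only the bare literal syntax:

```python
a = [1, 2, 3, 4, 5, 6]
b = slice(2, 5)   # first-class half-open slice, same as writing a[2:5]
c = a[b]
# c == [3, 4, 5]
```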

>    for i = 1 to 10:           #  1 to 10 inclusive

Endpoint problem again. You would be forever saying

   for i = 0 to len(a)-1:

I do like the idea of keywords, however. All we need to do
is find a way of spelling

   for i = 0 uptobutnotincluding len(a):

without running out of breath.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 31 04:37:00 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 14:37:00 +1200 (NZST)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <39ACF6FB.66BAB739@nowonder.de>
Message-ID: <200008310237.OAA17328@s454.cosc.canterbury.ac.nz>

Peter Schneider-Kamp <nowonder at nowonder.de>:

> As far as I know adding a builtin indices() has been
> rejected as an idea.

But why? I know it's been suggested, but I don't remember seeing any
convincing arguments against it. Or much discussion at all.
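[For concreteness, a minimal sketch of what such a builtin might do — `indices` is a hypothetical name here, simply mirroring range(len(seq)):]

```python
def indices(seq):
    # hypothetical builtin: the valid indices of seq, in order
    return range(len(seq))

a = ['x', 'y', 'z']
for i in indices(a):
    print(i, a[i])
```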

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Aug 31 04:57:07 2000
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 31 Aug 2000 14:57:07 +1200 (NZST)
Subject: [Python-Dev] Pragmas: Just say "No!"
In-Reply-To: <Pine.LNX.4.10.10008291316590.23391-100000@akbar.nevex.com>
Message-ID: <200008310257.OAA17332@s454.cosc.canterbury.ac.nz>

Greg Wilson <gvwilson at nevex.com>:

> Pragmas are a way to embed programs for the
> parser in the file being parsed.

I hope the BDFL has the good sense to run screaming from
anything that has the word "pragma" in it. As this discussion
demonstrates, it's far too fuzzy and open-ended a concept --
nobody can agree on what sort of thing a pragma is supposed
to be.

INTERVIEWER: Tell us how you came to be drawn into the
world of pragmas.

COMPILER WRITER: Well, it started off with little things. Just
a few boolean flags, a way to turn asserts on and off, debug output,
that sort of thing. I thought, what harm can it do? It's not like
I'm doing anything you couldn't do with command line switches,
right? Then it got a little bit heavier, integer values for
optimisation levels, even the odd string or two. Before I
knew it I was doing the real hard stuff, constant expressions,
conditionals, the whole shooting box. Then one day when I put
in a hook for making arbitrary calls into the interpreter, that
was when I finally realised I had a problem...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From trentm at ActiveState.com  Thu Aug 31 06:34:44 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Wed, 30 Aug 2000 21:34:44 -0700
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000830185310.I3278@lyra.org>; from gstein@lyra.org on Wed, Aug 30, 2000 at 06:53:10PM -0700
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net> <20000830185310.I3278@lyra.org>
Message-ID: <20000830213444.C20461@ActiveState.com>

On Wed, Aug 30, 2000 at 06:53:10PM -0700, Greg Stein wrote:
> On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
> >...
> > But neither marshal nor pickle is safe.  It is possible to cause a
> > core dump by passing marshal invalid data.  It may also be possible to
> > launch a stack overflow attack -- not sure.
> 
> I believe those core dumps were fixed. Seems like I remember somebody doing
> some work on that.
> 
> ??

Nope, I think that there may have been a few small patches but the
discussions to fix some "brokenness" in marshal did not bear fruit:

http://www.python.org/pipermail/python-dev/2000-June/011132.html


Oh, I take that back. Here is a patch that supposedly fixed some core dumping:

http://www.python.org/pipermail/python-checkins/2000-June/005997.html
http://www.python.org/pipermail/python-checkins/2000-June/006029.html


Trent


-- 
Trent Mick
TrentM at ActiveState.com



From bwarsaw at beopen.com  Thu Aug 31 06:50:20 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 00:50:20 -0400 (EDT)
Subject: [Python-Dev] Lukewarm about range literals
References: <LNBBLJKPBEHFEDALKOLCCEPOHCAA.tim_one@email.msn.com>
	<200008310233.OAA17325@s454.cosc.canterbury.ac.nz>
Message-ID: <14765.58380.529345.814715@anthem.concentric.net>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    GE> I do like the idea of keywords, however. All we need to do
    GE> is find a way of spelling

    GE>    for i = 0 uptobutnotincluding len(a):

    GE> without running out of breath.

for i until len(a):

-Barry



From nhodgson at bigpond.net.au  Thu Aug 31 08:21:06 2000
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Thu, 31 Aug 2000 16:21:06 +1000
Subject: [Python-Dev] Pragmas: Just say "No!"
References: <200008310257.OAA17332@s454.cosc.canterbury.ac.nz>
Message-ID: <005301c01313$a66a3ae0$8119fea9@neil>

Greg Ewing:
> Greg Wilson <gvwilson at nevex.com>:
>
> > Pragmas are a way to embed programs for the
> > parser in the file being parsed.
>
> I hope the BDFL has the good sense to run screaming from
> anything that has the word "pragma" in it. As this discussion
> demonstrates, it's far too fuzzy and open-ended a concept --
> nobody can agree on what sort of thing a pragma is supposed
> to be.

   It is a good idea, however, to claim a piece of syntactic turf as early
as possible so that if/when it is needed, it is unlikely to cause problems
with previously written code. My preference would be to introduce a reserved
word 'directive' for future expansion here. 'pragma' has connotations of
'ignorable compiler hint' but most of the proposed compiler directives will
cause incorrect behaviour if ignored.

   Neil





From m.favas at per.dem.csiro.au  Thu Aug 31 08:11:31 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 31 Aug 2000 14:11:31 +0800
Subject: [Python-Dev] Threads & autoconf
Message-ID: <39ADF713.53E6B37D@per.dem.csiro.au>

[Thomas]
>I'm trying to clean up the autoconf (and README) mess wrt. threads a bit,
>but I think I need some hints ;) I can't figure out why there is a separate
>--with-dec-threads option... Is there a reason it can't be autodetected like
>we do for other thread systems ? Does DEC Unix do something very different
>but functional when leaving out the '-threads' option (which is the only
>thing -dec- adds) or is it just "hysterical raisins" ?

Yes, DEC Unix does do something very different without the "-threads"
option to the "cc" line that finally builds the python executable - the
following are unresolved:

cc python.o \
    ../libpython2.0.a -L/usr/local/lib -ltk8.0 -ltcl8.0 -lX11 -ldb \
    -L/usr/local/lib -lz -lnet -lpthreads -lm -o python
ld:
Unresolved:
_PyGC_Insert
_PyGC_Remove
__pthread_mutex_init
__pthread_mutex_destroy
__pthread_mutex_lock
__pthread_mutex_unlock
__pthread_cond_init
__pthread_cond_destroy
__pthread_cond_signal
__pthread_cond_wait
__pthread_create
__pthread_detach
make[1]: *** [link] Error 1

So, it is still needed. It should be possible, though, to detect that
the system is OSF1 during configure and set this without having to do
"--with-dec-threads". I think DEC/Compaq/Tru64 Unix is the only
current Unix that reports itself as OSF1. If there are other legacy
systems that do, and don't need "-threads", they could do "configure
--without-dec-threads" <grin>.

Mark
 
-- 
Email - m.favas at per.dem.csiro.au        Postal - Mark C Favas
Phone - +61 8 9333 6268, 041 892 6074            CSIRO Exploration & Mining
Fax   - +61 8 9333 6121                          Private Bag No 5
                                                 Wembley, Western Australia 6913



From effbot at telia.com  Thu Aug 31 08:41:20 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 08:41:20 +0200
Subject: [Python-Dev] Cookie.py security
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net>
Message-ID: <004301c01316$7ef57e40$766940d5@hagrid>

jeremy wrote:
> I would guess that pickle makes attacks easier: It has more features,
> e.g. creating instances of arbitrary classes (provided that the attacker
> knows what classes are available).

well, if nothing else, he's got the whole standard library to
play with...

:::

(I haven't looked at the cookie code, so I don't really know
what I'm talking about here)

can't you force the user to pass in a list of valid classes to
the cookie constructor, and use a subclass of pickle.Unpickler
to get a little more control over what's imported:

    import pickle, StringIO

    class myUnpickler(pickle.Unpickler):
        def __init__(self, data, classes):
            self.__classes = classes
            pickle.Unpickler.__init__(self, StringIO.StringIO(data))
        def find_class(self, module, name):
            # only classes supplied by the caller may be instantiated
            for cls in self.__classes:
                if cls.__module__ == module and cls.__name__ == name:
                    return cls
            raise SystemError, "failed to import class"

> But neither marshal nor pickle is safe.  It is possible to cause a
> core dump by passing marshal invalid data.  It may also be possible to
> launch a stack overflow attack -- not sure.

</F>




From fdrake at beopen.com  Thu Aug 31 09:09:33 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 03:09:33 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <200008310702.AAA32318@slayer.i.sourceforge.net>
References: <200008310702.AAA32318@slayer.i.sourceforge.net>
Message-ID: <14766.1197.987441.118202@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Fix grouping: this is how I intended it, misguided as I was in boolean
 > operator associativity.

  And to think I spent time digging out my reference material to make
sure I didn't change anything!
  This is why compilers have warnings like that!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From thomas at xs4all.net  Thu Aug 31 09:22:13 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 09:22:13 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <14766.1197.987441.118202@cj42289-a.reston1.va.home.com>; from fdrake@beopen.com on Thu, Aug 31, 2000 at 03:09:33AM -0400
References: <200008310702.AAA32318@slayer.i.sourceforge.net> <14766.1197.987441.118202@cj42289-a.reston1.va.home.com>
Message-ID: <20000831092213.G12695@xs4all.nl>

On Thu, Aug 31, 2000 at 03:09:33AM -0400, Fred L. Drake, Jr. wrote:

> Thomas Wouters writes:
>  > Fix grouping: this is how I intended it, misguided as I was in boolean
>  > operator associativity.

>   And to think I spent time digging out my reference material to make
> sure I didn't change anything!

Well, if you'd dug out the PEP, you'd have known what way the parentheses
were *intended* to go :-) 'HASINPLACE' is a macro that does a
Py_HasFeature() for the _inplace_ struct members, and those struct members
shouldn't be dereferenced if HASINPLACE is false :)

>   This is why compilers have warnings like that!

Definitely! Now if only there was a permanent way to add -Wall.... hmm...
Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From m.favas at per.dem.csiro.au  Thu Aug 31 09:23:43 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 31 Aug 2000 15:23:43 +0800
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
Message-ID: <39AE07FF.478F413@per.dem.csiro.au>

(Tru64 Unix) - test_gettext fails with the message:
IOError: [Errno 0] Bad magic number: './xx/LC_MESSAGES/gettext.mo'

This is because the magic number is read in by the code in
Lib/gettext.py as FFFFFFFF950412DE (hex) (using unpack('<i',
buf[:4])[0]), and checked against LE_MAGIC (defined as 950412DE) and
BE_MAGIC (calculated as FFFFFFFFDE120495 using
struct.unpack('>i',struct.pack('<i', LE_MAGIC))[0]) These format strings
work for machines where a Python integer is the same size as a C int,
but not for machines where a Python integer is larger than a C int. The
problem arises because the LE_MAGIC number is negative if a 32-bit int,
but positive if Python integers are 64-bit. Replacing the "i" in the
code that generates BE_MAGIC and reads in "magic" by "I" makes the test
work for me, but there's other uses of "i" and "ii" when the rest of the
.mo file is processed that I'm unsure about with different inputs.
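[The mismatch can be reproduced with struct alone; a sketch in modern Python, where '<i' reads the four bytes as a signed 32-bit int and '<I' as unsigned:]

```python
import struct

LE_MAGIC = 0x950412de  # high bit set, so it doesn't fit a signed 32-bit int

packed = struct.pack('<I', LE_MAGIC)
signed = struct.unpack('<i', packed)[0]    # negative: sign bit interpreted
unsigned = struct.unpack('<I', packed)[0]  # equals LE_MAGIC

print(signed < 0)            # True
print(unsigned == LE_MAGIC)  # True
```

The comparison against the positive LE_MAGIC constant therefore fails whenever the value is read with a signed format code.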

Mark
-- 
Email - m.favas at per.dem.csiro.au        Postal - Mark C Favas
Phone - +61 8 9333 6268, 041 892 6074            CSIRO Exploration & Mining
Fax   - +61 8 9333 6121                          Private Bag No 5
                                                 Wembley, Western Australia 6913



From tim_one at email.msn.com  Thu Aug 31 09:24:35 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 03:24:35 -0400
Subject: [Python-Dev] FW: test_largefile cause kernel panic in Mac OS X DP4
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBJHDAA.tim_one@email.msn.com>


-----Original Message-----
From: python-list-admin at python.org
[mailto:python-list-admin at python.org]On Behalf Of Sachin Desai
Sent: Thursday, August 31, 2000 2:49 AM
To: python-list at python.org
Subject: test_largefile cause kernel panic in Mac OS X DP4



Has anyone experienced this. I updated my version of python to the latest
source from the CVS repository and successfully built it. Upon executing a
"make test", my machine ended up in a kernel panic when the test being
executed was "test_largefile".

My configuration is:
    Powerbook G3
    128M RAM
    Mac OS X DP4

I guess my next step is to log a bug with Apple.




-- 
http://www.python.org/mailman/listinfo/python-list




From fdrake at beopen.com  Thu Aug 31 09:37:24 2000
From: fdrake at beopen.com (Fred L. Drake, Jr.)
Date: Thu, 31 Aug 2000 03:37:24 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <20000831092213.G12695@xs4all.nl>
References: <200008310702.AAA32318@slayer.i.sourceforge.net>
	<14766.1197.987441.118202@cj42289-a.reston1.va.home.com>
	<20000831092213.G12695@xs4all.nl>
Message-ID: <14766.2868.120933.306616@cj42289-a.reston1.va.home.com>

Thomas Wouters writes:
 > Definitely! Now if only there was a permanent way to add -Wall.... hmm...
 > Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)

  I'd be happy with this.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at beopen.com>
BeOpen PythonLabs Team Member




From moshez at math.huji.ac.il  Thu Aug 31 09:45:19 2000
From: moshez at math.huji.ac.il (Moshe Zadka)
Date: Thu, 31 Aug 2000 10:45:19 +0300 (IDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects
 abstract.c,2.49,2.50
In-Reply-To: <14766.2868.120933.306616@cj42289-a.reston1.va.home.com>
Message-ID: <Pine.GSO.4.10.10008311045010.20952-100000@sundial>

On Thu, 31 Aug 2000, Fred L. Drake, Jr. wrote:

> 
> Thomas Wouters writes:
>  > Definitely! Now if only there was a permanent way to add -Wall.... hmm...
>  > Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)
> 
>   I'd be happy with this.

For 2.1, I suggest going for -Werror too.
--
Moshe Zadka <moshez at math.huji.ac.il>
There is no IGLU cabal.
http://advogato.org/person/moshez




From thomas at xs4all.net  Thu Aug 31 10:06:01 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 10:06:01 +0200
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects abstract.c,2.49,2.50
In-Reply-To: <Pine.GSO.4.10.10008311045010.20952-100000@sundial>; from moshez@math.huji.ac.il on Thu, Aug 31, 2000 at 10:45:19AM +0300
References: <14766.2868.120933.306616@cj42289-a.reston1.va.home.com> <Pine.GSO.4.10.10008311045010.20952-100000@sundial>
Message-ID: <20000831100601.H12695@xs4all.nl>

On Thu, Aug 31, 2000 at 10:45:19AM +0300, Moshe Zadka wrote:
> > Thomas Wouters writes:
> >  > Definitely! Now if only there was a permanent way to add -Wall.... hmm...
> >  > Hey, I got it ! What about we set it by default, if the compiler is gcc ? :)

> For 2.1, I suggest going for -Werror too.

No, don't think so. -Werror is severe: it would cause compile-failures on
systems not quite the same as ours. For instance, when using
Linux-2.4.0-test-kernels (bleeding edge ;) I consistently get a warning
about a redefine in <sys/resource.h>. That isn't Python's fault, and we
can't do anything about it, but with -Werror it would cause
compile-failures. The warning is annoying, but not fatal.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From bwarsaw at beopen.com  Thu Aug 31 12:47:34 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 06:47:34 -0400 (EDT)
Subject: [Python-Dev] test_gettext.py fails on 64-bit architectures
References: <39AE07FF.478F413@per.dem.csiro.au>
Message-ID: <14766.14278.609327.610929@anthem.concentric.net>

>>>>> "MF" == Mark Favas <m.favas at per.dem.csiro.au> writes:

    MF> This is because the magic number is read in by the code in
    MF> Lib/gettext.py as FFFFFFFF950412DE (hex) (using unpack('<i',
    MF> buf[:4])[0]), and checked against LE_MAGIC (defined as
    MF> 950412DE) and BE_MAGIC (calculated as FFFFFFFFDE120495 using
    MF> struct.unpack('>i',struct.pack('<i', LE_MAGIC))[0])

I was trying to be too clever.  Just replace the BE_MAGIC value with
0xde120495, as in the included patch.

    MF> Replacing the "i" in the code that generates BE_MAGIC and
    MF> reads in "magic" by "I" makes the test work for me, but
    MF> there's other uses of "i" and "ii" when the rest of the .mo
    MF> file is processed that I'm unsure about with different inputs.

Should be fine, I think.  With < and > leading characters, those
format strings should select `standard' sizes:

    Standard size and alignment are as follows: no alignment is
    required for any type (so you have to use pad bytes); short is 2
    bytes; int and long are 4 bytes. float and double are 32-bit and
    64-bit IEEE floating point numbers, respectively.

Please run the test again with this patch and let me know.
-Barry

Index: gettext.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/gettext.py,v
retrieving revision 1.4
diff -u -r1.4 gettext.py
--- gettext.py	2000/08/30 03:29:58	1.4
+++ gettext.py	2000/08/31 10:40:41
@@ -125,7 +125,7 @@
 class GNUTranslations(NullTranslations):
     # Magic number of .mo files
     LE_MAGIC = 0x950412de
-    BE_MAGIC = struct.unpack('>i', struct.pack('<i', LE_MAGIC))[0]
+    BE_MAGIC = 0xde120495
 
     def _parse(self, fp):
         """Override this method to support alternative .mo formats."""




From mal at lemburg.com  Thu Aug 31 14:33:28 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 14:33:28 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
Message-ID: <39AE5098.36746F4B@lemburg.com>

"Martin v. Loewis" wrote:
> 
> > So the check would look something like this:
> >
> > if (tstate->recursion_depth >= 50 &&
> >     tstate->recursion_depth%10 == 0 &&
> >     PyOS_CheckStack()) {
> >                 PyErr_SetString(PyExc_MemoryError, "Stack overflow");
> >                 return NULL;
> >         }
> 
> That sounds like a good solution to me. A recursion depth of 50 should
> be guaranteed on most systems supported by Python.

Jeremy: Could you get at least this patch into 2.0b1?
 
> > I'm not exactly sure how large the safety margin is with
> > Martin's patch, but this seems a good idea.
> 
> I chose 3% of the rlimit, which must accomodate the space above the
> known start of stack plus a single page. That number was chosen
> arbitarily; on my Linux system, the stack limit is 8MB, so 3% give
> 200k. Given the maximum limitation of environment pages and argv
> pages, I felt that this is safe enough. OTOH, if you've used more than
> 7MB of stack, it is likely that the last 200k won't help, either.

Looks like I don't have any limits set on my dev-machine...
Linux has no problems offering me 3GB of (virtual) stack space
even though it only has 64MB real memory and 200MB swap
space available ;-)

I guess the proposed user settable recursion depth limit is the
best way to go. Testing for the right limit is rather easy by
doing some trial and error processing using Python.

At least for my Linux installation a limit of 9000 seems
reasonable. Perhaps everybody on the list could do a quick
check on their platform ?

Here's a sample script:

i = 0
def foo(x):
    global i
    print i
    i = i + 1
    foo(x)

foo(None)
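[A gentler variant of this probe, for modern Pythons where exceeding the soft limit raises RecursionError instead of segfaulting — it reports the depth reached rather than crashing:]

```python
import sys

def probe(depth=0):
    # recurse until the interpreter's soft limit stops us
    try:
        return probe(depth + 1)
    except RecursionError:  # RuntimeError on very old Pythons
        return depth

sys.setrecursionlimit(2000)  # the user-settable limit under discussion
print(probe())               # close to 2000, minus frames already on the stack
```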

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From gstein at lyra.org  Thu Aug 31 14:48:04 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 05:48:04 -0700
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AE5098.36746F4B@lemburg.com>; from mal@lemburg.com on Thu, Aug 31, 2000 at 02:33:28PM +0200
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com>
Message-ID: <20000831054804.A3278@lyra.org>

On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:
>...
> At least for my Linux installation a limit of 9000 seems
> reasonable. Perhaps everybody on the list could do a quick
> check on their platform ?
> 
> Here's a sample script:
> 
> i = 0
> def foo(x):
>     global i
>     print i
>     i = i + 1
>     foo(x)
> 
> foo(None)

10k iterations on my linux box

-g

-- 
Greg Stein, http://www.lyra.org/



From thomas at xs4all.net  Thu Aug 31 14:46:45 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 14:46:45 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AE5098.36746F4B@lemburg.com>; from mal@lemburg.com on Thu, Aug 31, 2000 at 02:33:28PM +0200
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com>
Message-ID: <20000831144645.I12695@xs4all.nl>

On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:

> At least for my Linux installation a limit of 9000 seems
> reasonable. Perhaps everybody on the list could do a quick
> check on their platform ?

On BSDI, which has a 2Mbyte default stack limit (but soft limit: users can
set it higher even without help from root, and much higher with help) I can
go as high as 8k recursions of the simple python-function type, and 5k
recursions of one involving a C call (like a recursive __str__()).

I don't remember ever seeing a system with less than 2Mbyte stack, except
for seriously memory-deprived systems. I do know that the 2Mbyte stacklimit
on BSDI is enough to cause 'pine' (sucky but still popular mailprogram) much
distress when handling large mailboxes, so we usually set the limit higher
anyway.

Mutt-forever-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mal at lemburg.com  Thu Aug 31 15:32:41 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 15:32:41 +0200
Subject: [Python-Dev] Pragmas: Just say "No!"
References: <200008310257.OAA17332@s454.cosc.canterbury.ac.nz> <005301c01313$a66a3ae0$8119fea9@neil>
Message-ID: <39AE5E79.C2C91730@lemburg.com>

Neil Hodgson wrote:
> 
> Greg Ewing:
> > Greg Wilson <gvwilson at nevex.com>:
> >
> > > Pragmas are a way to embed programs for the
> > > parser in the file being parsed.
> >
> > I hope the BDFL has the good sense to run screaming from
> > anything that has the word "pragma" in it. As this discussion
> > demonstrates, it's far too fuzzy and open-ended a concept --
> > nobody can agree on what sort of thing a pragma is supposed
> > to be.
> 
>    It is a good idea, however, to claim a piece of syntactic turf as early
> as possible so that if/when it is needed, it is unlikely to cause problems
> with previously written code. My preference would be to introduce a reserved
> word 'directive' for future expansion here. 'pragma' has connotations of
> 'ignorable compiler hint' but most of the proposed compiler directives will
> cause incorrect behaviour if ignored.

The objectives behind the "pragma" statement should be clear
by now. If it's just the word itself that's bugging you, then
we can have a separate discussion on that. Perhaps "assume"
or "declare" would be better candidates.

We need some kind of logic of this sort in Python. Otherwise
important features like source code encoding will not be
possible.

As I said before, I'm not advertising adding compiler
programs to Python, just a simple way of passing information
for the compiler.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From nascheme at enme.ucalgary.ca  Thu Aug 31 15:53:21 2000
From: nascheme at enme.ucalgary.ca (Neil Schemenauer)
Date: Thu, 31 Aug 2000 07:53:21 -0600
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <14765.45843.401319.187156@bitdiddle.concentric.net>; from Jeremy Hylton on Wed, Aug 30, 2000 at 09:21:23PM -0400
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com> <14765.45843.401319.187156@bitdiddle.concentric.net>
Message-ID: <20000831075321.A3099@keymaster.enme.ucalgary.ca>

On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
> I would guess that pickle makes attacks easier: It has more features,
> e.g. creating instances of arbitrary classes (provided that the attacker
> knows what classes are available).

marshal can handle code objects.  That seems pretty scary to me.  I
would vote for not including these insecure classes in the standard
distribution.  Software that expects them should include their own
version of Cookie.py or be fixed.

  Neil



From mal at lemburg.com  Thu Aug 31 15:58:55 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 15:58:55 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <20000831144645.I12695@xs4all.nl>
Message-ID: <39AE649F.A0E818C1@lemburg.com>

Thomas Wouters wrote:
> 
> On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:
> 
> > At least for my Linux installation a limit of 9000 seems
> > reasonable. Perhaps everybody on the list could do a quick
> > check on their platform ?
> 
> On BSDI, which has a 2Mbyte default stack limit (but soft limit: users can
> set it higher even without help from root, and much higher with help) I can
> go as high as 8k recursions of the simple python-function type, and 5k
> recursions of one involving a C call (like a recursive __str__()).

Ok, this gives us a 5000 limit as default... anyone with less? ;-)

(Note that with the limit being user settable making a lower limit
 the default shouldn't hurt anyone.)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Thu Aug 31 16:06:23 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 16:06:23 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AE649F.A0E818C1@lemburg.com>; from mal@lemburg.com on Thu, Aug 31, 2000 at 03:58:55PM +0200
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <20000831144645.I12695@xs4all.nl> <39AE649F.A0E818C1@lemburg.com>
Message-ID: <20000831160623.J12695@xs4all.nl>

On Thu, Aug 31, 2000 at 03:58:55PM +0200, M.-A. Lemburg wrote:
> Thomas Wouters wrote:

> > On BSDI, which has a 2Mbyte default stack limit (but soft limit: users can
> > set it higher even without help from root, and much higher with help) I can
> > go as high as 8k recursions of the simple python-function type, and 5k
> > recursions of one involving a C call (like a recursive __str__()).

> Ok, this gives us a 5000 limit as default... anyone with less? ;-)

I would suggest going for something a lot less than 5000, tho, to account
for 'large' frames. Say, 2000 or so, max.

> (Note that with the limit being user settable making a lower limit
>  the default shouldn't hurt anyone.)

Except that it requires yet another step ... ;P It shouldn't hurt anyone if
it isn't *too* low. However, I have no clue how 'high' it would have to be
for, for instance, Zope, or any of the other 'large' Python apps.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jack at oratrix.nl  Thu Aug 31 16:20:45 2000
From: jack at oratrix.nl (Jack Jansen)
Date: Thu, 31 Aug 2000 16:20:45 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions? 
Message-ID: <20000831142046.20C21303181@snelboot.oratrix.nl>

I'm confused now: how is this counting-stack-limit different from the maximum 
recursion depth we already have?

The whole point of PyOS_StackCheck is to do an _actual_ check of whether 
there's space left for the stack so we can hopefully have an orderly cleanup 
before we hit the hard limit.

If computing it is too difficult because getrlimit isn't available or doesn't 
do what we want we should probe it, as the windows code does or my example 
code posted yesterday does. Note that the testing only has to be done every 
*first* time the stack goes past a certain boundary: the probing can remember 
the deepest currently known valid stack location, and everything that is 
shallower is okay from that point on (making PyOS_StackCheck a subroutine call 
and a compare in the normal case).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From mal at lemburg.com  Thu Aug 31 16:44:09 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 16:44:09 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <20000831142046.20C21303181@snelboot.oratrix.nl>
Message-ID: <39AE6F39.2DAEB3E9@lemburg.com>

Jack Jansen wrote:
> 
> I'm confused now: how is this counting-stack-limit different from the maximum
> recursion depth we already have?
> 
> The whole point of PyOS_StackCheck is to do an _actual_ check of whether
> there's space left for the stack so we can hopefully have an orderly cleanup
> before we hit the hard limit.
> 
> If computing it is too difficult because getrlimit isn't available or doesn't
> do what we want we should probe it, as the windows code does or my example
> code posted yesterday does. Note that the testing only has to be done every
> *first* time the stack goes past a certain boundary: the probing can remember
> the deepest currently known valid stack location, and everything that is
> shallower is okay from that point on (making PyOS_StackCheck a subroutine call
> and a compare in the normal case).

getrlimit() will not always work: in case there is no limit
imposed on the stack, it will return huge numbers (e.g. 2GB),
which make it impossible to derive any useful bound.

Note that you can't probe for this since you can not be sure whether
the OS overcommits memory or not. Linux does this heavily and
I haven't yet even found out why my small C program happily consumes
20MB of memory without segfault at recursion level 60000 while Python
already segfaults at recursion level 9xxx with a memory footprint
of around 5MB.

So, at least for Linux, the only safe way seems to make the
limit a user option and to set a reasonably low default.
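For reference, the limit MAL is talking about can be inspected from Python through the standard (Unix-only) resource module; a minimal sketch:

```python
import resource

# Ask the OS for the stack-size limit, the way a stack check might.
# getrlimit returns (soft, hard); RLIM_INFINITY means no limit is
# imposed -- exactly the case where the reported number is useless
# for deriving a safe recursion bound.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

if soft == resource.RLIM_INFINITY:
    print("stack limit: unlimited -- no safe bound can be derived")
else:
    print("stack soft limit: %d bytes" % soft)
```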

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From cgw at fnal.gov  Thu Aug 31 16:50:01 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 09:50:01 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <20000831142046.20C21303181@snelboot.oratrix.nl>
References: <20000831142046.20C21303181@snelboot.oratrix.nl>
Message-ID: <14766.28825.35228.221474@buffalo.fnal.gov>

Jack Jansen writes:
 > I'm confused now: how is this counting-stack-limit different from
 > the maximum recursion depth we already have?

Because on Unix the maximum allowable stack space is not fixed (it can
be controlled by "ulimit" or "setrlimit"), so a hard-coded maximum
recursion depth is not appropriate.

 > The whole point of PyOS_StackCheck is to do an _actual_ check of
 > whether before we hit the hard limit.

 > If computing it is too difficult because getrlimit isn't available
 > or doesn't do what we want we should probe it

getrlimit is available and works fine.  It's getrusage that is
problematic.

I seriously think that instead of trying to slip this in `under the
wire' we should defer it past 2.0b1 and try to do it right for the
next 2.0.x release.  Getting this stuff right on Unix, portably, is tricky.
There may be a lot of different tricks required to make this work
right on different flavors of Unix.






From guido at beopen.com  Thu Aug 31 17:58:49 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 10:58:49 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 14:33:28 +0200."
             <39AE5098.36746F4B@lemburg.com> 
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>  
            <39AE5098.36746F4B@lemburg.com> 
Message-ID: <200008311558.KAA15649@cj20424-a.reston1.va.home.com>

> Here's a sample script:
> 
> i = 0
> def foo(x):
>     global i
>     print i
>     i = i + 1
>     foo(x)
> 
> foo(None)

Please try this again on various platforms with this version:

    i = 0
    class C:
      def __getattr__(self, name):
          global i
          print i
          i += 1
          return self.name # common beginners' mistake

    C() # This tries to get __init__, triggering the recursion

I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
have no idea what units).

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug 31 18:07:16 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 11:07:16 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 16:20:45 +0200."
             <20000831142046.20C21303181@snelboot.oratrix.nl> 
References: <20000831142046.20C21303181@snelboot.oratrix.nl> 
Message-ID: <200008311607.LAA15693@cj20424-a.reston1.va.home.com>

> I'm confused now: how is this counting-stack-limit different from
> the maximum recursion depth we already have?
> 
> The whole point of PyOS_StackCheck is to do an _actual_ check of
> whether there's space left for the stack so we can hopefully have an
> orderly cleanup before we hit the hard limit.
> 
> If computing it is too difficult because getrlimit isn't available
> or doesn't do what we want we should probe it, as the windows code
> does or my example code posted yesterday does. Note that the testing
> only has to be done every *first* time the stack goes past a certain
> boundary: the probing can remember the deepest currently known valid
> stack location, and everything that is shallower is okay from that
> point on (making PyOS_StackCheck a subroutine call and a compare in
> the normal case).

The point is that there's no portable way to do PyOS_CheckStack().
Not even for Unix.  So we use a double strategy:

(1) Use a user-settable recursion limit with a conservative default.
This can be done portably.  It is set low by default so that under
reasonable assumptions it will stop runaway recursion long before the
stack is actually exhausted.  Note that Emacs Lisp has this feature
and uses a default of 500.  I would set it to 1000 in Python.  The
occasional user who is fond of deep recursion can set it higher and
tweak his ulimit -s to provide the actual stack space if necessary.

(2) Where implementable, use actual stack probing with
PyOS_CheckStack().  This provides an additional safeguard for e.g. (1)
extensions allocating lots of C stack space during recursion; (2)
users who set the recursion limit too high; (3) long-running server
processes.
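Strategy (1) is the user-settable limit that sys.setrecursionlimit() exposes; a small sketch of how a user who needs deeper recursion would opt in:

```python
import sys

# Strategy (1): a user-settable recursion limit with a conservative
# default, stopping runaway recursion long before the C stack is gone.
print(sys.getrecursionlimit())   # the conservative default (1000 today)


def depth(n):
    # Past the limit, Python raises RecursionError (RuntimeError in
    # older versions) instead of crashing on C stack exhaustion.
    return 0 if n == 0 else 1 + depth(n - 1)


sys.setrecursionlimit(5000)      # a user fond of deep recursion opts in
print(depth(2000))               # prints 2000
```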

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)




From cgw at fnal.gov  Thu Aug 31 17:14:02 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 10:14:02 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
	<39ACDA4F.3EF72655@lemburg.com>
	<000d01c0126c$dfe700c0$766940d5@hagrid>
	<39ACE51F.3AEC75AB@lemburg.com>
	<200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
	<39AE5098.36746F4B@lemburg.com>
	<200008311558.KAA15649@cj20424-a.reston1.va.home.com>
Message-ID: <14766.30266.156124.961607@buffalo.fnal.gov>

Guido van Rossum writes:
 > 
 > I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
 > have no idea what units).

That would be Kb.  But -c controls core-file size, not stack.  
You wanted -s.  ulimit -a shows all the limits.



From guido at beopen.com  Thu Aug 31 18:23:21 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 11:23:21 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 10:14:02 EST."
             <14766.30266.156124.961607@buffalo.fnal.gov> 
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>  
            <14766.30266.156124.961607@buffalo.fnal.gov> 
Message-ID: <200008311623.LAA15877@cj20424-a.reston1.va.home.com>

> That would be Kb.  But -c controls core-file size, not stack.  
> You wanted -s.  ulimit -a shows all the limits.

Typo.  I did use ulimit -s.  ulimit -a confirms that it's 8192 kbytes.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From mal at lemburg.com  Thu Aug 31 17:24:58 2000
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 31 Aug 2000 17:24:58 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de>  
	            <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
Message-ID: <39AE78CA.809E660A@lemburg.com>

Guido van Rossum wrote:
> 
> > Here's a sample script:
> >
> > i = 0
> > def foo(x):
> >     global i
> >     print i
> >     i = i + 1
> >     foo(x)
> >
> > foo(None)
> 
> Please try this again on various platforms with this version:
> 
>     i = 0
>     class C:
>       def __getattr__(self, name):
>           global i
>           print i
>           i += 1
>           return self.name # common beginners' mistake
> 
>     C() # This tries to get __init__, triggering the recursion
> 
> I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
> have no idea what units).

8192 refers to kB, i.e. 8 MB.

I get 6053 on SuSE Linux 6.2 without resource stack limit set.

Strangely enough, if I put the above inside a script, the class
isn't instantiated. The recursion only starts when I manually
trigger C() in interactive mode or do something like
'print C()'. Is this a bug or a feature?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Business:                                      http://www.lemburg.com/
Python Pages:                           http://www.lemburg.com/python/



From Vladimir.Marangozov at inrialpes.fr  Thu Aug 31 17:32:29 2000
From: Vladimir.Marangozov at inrialpes.fr (Vladimir Marangozov)
Date: Thu, 31 Aug 2000 17:32:29 +0200 (CEST)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311558.KAA15649@cj20424-a.reston1.va.home.com> from "Guido van Rossum" at Aug 31, 2000 10:58:49 AM
Message-ID: <200008311532.RAA04028@python.inrialpes.fr>

Guido van Rossum wrote:
> 
> Please try this again on various platforms with this version:
> 
>     i = 0
>     class C:
>       def __getattr__(self, name):
>           global i
>           print i
>           i += 1
>           return self.name # common beginners' mistake
> 
>     C() # This tries to get __init__, triggering the recursion
> 

            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Are you sure?

Although strange, this is not the case and instantiating C succeeds
(try "python rec.py", where rec.py is the above code).

A closer look at the code shows that Instance_New goes on calling
getattr2 which calls class_lookup, which returns NULL, etc, etc,
but the presence of __getattr__ is not checked in this path.

-- 
       Vladimir MARANGOZOV          | Vladimir.Marangozov at inrialpes.fr
http://sirac.inrialpes.fr/~marangoz | tel:(+33-4)76615277 fax:76615252



From trentm at ActiveState.com  Thu Aug 31 17:28:21 2000
From: trentm at ActiveState.com (Trent Mick)
Date: Thu, 31 Aug 2000 08:28:21 -0700
Subject: [Python-Dev] FW: test_largefile cause kernel panic in Mac OS X DP4
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEEBJHDAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Thu, Aug 31, 2000 at 03:24:35AM -0400
References: <LNBBLJKPBEHFEDALKOLCEEBJHDAA.tim_one@email.msn.com>
Message-ID: <20000831082821.B3569@ActiveState.com>

Tim (or anyone with python-list logs), can you forward this to Sachin (who
reported the bug).

On Thu, Aug 31, 2000 at 03:24:35AM -0400, Tim Peters wrote:
> 
> 
> -----Original Message-----
> From: python-list-admin at python.org
> [mailto:python-list-admin at python.org]On Behalf Of Sachin Desai
> Sent: Thursday, August 31, 2000 2:49 AM
> To: python-list at python.org
> Subject: test_largefile cause kernel panic in Mac OS X DP4
> 
> 
> 
> Has anyone experienced this. I updated my version of python to the latest
> source from the CVS repository and successfully built it. Upon executing a
> "make test", my machine ended up in a kernel panic when the test being
> executed was "test_largefile".
> 
> My configuration is:
>     Powerbook G3
>     128M RAM
>     Mac OS X DP4
> 
> I guess my next step is to log a bug with Apple.
> 

I added this test module. It would be nice to have a little bit more
information seeing as I have never played on a Mac (OS X acts like BSD under
the hood, right?)

1. Can you tell me, Sachin, *where* in test_largefile it is failing? The file
   is python/dist/src/Lib/test/test_largefile.py. Try running it directly:
   > python Lib/test/test_largefile.py
2. If it dies before it produces any output can you tell me if it died on
   line 18:
      f.seek(2147483649L)
   which, I suppose, is possible. Maybe this is not a good way to determine if
   the system has largefile support.


Jeremy, Tim, Guido, 
As with the NetBSD compile bug, I won't have time to fix this by the freeze
today unless I get info from the people who encountered these bugs and
it is *really* easy to fix.


Trent
    

-- 
Trent Mick
TrentM at ActiveState.com



From guido at beopen.com  Thu Aug 31 18:30:48 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 11:30:48 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: Your message of "Thu, 31 Aug 2000 17:24:58 +0200."
             <39AE78CA.809E660A@lemburg.com> 
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>  
            <39AE78CA.809E660A@lemburg.com> 
Message-ID: <200008311630.LAA16022@cj20424-a.reston1.va.home.com>

> > Please try this again on various platforms with this version:
> > 
> >     i = 0
> >     class C:
> >       def __getattr__(self, name):
> >           global i
> >           print i
> >           i += 1
> >           return self.name # common beginners' mistake
> > 
> >     C() # This tries to get __init__, triggering the recursion
> > 
> > I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
> > have no idea what units).
> 
> 8192 refers to kB, i.e. 8 MB.
> 
> I get 6053 on SuSE Linux 6.2 without resource stack limit set.
> 
> Strangely enough, if I put the above inside a script, the class
> isn't instantiated. The recursion only starts when I manually
> trigger C() in interactive mode or do something like
> 'print C()'. Is this a bug or a feature?

Aha.  I was wrong -- it's happening in repr(), not during
construction.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at fnal.gov  Thu Aug 31 17:50:38 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 10:50:38 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311630.LAA16022@cj20424-a.reston1.va.home.com>
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
	<39ACDA4F.3EF72655@lemburg.com>
	<000d01c0126c$dfe700c0$766940d5@hagrid>
	<39ACE51F.3AEC75AB@lemburg.com>
	<200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
	<39AE5098.36746F4B@lemburg.com>
	<200008311558.KAA15649@cj20424-a.reston1.va.home.com>
	<39AE78CA.809E660A@lemburg.com>
	<200008311630.LAA16022@cj20424-a.reston1.va.home.com>
Message-ID: <14766.32462.663536.177308@buffalo.fnal.gov>

Guido van Rossum writes:
 > > > Please try this again on various platforms with this version:
 > > > 
 > > >     i = 0
 > > >     class C:
 > > >       def __getattr__(self, name):
 > > >           global i
 > > >           print i
 > > >           i += 1
 > > >           return self.name # common beginners' mistake
 > > > 
 > > >     C() # This tries to get __init__, triggering the recursion
 > > > 
 > > > I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
 > > > have no idea what units).

I get a core dump after 4824 iterations on a not-quite-Red-Hat box,
with an 8MB stack limit.

What about the idea that was suggested to use a sigsegv catcher?  Or
reading info from /proc (yes, there is a lot of overhead here, but if
we do it infrequently enough we might just get away with it.  It could
be a configure-time option, disabled by default).  I still think there
are even more tricks possible here, and we should pursue this after
2.0b1.  I volunteer to help work on it ;-)






From bwarsaw at beopen.com  Thu Aug 31 17:53:19 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 11:53:19 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de>
	<39ACDA4F.3EF72655@lemburg.com>
	<000d01c0126c$dfe700c0$766940d5@hagrid>
	<39ACE51F.3AEC75AB@lemburg.com>
	<200008301832.UAA00688@loewis.home.cs.tu-berlin.de>
	<39AE5098.36746F4B@lemburg.com>
	<20000831054804.A3278@lyra.org>
Message-ID: <14766.32623.705548.109625@anthem.concentric.net>

>>>>> "GS" == Greg Stein <gstein at lyra.org> writes:

    GS> 10k iterations on my linux box

9143 on mine.

I'll note that Emacs has a similar concept, embodied in
max-lisp-eval-depth.  The documentation for this variable clearly
states that its purpose is to avoid infinite recursions that would
overflow the C stack and crash Emacs.  On my XEmacs 21.1.10,
max-lisp-eval-depth is 500.  Lisp tends to be more recursive than
Python, but it's also possible that there are fewer ways to `hide'
lots of C stack between Lisp function calls.

So random.choice(range(500, 9143)) seems about right to me <wink>.

-Barry



From jeremy at beopen.com  Thu Aug 31 17:56:20 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 11:56:20 -0400 (EDT)
Subject: [Python-Dev] Cookie.py security
In-Reply-To: <20000831075321.A3099@keymaster.enme.ucalgary.ca>
References: <E13U7si-0000E5-00@207-172-111-203.s203.tnt1.ann.va.dialup.rcn.com>
	<14765.45843.401319.187156@bitdiddle.concentric.net>
	<20000831075321.A3099@keymaster.enme.ucalgary.ca>
Message-ID: <14766.32804.933498.914265@bitdiddle.concentric.net>

>>>>> "NS" == Neil Schemenauer <nascheme at enme.ucalgary.ca> writes:

  NS> On Wed, Aug 30, 2000 at 09:21:23PM -0400, Jeremy Hylton wrote:
  >> I would guess that pickle makes attacks easier: It has more
  >> features, e.g. creating instances of arbitrary classes (provided
  >> that the attacker knows what classes are available).

  NS> marshal can handle code objects.  That seems pretty scary to me.
  NS> I would vote for not including these insecure classes in the
  NS> standard distribution.  Software that expects them should
  NS> include their own version of Cookie.py or be fixed.

If a server is going to use cookies that contain marshal or pickle
data, they ought to be encrypted or protected by a secure hash.

Jeremy



From effbot at telia.com  Thu Aug 31 19:47:45 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 19:47:45 +0200
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
Message-ID: <008701c01373$95ced1e0$766940d5@hagrid>

iirc, I've been bitten by this a couple of times too
(before I switched to asyncore...)

any special reason why the input socket is unbuffered
by default?

</F>

----- Original Message ----- 
From: "Andy Bond" <bond at dstc.edu.au>
Newsgroups: comp.lang.python
Sent: Thursday, August 31, 2000 8:41 AM
Subject: SocketServer and makefile()


> I've been working with BaseHTTPServer which in turn uses SocketServer to
> write a little web server.  It is used to accept PUT requests of 30MB chunks
> of data.  I was having a problem where data was flowing at the rate of
> something like 64K per second over a 100MB network.  Weird.  Further tracing
> showed that the rfile variable from SocketServer (used to suck in data to
> the http server) was created using makefile on the original socket
> descriptor.  It was created with an option of zero for buffering (see
> SocketServer.py) which means unbuffered.
> 
> Now some separate testing with socket.py showed that I could whip a 30MB
> > file across using plain sockets and send/recv but if I made the receiver use
> makefile on the socket and then read, it slowed down to my 1 sec per 64K.
> If I specify a buffer (something big but less than 64K ... IP packet size?)
> then I am back in speedy territory.  The unbuffered mode seems almost like
> it is sending the data 1 char at a time AND this is the default mode used in
> SocketServer and subsequently BaseHTTPServer ...
> 
> This is on solaris 7, python 1.5.2.  Anyone else found this to be a problem
> or am I doing something wrong?
> 
> andy




From jeremy at beopen.com  Thu Aug 31 20:34:23 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 14:34:23 -0400 (EDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
Message-ID: <14766.42287.968420.289804@bitdiddle.concentric.net>

Is the test for linuxaudiodev supposed to play the Spanish Inquisition
.au file?  I just realized that the test does absolutely nothing on my
machine.  (I guess I need to get my ears to raise an exception if they
don't hear anything.)

I can play the .au file and I use a variety of other audio tools
regularly.  Is Peter still maintaining it or can someone else offer
some assistance?

Jeremy



From guido at beopen.com  Thu Aug 31 21:57:17 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 14:57:17 -0500
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: Your message of "Thu, 31 Aug 2000 14:34:23 -0400."
             <14766.42287.968420.289804@bitdiddle.concentric.net> 
References: <14766.42287.968420.289804@bitdiddle.concentric.net> 
Message-ID: <200008311957.OAA22338@cj20424-a.reston1.va.home.com>

> Is the test for linuxaudiodev supposed to play the Spanish Inquisition
> .au file?  I just realized that the test does absolutely nothing on my
> machine.  (I guess I need to get my ears to raise an exception if they
> don't hear anything.)

Correct.

> I can play the .au file and I use a variety of other audio tools
> regularly.  Is Peter still maintaining it or can someone else offer
> some assistance?

Does your machine have a sound card & speakers?  Mine doesn't.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From cgw at fnal.gov  Thu Aug 31 21:04:15 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 14:04:15 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.42287.968420.289804@bitdiddle.concentric.net>
References: <14766.42287.968420.289804@bitdiddle.concentric.net>
Message-ID: <14766.44079.900005.766299@buffalo.fnal.gov>

The problem is that the test file is

audiotest.au: Sun/NeXT audio data: 8-bit ISDN u-law, mono, 8000 Hz

while the linuxaudiodev module seems to be (implicitly) expecting ".wav"
format (Microsoft RIFF).

If you open a .wav file and write it to the linuxaudiodev object, it works.

There is a function in linuxaudiodev to set the audio format - there
doesn't seem to be much documentation, but the source has:

if (!PyArg_ParseTuple(args, "iiii:setparameters",
                          &rate, &ssize, &nchannels, &fmt))
        return NULL;
  
 and when I do

x = linuxaudiodev.open('w')
x.setparameters(8000, 1, 8, linuxaudiodev.AFMT_MU_LAW )

I get:
linuxaudiodev.error: (0, 'Error')

Also tried '1' for the sample size, thinking it might be in bytes.

The sample size really ought to be implicit in the format.  

The code in linuxaudiodev.c looks sort of dubious to me.  This model
is a little too simple for the variety of audio hardware and software
on Linux systems.  I have some homebrew audio stuff I've written which
I think works better, but it's nowhere near ready for distribution.
Maybe I'll clean it up and submit it for inclusion post-1.6

In the meanwhile, you could ship a .wav file for use on Linux (and
Windows?) machines.  (Windows doesn't usually like .au either)
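As a side note on "the sample size really ought to be implicit in the format": a Sun/NeXT .au header does carry an encoding word that implies the sample size. A minimal sketch with the stdlib struct module (the header values below are made up to match audiotest.au's parameters):

```python
import struct

# A .au header is ".snd" plus five big-endian 32-bit words:
# data offset, data size, encoding, sample rate, channel count.
# Encoding 1 means 8-bit ISDN u-law, so the sample size (8 bits)
# is implied by the encoding field.
header = struct.pack(">4s5I", b".snd", 24, 8000, 1, 8000, 1)

magic, offset, size, encoding, rate, channels = struct.unpack(">4s5I", header)
print(magic, encoding, rate, channels)   # b'.snd' 1 8000 1
```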







From jeremy at beopen.com  Thu Aug 31 21:11:18 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 15:11:18 -0400 (EDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <200008311957.OAA22338@cj20424-a.reston1.va.home.com>
References: <14766.42287.968420.289804@bitdiddle.concentric.net>
	<200008311957.OAA22338@cj20424-a.reston1.va.home.com>
Message-ID: <14766.44502.812468.677142@bitdiddle.concentric.net>

>>>>> "GvR" == Guido van Rossum <guido at beopen.com> writes:

  >> I can play the .au file and I use a variety of other audio tools
  >> regularly.  Is Peter still maintaining it or can someone else
  >> offer some assistance?

  GvR> Does your machine have a sound card & speakers?  Mine doesn't.

Yes.  (I bought the Cambridge Soundworks speakers that were on my old
machine from CNRI.)

Jeremy



From gstein at lyra.org  Thu Aug 31 21:18:26 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 12:18:26 -0700
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: <008701c01373$95ced1e0$766940d5@hagrid>; from effbot@telia.com on Thu, Aug 31, 2000 at 07:47:45PM +0200
References: <008701c01373$95ced1e0$766940d5@hagrid>
Message-ID: <20000831121826.F11297@lyra.org>

I ran into this same problem on the client side.

The server does a makefile() so that it can do readline() to fetch the HTTP
request line and then the MIME headers. The *problem* is that if you do
something like:

    f = sock.makefile()
    line = f.readline()
    data = sock.recv(1000)

You're screwed if you have buffering enabled. "f" will read in a bunch of
data -- past the end of the line. That data now sits inside f's buffer and
is not available to the sock.recv() call.
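A concrete sketch of that failure mode, using a local socket pair (modern Python shown; everything here is standard library):

```python
import socket

# Once a buffered file object wrapped around a socket reads past the
# line you asked for, the extra bytes sit in the file's buffer and are
# gone from the socket itself.
a, b = socket.socketpair()              # local, lossless stream pair
b.sendall(b"GET / HTTP/1.0\r\nrest")

f = a.makefile("rb")                    # buffered by default
print(f.readline())                     # b'GET / HTTP/1.0\r\n'

a.setblocking(False)
try:
    a.recv(100)                         # the socket has nothing left...
except BlockingIOError:
    print("recv: nothing on the socket")

print(f.read(4))                        # ...but f still does: b'rest'
f.close(); a.close(); b.close()
```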

If you forget about sock and just stick to f, then you'd be okay. But
SocketServer and/or BaseHTTPServer doesn't -- it uses both objects to do the
reading.

Solution? Don't use rfile for reading, but go for the socket itself. Or
revamp the two classes to forget about the socket once the files (wfile and
rfile) are created. The latter might not be possible, tho.

Dunno why the unbuffered reading would be slow. I'd think it would still
read large chunks at a time when you request it.

Cheers,
-g

On Thu, Aug 31, 2000 at 07:47:45PM +0200, Fredrik Lundh wrote:
> iirc, I've been bitten by this a couple of times too
> (before I switched to asyncore...)
> 
> any special reason why the input socket is unbuffered
> by default?
> 
> </F>
> 
> ----- Original Message ----- 
> From: "Andy Bond" <bond at dstc.edu.au>
> Newsgroups: comp.lang.python
> Sent: Thursday, August 31, 2000 8:41 AM
> Subject: SocketServer and makefile()
> 
> 
> > I've been working with BaseHTTPServer which in turn uses SocketServer to
> > write a little web server.  It is used to accept PUT requests of 30MB chunks
> > of data.  I was having a problem where data was flowing at the rate of
> > something like 64K per second over a 100MB network.  Weird.  Further tracing
> > showed that the rfile variable from SocketServer (used to suck in data to
> > the http server) was created using makefile on the original socket
> > descriptor.  It was created with an option of zero for buffering (see
> > SocketServer.py) which means unbuffered.
> > 
> > Now some separate testing with socket.py showed that I could whip a 30MB
> > file across using plain sockets and send/recv but if I made the receivor use
> > makefile on the socket and then read, it slowed down to my 1 sec per 64K.
> > If I specify a buffer (something big but less than 64K ... IP packet size?)
> > then I am back in speedy territory.  The unbuffered mode seems almost like
> > it is sending the data 1 char at a time AND this is the default mode used in
> > SocketServer and subsequently BaseHTTPServer ...
> > 
> > This is on solaris 7, python 1.5.2.  Anyone else found this to be a problem
> > or am I doing something wrong?
> > 
> > andy
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev

-- 
Greg Stein, http://www.lyra.org/



From cgw at fnal.gov  Thu Aug 31 21:11:30 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 14:11:30 -0500 (CDT)
Subject: Silly correction to: [Python-Dev] linuxaudiodev test does nothing
Message-ID: <14766.44514.531109.440309@buffalo.fnal.gov>

I wrote:

 >  x.setparameters(8000, 1, 8, linuxaudiodev.AFMT_MU_LAW )

where I meant:

 > x.setparameters(8000, 8, 1, linuxaudiodev.AFMT_MU_LAW )

In fact I tried just about every combination of arguments, closing and
reopening the device each time, but still no go.

I also wrote:

 > Maybe I'll clean it up and submit it for inclusion post-1.6

where of course I meant to say post-2.0b1





From effbot at telia.com  Thu Aug 31 21:46:54 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 21:46:54 +0200
Subject: [Python-Dev] one last SRE headache
Message-ID: <023301c01384$39b2bdc0$766940d5@hagrid>

can anyone tell me how Perl treats this pattern?

    r'((((((((((a))))))))))\41'

in SRE, this is currently a couple of nested groups, surrounding
a single literal, followed by a back reference to the fourth group,
followed by a literal "1" (since there are fewer than 41 groups)

in PRE, it turns out that this is a syntax error; there's no group 41.

however, these tests appear in the test suite under the section "all
test from perl", but they're commented out:

# Python does not have the same rules for \\41 so this is a syntax error
#    ('((((((((((a))))))))))\\41', 'aa', FAIL),
#    ('((((((((((a))))))))))\\41', 'a!', SUCCEED, 'found', 'a!'),

if I understand this correctly, Perl treats this as an *octal* escape
(chr(041) == "!").

now, should I emulate PRE, Perl, or leave it as it is...

</F>

PS. in case anyone wondered why I haven't seen this before, it's
because I just discovered that the test suite masks syntax errors
under some circumstances...
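For comparison, the rule in today's re module: a one- or two-digit escape like \41 is always a group reference (and an error when the group doesn't exist), while an octal escape must start with a zero or be three octal digits long. A quick check:

```python
import re

# Two digits after the backslash: always a group reference in re.
# With only 10 groups, \41 is an error rather than octal "!".
try:
    re.compile(r'((((((((((a))))))))))\41')
except re.error as exc:
    print("re.error:", exc)

# Octal needs an explicit leading zero (or three octal digits):
print(re.match(r'((((((((((a))))))))))\041', 'a!').group(0))   # 'a!'

# \10 is a plain backreference to group 10 (the innermost 'a'):
print(re.match(r'((((((((((a))))))))))\10', 'aa').group(0))    # 'aa'
```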




From guido at beopen.com  Thu Aug 31 22:48:16 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 15:48:16 -0500
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: Your message of "Thu, 31 Aug 2000 12:18:26 MST."
             <20000831121826.F11297@lyra.org> 
References: <008701c01373$95ced1e0$766940d5@hagrid>  
            <20000831121826.F11297@lyra.org> 
Message-ID: <200008312048.PAA23324@cj20424-a.reston1.va.home.com>

> I ran into this same problem on the client side.
> 
> The server does a makefile() so that it can do readline() to fetch the HTTP
> request line and then the MIME headers. The *problem* is that if you do
> something like:
> 
>     f = sock.makefile()
>     line = f.readline()
>     data = sock.recv(1000)
> 
> You're screwed if you have buffering enabled. "f" will read in a bunch of
> data -- past the end of the line. That data now sits inside f's buffer and
> is not available to the sock.recv() call.
> 
> If you forget about sock and just stick to f, then you'd be okay. But
> SocketServer and/or BaseHTTPServer doesn't -- it uses both objects to do the
> reading.
> 
> Solution? Don't use rfile for reading, but go for the socket itself. Or
> revamp the two classes to forget about the socket once the files (wfile and
> rfile) are created. The latter might not be possible, tho.

I was about to say that you have it backwards, and that you should
only use rfile & wfile, when I realized that CGIHTTPServer.py needs
this!  The subprocess needs to be able to read the rest of the socket,
for POST requests.  So you're right.

Solution?  The buffer size should be an instance or class variable.
Then SocketServer can set it to buffered by default, and CGIHTTPServer
can set it to unbuffered.
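[A minimal sketch of that idea, with hypothetical class names. The shape it
eventually took in the standard library is the `rbufsize`/`wbufsize` class
attributes on `SocketServer.StreamRequestHandler`:]

```python
import socket

class StreamHandler:
    # Buffer sizes as class attributes: subclasses can override them
    # without touching setup().
    rbufsize = -1   # fully buffered reads by default
    wbufsize = 0    # unbuffered writes

    def setup(self, sock):
        self.rfile = sock.makefile('rb', self.rbufsize)
        self.wfile = sock.makefile('wb', self.wbufsize)

class CGIHandler(StreamHandler):
    # Unbuffered reads, so no request-body bytes get trapped in rfile's
    # buffer and the CGI subprocess can still read them from the socket's
    # file descriptor.
    rbufsize = 0
```

[With `rbufsize = 0`, `rfile.readline()` pulls one byte at a time from the
kernel, so nothing past the headers is consumed before the subprocess
inherits the socket.]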

> Dunno why the unbuffered reading would be slow. I'd think it would still
> read large chunks at a time when you request it.

System call overhead?  I had the same complaint about Windows, where
apparently winsock makes you pay more of a performance penalty than
Unix does in the same case.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From akuchlin at mems-exchange.org  Thu Aug 31 21:46:03 2000
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Thu, 31 Aug 2000 15:46:03 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <023301c01384$39b2bdc0$766940d5@hagrid>; from effbot@telia.com on Thu, Aug 31, 2000 at 09:46:54PM +0200
References: <023301c01384$39b2bdc0$766940d5@hagrid>
Message-ID: <20000831154603.A15688@kronos.cnri.reston.va.us>

On Thu, Aug 31, 2000 at 09:46:54PM +0200, Fredrik Lundh wrote:
>can anyone tell me how Perl treats this pattern?
>    r'((((((((((a))))))))))\41'

>if I understand this correctly, Perl treats this as an *octal* escape
>(chr(041) == "!").

Correct.  From perlre:

       You may have as many parentheses as you wish.  If you have more
       than 9 substrings, the variables $10, $11, ... refer to the
       corresponding substring.  Within the pattern, \10, \11,
       etc. refer back to substrings if there have been at least that
       many left parentheses before the backreference.  Otherwise (for
       backward compatibility) \10 is the same as \010, a backspace,
       and \11 the same as \011, a tab.  And so on.  (\1 through \9
       are always backreferences.)  

In other words, if there were 41 groups, \41 would be a backref to
group 41; if there aren't, it's an octal escape.  This magical
behaviour was deemed not Pythonic, so pre uses a different rule: it's
always a character inside a character class ([\41] isn't a syntax
error), and outside a character class it's a character if there are
exactly 3 octal digits; otherwise it's a backref.  So \41 is a backref
to group 41, but \041 is the literal character ASCII 33.
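[The rule described here survives in today's `re` module, so it can be
checked directly; a sketch in modern Python:]

```python
import re

# \041 is exactly three octal digits: the literal character chr(0o41) == '!'
assert re.match(r'((((((((((a))))))))))\041', 'a!')

# \41 would be a backreference to group 41; with only 10 groups that is
# a compile error, not an octal escape
try:
    re.compile(r'((((((((((a))))))))))\41')
except re.error:
    pass
else:
    raise AssertionError("expected an invalid group reference")

# inside a character class, numeric escapes are always characters
assert re.match(r'[\41]', '!')
```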

--amk




From gstein at lyra.org  Thu Aug 31 22:04:18 2000
From: gstein at lyra.org (Greg Stein)
Date: Thu, 31 Aug 2000 13:04:18 -0700
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: <200008312048.PAA23324@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 31, 2000 at 03:48:16PM -0500
References: <008701c01373$95ced1e0$766940d5@hagrid> <20000831121826.F11297@lyra.org> <200008312048.PAA23324@cj20424-a.reston1.va.home.com>
Message-ID: <20000831130417.K11297@lyra.org>

On Thu, Aug 31, 2000 at 03:48:16PM -0500, Guido van Rossum wrote:
> I wrote:
>...
> > Solution? Don't use rfile for reading, but go for the socket itself. Or
> > revamp the two classes to forget about the socket once the files (wfile and
> > rfile) are created. The latter might not be possible, tho.
> 
> I was about to say that you have it backwards, and that you should
> only use rfile & wfile, when I realized that CGIHTTPServer.py needs
> this!  The subprocess needs to be able to read the rest of the socket,
> for POST requests.  So you're right.

Ooh! I hadn't considered that case. Yes: you can't transfer the contents of
a FILE's buffer to the CGI, but you can pass a file descriptor (the socket).

> Solution?  The buffer size should be an instance or class variable.
> Then SocketServer can set it to buffered by default, and CGIHTTPServer
> can set it to unbuffered.

Seems reasonable.

> > Dunno why the unbuffered reading would be slow. I'd think it would still
> > read large chunks at a time when you request it.
> 
> System call overhead?  I had the same complaint about Windows, where
> apparently winsock makes you pay more of a performance penalty than
> Unix does in the same case.

Shouldn't be. There should still be an rfile.read(1000) in that example app
(with the big transfers). That read() should be quite fast -- the buffering
should have almost no effect.

So... what is the underlying problem?

[ IOW, there are two issues: the sock vs file thing; and why rfile is so
  darn slow; I have no insights on the latter. ]

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



From effbot at telia.com  Thu Aug 31 22:08:23 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 22:08:23 +0200
Subject: [Python-Dev] one last SRE headache
References: <023301c01384$39b2bdc0$766940d5@hagrid> <20000831154603.A15688@kronos.cnri.reston.va.us>
Message-ID: <027f01c01387$3ae9fde0$766940d5@hagrid>

amk wrote:
> outside a character class it's a character if there are exactly
> 3 octal digits; otherwise it's a backref.  So \41 is a backref
> to group 41, but \041 is the literal character ASCII 33.

so what's the right way to parse this?

read up to three digits, check if they're a valid octal
number, and treat them as a decimal group number if
not?

</F>




From guido at beopen.com  Thu Aug 31 23:10:19 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 16:10:19 -0500
Subject: [Python-Dev] Fw: SocketServer and makefile() [from comp.lang.python]
In-Reply-To: Your message of "Thu, 31 Aug 2000 13:04:18 MST."
             <20000831130417.K11297@lyra.org> 
References: <008701c01373$95ced1e0$766940d5@hagrid> <20000831121826.F11297@lyra.org> <200008312048.PAA23324@cj20424-a.reston1.va.home.com>  
            <20000831130417.K11297@lyra.org> 
Message-ID: <200008312110.QAA23506@cj20424-a.reston1.va.home.com>

> > > Dunno why the unbuffered reading would be slow. I'd think it would still
> > > read large chunks at a time when you request it.
> > 
> > System call overhead?  I had the same complaint about Windows, where
> > apparently winsock makes you pay more of a performance penalty than
> > Unix does in the same case.
> 
> Shouldn't be. There should still be an rfile.read(1000) in that example app
> (with the big transfers). That read() should be quite fast -- the buffering
> should have almost no effect.
> 
> So... what is the underlying problem?
> 
> [ IOW, there are two issues: the sock vs file thing; and why rfile is so
>   darn slow; I have no insights on the latter. ]

Should, shouldn't...

It's a quality of implementation issue in stdio.  If stdio, when
seeing a large read on an unbuffered file, doesn't do something smart
but instead calls getc() for each character, that would explain this.
It's dumb, but not illegal.

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From guido at beopen.com  Thu Aug 31 23:12:29 2000
From: guido at beopen.com (Guido van Rossum)
Date: Thu, 31 Aug 2000 16:12:29 -0500
Subject: [Python-Dev] one last SRE headache
In-Reply-To: Your message of "Thu, 31 Aug 2000 22:08:23 +0200."
             <027f01c01387$3ae9fde0$766940d5@hagrid> 
References: <023301c01384$39b2bdc0$766940d5@hagrid> <20000831154603.A15688@kronos.cnri.reston.va.us>  
            <027f01c01387$3ae9fde0$766940d5@hagrid> 
Message-ID: <200008312112.QAA23526@cj20424-a.reston1.va.home.com>

> amk wrote:
> > outside a character class it's a character if there are exactly
> > 3 octal digits; otherwise it's a backref.  So \41 is a backref
> > to group 41, but \041 is the literal character ASCII 33.
> 
> so what's the right way to parse this?
> 
> read up to three digits, check if they're a valid octal
> number, and treat them as a decimal group number if
> not?

Suggestion:

If there are fewer than 3 digits, it's a group.

If there are exactly 3 digits and you have 100 or more groups, it's a
group -- too bad, you lose octal number support.  Use \x. :-)

If there are exactly 3 digits and you have at most 99 groups, it's an
octal escape.

(Can you even have more than 99 groups in SRE?)
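[The three rules above can be sketched as a classifier. This is a
hypothetical helper, not SRE's actual code, and it deliberately leaves out
the leading-zero case discussed elsewhere in the thread:]

```python
def classify_escape(digits, ngroups):
    """Classify a backslash-digits escape per the suggestion above.

    digits  -- the digit string following the backslash
    ngroups -- how many groups the pattern defines
    Returns ('group', n) or ('octal', n).
    """
    if len(digits) < 3:
        return ('group', int(digits))        # fewer than 3 digits: a group
    if ngroups >= 100:
        return ('group', int(digits))        # lose octal support; use \x
    return ('octal', int(digits, 8))         # exactly 3 digits, <= 99 groups
```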

--Guido van Rossum (home page: http://www.pythonlabs.com/~guido/)



From m.favas at per.dem.csiro.au  Thu Aug 31 22:17:14 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 04:17:14 +0800
Subject: [Fwd: [Python-Dev] test_gettext.py fails on 64-bit architectures]
Message-ID: <39AEBD4A.55ABED9E@per.dem.csiro.au>


-- 
Email  - m.favas at per.dem.csiro.au        Mark C Favas
Phone  - +61 8 9333 6268, 0418 926 074   CSIRO Exploration & Mining
Fax    - +61 8 9383 9891                 Private Bag No 5, Wembley
WGS84  - 31.95 S, 115.80 E               Western Australia 6913
-------------- next part --------------
An embedded message was scrubbed...
From: Mark Favas <m.favas at per.dem.csiro.au>
Subject: Re: [Python-Dev] test_gettext.py fails on 64-bit architectures
Date: Fri, 01 Sep 2000 04:16:01 +0800
Size: 2964
URL: <http://mail.python.org/pipermail/python-dev/attachments/20000901/b5f46724/attachment-0001.eml>

From effbot at telia.com  Thu Aug 31 22:33:11 2000
From: effbot at telia.com (Fredrik Lundh)
Date: Thu, 31 Aug 2000 22:33:11 +0200
Subject: [Python-Dev] one last SRE headache
References: <023301c01384$39b2bdc0$766940d5@hagrid> <20000831154603.A15688@kronos.cnri.reston.va.us>              <027f01c01387$3ae9fde0$766940d5@hagrid>  <200008312112.QAA23526@cj20424-a.reston1.va.home.com>
Message-ID: <028d01c0138a$b2de46a0$766940d5@hagrid>

guido wrote:
> Suggestion:
> 
> If there are fewer than 3 digits, it's a group.
> 
> If there are exactly 3 digits and you have 100 or more groups, it's a
> group -- too bad, you lose octal number support.  Use \x. :-)
> 
> If there are exactly 3 digits and you have at most 99 groups, it's an
> octal escape.

I had to add one rule:

    If it starts with a zero, it's always an octal number.
    Up to two more octal digits are accepted after the
    leading zero.

but this still fails on this pattern:

    r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'

where the last part is supposed to be a reference to
group 11, followed by a literal '9'.
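[For what it's worth, the rule that eventually went into SRE (and is still
in today's `re` module) stops reading digits as soon as a third octal digit
is impossible, which resolves exactly this pattern; a check in modern
Python:]

```python
import re

# \11 is read as a backreference to group 11 ('k') because the next
# digit, '9', is not an octal digit; the '9' stays a literal.
assert re.match(r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119', 'abcdefghijklk9')

# \117 *is* three octal digits, so it is the literal chr(0o117) == 'O'.
assert re.match(r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\117', 'abcdefghijklO')
```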

more ideas?

> (Can you even have more than 99 groups in SRE?)

yes -- the current limit is 100 groups.  but that's an
artificial limit, and it should be removed.

</F>




From m.favas at per.dem.csiro.au  Thu Aug 31 22:32:52 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 04:32:52 +0800
Subject: [Python-Dev] stack check on Unix: any suggestions?
Message-ID: <39AEC0F4.746656E2@per.dem.csiro.au>

On Thu, Aug 31, 2000 at 02:33:28PM +0200, M.-A. Lemburg wrote:
>...
> At least for my Linux installation a limit of 9000 seems
> reasonable. Perhaps everybody on the list could do a quick
> check on their platform ?
> 
> Here's a sample script:
> 
> i = 0
> def foo(x):
>     global i
>     print i
>     i = i + 1
>     foo(x)
> 
> foo(None)

On my DEC/Compaq/OSF1/Tru64 Unix box with the default stacksize of 2048k
I get 6225 iterations before seg faulting...
-- 
Mark



From ping at lfw.org  Thu Aug 31 23:04:26 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 31 Aug 2000 16:04:26 -0500 (CDT)
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <028d01c0138a$b2de46a0$766940d5@hagrid>
Message-ID: <Pine.LNX.4.10.10008311559180.10613-100000@server1.lfw.org>

On Thu, 31 Aug 2000, Fredrik Lundh wrote:
> I had to add one rule:
> 
>     If it starts with a zero, it's always an octal number.
>     Up to two more octal digits are accepted after the
>     leading zero.

Fewer rules are better.  Let's not arbitrarily rule out
the possibility of more than 100 groups.

The octal escapes are a different kind of animal than the
backreferences: for a backreference, there is *actually*
a backslash followed by a number in the regular expression;
but we already have a reasonable way to put funny characters
into regular expressions.

That is, i propose *removing* the translation of octal
escapes from the regular expression engine.  That's the
job of the string literal:

    r'\011'    is a backreference to group 11

    '\\011'    is a backreference to group 11

    '\011'     is a tab character

This makes automatic construction of regular expressions
a tractable problem.  We don't want to introduce so many
exceptional cases that an attempt to automatically build
regular expressions will turn into a nightmare of special
cases.
    

-- ?!ng




From jeremy at beopen.com  Thu Aug 31 22:47:39 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 16:47:39 -0400 (EDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <39AEC0F4.746656E2@per.dem.csiro.au>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
Message-ID: <14766.50283.758598.632542@bitdiddle.concentric.net>

I've just checked in Misc/find_recursionlimit.py that uses recursion
through various __ methods (e.g. __repr__) to generate infinite
recursion.  These tend to use more C stack frames than a simple
recursive function.

I've set the Python recursion_limit down to 2500, which is safe for
all tests in find_recursionlimit on my Linux box.  The limit can be
bumped back up, so I'm happy to have it set low by default.
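[The same idea can be probed safely from within Python, recursing through a
`__repr__` and letting the interpreter's own limit stop it before the C
stack does. A sketch; `RecursionError` is the modern spelling (it was a
`RuntimeError` in 2000):]

```python
import sys

class Recurser:
    def __repr__(self):
        return repr(self)   # each repr() call consumes a Python frame

def hits_python_limit(limit):
    """True if the interpreter's recursion limit stops the recursion."""
    old = sys.getrecursionlimit()
    sys.setrecursionlimit(limit)
    try:
        repr(Recurser())
        return False
    except RecursionError:
        return True
    finally:
        sys.setrecursionlimit(old)
```

[find_recursionlimit.py does the opposite experiment: it raises the limit
until the C stack, not the Python limit, is what gives out.]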

Does anyone have a platform where this limit is not low enough?

Jeremy



From ping at lfw.org  Thu Aug 31 23:07:32 2000
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 31 Aug 2000 16:07:32 -0500 (CDT)
Subject: [Python-Dev] Lukewarm about range literals
In-Reply-To: <200008310237.OAA17328@s454.cosc.canterbury.ac.nz>
Message-ID: <Pine.LNX.4.10.10008311604500.10613-100000@server1.lfw.org>

On Thu, 31 Aug 2000, Greg Ewing wrote:
> Peter Schneider-Kamp <nowonder at nowonder.de>:
> 
> > As far as I know adding a builtin indices() has been
> > rejected as an idea.
> 
> But why? I know it's been suggested, but I don't remember seeing any
> convincing arguments against it. Or much discussion at all.

I submitted a patch to add indices() and irange() previously.  See:

http://sourceforge.net/patch/?func=detailpatch&patch_id=101129&group_id=5470

Guido rejected it:

    gvanrossum: 2000-Aug-17 12:16
        I haven't seen the debate! But I'm asked to pronounce
        anyway, and I just don't like this. Learn to write code
        that doesn't need the list index!

    tim_one: 2000-Aug-15 15:08
        Assigned to Guido for Pronouncement.  The debate's been
        debated, close it out one way or the other.

    ping: 2000-Aug-09 03:00
        There ya go.  I have followed the style of the builtin_range()
        function, and docstrings are included.


-- ?!ng




From bwarsaw at beopen.com  Thu Aug 31 22:55:32 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 16:55:32 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules Makefile.pre.in,1.63,1.64
References: <200008311656.JAA20666@slayer.i.sourceforge.net>
Message-ID: <14766.50756.893007.253356@anthem.concentric.net>

>>>>> "Fred" == Fred L Drake <fdrake at users.sourceforge.net> writes:

    Fred> If Setup is older than Setup.in, issue a bold warning that
    Fred> the Setup may need to be checked to make sure all the latest
    Fred> information is present.

    Fred> This closes SourceForge patch #101275.

Not quite.  When I run make in the top-level directory, I see this
message:

-------------------------------------------
./Setup.in is newer than Setup;
check to make sure you have all the updates
you need in your Setup file.
-------------------------------------------

I have to hunt around in my compile output to notice that, oh, make
cd'd into Modules so it must be talking about /that/ Setup file.
"Then why did it say ./Setup.in?" :)

The warning should say Modules/Setup.in is newer than Modules/Setup.

-Barry



From cgw at fnal.gov  Thu Aug 31 22:59:12 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 15:59:12 -0500 (CDT)
Subject: [Python-Dev] linuxaudiodev test does nothing
In-Reply-To: <14766.44502.812468.677142@bitdiddle.concentric.net>
References: <14766.42287.968420.289804@bitdiddle.concentric.net>
	<200008311957.OAA22338@cj20424-a.reston1.va.home.com>
	<14766.44502.812468.677142@bitdiddle.concentric.net>
Message-ID: <14766.50976.102853.695767@buffalo.fnal.gov>

Jeremy Hylton writes:
 >   >> I can play the .au file and I use a variety of other audio tools
 >   >> regularly.  Is Peter still maintaining it or can someone else
 >   >> offer some assistance?

The Linux audio programming docs do clearly state:

>    There are three parameters which affect quality (and memory/bandwidth requirements) of sampled audio
>    data. These parameters are the following:		    
>
>           Sample format (sometimes called as number of bits) 
>           Number of channels (mono/stereo) 
>           Sampling rate (speed) 
>
>           NOTE!  
>              It is important to set these parameters always in the above order. Setting speed before
>              number of channels doesn't work with all devices.  

linuxaudiodev.c does this:
    ioctl(self->x_fd, SOUND_PCM_WRITE_RATE, &rate)
    ioctl(self->x_fd, SNDCTL_DSP_SAMPLESIZE, &ssize)
    ioctl(self->x_fd, SNDCTL_DSP_STEREO, &stereo)
    ioctl(self->x_fd, SNDCTL_DSP_SETFMT, &audio_types[n].a_fmt)

which is exactly the reverse order of what is recommended!

Alas, even after fixing this, I *still* can't get linuxaudiodev to
play the damned .au file.  It works fine for the .wav formats.

I'll continue hacking on this as time permits.



From m.favas at per.dem.csiro.au  Thu Aug 31 23:04:48 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 05:04:48 +0800
Subject: [Python-Dev] stack check on Unix: any suggestions?
References: <39AEC0F4.746656E2@per.dem.csiro.au> <14766.50283.758598.632542@bitdiddle.concentric.net>
Message-ID: <39AEC870.3E1CDAFD@per.dem.csiro.au>

Compaq/DEC/OSF1/Tru64 Unix, default stacksize 2048k:
I get "Limit of 2100 is fine" before stack overflow and segfault.
(On Guido's test script, I got 3532 before crashing, and 6225 on MAL's
test).

Mark

Jeremy Hylton wrote:
> 
> I've just checked in Misc/find_recursionlimit.py that uses recursion
> through various __ methods (e.g. __repr__) to generate infinite
> recursion.  These tend to use more C stack frames than a simple
> recursive function.
> 
> I've set the Python recursion_limit down to 2500, which is safe for
> all tests in find_recursionlimit on my Linux box.  The limit can be
> bumped back up, so I'm happy to have it set low by default.
> 
> Does anyone have a platform where this limit is not low enough?



From bwarsaw at beopen.com  Thu Aug 31 23:14:59 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 17:14:59 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc find_recursionlimit.py,NONE,1.1
References: <200008311924.MAA03080@slayer.i.sourceforge.net>
Message-ID: <14766.51923.685753.319113@anthem.concentric.net>

I wonder if find_recursionlimit.py shouldn't go in Tools and perhaps
be run as a separate rule in the Makefile (with a corresponding
cleanup of the inevitable core file, and a printing of the last
reasonable value returned).  Or you can write a simple Python wrapper
around find_recursionlimit.py that did the parenthetical tasks.

-Barry



From jeremy at beopen.com  Thu Aug 31 23:22:20 2000
From: jeremy at beopen.com (Jeremy Hylton)
Date: Thu, 31 Aug 2000 17:22:20 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc find_recursionlimit.py,NONE,1.1
In-Reply-To: <14766.51923.685753.319113@anthem.concentric.net>
References: <200008311924.MAA03080@slayer.i.sourceforge.net>
	<14766.51923.685753.319113@anthem.concentric.net>
Message-ID: <14766.52364.742061.188332@bitdiddle.concentric.net>

>>>>> "BAW" == Barry A Warsaw <bwarsaw at beopen.com> writes:

  BAW> I wonder if find_recursionlimit.py shouldn't go in Tools and
  BAW> perhaps be run as a separate rule in the Makefile (with a
  BAW> corresponding cleanup of the inevitable core file, and a
  BAW> printing of the last reasonable value returned).  Or you can
  BAW> write a simple Python wrapper around find_recursionlimit.py
  BAW> that did the parenthetical tasks.

Perhaps.  I did not imagine we would use the results to change the
recursion limit at compile time or run time automatically.  It seemed
a bit hackish, so I put it in Misc.  Maybe Tools would be better, but
that would require an SF admin request (right?).

Jeremy



From skip at mojam.com  Thu Aug 31 23:32:58 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 31 Aug 2000 16:32:58 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.50283.758598.632542@bitdiddle.concentric.net>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
Message-ID: <14766.53002.467504.523298@beluga.mojam.com>

    Jeremy> Does anyone have a platform where this limit is not low enough?

Yes, apparently I do.  My laptop is configured so:

     Pentium III
     128MB RAM
     211MB swap
     Mandrake Linux 7.1

It spits out 2400 as the last successful test, even fresh after a reboot
with no swap space in use and lots of free memory and nothing else running
besides boot-time daemons.

Skip



From bwarsaw at beopen.com  Thu Aug 31 23:43:54 2000
From: bwarsaw at beopen.com (Barry A. Warsaw)
Date: Thu, 31 Aug 2000 17:43:54 -0400 (EDT)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Misc find_recursionlimit.py,NONE,1.1
References: <200008311924.MAA03080@slayer.i.sourceforge.net>
	<14766.51923.685753.319113@anthem.concentric.net>
	<14766.52364.742061.188332@bitdiddle.concentric.net>
Message-ID: <14766.53658.752985.58503@anthem.concentric.net>

>>>>> "JH" == Jeremy Hylton <jeremy at beopen.com> writes:

    JH> Perhaps.  I did not imagine we would use the results to
    JH> change the recursion limit at compile time or run time
    JH> automatically.  It seemed a bit hackish, so I put it in Misc.
    JH> Maybe Tools would be better, but that would require an SF
    JH> admin request (right?).

Yes, to move the ,v file, but there hasn't been enough revision
history to worry about it.  Just check it in someplace in Tools and
cvsrm it from Misc.



From cgw at fnal.gov  Thu Aug 31 23:45:15 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 16:45:15 -0500
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.53002.467504.523298@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
Message-ID: <200008312145.QAA10295@buffalo.fnal.gov>

Skip Montanaro writes:
 >      211MB swap
 >      Mandrake Linux 7.1
 > 
 > It spits out 2400 as the last successful test, even fresh after a reboot
 > with no swap space in use and lots of free memory and nothing else running
 > besides boot-time daemons.

I get the exact same value.  Of course the amount of other stuff
running makes no difference: you get the core dump because you've hit
the RLIMIT for stack usage, not because you've exhausted memory.
The amount of RAM in the machine or swap space in use has nothing to
do with it.  Do "ulimit -s unlimited" and see what happens...

There can be no universally applicable default value here because
different people will have different rlimits depending on how their
sysadmins chose to set this up.
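[The limit in question can be read from Python directly via the standard
`resource` module (Unix only); a small sketch:]

```python
import resource

def fmt(v):
    """Render an rlimit value the way ulimit would."""
    if v == resource.RLIM_INFINITY:
        return "unlimited"
    return "%d KB" % (v // 1024)

# After "ulimit -s unlimited", the soft limit comes back as RLIM_INFINITY.
# A segfaulting recursion test is bounded by this, not by free RAM or swap.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("stack rlimit: soft=%s hard=%s" % (fmt(soft), fmt(hard)))
```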




From cgw at fnal.gov  Thu Aug 31 23:52:29 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Thu, 31 Aug 2000 16:52:29 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.54008.173276.72324@beluga.mojam.com>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
	<14766.54008.173276.72324@beluga.mojam.com>
Message-ID: <14766.54173.228568.55862@buffalo.fnal.gov>

Skip Montanaro writes:

 > Makes no difference:

All right, I'm confused, I'll shut up now ;-)



From skip at mojam.com  Thu Aug 31 23:52:33 2000
From: skip at mojam.com (Skip Montanaro)
Date: Thu, 31 Aug 2000 16:52:33 -0500 (CDT)
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <14766.53381.634928.615048@buffalo.fnal.gov>
References: <39AEC0F4.746656E2@per.dem.csiro.au>
	<14766.50283.758598.632542@bitdiddle.concentric.net>
	<14766.53002.467504.523298@beluga.mojam.com>
	<14766.53381.634928.615048@buffalo.fnal.gov>
Message-ID: <14766.54177.584090.198596@beluga.mojam.com>


    Charles> I get the exact same value.  Of course the amount of other
    Charles> stuff running makes no difference, you get the core dump
    Charles> because you've hit the RLIMIT for stack usage, not because
    Charles> you've exhausted memory.  Amount of RAM in the machine, or swap
    Charles> space in use has nothing to do with it.  Do "ulimit -s
    Charles> unlimited" and see what happens...

Makes no difference:

    % ./python
    Python 2.0b1 (#81, Aug 31 2000, 15:53:42)  [GCC 2.95.3 19991030 (prerelease)] on linux2
    Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
    Copyright 1995-2000 Corporation for National Research Initiatives (CNRI)
    >>>
    % ulimit -a
    core file size (blocks)     0
    data seg size (kbytes)      unlimited
    file size (blocks)          unlimited
    max locked memory (kbytes)  unlimited
    max memory size (kbytes)    unlimited
    open files                  1024
    pipe size (512 bytes)       8
    stack size (kbytes)         unlimited
    cpu time (seconds)          unlimited
    max user processes          2048
    virtual memory (kbytes)     unlimited
    % ./python Misc/find_recursionlimit.py
    ...
    Limit of 2300 is fine
    recurse
    add
    repr
    init
    getattr
    getitem
    Limit of 2400 is fine
    recurse
    add
    repr
    Segmentation fault

Skip



From tim_one at email.msn.com  Thu Aug 31 23:55:56 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 17:55:56 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <023301c01384$39b2bdc0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEEIHDAA.tim_one@email.msn.com>

The PRE documentation expresses the true intent:

    \number
    Matches the contents of the group of the same number. Groups
    are numbered starting from 1. For example, (.+) \1 matches 'the the'
    or '55 55', but not 'the end' (note the space after the group). This
    special sequence can only be used to match one of the first 99 groups.
    If the first digit of number is 0, or number is 3 octal digits long,
    it will not be interpreted as a group match, but as the character with
    octal value number. Inside the "[" and "]" of a character class, all
    numeric escapes are treated as characters.

This was discussed at length when we decided to go the Perl-compatible
route, and Perl's rules for backreferences were agreed to be just too ugly
to emulate.  The meaning of \oo in Perl depends on how many groups precede
it!  In this case, there are fewer than 41 groups, so Perl says "octal
escape"; but if 41 or more groups had preceded, it would mean
"backreference" instead(!).  Simply unbearably ugly and error-prone.

> -----Original Message-----
> From: python-dev-admin at python.org [mailto:python-dev-admin at python.org]On
> Behalf Of Fredrik Lundh
> Sent: Thursday, August 31, 2000 3:47 PM
> To: python-dev at python.org
> Subject: [Python-Dev] one last SRE headache
>
>
> can anyone tell me how Perl treats this pattern?
>
>     r'((((((((((a))))))))))\41'
>
> in SRE, this is currently a couple of nested groups, surrounding
> a single literal, followed by a back reference to the fourth group,
> followed by a literal "1" (since there are less than 41 groups)
>
> in PRE, it turns out that this is a syntax error; there's no group 41.
>
> however, this test appears in the test suite under the section "all
> test from perl", but they're commented out:
>
> # Python does not have the same rules for \\41 so this is a syntax error
> #    ('((((((((((a))))))))))\\41', 'aa', FAIL),
> #    ('((((((((((a))))))))))\\41', 'a!', SUCCEED, 'found', 'a!'),
>
> if I understand this correctly, Perl treats this as an *octal* escape
> (chr(041) == "!").
>
> now, should I emulate PRE, Perl, or leave it as it is...
>
> </F>
>
> PS. in case anyone wondered why I haven't seen this before, it's
> because I just discovered that the test suite masks syntax errors
> under some circumstances...
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://www.python.org/mailman/listinfo/python-dev





From m.favas at per.dem.csiro.au  Thu Aug 31 23:56:25 2000
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Fri, 01 Sep 2000 05:56:25 +0800
Subject: [Python-Dev] Syntax error in Makefile for "make install"
Message-ID: <39AED489.F953E9EE@per.dem.csiro.au>

Makefile in the libainstall target of "make install" uses the following
construct:
                @if [ "$(MACHDEP)" == "beos" ] ; then \
This "==" is illegal in all the /bin/sh's I have lying around, and leads
to make failing with:
/bin/sh: test: unknown operator ==
make: *** [libainstall] Error 1
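[The portable Bourne-shell spelling uses a single `=` for string equality
in `test`; a minimal sketch of the fix:]

```shell
# '==' is a bash extension; plain /bin/sh only guarantees '=' in test/[ ].
MACHDEP=beos
if [ "$MACHDEP" = "beos" ]; then
    echo "beos branch taken"
fi
```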

-- 
Mark



From tim_one at email.msn.com  Thu Aug 31 23:01:10 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 17:01:10 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <200008312112.QAA23526@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEJHDAA.tim_one@email.msn.com>

> Suggestion:
>
> If there are fewer than 3 digits, it's a group.

Unless it begins with a 0 (that's what's documented today -- read the docs
<wink>).

> If there are exactly 3 digits and you have 100 or more groups, it's a
> group -- too bad, you lose octal number support.  Use \x. :-)

The docs say you can't use backreferences for groups higher than 99.

> If there are exactly 3 digits and you have at most 99 groups, it's an
> octal escape.

If we make the meaning depend on the number of preceding groups, we may as
well emulate *all* of Perl's ugliness here.





From thomas at xs4all.net  Thu Aug 31 23:38:59 2000
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 31 Aug 2000 23:38:59 +0200
Subject: [Python-Dev] stack check on Unix: any suggestions?
In-Reply-To: <200008311558.KAA15649@cj20424-a.reston1.va.home.com>; from guido@beopen.com on Thu, Aug 31, 2000 at 10:58:49AM -0500
References: <200008300912.LAA13470@loewis.home.cs.tu-berlin.de> <39ACDA4F.3EF72655@lemburg.com> <000d01c0126c$dfe700c0$766940d5@hagrid> <39ACE51F.3AEC75AB@lemburg.com> <200008301832.UAA00688@loewis.home.cs.tu-berlin.de> <39AE5098.36746F4B@lemburg.com> <200008311558.KAA15649@cj20424-a.reston1.va.home.com>
Message-ID: <20000831233859.K12695@xs4all.nl>

On Thu, Aug 31, 2000 at 10:58:49AM -0500, Guido van Rossum wrote:

>     C() # This tries to get __init__, triggering the recursion

> I get 5788 iterations on Red Hat Linux 6.2 (ulimit -c says 8192; I
> have no idea what units).

That's odd... On BSDI, with a 2Mbyte stacklimit (ulimit -s says 2048) I get
almost as many recursions: 5136. That's very much not what I would expect...
With a stack limit of 8192, I can go as high as 19997 recursions! I wonder
why that is...

Wait a minute... The Linux SEGV isn't stacksize related at all! Observe:

centurion:~ > limit stacksize 8192
centurion:~ > python teststack.py | tail -3
5134
5135
5136
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 65536
centurion:~ > python teststack.py | tail -3
5134
5135
5136
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 2048
centurion:~ > python teststack.py | tail -3
5134
5135
5136
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 128
centurion:~ > python teststack.py | tail -3
Segmentation fault (core dumped) 

centurion:~ > limit stacksize 1024
centurion:~ > python teststack.py | tail -3
2677
2678
26Segmentation fault (core dumped) 

centurion:~ > limit stacksize 1500
centurion:~ > python teststack.py | tail -3
3496
3497
349Segmentation fault (core dumped) 

I don't have time to pursue this, however. I'm trying to get my paid work
finished tomorrow, so that I can finish my *real* work over the weekend:
augassign docs & some autoconf changes :-) 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim_one at email.msn.com  Thu Aug 31 23:07:37 2000
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 31 Aug 2000 17:07:37 -0400
Subject: [Python-Dev] one last SRE headache
In-Reply-To: <028d01c0138a$b2de46a0$766940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEELHDAA.tim_one@email.msn.com>

[/F]
> I had to add one rule:
>
>     If it starts with a zero, it's always an octal number.
>     Up to two more octal digits are accepted after the
>     leading zero.
>
> but this still fails on this pattern:
>
>     r'(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)\119'
>
> where the last part is supposed to be a reference to
> group 11, followed by a literal '9'.

But 9 isn't an octal digit, so it fits w/ your new rule just fine.  \117
here instead would be an octal escape.